Installing Kubernetes 1.15.1 with kubeadm Using Local Images: The Complete Process

Note: this environment is used for testing Istio, so it is deployed on a single virtual machine rather than as a multi-node cluster; the installation procedure itself, however, is cluster-ready.

1. Environment preparation

OS: CentOS 7.5, 200 GB HDD, 8 GB RAM
kubernetes: 1.15.1
docker: ce 19.03.1
ip: 10.0.135.30
hostname: centos75

1.1 Set the hostname and hosts file

hostnamectl set-hostname centos75
echo "10.0.135.30">>/etc/hosts

1.2 Disable the firewall and SELinux

systemctl stop firewalld    # stop the firewall
systemctl disable firewalld # prevent the firewall from starting at boot

setenforce 0                # turn off SELinux for the current session
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config

1.3 Disable the swap partition

swapoff -a
vi /etc/fstab
# comment out the swap entry, e.g.:
#/dev/mapper/centos_centos75-swap swap
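If you prefer a non-interactive edit, the swap entry can be commented out with a single sed command; a minimal sketch, assuming the fstab entry has a whitespace-delimited swap field like the one above:

# comment out any uncommented fstab line that mounts a swap partition
sed -ri '/^[^#].*\sswap\s/s/^/#/' /etc/fstab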
# load the br_netfilter module (required by the kernel parameters below)
modprobe br_netfilter

1.4 Configure kernel parameters

cat > /etc/sysctl.d/k8s.conf <<EOF 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Apply the settings:

sysctl -p /etc/sysctl.d/k8s.conf
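To confirm the parameters took effect (br_netfilter must already be loaded for the bridge keys to exist):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward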

Adjust the Linux resource limits: raise the ulimit maximum number of open files, and the maximum open files for services managed by systemd.

echo "* soft nofile 655360" >> /etc/security/limits.conf
echo "* hard nofile 655360" >> /etc/security/limits.conf
echo "* soft nproc 655360" >> /etc/security/limits.conf
echo "* hard nproc 655360" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf
echo "DefaultLimitNOFILE=1024000" >> /etc/systemd/system.conf
echo "DefaultLimitNPROC=1024000" >> /etc/systemd/system.conf

1.5 Configure the yum repositories

First, back up the original yum repo files:

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
mv /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel.repo.backup
mv /etc/yum.repos.d/epel-testing.repo /etc/yum.repos.d/epel-testing.repo.backup

Then download the latest repo configurations from the Aliyun mirror:

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all
yum makecache

1.6 Configure the Kubernetes package repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
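To confirm the repository is usable and see which package versions it offers:

yum makecache fast
yum list kubelet --showduplicates | sort -r | head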

1.7 Install dependency packages for kubeadm

yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp bash-completion yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools vim libtool-ltdl

1.8 Install a time synchronization service

yum install chrony -y
systemctl enable chronyd.service
systemctl start chronyd.service
# check chrony status
systemctl status chronyd.service
chronyc sources

1.9 Configure mutual trust between nodes

With mutual trust configured, you can connect to the other servers over SSH without a password.

# the default options are fine
ssh-keygen -t rsa -C "scwang18@163.com"
# after generation, copy the public key to the other nodes
ssh-copy-id <other-node-hostname>
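For a multi-node setup this can be done in one loop; a sketch assuming hypothetical node names node1 to node3:

# push the public key to every other node in turn
for h in node1 node2 node3; do
    ssh-copy-id root@$h
done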

1.10 Reboot the system

reboot

2. Install Docker

2.1 Configure the Docker yum repository

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

2.2 Check the latest available docker-ce version

yum list docker-ce --showduplicates | sort -r

2.3 Install the latest version of docker-ce

yum install -y docker-ce-19.03.1-3.el7
systemctl start docker
systemctl enable docker.service

2.4 Configure a Docker registry mirror (image accelerator)

cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://heusyzko.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
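Docker must be restarted for daemon.json to take effect; afterwards docker info should report the systemd cgroup driver and the configured mirror:

systemctl restart docker
docker info | grep -A1 -E "Cgroup Driver|Registry Mirrors"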

3. Install kubeadm, kubelet, and kubectl

  • kubeadm is the official Kubernetes tool for quickly deploying a production cluster; unfortunately it is only truly quick and convenient with unrestricted internet access.
  • kubelet is the component that must run on every node of the cluster; it is responsible for creating, monitoring, and destroying pods and containers.
  • kubectl is the Kubernetes cluster-management CLI; essentially all cluster administration can be done with it. Under the hood it is a client of the Kubernetes API server.
# pin the versions to the target release
yum install -y kubelet-1.15.1 kubeadm-1.15.1 kubectl-1.15.1 --disableexcludes=kubernetes
# enable and start kubelet
systemctl enable kubelet && systemctl start kubelet
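Until kubeadm init writes the kubelet configuration, kubelet will restart in a crash loop; that is expected at this stage. Confirm the installed versions:

kubeadm version -o short            # should print v1.15.1
kubectl version --client --short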

4. Prepare the Kubernetes images

4.1 List the images required for installation

During installation kubeadm needs to pull several required Docker images from k8s.gcr.io. Because of network restrictions it would hang at the image-pull stage indefinitely, so we have to download the necessary images manually first.
Since 1.11, kubeadm can print the list of images it requires:

kubeadm config images list
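For v1.15.1 the output should match the images that are retagged in section 4.4:

k8s.gcr.io/kube-apiserver:v1.15.1
k8s.gcr.io/kube-controller-manager:v1.15.1
k8s.gcr.io/kube-scheduler:v1.15.1
k8s.gcr.io/kube-proxy:v1.15.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1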

4.2 Generate the kubeadm configuration file, then modify the image download source, whether the master may run workload pods, and the network settings

kubeadm config print init-defaults > kubeadm.conf

vi kubeadm.conf
# change the following default kubeadm init parameters
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kubernetesVersion: v1.15.1
nodeRegistration:
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
localAPIEndpoint:
  advertiseAddress: 10.0.135.30
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 172.18.0.0/16
 

4.3 Pull the images using the custom configuration

kubeadm config images pull --config kubeadm.conf

4.4 Retag the pulled images

The images downloaded from the domestic mirror need to be retagged to k8s.gcr.io:

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.15.1 k8s.gcr.io/kube-scheduler:v1.15.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.15.1

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.15.1 k8s.gcr.io/kube-proxy:v1.15.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.15.1

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.15.1 k8s.gcr.io/kube-controller-manager:v1.15.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.15.1

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.15.1 k8s.gcr.io/kube-apiserver:v1.15.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.15.1

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
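The same retagging can be expressed as a loop over the image list above:

MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-scheduler:v1.15.1 kube-proxy:v1.15.1 kube-controller-manager:v1.15.1 \
           kube-apiserver:v1.15.1 coredns:1.3.1 etcd:3.3.10 pause:3.1; do
    docker tag $MIRROR/$img k8s.gcr.io/$img    # retag to the name kubeadm expects
    docker rmi $MIRROR/$img                    # drop the mirror-prefixed tag
done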

Confirm the retagging succeeded:

docker images

5. Deploy the master node

5.1 Initialize kubeadm

kubeadm init --config /root/kubeadm.conf

5.2 The initialization output is as follows

[root@centos75 ~]# kubeadm init --config /root/kubeadm.conf
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [centos75 localhost] and IPs [10.0.135.30 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [centos75 localhost] and IPs [10.0.135.30 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [centos75 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [172.18.0.1 10.0.135.30]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 40.503786 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node centos75 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node centos75 as control-plane by adding the taints [node-role.kubernetes.io/master:PreferNoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.135.30:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:65bcdce5c078644662e51c5238def0ee935060267a38a1365aaa82d2dfc8b5cc
[root@centos75 ~]#

5.3 Check that the Kubernetes files were generated

[root@centos75 ~]# ls /etc/kubernetes/ -l
total 36
-rw-------  1 root root 5451 Aug 21 09:31 admin.conf
-rw-------  1 root root 5483 Aug 21 09:31 controller-manager.conf
-rw-------  1 root root 5471 Aug 21 09:31 kubelet.conf
drwxr-xr-x. 2 root root  113 Aug 21 09:31 manifests
drwxr-xr-x  3 root root 4096 Aug 21 09:31 pki
-rw-------  1 root root 5431 Aug 21 09:31 scheduler.conf

5.4 Configure the kubectl config file as instructed by the kubeadm output in the previous step

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
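Alternatively, since we are working as root, pointing KUBECONFIG at admin.conf achieves the same:

export KUBECONFIG=/etc/kubernetes/admin.conf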

5.5 Verify that the kubeadm installation succeeded

[root@centos75 .kube]# kubectl get po --all-namespaces
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-6967fb4995-c5jjw           0/1     Pending   0          13h
kube-system   coredns-6967fb4995-r24rh           0/1     Pending   0          13h
kube-system   etcd-centos75                      1/1     Running   0          12h
kube-system   kube-apiserver-centos75            1/1     Running   0          12h
kube-system   kube-controller-manager-centos75   1/1     Running   0          12h
kube-system   kube-proxy-mms7k                   1/1     Running   0          13h
kube-system   kube-scheduler-centos75            1/1     Running   0          12h
[root@centos75 .kube]#

The coredns pods stay Pending until the pod network add-on is deployed in section 6; that is expected. Next, verify that the cluster components are healthy:

[root@centos75 .kube]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@centos75 .kube]#

6. Deploy the network (Calico)

Calico is a pure layer-3 network that implements vRouter functionality through the Linux kernel's L3 forwarding, unlike flannel and similar solutions that need to encapsulate and decapsulate packets.

6.1 Download the Calico images

docker pull calico/node:v3.8.2 
docker pull calico/cni:v3.8.2
docker pull calico/typha:v3.8.2

6.2 Retag the images

docker tag calico/node:v3.8.2 quay.io/calico/node:v3.8.2
docker tag calico/cni:v3.8.2 quay.io/calico/cni:v3.8.2
docker tag calico/typha:v3.8.2 quay.io/calico/typha:v3.8.2

docker rmi calico/node:v3.8.2
docker rmi calico/cni:v3.8.2
docker rmi calico/typha:v3.8.2

6.3 Install Calico

# recent Calico releases ship the RBAC configuration inside calico.yaml
curl https://docs.projectcalico.org/v3.8/manifests/calico.yaml -O
POD_CIDR="192.168.0.0/16"   # must match the podSubnet in kubeadm.conf
sed -i -e "s?192.168.0.0/16?$POD_CIDR?g" calico.yaml
sed -i -e "s?typha_service_name: \"none\"?typha_service_name: \"calico-typha\"?g" calico.yaml
sed -i -e "s?replicas: 1?replicas: 2?g" calico.yaml
kubectl apply -f calico.yaml


6.4 Confirm the Calico installation is complete

kubectl get pods --all-namespaces
kubectl get nodes
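Once the calico-node and calico-typha pods are Running, the node should report Ready and the coredns pods should move from Pending to Running:

kubectl -n kube-system get pods -w    # wait for all pods to reach Running
kubectl get nodes                     # STATUS should now be Ready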