Kubernetes (k8s) Installation Steps
Three machines:
192.168.168.151 master
192.168.168.152 node
192.168.168.153 node
Installation steps
Preparation: install Docker first.
Docker must be installed before the Kubernetes cluster can be set up. (Strictly speaking, Kubernetes is not tied to Docker: it supports other container runtimes such as containerd and CRI-O through the CRI, but this guide uses Docker.)
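For convenience, the three machines can be given resolvable names. The sketch below generates /etc/hosts entries into a temp file for review; the hostnames k8s-master/k8s-node1/k8s-node2 are illustrative assumptions, not part of the original setup:

```shell
# Generate /etc/hosts entries for the three machines into a temp file
# for review; the hostnames are assumed, not mandated by this guide.
hosts_snippet=$(mktemp)
cat > "$hosts_snippet" <<'EOF'
192.168.168.151 k8s-master
192.168.168.152 k8s-node1
192.168.168.153 k8s-node2
EOF
# After review, append on each machine with: cat "$hosts_snippet" >> /etc/hosts
cat "$hosts_snippet"
```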
1. Install Docker
yum -y install docker-engine
(On newer CentOS releases the package may instead be named docker or docker-ce from Docker's own repository.)
2. Kubernetes cluster installation
2.1 Master installation
2.1.1 The etcd service
etcd is the core datastore of a Kubernetes cluster; install and start it before any of the Kubernetes services.
2.1.1.1 Download etcd and copy it to /usr/bin
wget https://github.com/etcd-io/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz
Extract the archive, then copy the etcd and etcdctl binaries to /usr/bin:
tar -zxvf etcd-v3.3.9-linux-amd64.tar.gz
cp etcd-v3.3.9-linux-amd64/etcd etcd-v3.3.9-linux-amd64/etcdctl /usr/bin
2.1.1.2 Configure the systemd service file
vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
[Service]
Type=simple
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd
Restart=on-failure
[Install]
WantedBy=multi-user.target
2.1.1.3 Start and test the etcd service
systemctl daemon-reload
systemctl enable etcd.service
mkdir -p /var/lib/etcd/
systemctl start etcd.service
etcdctl cluster-health
2.1.2 The kube-apiserver service
2.1.2.1 Download and copy
Download:
wget https://dl.k8s.io/v1.19.0/kubernetes-server-linux-amd64.tar.gz
Extract:
tar -zxvf kubernetes-server-linux-amd64.tar.gz
After extraction the binaries are under kubernetes/server/bin/. Copy kube-apiserver, kube-controller-manager, kube-scheduler, and the kubectl management tool to /usr/bin to complete the installation of these components:
cd kubernetes/server/bin
cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/bin/
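After copying, it is worth confirming that all four binaries actually landed in /usr/bin and are executable. A small check function (a convenience sketch, not part of the original guide):

```shell
# check_k8s_bins DIR: report whether each control-plane binary in DIR
# is present and executable.
check_k8s_bins() {
  dir=$1
  for b in kube-apiserver kube-controller-manager kube-scheduler kubectl; do
    if [ -x "$dir/$b" ]; then
      echo "ok: $b"
    else
      echo "missing: $b"
    fi
  done
}
check_k8s_bins /usr/bin
```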
2.1.2.2 Configuration
The following configures the kube-apiserver service. Edit the systemd service file:
vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service
[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify
[Install]
WantedBy=multi-user.target
2.1.2.3 Configuration file
Create the directory: mkdir /etc/kubernetes
vim /etc/kubernetes/apiserver
KUBE_API_ARGS="--storage-backend=etcd3 --etcd-servers=http://127.0.0.1:2379 --insecure-bind-address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range=169.169.0.0/16 --service-node-port-range=1-65535 --enable-admission-plugins=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,DefaultStorageClass,ResourceQuota --logtostderr=true --log-dir=/var/log/kubernetes --v=2"
Note: older guides use --admission-control, which has been replaced by --enable-admission-plugins in recent releases such as v1.19. The insecure port 8080 is used here for simplicity; it is deprecated and unsuitable for production.
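The one-line KUBE_API_ARGS string is hard to review. A small sketch (pure text processing, nothing cluster-specific) prints each flag on its own line for auditing:

```shell
# List each --flag=value pair from the apiserver config, one per line.
# CONF path matches the guide; prints a message if the file is absent.
CONF=${CONF:-/etc/kubernetes/apiserver}
if [ -f "$CONF" ]; then
  grep -o -e '--[a-z][a-z-]*=[^ "]*' "$CONF"
else
  echo "config $CONF not found"
fi
```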
2.1.3 The kube-controller-manager service
The kube-controller-manager service depends on the kube-apiserver service:
2.1.3.1 Configure the systemd service file:
vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
2.1.3.2 Configuration file
vim /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--master=http://192.168.168.151:8080 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"
2.1.4 The kube-scheduler service
The kube-scheduler service also depends on the kube-apiserver service.
2.1.4.1 Configure the systemd service file:
vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
2.1.4.2 Configuration file:
vim /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS="--master=http://192.168.168.151:8080 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"
2.1.5 Start. After completing the configuration above, start the services in order:
systemctl daemon-reload
systemctl enable kube-apiserver.service
systemctl start kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service
Check the health of each service:
systemctl status kube-apiserver.service
systemctl status kube-controller-manager.service
systemctl status kube-scheduler.service
2.2 Node installation (using 192.168.168.152 as the example)
2.2.1 Preparation
On node1, copy the kubelet and kube-proxy binaries from the extracted server archive (kubernetes/server/bin/) to /usr/bin in the same way as on the master.
Docker must be installed and started on node1 beforehand; see the Docker installation on the master.
2.2.2 The kubelet service
Configure the systemd service file:
vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
mkdir -p /var/lib/kubelet
Configuration file:
vim /etc/kubernetes/kubelet
KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --hostname-override=192.168.168.152 --logtostderr=false --log-dir=/var/log/kubernetes --v=2 --fail-swap-on=false"
Configuration file used by the kubelet to connect to the master's API server:
vim /etc/kubernetes/kubeconfig
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://192.168.168.151:8080
  name: local
contexts:
- context:
    cluster: local
  name: mycontext
current-context: mycontext
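Since the same kubeconfig must exist on every node and only the master address would ever vary, it can be generated from a heredoc. This sketch writes to /tmp so the result can be reviewed before being copied to /etc/kubernetes/kubeconfig:

```shell
# Generate the node kubeconfig from a single MASTER_IP variable.
# Written to /tmp for review; copy to /etc/kubernetes/kubeconfig afterwards.
MASTER_IP=192.168.168.151
KUBECONFIG_OUT=/tmp/kubeconfig.generated
cat > "$KUBECONFIG_OUT" <<EOF
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://${MASTER_IP}:8080
  name: local
contexts:
- context:
    cluster: local
  name: mycontext
current-context: mycontext
EOF
```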
2.2.3 The kube-proxy service
Configure the systemd service file:
vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.service
Requires=network.service
[Service]
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
KillMode=process
[Install]
WantedBy=multi-user.target
Configuration file:
vim /etc/kubernetes/proxy
KUBE_PROXY_ARGS="--master=http://192.168.168.151:8080 --hostname-override=192.168.168.152 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"
2.2.4 Configure the other node
Repeat the node1 steps on 192.168.168.153, changing the IP addresses accordingly.
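The node2 files differ from node1's only in the --hostname-override IP, so they can be derived mechanically. A sketch (assumes node1's /etc/kubernetes/kubelet and /etc/kubernetes/proxy have already been copied onto node2):

```shell
# Rewrite node1's IP to node2's in the kubelet and kube-proxy config files.
NODE1_IP=192.168.168.152
NODE2_IP=192.168.168.153
for f in /etc/kubernetes/kubelet /etc/kubernetes/proxy; do
  if [ -f "$f" ]; then
    sed -i "s/${NODE1_IP}/${NODE2_IP}/g" "$f"
  fi
done
```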
2.2.5 Start
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
The cluster setup is now complete.
3. Check the cluster status
kubectl get nodes
Example:
[root@mini151 opt]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.168.152 Ready <none> 60m v1.19.0
192.168.168.153 Ready <none> 66m v1.19.0
4. Notes
VM memory: 2048 MB or more
VM CPU count: 2 or more
Disable swap on every node
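The swap note can be applied as follows; the kubelet refuses to start with swap enabled unless --fail-swap-on=false is set, as in the kubelet config above. A sketch, to be run as root on every node:

```shell
# Turn swap off now, and comment out swap entries in /etc/fstab so it
# stays off after reboot. Guarded so the fstab edit is skipped when the
# file is missing or not writable.
swapoff -a 2>/dev/null || true
FSTAB=/etc/fstab
if [ -w "$FSTAB" ]; then
  sed -i '/^#/!{/[[:space:]]swap[[:space:]]/ s/^/#/}' "$FSTAB"
fi
```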