Preface
Kubernetes coordinates a highly available cluster of computers that are connected to work as a single unit. It automates the distribution and scheduling of containerized applications across the cluster in an efficient way. A Kubernetes cluster consists of two types of resources:
- The Master is the cluster's scheduling node
- Nodes are the worker machines where applications actually run
Environment Preparation and Planning
Role | IP | Components |
---|---|---|
master | 192.168.56.119 | etcd, kube-apiserver, kube-controller-manager, kube-scheduler, docker |
node01 | 192.168.56.120 | kube-proxy, kubelet, docker |
node02 | 192.168.56.121 | kube-proxy, kubelet, docker |
If you are not sure how to set up and initialize CentOS 7, see my earlier post 【CI、CD专题】mac或win平台下VirtualBox安装centos7之配静态ip (installing CentOS 7 in VirtualBox with a static IP on macOS or Windows).
- Check the firewall status
systemctl status firewalld.service
- Stop the firewall
systemctl stop firewalld.service
- Disable the firewall at boot
systemctl disable firewalld.service
- Download the Kubernetes binary release
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md
Master Installation
Install Docker
- Install docker
yum install docker-engine
- Check the Docker version
docker -v
- Configure a private Docker registry mirror; see the end of the post linked below
【CI、CD专题】mac或win平台下VirtualBox安装centos7之配静态ip
etcd Service
- Download the etcd binary release
https://github.com/etcd-io/etcd/releases
- Upload it to the master (on macOS, a recommended SFTP client is Transmit: https://xclient.info/s/transmit.html)
- Copy the etcd and etcdctl binaries to the /usr/bin directory
tar -xzvf etcd-v3.3.9-linux-amd64.tar.gz
cd etcd-v3.3.9-linux-amd64
cp etcd etcdctl /usr/bin/
- Create the systemd unit file /usr/lib/systemd/system/etcd.service
vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
[Service]
Type=simple
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd
Restart=on-failure
[Install]
WantedBy=multi-user.target
- Start and test the etcd service
systemctl daemon-reload
systemctl enable etcd.service
mkdir -p /var/lib/etcd/
systemctl start etcd.service
etcdctl cluster-health
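Besides the cluster-health check, a quick write/read through etcdctl (which defaults to the v2 API in the 3.3 release) confirms the store is actually usable; the key name below is just an example:

```shell
# sanity-check the local etcd (assumes the default client URL http://127.0.0.1:2379)
etcdctl set /sanity-check "ok"   # write a test key via the v2 API
etcdctl get /sanity-check        # read it back
etcdctl rm /sanity-check         # clean up
```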
kube-apiserver Service
After unpacking the archive, copy the kube-apiserver, kube-controller-manager, and kube-scheduler binaries, plus the kubectl management binary, into the /usr/bin directory; that completes the installation of these components.
cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/bin/
Next, configure the kube-apiserver service.
Edit the systemd unit file: vi /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service
[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify
[Install]
WantedBy=multi-user.target
Configuration file
Create the directory: mkdir /etc/kubernetes
vi /etc/kubernetes/apiserver
KUBE_API_ARGS="--storage-backend=etcd3 --etcd-servers=http://127.0.0.1:2379 --insecure-bind-address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range=169.169.0.0/16 --service-node-port-range=1-65535 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,DefaultStorageClass,ResourceQuota --logtostderr=true --log-dir=/var/log/kubernetes --v=2"
kube-controller-manager Service
The kube-controller-manager service depends on the kube-apiserver service:
- Create the systemd unit file: vi /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
- Configuration file:
vi /etc/kubernetes/controller-manager
Note: change the master IP here to your own master's IP.
KUBE_CONTROLLER_MANAGER_ARGS="--master=http://192.168.56.119:8080 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"
kube-scheduler Service
The kube-scheduler service also depends on the kube-apiserver service.
- Create the systemd unit file:
vi /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
- Configuration file:
vi /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS="--master=http://192.168.56.119:8080 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"
Start the services
systemctl daemon-reload
systemctl enable kube-apiserver.service
systemctl start kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service
Check each service's health status:
systemctl status kube-apiserver.service
systemctl status kube-controller-manager.service
systemctl status kube-scheduler.service
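Beyond systemctl status, the API server can be probed directly on the insecure port configured above (8080); these unauthenticated endpoints exist in this release:

```shell
# probe the insecure port on the master
curl -s http://127.0.0.1:8080/version   # prints the server's build/version JSON
curl -s http://127.0.0.1:8080/healthz   # should print: ok
```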
Node1 Installation
On Node1, likewise copy the kubelet and kube-proxy binaries extracted from the archive into the /usr/bin directory.
cp kubelet kube-proxy /usr/bin/
Docker must be installed on Node1 in advance; see the Docker installation on the Master, then start Docker.
kubelet Service
- Create the systemd unit file:
vi /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
- Create the working directory:
mkdir -p /var/lib/kubelet
- Configuration file:
vi /etc/kubernetes/kubelet
KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --hostname-override=192.168.56.120 --logtostderr=true --log-dir=/var/log/kubernetes --v=2 --fail-swap-on=false --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice --require-kubeconfig=true"
The kubeconfig file that kubelet uses to connect to the master's API server: vi /etc/kubernetes/kubeconfig
apiVersion: v1
clusters:
- cluster:
    server: http://192.168.56.119:8080
  name: local
contexts:
- context:
    cluster: local
    user: ""
  name: mycontext
current-context: mycontext
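Before starting kubelet, it is worth confirming from node01 itself that the master address in this kubeconfig is reachable:

```shell
# run on node01; the IP matches the server: line in the kubeconfig above
curl -s http://192.168.56.119:8080/healthz   # expect: ok
```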
kube-proxy Service
The kube-proxy service depends on the network service, so make sure the network service is healthy. If the network service fails to start, common fixes include:
1. A conflict with the NetworkManager service: stop it with service NetworkManager stop, disable it at boot with chkconfig NetworkManager off, then reboot.
2. The MAC address in the config file does not match: check the MAC with ip addr (or ifconfig) and change HWADDR in /etc/sysconfig/network-scripts/ifcfg-xxx to the address you see.
3. Enable the NetworkManager-wait-online service at boot: systemctl enable NetworkManager-wait-online.service
4. In /etc/sysconfig/network-scripts, delete the config files for any unused NICs to avoid interference, leaving only the one ifcfg file you need.
- Create the systemd unit file:
vi /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.service
Requires=network.service
[Service]
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
KillMode=process
[Install]
WantedBy=multi-user.target
- Configuration file:
vi /etc/kubernetes/proxy
KUBE_PROXY_ARGS="--master=http://192.168.56.119:8080 --hostname-override=192.168.56.120 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"
Start the services
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
Node2 Installation
Follow the Node1 installation, remembering to change the IPs.
Health Checks and a Sample Deployment
- Check the cluster nodes
kubectl get nodes
- Check the status of the master components
[root@localhost kubernetes]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
- nginx-rc.yaml
kubectl create -f nginx-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
- nginx-svc.yaml
kubectl create -f nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 33333
  selector:
    app: nginx
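Once both objects are created, the service should answer on the NodePort from any node's IP; a minimal check, assuming the three replicas have reached Running state:

```shell
kubectl get pods -l app=nginx -o wide            # the three nginx replicas and the nodes they landed on
curl -s http://192.168.56.120:33333 | head -n 4  # nginx welcome page via the NodePort
```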
View the pods and their details:
kubectl get pods
kubectl describe pod <pod-name>
Common Problems When Building a K8s Cluster
- Fixing "No resources found" from kubectl get pods
vim /etc/kubernetes/apiserver
- Find the admission-control list ("--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"), remove ServiceAccount from it, then save and exit. Restart the service with systemctl restart kube-apiserver.
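The same edit can be scripted with sed; the sketch below works on a throwaway copy so the real /etc/kubernetes/apiserver is untouched:

```shell
# demonstrate the edit on a temporary file (apply the same sed to /etc/kubernetes/apiserver)
cfg=$(mktemp)
echo 'KUBE_API_ARGS="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"' > "$cfg"
sed -i 's/ServiceAccount,//' "$cfg"   # drop ServiceAccount from the comma-separated list
cat "$cfg"
```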
- Image pulls failing
First, set up a private Docker registry
# Set up the private registry
docker pull registry
docker run -di --name=registry -p 5000:5000 registry
Edit daemon.json and add {"insecure-registries":["192.168.56.121:5000"]}
Restart the Docker service: systemctl restart docker
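After the restart, the registry's v2 catalog endpoint is an easy way to confirm both that the registry container is up and that it is reachable:

```shell
# lists repositories pushed so far; a fresh registry returns {"repositories":[]}
curl -s http://192.168.56.121:5000/v2/_catalog
```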
1. yum install *rhsm* -y
2. docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
If the first two steps solved the problem, the remaining steps are not needed.
3. docker search pod-infrastructure
4. docker pull docker.io/tianyebj/pod-infrastructure
5. docker tag tianyebj/pod-infrastructure 192.168.56.121:5000/pod-infrastructure
6. docker push 192.168.56.121:5000/pod-infrastructure
7. vi /etc/kubernetes/kubelet
Change KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.56.121:5000/pod-infrastructure:latest"
8. Restart the services
systemctl restart kube-apiserver
systemctl restart kube-controller-manager
systemctl restart kube-scheduler
systemctl restart kubelet
systemctl restart kube-proxy
- Solution 2
1. docker pull kubernetes/pause
2. docker tag docker.io/kubernetes/pause:latest 192.168.56.121:5000/google_containers/pause-amd64.3.0
3. docker push 192.168.56.121:5000/google_containers/pause-amd64.3.0
4. vi /etc/kubernetes/kubelet and set
KUBELET_ARGS="--pod-infra-container-image=192.168.56.121:5000/google_containers/pause-amd64.3.0"
5. Restart the kubelet service: systemctl restart kubelet
View the kubelet logs
journalctl -u kubelet -n 1000