Installing a Kubernetes Cluster on CentOS 7


Preface

Kubernetes coordinates a highly available cluster of computers that are connected to work as a single unit, automatically distributing and scheduling containerized applications across the cluster in an efficient way. A Kubernetes cluster consists of two types of resources:

  • The Master is the scheduling node of the cluster
  • Nodes are the worker machines where applications actually run

Environment Preparation and Planning

Role     IP               Components
master   192.168.56.119   etcd, kube-apiserver, kube-controller-manager, kube-scheduler, docker
node01   192.168.56.120   kube-proxy, kubelet, docker
node02   192.168.56.121   kube-proxy, kubelet, docker

If you are not sure how to set up and initialize a CentOS 7 machine, see my earlier post 【CI、CD专题】mac或win平台下VirtualBox安装centos7之配静态ip (installing CentOS 7 under VirtualBox on macOS/Windows and configuring a static IP).

Master Installation

Installing Docker

  • Install Docker
yum install docker-engine
  • Check the Docker version
docker -v

etcd service

tar -xzvf etcd-v3.3.9-linux-amd64.tar.gz
cd etcd-v3.3.9-linux-amd64
cp etcd etcdctl /usr/bin/
  • Configure the systemd unit file /usr/lib/systemd/system/etcd.service
vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
[Service]
Type=simple
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd
Restart=on-failure
[Install]
WantedBy=multi-user.target
  • Start and test the etcd service
systemctl daemon-reload
systemctl enable etcd.service
mkdir -p /var/lib/etcd/
systemctl start etcd.service
etcdctl cluster-health
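Beyond cluster-health, a quick write/read round trip confirms the store is actually usable. A minimal sketch, assuming the default client endpoint http://127.0.0.1:2379 (these subcommands belong to the v2 API, which is the default for the 3.3.x etcdctl):

```shell
etcdctl set /smoke-test ok    # store a value under a scratch key
etcdctl get /smoke-test       # prints the value back: ok
etcdctl rm /smoke-test        # remove the scratch key again
etcdctl cluster-health        # overall cluster health
```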

kube-apiserver service

After unpacking the release archive, copy the kube-apiserver, kube-controller-manager, and kube-scheduler binaries, together with the kubectl management tool, into /usr/bin; this completes the installation of these services.

cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/bin/

Next, configure the kube-apiserver service.
Edit the systemd unit file: vi /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service
[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify
[Install]
WantedBy=multi-user.target

Configuration file

Create the directory: mkdir /etc/kubernetes

vi /etc/kubernetes/apiserver

KUBE_API_ARGS="--storage-backend=etcd3 --etcd-servers=http://127.0.0.1:2379 --insecure-bind-address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range=169.169.0.0/16 --service-node-port-range=1-65535 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,DefaultStorageClass,ResourceQuota --logtostderr=true --log-dir=/var/log/kubernetes --v=2"
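Because ExecStart expands $KUBE_API_ARGS with plain word splitting, a typo in this long flag string only surfaces when the service fails to start. A sketch of checking it up front: write the EnvironmentFile (same values as above), source it the way systemd does, and print one flag per line so mistakes stand out:

```shell
# Write the EnvironmentFile (same content as in the text) and sanity-check
# it: source the file, then list each flag on its own line.
mkdir -p /etc/kubernetes
cat > /etc/kubernetes/apiserver <<'EOF'
KUBE_API_ARGS="--storage-backend=etcd3 --etcd-servers=http://127.0.0.1:2379 --insecure-bind-address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range=169.169.0.0/16 --service-node-port-range=1-65535 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,DefaultStorageClass,ResourceQuota --logtostderr=true --log-dir=/var/log/kubernetes --v=2"
EOF
. /etc/kubernetes/apiserver
echo "$KUBE_API_ARGS" | tr ' ' '\n' | grep '^--'   # one flag per line
```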

kube-controller-manager service

The kube-controller-manager service depends on the kube-apiserver service:

  • Configure the systemd unit file: vi /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
  • Configuration file vi /etc/kubernetes/controller-manager; note: change the master IP here to your own master's IP.
KUBE_CONTROLLER_MANAGER_ARGS="--master=http://192.168.56.119:8080 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"

kube-scheduler service

The kube-scheduler service also depends on the kube-apiserver service.

  • Configure the systemd unit file: vi /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
  • Configuration file: vi /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS="--master=http://192.168.56.119:8080 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"

Starting the services

systemctl daemon-reload
systemctl enable kube-apiserver.service
systemctl start kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service
Check the health of each service:
systemctl status kube-apiserver.service
systemctl status kube-controller-manager.service
systemctl status kube-scheduler.service
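With all three services reporting active, the insecure port opened above (--insecure-bind-address=0.0.0.0 --insecure-port=8080) allows a quick functional probe of the API server; a healthy server answers "ok" on the liveness endpoint:

```shell
# Liveness endpoint of the API server; expect the body "ok".
curl -s http://127.0.0.1:8080/healthz; echo
# Build and version information, returned as JSON.
curl -s http://127.0.0.1:8080/version
```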

Node1 Installation

On Node1, likewise copy the kubelet and kube-proxy binaries unpacked from the release archive into /usr/bin:

cp kubelet kube-proxy /usr/bin/

Docker must be installed on Node1 in advance; follow the Docker installation steps from the Master section, and start Docker.

kubelet service

  • Configure the systemd unit file: vi /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
  • mkdir -p /var/lib/kubelet

  • Configuration file: vi /etc/kubernetes/kubelet

KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --hostname-override=192.168.56.120 --logtostderr=true --log-dir=/var/log/kubernetes --v=2 --fail-swap-on=false --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice --require-kubeconfig=true"

The configuration file kubelet uses to connect to the Master's apiserver: vi /etc/kubernetes/kubeconfig

apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://192.168.56.119:8080
  name: local
contexts:
- context:
    cluster: local
    user: ""
  name: mycontext
current-context: mycontext

kube-proxy service

The kube-proxy service depends on the network service, so make sure network is healthy. If the network service fails to start, common fixes include the following:

1. A conflict with the NetworkManager service. This is easy to resolve: stop it with service NetworkManager stop, disable it at boot with chkconfig NetworkManager off, and reboot.
2. The MAC address in the interface configuration does not match the hardware. Look up the real MAC with ip addr (or ifconfig) and set HWADDR in /etc/sysconfig/network-scripts/ifcfg-xxx to that value.
3. Enable the NetworkManager-wait-online service at boot:
systemctl enable NetworkManager-wait-online.service
4. In /etc/sysconfig/network-scripts, delete the configuration files for unused interfaces, leaving only the single ifcfg- file you need, to avoid interference.
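For point 2 above (the MAC-address check), a small helper makes the comparison mechanical; the macs function name is ours, not a standard tool:

```shell
# Print the MAC address of every interface reported by `ip addr`
# (lines of the form "    link/ether 08:00:27:aa:bb:cc brd ...").
macs() { awk '/link\/ether/ {print $2}'; }

# On the node, compare the two outputs:
#   ip addr | macs
#   grep HWADDR /etc/sysconfig/network-scripts/ifcfg-*
```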
  • Configure the systemd unit file: vi /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.service
Requires=network.service
[Service]
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
KillMode=process
[Install]
WantedBy=multi-user.target
  • Configuration file: vi /etc/kubernetes/proxy
KUBE_PROXY_ARGS="--master=http://192.168.56.119:8080 --hostname-override=192.168.56.120 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"

Starting the services

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy

Node2 Installation

Follow the Node1 installation, taking care to change the IP addresses accordingly.

Health Checks and a Sample Deployment

  • Check the cluster nodes
kubectl get nodes
  • Check the status of the master components
[root@localhost kubernetes]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
  • nginx-rc.yaml

    kubectl create -f nginx-rc.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
  • nginx-svc.yaml

kubectl create -f nginx-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 33333
  selector:
    app: nginx
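Once both objects exist, the service can be exercised end to end. A sketch; note that nodePort 33333 lies outside the usual default range (30000-32767) and is only accepted here because the apiserver was started with --service-node-port-range=1-65535:

```shell
kubectl get svc nginx    # shows the 80:33333/TCP NodePort mapping
# The NodePort is served on every node's address:
curl -s http://192.168.56.120:33333 | head -n 4    # nginx welcome page
```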

View the pods and their details:
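The commands behind that check (the pod name passed to describe is illustrative; real names carry a generated suffix):

```shell
kubectl get pods -o wide             # which node each nginx replica landed on
kubectl describe pod nginx-abcde     # events, container state, image pulls
```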

Common Problems When Building a K8S Cluster

  • Fixing "No resources found" from kubectl get pods

    1. vim /etc/kubernetes/apiserver
    2. Find KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota", remove ServiceAccount from the list, then save and exit.
    3. Restart the service: systemctl restart kube-apiserver
  • Image pulls failing

First, set up a private Docker registry:

# Build a private registry
docker pull registry
docker run -di --name=registry -p 5000:5000 registry
Edit daemon.json to contain {"insecure-registries":["192.168.56.121:5000"]}
Restart the Docker service: systemctl restart docker
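The daemon.json step spelled out. A sketch that overwrites the file, so merge by hand if yours already holds other settings; the registry address is the one used above:

```shell
# Mark the private registry as trusted so docker will push/pull to it
# over plain HTTP instead of TLS.
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{"insecure-registries":["192.168.56.121:5000"]}
EOF
# Then reload the daemon:
#   systemctl restart docker
```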
    1. yum install *rhsm* -y

    2. docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest

    If these two steps solved the problem, the remaining steps are unnecessary.

    3. docker search pod-infrastructure

    4. docker pull docker.io/tianyebj/pod-infrastructure

    5. docker tag tianyebj/pod-infrastructure 192.168.56.121:5000/pod-infrastructure

    6. docker push 192.168.56.121:5000/pod-infrastructure

    7. vi /etc/kubernetes/kubelet

    Set KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.56.121:5000/pod-infrastructure:latest"

    8. Restart the services:

    systemctl restart kube-apiserver
    systemctl restart kube-controller-manager
    systemctl restart kube-scheduler
    systemctl restart kubelet
    systemctl restart kube-proxy
  • Alternative fix

    1. docker pull kubernetes/pause

    2. docker tag docker.io/kubernetes/pause:latest 192.168.56.121:5000/google_containers/pause-amd64.3.0

    3. docker push 192.168.56.121:5000/google_containers/pause-amd64.3.0

    4. vi /etc/kubernetes/kubelet and set

    KUBELET_ARGS="--pod-infra-container-image=192.168.56.121:5000/google_containers/pause-amd64.3.0"

    5. Restart the kubelet service: systemctl restart kubelet

Viewing kubelet logs:

journalctl -u kubelet -n 1000