Deploying a k8s Cluster and Key Techniques

I. Deploying the Kubernetes Cluster
1. Install Docker (all nodes)

Install some required system tools:

[root@master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2

Add the repository information:

[root@master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Add the following configuration file (otherwise image pulls will fail):

[root@master docker]# vim /etc/docker/daemon.json 
{
  "registry-mirrors": ["https://e8v5l063.mirror.aliyuncs.com"]
}

Update the package cache and install docker-ce:

[root@master ~]# yum makecache fast
[root@master ~]# yum install -y --nogpgcheck docker-ce

Start and enable the Docker service:

[root@master ~]# systemctl start  docker && systemctl enable docker
[root@master ~]# systemctl status docker
2. Install k8s (all nodes)

Configure the yum repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Disable the firewall:

[root@master ~]# systemctl disable firewalld && systemctl stop firewalld
[root@master ~]# systemctl status  firewalld

Disable SELinux:

[root@master ~]# setenforce 0
setenforce: SELinux is disabled
Install kubelet, kubeadm, and kubectl:

kubelet runs on every node in the cluster and is responsible for starting Pods and containers;
kubeadm is used to initialize the cluster;
kubectl is the Kubernetes command-line tool. With kubectl you can deploy and manage applications, inspect resources, and create, delete, and update components.

[root@master ~]# yum install -y --nogpgcheck kubelet kubeadm kubectl

kubelet cannot be started yet because its configuration is missing; for now, just enable it to start on boot:

[root@master ~]# systemctl enable kubelet
3. Create the cluster with kubeadm

At least two CPUs are required, otherwise kubeadm will report an error. Check with the following commands:

[root@master ~]# cat /proc/cpuinfo | grep name |cut -f2 -d: | uniq -c
      4  Intel(R) Core(TM) i5-6200U CPU @ 2.30GHz
[root@master ~]# cat /proc/cpuinfo | grep "processor" | sort |  wc -l	# count the CPUs
4
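The requirement above can be wrapped in a small preflight sketch (the check_cpus helper is hypothetical; kubeadm runs an equivalent check itself during init):

```shell
# Hypothetical preflight helper: kubeadm init refuses to run with fewer
# than 2 CPUs unless --ignore-preflight-errors=NumCPU is passed.
check_cpus() {
  local cpus="${1:-$(nproc)}"   # allow passing a count, for testing
  if [ "$cpus" -lt 2 ]; then
    echo "FAIL: kubeadm needs >= 2 CPUs, found $cpus"
    return 1
  fi
  echo "OK: $cpus CPUs"
}
```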

Configure hostname resolution (all nodes):

[root@master ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.106 master
192.168.0.107 node1
192.168.0.108 node2
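To sanity-check entries like these, here is a tiny sketch that looks a hostname up in hosts-format text (the hosts_ip helper is hypothetical):

```shell
# Hypothetical helper: print the IP mapped to a hostname in
# /etc/hosts-style text (first match wins, comments skipped).
hosts_ip() {
  local hosts_text="$1" name="$2"
  echo "$hosts_text" | awk -v n="$name" '
    $1 !~ /^#/ { for (i = 2; i <= NF; i++) if ($i == n) { print $1; exit } }'
}
```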

Make sure the kernel's built-in bridge netfilter is enabled; it works through iptables. /proc/sys/net/bridge/bridge-nf-call-iptables only appears after Docker is installed, and its initial value is 0. (all nodes)

[root@master ~]# echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
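Note that this echo does not survive a reboot. A persistent variant, sketched here assuming the standard sysctl.d layout, writes a configuration fragment instead:

```shell
# Persist the bridge netfilter settings across reboots (requires root).
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system   # reload all sysctl configuration files
```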

Disable swap; if swap is enabled, kubelet cannot start (all nodes):

[root@master ~]# vim /etc/fstab 
#
# /etc/fstab
# Created by anaconda on Mon Jan 13 14:02:49 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=91279967-40de-45d0-876f-33c582a30122 /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
[root@master ~]# swapoff -a && sysctl -w vm.swappiness=0
vm.swappiness = 0
[root@master ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           3770        1554         289          18        1926        1951
Swap:             0           0           0
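The result can be verified programmatically; a minimal sketch (the swap_is_off helper is hypothetical and parses free -m output like the one above):

```shell
# Hypothetical check: kubelet refuses to start while swap is enabled,
# so confirm the Swap total column of `free -m` reads 0.
swap_is_off() {
  local free_output="${1:-$(free -m)}"   # allow passing text, for testing
  local total
  total=$(echo "$free_output" | awk '/^Swap:/ {print $2}')
  [ "$total" = "0" ]
}
```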
Initialize the master

Check the kubeadm version:

[root@master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:12:12Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}

--image-repository: where images are pulled from. The default is k8s.gcr.io; we point it at a mirror in China instead: registry.aliyuncs.com/google_containers.

--kubernetes-version: the Kubernetes version. The default, stable-1, makes kubeadm fetch the latest version number from https://dl.k8s.io/release/stable-1.txt; pinning a fixed version (v1.17.3) skips that network request.

--apiserver-advertise-address: which interface the master uses to communicate with the other nodes in the cluster. If the master has multiple interfaces it is best to specify one explicitly; otherwise kubeadm picks the interface that has the default gateway.

--pod-network-cidr: the Pod network range. Kubernetes supports several network add-ons, and each has its own requirements for --pod-network-cidr. We set 10.244.0.0/16 because we will use the flannel network add-on, which requires this CIDR.

[root@master ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.17.3 --apiserver-advertise-address 192.168.0.106 --pod-network-cidr=10.244.0.0/16

Output like the following means the cluster was created successfully; copy the last part of it, as it is needed in a later step:

...... output omitted ......
You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.0.106:6443 --token rn816q.zj0crlasganmrzsr --discovery-token-ca-cert-hash sha256:e339e4dbf6bd1323c13e794760fff3cbeb7a3f6f42b71d4cb3cffdde72179903

List the images Docker downloaded during initialization:

[root@master ~]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.17.3             ae853e93800d        8 days ago          116MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.17.3             90d27391b780        8 days ago          171MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.17.3             b0f1517c1f4b        8 days ago          161MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.17.3             d109c0821a2b        8 days ago          94.4MB
registry.aliyuncs.com/google_containers/coredns                   1.6.5               70f311871ae1        3 months ago        41.6MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0             303ce5db0e90        3 months ago        288MB
registry.aliyuncs.com/google_containers/pause                     3.1                 da86e6ba6ca1        2 years ago         742kB
Configure kubectl (all nodes)

These commands were copied from the output of the kubeadm init command above; run them now:

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Enable bash completion for kubectl:

[root@master ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc

kubectl now works; check the status of the cluster components:

[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"} 
Install the Pod network (all nodes)

First edit the hosts file; otherwise raw.githubusercontent.com is unreachable:

[root@master ~]# vim /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.106 master
192.168.0.107 node1
192.168.0.108 node2
199.232.28.133 raw.githubusercontent.com

Deploy flannel:

[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Restart kubelet:

[root@master ~]# systemctl restart kubelet

Once the images finish downloading, the node shows as Ready; at this point only the master is listed:

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   7h6m   v1.17.3

Check the Pods:

If a Pod's status is wrong, it is because an image is missing; skip that for now. Inspect a specific Pod with the commands below:

[root@master ~]# kubectl describe po kube-flannel-ds-amd64-bm4n6 -n kube-system
[root@master ~]# kubectl get po --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-9d85f5447-rpxtn          1/1     Running   0          7h7m
kube-system   coredns-9d85f5447-vm5sz          1/1     Running   0          7h7m
kube-system   etcd-master                      1/1     Running   0          7h7m
kube-system   kube-apiserver-master            1/1     Running   0          7h7m
kube-system   kube-controller-manager-master   1/1     Running   1          7h7m
kube-system   kube-flannel-ds-amd64-bm4n6      1/1     Running   0          4h3m
kube-system   kube-proxy-hzb5w                 1/1     Running   0          7h7m
kube-system   kube-scheduler-master            1/1     Running   1          7h7m
Add node1 and node2

The following command comes from the earlier kubeadm init output; if it was not saved at the time, the token can be recovered with kubeadm token list.
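If the --discovery-token-ca-cert-hash value was also lost, it can be recomputed from the cluster CA certificate. A sketch (the ca_cert_hash helper name is hypothetical; the openssl pipeline produces the sha256 format kubeadm expects):

```shell
# Recompute the sha256 discovery hash from the cluster CA certificate
# (on the master this file is /etc/kubernetes/pki/ca.crt).
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
# Usage: kubeadm join ... --discovery-token-ca-cert-hash sha256:$(ca_cert_hash /etc/kubernetes/pki/ca.crt)
```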

Run the following on node1 and node2 to register them with the cluster:

[root@node1 ~]# kubeadm join 192.168.0.106:6443 --token a7xj3i.1843223n9atovprf   --discovery-token-ca-cert-hash sha256:3895f17a90b8a5bd7533ff40c51f1af993313e3d8cc821494aee83c453c30567 

Output like the following indicates success:

...... output omitted ......
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Check the nodes:

This takes a while: each node must download four images (flannel, coredns, kube-proxy, pause).

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   7h20m   v1.17.3
node1    NotReady <none>   176m    v1.17.0
node2    NotReady <none>   4h15m   v1.17.0

Inspect node1, the Pods, and the images:

# on the master
[root@master ~]# kubectl describe node node1
[root@master ~]# kubectl get po --all-namespaces
# on node1
[root@node1 ~]# docker images

The flannel image fails to download; resolve it as follows.

The image needed is quay.io/coreos/flannel:v0.11.0-amd64:

[root@node1 ~]# docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
[root@node1 ~]# docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64

Check the Pod status again; it should now be fine.

At this point the cluster deployment is complete.

4. Addendum: removing a node

First put the node into maintenance mode (host1 is the node name):

[root@ken ~]# kubectl drain host1 --delete-local-data --force --ignore-daemonsets
node/host1 cordoned
WARNING: Ignoring DaemonSet-managed pods: kube-flannel-ds-amd64-ssqcl, kube-proxy-7cnsr
node/host1 drained

Then delete the node:

[root@ken ~]# kubectl delete node host1
node "host1" deleted

List the nodes; host1 has been removed:

[root@ken ~]# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
host2   Ready    <none>   13m   v1.13.2
ken     Ready    master   49m   v1.13.2

To add this node back into the cluster later, do the following.

Stop kubelet (run on host1):

[root@host1 ~]# systemctl stop kubelet

Remove the related files:

[root@host1 ~]# rm -rf /etc/kubernetes/*

Rejoin the node:

[root@host1 ~]# kubeadm join 172.20.10.2:6443 --token rn816q.zj0crlasganmrzsr --discovery-token-ca-cert-hash sha256:e339e4dbf6bd1323c13e794760fff3cbeb7a3f6f42b71d4cb3cffdde72179903

Check the nodes:

[root@ken ~]# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
host1   Ready    <none>   13s   v1.13.2
host2   Ready    <none>   17m   v1.13.2
ken     Ready    master   53m   v1.13.2
II. Commands Used
# change --max-pods=220 on a node
[l.he@hermes-2-0 system]$ vim /etc/systemd/system/kubelet.service
# scale pod replicas up or down:
[root@master ~]# kubectl scale deploy/nginx --replicas=5
deployment.apps/nginx scaled
# list resource short names (print the supported API resources on the server):
[root@master ~]# kubectl api-resources 
#Print the supported API versions on the server, in the form of "group/version"
[root@master manifests]# kubectl api-versions | grep batch
batch/v1
batch/v1beta1
# set and remove a taint (effects: NoSchedule, PreferNoSchedule, or NoExecute):
[root@master ~]# kubectl taint node master node-role.kubernetes.io/master=:NoExecute
node/master tainted
[root@master ~]# kubectl taint node master node-role.kubernetes.io/master-		# remove
node/master untainted
# manage labels:
[root@master ~]# kubectl label node node1 disktype=ssd		# add
node/node1 labeled
[root@master ~]# kubectl get node --show-labels
[root@master ~]# kubectl label node node1 disktype-		# delete
node/node1 labeled
# deploy an application:
[root@master ~]# kubectl run redis --image=redis --replicas=2
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/redis created
[root@master ~]# kubectl get deploy -n default
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
redis   2/2     2            2           3m37s
[root@master ~]# kubectl get po -n default 
NAME                     READY   STATUS        RESTARTS   AGE
redis-5c7c978f78-drbjs   1/1     Running       0          2m36s
redis-5c7c978f78-mp4l7   1/1     Running       0          2m36s
# port mapping, so the pod can be reached from outside the cluster:
[root@master ~]# kubectl expose deploy/redis --type="NodePort" --port=1234
service/redis exposed
[root@master ~]# kubectl get svc -n default
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
redis        NodePort    10.102.151.253   <none>        1234:30765/TCP   13s
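The high port in the PORT(S) column (30765 here) is the one reachable on every node's IP. A tiny sketch for extracting it (the node_port helper is hypothetical):

```shell
# Hypothetical helper: extract the NodePort from a PORT(S) value
# such as "1234:30765/TCP" printed by `kubectl get svc`.
node_port() {
  echo "$1" | sed -E 's#^[0-9]+:([0-9]+)/.*#\1#'
}
# e.g. curl http://<any-node-ip>:$(node_port "1234:30765/TCP")
```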
# run busybox
[root@master tmp]# kubectl run busybox --rm -ti --image=busybox /bin/sh
# check a port
[o.l.he@hermes-5-5 ~]$ netstat -lanpt | grep 31898
# check a process:
[o.l.he@hermes-5-5 ~]$ ps aux | grep live555
# enter a pod
[root@master ~]# kubectl exec -it etcd-master -n kube-system -c xxx sh
# base64 encode and decode
[root@master /]# echo -n admin | base64
YWRtaW4=
[root@master /]# echo -n YWRtaW4= | base64 --decode
admin
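This is exactly how Kubernetes Secrets store their values. A small round-trip sketch (the helper names are hypothetical):

```shell
# Encode/decode helpers matching what Kubernetes Secrets expect.
# printf '%s' avoids the trailing newline that a plain echo would add,
# which would otherwise change the encoded value.
b64enc() { printf '%s' "$1" | base64; }
b64dec() { printf '%s' "$1" | base64 --decode; }
```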
1. Rolling Update
# deploy an application from a yaml file:
[root@master httpd]# kubectl apply -f httpd-v1.yaml --record	# deploy and record the revision info
# view the revision history:
[root@master httpd]# kubectl rollout history deploy httpd
deployment.apps/httpd 
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=httpd-v1.yaml --record=true
2         kubectl apply --filename=httpd-v2.yaml --record=true
3         kubectl apply --filename=httpd-v3.yaml --record=true
# roll back to a revision:
[root@master httpd]# kubectl rollout undo deploy httpd --to-revision=1
deployment.apps/httpd rolled back
2. helm
Install helm with internet access:
[root@master /]# curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7150  100  7150    0     0   2294      0  0:00:03  0:00:03 --:--:--  2295
Helm v2.16.3 is available. Changing from version .
Downloading https://get.helm.sh/helm-v2.16.3-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run 'helm init' to configure helm.
Offline install of helm; download from: https://github.com/helm/helm/releases
Version: helm-v2.16.3-linux-amd64.tar.gz
[root@master helm]# tar -zxvf helm-v2.16.3-linux-amd64.tar.gz
[root@master linux-amd64]# pwd
/root/Downloads/software/helm/linux-amd64
[root@master linux-amd64]# mv helm /usr/local/bin/
[root@master linux-amd64]# mv tiller /usr/local/bin/
Verify that the helm client installed correctly (tiller is not installed yet):
[root@master bin]# helm version
Client: &version.Version{SemVer:"v2.16.3", GitCommit:"1ee0254c86d4ed6887327dabed7aa7da29d7eb0d", GitTreeState:"clean"}
Error: could not find tiller
# install bash completion for helm:
[root@master ~]# echo "source <(helm completion bash)" >> ~/.bashrc
# add the azure repo to helm
[root@master ~]# helm repo add azure  http://mirror.azure.cn/kubernetes/charts/
"azure" has been added to your repositories
[root@master ~]# helm repo list
NAME  	URL                                             
stable	https://kubernetes-charts.storage.googleapis.com
local 	http://127.0.0.1:8879/charts                    
azure 	http://mirror.azure.cn/kubernetes/charts/
# customize a chart installation by applying new settings with --values or --set:
[root@master /]# helm inspect stable/mysql > /root/.helm/cache/mysql-values.yaml
[root@master /]# helm install stable/mysql --values=/root/.helm/cache/mysql-values.yaml
# helm rollback
[o.l.he@hermes-5-5 ~]$ helm history  v40-ariel-platform
REVISION UPDATED                 	STATUS  	CHART               	DESCRIPTION     
1        Sun Mar 15 17:34:57 2020	DEPLOYED	ariel-platform-3.2.0	Install complete
[o.l.he@hermes-5-5 ~]$ helm rollback v40-ariel-platform 1
# debug a chart
[root@master ~]# helm lint /root/.helm/cache/hl-test-chart/values.yaml	# check the chart's syntax
# dry-run a chart install
[root@master ~]# helm install --dry-run /root/.helm/cache/hl-test-chart/ --debug
Install a chart
# Helm supports four ways to install a chart:
(1) From a repository, e.g.: helm install stable/nginx
A few more flags can be added:
/root/tmp/o.l.he/carrier
[root@hermes-5-5 carrier]# ll -h
drwxrwxr-x 4 o.hz.cai o.hz.cai 4.0K Mar 31 17:35 carrier-inspector
[root@hermes-5-5 carrier]# helm install carrier-inspector -n helei-carrier --namespace v40-carrier
(2) From a tarball, e.g.: helm install ./nginx-1.2.3.tgz
(3) From a local chart directory, e.g.: helm install ./nginx
(4) From a URL, e.g.: helm install https://example.com/charts/nginx-1.2.3.tgz