Setting up k8s and basic usage

Environment

Three CentOS 7 machines
192.168.72.128 master
192.168.72.129 node1
192.168.72.132 node2

Steps on both the master and the node machines

	1 Sync the time
		ntpdate pool.ntp.org
	2 Stop the firewall and SELinux (a persistence sketch follows step 6)
		systemctl stop firewalld
		setenforce 0
	3 Change the hostname
		hostname master    (run on master)
		hostname node1     (run on node1)
		hostname node2     (run on node2)
	4 Edit the /etc/hosts file (on every machine, with the same content)
		vim /etc/hosts
		192.168.72.128 master
		192.168.72.129 node1
		192.168.72.132 node2
	5 Install Docker
		Upload a docker.repo file, or:
		yum -y install yum-utils device-mapper-persistent-data lvm2
		yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo    then install
		Install a specific version
		yum -y install docker-ce-18.09.7 docker-ce-cli-18.09.7 containerd.io
		Check the version: docker --version
		Start it and enable it at boot
		systemctl start docker
		systemctl enable docker
		Configure the Docker daemon (set the cgroup driver to systemd)
		vim /etc/docker/daemon.json
			{
  			"exec-opts": ["native.cgroupdriver=systemd"]
			}
		Restart Docker: systemctl restart docker
		docker info |grep Cgroup
			Cgroup Driver: systemd
	6 Install kubeadm
		(1) Configure the Kubernetes yum repo (using the Aliyun mirror here)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        	https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF



# yum makecache
		(2) Install kubelet, kubectl, and kubeadm
			yum -y install kubelet-1.15.2 kubeadm-1.15.2 kubectl-1.15.2
			Verify the installation: rpm -qa kubelet kubectl kubeadm
				kubectl-1.15.2-0.x86_64
				kubelet-1.15.2-0.x86_64
				kubeadm-1.15.2-0.x86_64
		(3) Enable kubelet at boot.
		   Do not start it right after installation (the cluster has not been created yet).
			 systemctl enable kubelet
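
		Note: "systemctl stop firewalld" and "setenforce 0" in step 2 only last until the next reboot.
		A sketch of making them persistent on CentOS 7 (assumes the stock config file locations):
			systemctl disable firewalld                                            # do not start firewalld at boot
			sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config  # match setenforce 0 across reboots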



Initialize the master (run on the master node)

The kubeadm --help output shows that kubeadm init initializes a master node,
and kubeadm join then adds a node to the cluster.
[root@master ~]# cat /etc/sysconfig/kubelet 
	KUBELET_EXTRA_ARGS="--fail-swap-on=false"
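
Instead of telling kubelet to tolerate swap (--fail-swap-on=false above), swap can also be
disabled outright on every node; a sketch assuming the default CentOS 7 /etc/fstab layout:
	swapoff -a                           # turn swap off immediately
	sed -i '/ swap / s/^/#/' /etc/fstab  # comment out the swap entry so it stays off after reboot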


## Start the initialization
[root@master ~]# kubeadm init --kubernetes-version=v1.15.2 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap

		......
	Your Kubernetes control-plane has initialized successfully!

	To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.72.128:6443 --token damed3.0ow9o2c5sbw4xa3d \
    --discovery-token-ca-cert-hash sha256:6609879475365b92ee1d98d2cf187ff4add2e6ae9e0b3eed869036cb51ff17ce 
Seeing this output means the initialization succeeded; move on to the next steps.


These commands can all be found in the init output above; just copy and paste them.
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
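With the kubeconfig in place, a quick check that kubectl can reach the API server (not from the
original write-up, just a common sanity check):
  kubectl get componentstatuses    # scheduler, controller-manager and etcd should report Healthy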
[root@master ~]# docker image ls   # after initialization the required images have been pulled
	REPOSITORY                                                         TAG                 IMAGE ID            CREATED             SIZE
	registry.aliyuncs.com/google_containers/kube-apiserver            v1.15.2             34a53be6c9a7        15 months ago       207MB
	registry.aliyuncs.com/google_containers/kube-controller-manager   v1.15.2             9f5df470155d        15 months ago       159MB
	registry.aliyuncs.com/google_containers/kube-scheduler            v1.15.2             88fa9cb27bd2        15 months ago       81.1MB
	registry.aliyuncs.com/google_containers/kube-proxy                v1.15.2             167bbf6c9338        15 months ago       82.4MB
	registry.aliyuncs.com/google_containers/coredns                   1.3.1               eb516548c180        22 months ago       40.3MB
	registry.aliyuncs.com/google_containers/etcd                      3.3.10              2c4adeb21b4f        23 months ago       258MB
	registry.aliyuncs.com/google_containers/pause                     3.1                 da86e6ba6ca1        2 years ago         742kB


Add the flannel network component: place the kube-flannel.yml file under /root (see the download sketch below) and apply it
kubectl apply -f  kube-flannel.yml
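
If kube-flannel.yml is not already on the machine, it can usually be downloaded from the flannel
project first (this URL was the commonly used one at the time and may have moved since):
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml -O /root/kube-flannel.yml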

# Verify that the flannel network plugin deployed successfully (Running means success)
[root@master ~]# kubectl get pods -n kube-system |grep flannel
kube-flannel-ds-765qc            1/1     Running   0          2m34s
	

Join the node machines

To add a new node to the cluster, run the kubeadm join command printed by
kubeadm init, appending the same swap-ignore parameter if needed.
1) Configure kubelet to ignore the swap error
[root@node1 ~]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"

[root@node2 ~]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
2) Join node1
kubeadm join 192.168.72.128:6443 --token damed3.0ow9o2c5sbw4xa3d \
    --discovery-token-ca-cert-hash sha256:6609879475365b92ee1d98d2cf187ff4add2e6ae9e0b3eed869036cb51ff17ce 
(copied directly from the master init output)
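
If the token from the init output has expired (by default tokens are valid for 24 hours), a fresh
join command can be printed on the master:
[root@master ~]# kubeadm token create --print-join-command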



   [preflight] Running pre-flight checks
    [WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...


This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.


Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
# Error handling
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
# Fix
[root@node1 ~]# echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
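
The echo above does not survive a reboot; a sketch of making the bridge setting persistent:
[root@node1 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
[root@node1 ~]# sysctl --system    # reload settings from all sysctl configuration files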

Joining node2 works the same way as node1.



Check the cluster status

Run the command below on the master node to check the cluster status; output like the following means the cluster is healthy.
[root@master ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   Ready      master   9m40s   v1.15.2
k8s-node1    NotReady   <none>   28s     v1.15.2
k8s-node2    NotReady   <none>   13s     v1.15.2
Focus on the STATUS column: a node is healthy once it shows Ready. Nodes that have just joined may show NotReady for a short while until their flannel pod is running.

2) Check the client and server version information
[root@master ~]# kubectl version --short=true
Client Version: v1.15.2
Server Version: v1.15.2


3) Check cluster info
[root@master ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.72.128:6443
KubeDNS is running at https://192.168.72.128:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy


To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.



Remove a node
Sometimes a node fails and needs to be removed; the procedure is as follows.
1) Run on the master node
# kubectl drain <NODE-NAME> --delete-local-data --force --ignore-daemonsets
# kubectl delete node <NODE-NAME>
2) Run on the node being removed
# kubeadm reset
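
Note: kubeadm reset does not clean up iptables or IPVS rules. If the node will be reused, they can
be flushed manually (use with care, this clears all iptables rules on the host):
# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X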


Create a deployment controller named nginx whose pods use the nginx:1.12 image, expose container port 80, and set the replica count to 1; first run the command with --dry-run to check it for errors.
[root@master ~]# kubectl run nginx --image=nginx:1.12 --port=80 --replicas=1 --dry-run=true
[root@master ~]# kubectl run nginx --image=nginx:1.12 --port=80 --replicas=1
deployment.apps/nginx created


[root@kmaster ~]# kubectl get pods    # list all pod objects
NAME                     READY   STATUS    RESTARTS   AGE
nginx-685cc95cd4-9z4f4   1/1     Running   0          89s
### Parameter notes:
--image      the image to use.
--port       the container port to expose.
--replicas   the number of pod replicas the controller should create and maintain.
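
Instead of re-typing the kubectl run command, the same --dry-run flag can also emit a manifest to keep
and re-apply later (a sketch; the file name nginx-deploy.yaml is just an example):
[root@master ~]# kubectl run nginx --image=nginx:1.12 --port=80 --replicas=1 --dry-run=true -o yaml > nginx-deploy.yaml
[root@master ~]# kubectl apply -f nginx-deploy.yaml    # create or update the deployment from the saved file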




3) Access the pod
The pod deployed here runs nginx, so we can check whether it responds. A pod's IP address can be reached directly from any node in the kubernetes cluster.
[root@master ~]# kubectl get pods -o wide    # show detailed pod information
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
nginx-685cc95cd4-9z4f4   1/1     Running   0          88m   10.244.1.12   k8s-node1   <none>           <none>


[root@master ~]# curl 10.244.1.12    # access from the cluster's master node
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>


<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>


<p><em>Thank you for using nginx.</em></p>
</body>
</html>




[root@node2 ~]# curl 10.244.1.12    # access from a node in the cluster
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>


<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>


<p><em>Thank you for using nginx.</em></p>
</body>
</html>

The access above goes to a single pod. If that pod dies unexpectedly, or the node it runs on goes down, the deployment controller immediately creates a new pod, and the old IP can no longer be reached; we cannot go look up the new IP on a node every time. Test as follows:
[root@master ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
nginx-685cc95cd4-9z4f4   1/1     Running   0          99m   10.244.1.12   k8s-node1   <none>           <none>


[root@master ~]# kubectl delete pods nginx-685cc95cd4-9z4f4    # delete the pod above
pod "nginx-685cc95cd4-9z4f4" deleted


[root@master ~]# kubectl get pods -o wide    # right after the old pod was deleted, the deployment controller created a new one, this time scheduled on k8s-node2
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
nginx-685cc95cd4-z5z9p   1/1     Running   0          89s   10.244.2.14   k8s-node2   <none>           <none>




[root@master ~]# curl 10.244.1.12    # the old pod's IP is no longer reachable
curl: (7) Failed connect to 10.244.1.12:80; No route to host
[root@k8s-master ~]#
[root@k8s-master ~]# curl 10.244.2.14    # the new pod responds normally
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>


<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>


<p><em>Thank you for using nginx.</em></p>
</body>
</html>
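
Because pod IPs change like this, the usual next step is to put a Service in front of the deployment
so clients get a stable virtual IP; a minimal sketch:
[root@master ~]# kubectl expose deployment nginx --port=80    # creates a ClusterIP service named nginx
[root@master ~]# kubectl get svc nginx                        # note the CLUSTER-IP, then curl it from any node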



