Kubernetes Installation Manual (Ubuntu, Non-HA, CNI: Calico)

CKA Exam Environment Setup: Kubernetes Installation Manual with Calico (Ubuntu, Non-HA)

Pre-Installation Preparation

1. Configure hosts resolution

Nodes: run on all nodes (k8s-master, k8s-node-1, k8s-node-2)

cat >>/etc/hosts<<EOF
172.21.51.5 k8s-master
172.21.51.6 k8s-node-1
172.21.51.7 k8s-node-2
EOF
  • Change the hostname
    The hostname may only contain lowercase letters, digits, "." and "-", and must start and end with a lowercase letter or digit.
# On the master node
hostnamectl set-hostname k8s-master && bash   # set the master node's hostname
# On k8s-node-1 and k8s-node-2 respectively
hostnamectl set-hostname k8s-node-1 && bash
hostnamectl set-hostname k8s-node-2 && bash

2. Adjust system configuration

Nodes: run on all master and worker nodes (k8s-master, k8s-node-1, k8s-node-2)

The steps below use k8s-master as the example; the other nodes require the same operations (substitute each machine's actual IP and hostname).

  • Configure iptables

iptables -P FORWARD ACCEPT
/etc/init.d/ufw stop
  • Disable swap
swapoff -a
# Prevent the swap partition from being mounted automatically at boot
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
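
To confirm that swap is really off after these two commands, a quick check such as the following can be used (swapon prints nothing and free shows 0 for Swap when no swap is active):
swapon --show
free -m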

  • Tune kernel parameters
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
vm.max_map_count=262144
EOF

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
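
Note that modprobe only loads br_netfilter for the current boot. If you also want the module loaded automatically after a reboot, one common approach on systemd-based Ubuntu (a suggestion, not part of the original steps) is to register it in /etc/modules-load.d and then verify the settings took effect:
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
# Verify the module is loaded and the sysctl values are applied
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
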
  • Configure the apt sources
# Replace the apt sources
$ vi /etc/apt/sources.list
deb http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse

$ apt-get update && apt-get install -y apt-transport-https ca-certificates software-properties-common 
$ curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
$ curl -fsSL https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
$ add-apt-repository "deb [arch=amd64] https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
$ tee /etc/apt/sources.list.d/kubernetes.list <<-'EOF'
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
EOF

$ apt-get update   
# If the previous step fails with a NO_PUBKEY error, see https://www.cnblogs.com/jiangzuo/p/13667011.html


-----------------------------------------------------------------------------------------------------
Note:
# If apt reports that the lock is held by another process:
$ apt-get update && apt-get install -y apt-transport-https ca-certificates software-properties-common 
Reading package lists... Done
E: Could not get lock /var/lib/apt/lists/lock - open (11: Resource temporarily unavailable)
E: Unable to lock directory /var/lib/apt/lists/

# Find which processes are holding the lock and kill them:
$  ps -e | grep apt
  3271 ?        00:00:00 apt.systemd.dai
  3295 ?        00:00:00 apt.systemd.dai


$  kill -9 `ps -e | grep apt | awk '{print $1}'`

3. Install Docker

Nodes: all nodes

$ apt-get install docker-ce=5:19.03.9~3-0~ubuntu-bionic
## Start Docker and enable it at boot
$ systemctl enable docker && systemctl start docker
$ ps aux | grep docker
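
Optionally, kubeadm's documentation recommends the systemd cgroup driver when Docker is the container runtime. If you want to switch it, do so before running kubeadm init so the driver is detected correctly; a minimal /etc/docker/daemon.json along these lines can be used (this file and its contents are a suggestion, not part of the original steps):
$ cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
$ systemctl restart docker
$ docker info | grep -i 'cgroup driver'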

Deploying Kubernetes

1. Install kubeadm, kubelet, and kubectl

Nodes: run on all master and worker nodes (k8s-master, k8s-node-1, k8s-node-2)

$ apt-get install kubelet=1.18.8-00 kubectl=1.18.8-00 kubeadm=1.18.8-00
## Check the kubeadm version
$ kubeadm version
## Enable kubelet at boot
$ systemctl enable kubelet
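
To keep apt from silently upgrading these pinned versions later (for example during an unattended upgrade), you can optionally hold the packages:
$ apt-mark hold kubelet kubeadm kubectl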

2. Generate the init configuration file

Nodes: run only on the master node (k8s-master)

$ kubeadm config print init-defaults > kubeadm.yaml
$ vim kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.21.51.5   # change to the master node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # change to a China mirror
kind: ClusterConfiguration
kubernetesVersion: v1.18.8    # change to v1.18.8
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16    # add the pod network CIDR
  serviceSubnet: 10.96.0.0/12
scheduler: {}

3. Pre-pull the images

Nodes: run only on the master node (k8s-master)

# Pull the images to the local machine ahead of time
$ kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.8
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.8
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.8
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.18.8
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.2
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.3-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.6.7

4. Initialize the master node

Nodes: run only on the master node (k8s-master)

$ kubeadm init --config kubeadm.yaml

If initialization succeeds, output like the following is printed at the end:


Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.21.51.5:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1688f8550a8104fd31c3db80e825ab04b4d69f255728364c67fe0866bd90f279 

Next, follow the instructions above to configure the kubectl client credentials:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

**⚠️ Note:** at this point kubectl get nodes will show the node as NotReady, because the network plugin has not been installed yet.

If the init step fails, fix whatever the error message indicates, run kubeadm reset, and then run the init command again.

5. Install the Calico network plugin

Nodes: run only on the master node (k8s-master)

  • Download the Calico (Canal) manifest
curl https://docs.projectcalico.org/v3.10/manifests/canal.yaml -O
  • Create the resources
kubectl apply -f canal.yaml
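
It usually takes a minute or two for the Canal/Calico pods to start. You can watch them come up and confirm the master node flips to Ready with something like:
$ kubectl -n kube-system get pods -o wide -w
$ kubectl get nodes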

6. Join the worker nodes to the cluster

# Run the following on each worker node (k8s-node-1 and k8s-node-2).
# Generate the node's kubeadm-config.yaml file:
$ kubeadm config print join-defaults > kubeadm-config.yaml

# Edit the manifest
$ vim kubeadm-config.yaml 
apiVersion: kubeadm.k8s.io/v1beta2
caCertPath: /etc/kubernetes/pki/ca.crt
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.21.51.5:6443 # change to the api-server address
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
  timeout: 5m0s
  tlsBootstrapToken: abcdef.0123456789abcdef
kind: JoinConfiguration
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-node-1 # set to each node's own hostname
  taints: null
  
# Join the cluster
$ kubeadm join --config kubeadm-config.yaml

# Verify (on the master)
root@k8s-master:~# kubectl get no
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   31m   v1.18.8
k8s-node-1   Ready    <none>   30m   v1.18.8
k8s-node-2   Ready    <none>   30m   v1.18.8
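
If the token from kubeadm.yaml has expired by the time a node joins (the default TTL is 24h), you can generate a fresh join command on the master instead of editing the config file:
$ kubeadm token create --print-join-command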

7. Make the master node schedulable

Nodes: k8s-master

By default the master node does not schedule workload pods after deployment. To let the master participate in pod scheduling as well, run:

$ kubectl taint node k8s-master node-role.kubernetes.io/master:NoSchedule-
# To undo (restore the taint)
kubectl taint node k8s-master node-role.kubernetes.io/master=:NoSchedule
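
To check whether the master taint is currently set, you can inspect the node description:
$ kubectl describe node k8s-master | grep -i taint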

8. Verify the cluster

Nodes: run on the master node (k8s-master)

$ kubectl get nodes  # check that all cluster nodes are Ready

Create a test nginx pod

$ kubectl run  test-nginx --image=nginx:alpine

Check that the pod was created successfully, then curl the pod IP to verify it responds:

root@k8s-master:~# kubectl get po -o wide -w
NAME         READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
test-nginx   1/1     Running   0          84s   10.244.2.2   k8s-node-2   <none>           <none>


$ curl 10.244.2.2
...
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
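
Optionally, you can also expose the pod as a ClusterIP Service to verify Service networking, then remove the test resources once everything looks good (the Service created below, named test-nginx after the pod, is just an example and not part of the original steps):
$ kubectl expose pod test-nginx --port=80
$ kubectl get svc test-nginx
$ curl <service-cluster-ip>
# Clean up the test resources
$ kubectl delete svc test-nginx
$ kubectl delete pod test-nginx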

9. Clean up the environment

If other problems come up during installation, the following commands can be used to reset the environment:

$ kubeadm reset
$ ifconfig cni0 down && ip link delete cni0
$ ifconfig flannel.1 down && ip link delete flannel.1
$ rm -rf /var/lib/cni/
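
Note that kubeadm reset does not remove the CNI configuration or the local kubectl credentials; for a fully clean node, something like the following (run on every node where it applies) may also be needed:
$ rm -rf /etc/cni/net.d
$ rm -rf $HOME/.kube/config
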
Exam environment preparation

# System version
$ cat /proc/version
Linux version 4.15.0-112-generic (buildd@lcy01-amd64-027) (gcc version 7.5.0 (Ubuntu 7.5.0-3ubuntu1~18.04)) #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020


# User setup; unless stated otherwise, run on all nodes
$ useradd -m student -s /bin/bash
$ passwd student



# Configure sudo privileges
$ grep "^root" /etc/sudoers -n
20:root	ALL=(ALL:ALL) ALL
$ sed -i '20a   student  ALL=(ALL:ALL) NOPASSWD:ALL' /etc/sudoers

# Check
$ grep "^root"  /etc/sudoers -A 2
root	ALL=(ALL:ALL) ALL
student  ALL=(ALL:ALL) NOPASSWD:ALL


# If a password is still required, edit as follows
# User privilege specification
root    ALL=(ALL:ALL) ALL
student  ALL=(ALL:ALL) NOPASSWD:ALL

# Members of the admin group may gain root privileges
%admin ALL=(ALL) NOPASSWD:ALL
Note:
# The first ALL is the set of hosts the rule applies to; the ALLs inside the parentheses are the run-as user and group, i.e. whose identity the command is executed with; the final ALL is the set of commands that may be run.

# Configure passwordless SSH login
# First configure hosts resolution
cat >>/etc/hosts<<EOF
172.21.51.5 k8s-master
172.21.51.6 k8s-node-1
172.21.51.7 k8s-node-2
EOF
# Then generate a key pair on k8s-master as the student user and copy the public key to each worker node
 su - student
 ssh-keygen -t rsa
 ssh-copy-id k8s-node-1
 ssh-copy-id k8s-node-2
# Also authorize the key locally so later exam exercises can SSH back into k8s-master itself
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
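
A quick way to confirm that passwordless login works from the student account is to run a remote command on each node; it should print the hostname without prompting for a password:
ssh k8s-node-1 hostname
ssh k8s-node-2 hostname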

# Install the k8s cluster; see the Ubuntu non-HA installation manual above


# k8s version
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:10:16Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

# Copy the k8s credentials
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  
 # Verify
student@k8s-master:~$ kubectl get no
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   51m   v1.18.8
k8s-node-1   Ready    <none>   50m   v1.18.8
k8s-node-2   Ready    <none>   50m   v1.18.8

The sudo command

Running commands as another identity

$ sudo su -
# env | grep -E '(HOME|SHELL|USER|LOGNAME|^PATH|PWD|TEST_ETC|TEST_ZSH|TEST_PRO|TEST_BASH|TEST_HOME|SUDO)'
This command is effectively a fresh login as the root superuser, except that the password asked for is the current user's. Importantly, it re-reads system configuration files such as /etc/profile and /etc/bashrc, and also loads the configuration files that match root's $SHELL: if root's $SHELL is /bin/bash it loads /root/.bashrc and so on; if it is /bin/zsh it loads /root/.zshrc. After running it you are in a completely root environment.

$ sudo -i
# env | grep -E '(HOME|SHELL|USER|LOGNAME|^PATH|PWD|TEST_ETC|TEST_ZSH|TEST_PRO|TEST_BASH|TEST_HOME|SUDO)'
This command is essentially the same as sudo su -: afterwards you are also in the root superuser's environment, except that a bit more of the current user's information is carried over.

$ sudo -s
# env|grep -E '(HOME|SHELL|USER|LOGNAME|^PATH|PWD|TEST_ETC|TEST_ZSH|TEST_PRO|TEST_BASH|TEST_HOME|SUDO)'  --color
This command opens a non-login root shell using the current user's $SHELL and does not load system configuration such as /etc/profile, so the TEST_ETC variable defined in /etc/profile is no longer visible. It does load root's per-shell configuration files, for example /root/.zshrc when root's $SHELL is /bin/zsh, and it does not change the current working directory.

sudo is configured by editing the /etc/sudoers file. Only the superuser can modify it, and it should be edited with visudo, for two reasons: visudo prevents two users from editing the file at the same time, and it performs limited syntax checking.
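
Since the sed command earlier edits /etc/sudoers directly rather than going through visudo, it is worth validating the file's syntax afterwards; visudo has a check-only mode for exactly this:
$ visudo -c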