Installing k8s 1.21.1 in a China-network environment ------ three VMs created on an Ubuntu 20.04 host, based on KVM/libvirt/qemu-img

Installing k8s 1.21.1 using mirrors inside China

Adapted from: https://www.huaweicloud.com/articles/4abee03caddab87496e78af2ecd35e9b.html

0. Before you begin

To follow this guide, you need:

One or more machines running a deb/rpm-compatible Linux OS, e.g. Ubuntu or CentOS.
At least 2 GB of RAM per machine; with less, applications will be constrained.
At least 2 CPUs on the machine used as the control-plane node.
Full network connectivity between all machines in the cluster; a public or private network is fine.
A version of kubeadm that can deploy the specific Kubernetes version you want in the new cluster.

The Kubernetes version and version-skew support policy applies to kubeadm as well as to Kubernetes as a whole; consult that policy to learn which versions of Kubernetes and kubeadm are supported. This page was written for Kubernetes v1.21.

The kubeadm tool's overall feature state is General Availability (GA). Some sub-features are still under active development. The implementation of cluster creation may change slightly as the tool evolves, but the overall implementation should be fairly stable.

Everything below is pulled from mirrors inside China.

1. Environment

Host: Ubuntu 20.04.2 LTS, 8 cores / 16 GB RAM / 60 GB disk (a dual-boot laptop; great fun, worth trying if you can).

Three KVM virtual machines, VM1 / VM2 / VM3, all with the same configuration:
2 vCPUs, 2 GB RAM, CentOS 7.9; at most 30 GB of disk for the three VMs in total (only 30 GB is left on the host).
Two NICs each:
NIC 1: NAT network (reaching the Internet through the host's wireless card), DHCP
NIC 2: attached to a Linux bridge, with a static IP

The Linux bridge is created with virt-manager; for details see "Bridging virtual machines in virt-manager".
After the two virtual NICs have been created, the interfaces look like this:

[root@k8s1 tmp]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:dd:86:a0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.109/24 brd 192.168.122.255 scope global noprefixroute dynamic eth0
       valid_lft 2685sec preferred_lft 2685sec
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:be:3c:36 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.131/24 brd 192.168.100.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever

eth0 is the default NIC; it reaches the Internet via NAT over the host's wireless card.
eth1 is the newly added NIC, created as follows through virt-manager:

2. Creating the network / NIC

(virt-manager screenshot)

Add Hardware -> Network -> in the drop-down list, select the br0 created above (the Linux bridge).
Create the new virtual NIC and attach it to the br0 network.
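For reference, the same bridge-backed libvirt network can also be defined from the CLI instead of clicking through virt-manager. A minimal sketch, assuming the Linux bridge br0 already exists on the host (the file name and network name are just examples):

# Define a libvirt network that simply hands guest NICs to the existing bridge br0
cat > /tmp/br0-net.xml <<'EOF'
<network>
  <name>br0</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
EOF
virsh net-define /tmp/br0-net.xml   # register the network with libvirt
virsh net-start br0                 # start it now
virsh net-autostart br0             # start it automatically on host boot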
In VM1/VM2/VM3, add the ifcfg-eth0 configuration file (DHCP):

[root@k8s1 tmp]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="dhcp"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
#IPV6INIT="yes"
#IPV6_AUTOCONF="yes"
#IPV6_DEFROUTE="yes"
#IPV6_FAILURE_FATAL="no"
#IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="eth0"
DEVICE="eth0"
ONBOOT="yes"

In VM1/VM2/VM3, add the ifcfg-eth1 configuration file (static IP):

[root@k8s1 tmp]# cat /etc/sysconfig/network-scripts/ifcfg-eth1 
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
#IPV6INIT="yes"
#IPV6_AUTOCONF="yes"
#IPV6_DEFROUTE="yes"
#IPV6_FAILURE_FATAL="no"
#IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="eth1"
DEVICE="eth1"
IPADDR=192.168.100.131
NETMASK=255.255.255.0
ONBOOT="yes"
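After writing the two ifcfg files, reload the network so the addresses take effect. A quick check (on CentOS 7 the legacy network service usually handles this; IPADDR must of course differ on each VM, e.g. .131/.132/.133 — those exact values are an assumption):

systemctl restart network   # re-read ifcfg-eth0 / ifcfg-eth1
ip addr show eth1           # verify the static address is up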

3. Creating the three VMs VM1/2/3

Based on libvirt / KVM / virt-manager.

Run the following on the host.
Also, here is how to linked-clone the three VMs with KVM (qemu-img) so that they occupy as little disk space as possible:
1. First, using the disk image vol.qcow2 of an already-created KVM VM as the backing file, create the overlay disks:


qemu-img rebase -f qcow2 -F qcow2 -b /var/lib/libvirt/images/vol.qcow2 /var/lib/libvirt/images/vm1.qcow2
qemu-img rebase -f qcow2 -F qcow2 -b /var/lib/libvirt/images/vol.qcow2 /var/lib/libvirt/images/vm2.qcow2
qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/vol.qcow2 /var/lib/libvirt/images/vm3.qcow2
qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/vol.qcow2 /var/lib/libvirt/images/vm4.qcow2
qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/vol.qcow2 /var/lib/libvirt/images/vm5.qcow2

vol.qcow2 is the ordinary VM disk image; vm1~5.qcow2 are the linked-clone overlays.
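To confirm that each overlay really points at vol.qcow2 before booting anything, the backing chain can be inspected (a quick check, not part of the original write-up):

qemu-img info --backing-chain /var/lib/libvirt/images/vm1.qcow2
# expected to list vm1.qcow2 first and vol.qcow2 as its backing file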
To prevent vm1~5.qcow2 from becoming unusable if vol.qcow2 is modified, run the following command:

chattr +i /var/lib/libvirt/images/vol.qcow2 

The base/backing image vol.qcow2 must not be changed! Must not be changed! Must not be changed! (Important things get said three times.)

If the base image changes, all derived images (vm1~5) become invalid!!!
Note: once this command has been run, the original VM "centos7.0" can no longer be used, because its disk is now immutable.
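To check that the immutable bit is actually set, or to remove it again later (for example before deliberately updating the base image):

lsattr /var/lib/libvirt/images/vol.qcow2     # the 'i' flag should appear in the output
chattr -i /var/lib/libvirt/images/vol.qcow2  # remove the immutable bit (only when you really mean to change the base image)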
On the host:
Remove the VM that owns the base image.
Recommended procedure: back up the XML file of the VM 'centos7.0', then remove the VM with virsh undefine.
(This only removes the VM's XML definition; the disk image is still there. With the XML backed up, the removed VM can be brought back later.)

root@top5402:/home/shrek# ls /etc/libvirt/qemu
centos7.0-2.xml  centos7.0-3.xml  centos7.0-4.xml  centos7.0.xml  networks

# back up the centos7.0 configuration file (i.e. vm0)
root@top5402:/home/shrek# cp /etc/libvirt/qemu/centos7.0.xml /etc/libvirt/qemu/centos7.0.xml.bak
# remove the VM definition
root@top5402:/home/shrek# virsh undefine centos7.0 
Domain centos7.0 has been undefined

# verify: the original centos7.0.xml file is gone
root@top5402:/home/shrek# ls /etc/libvirt/qemu
centos7.0-2.xml  centos7.0-3.xml  centos7.0-4.xml  centos7.0.xml.bak  networks
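If the removed VM is ever needed again, the backed-up XML can simply be re-registered:

virsh define /etc/libvirt/qemu/centos7.0.xml.bak   # re-create the domain from the backup
virsh list --all                                   # centos7.0 should show up again (shut off)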

Screenshots of the operation follow: removing the original VM.
How to create the three new VMs (run on the host): in virt-manager, import an existing disk image.
Import the existing VM image into the Linux host.
Select one of the linked-clone disks created above.
Choose the OS type and version of the imported VM image.
Choose CPU and memory.
Choose the network.
Additionally add a second NIC (same method as shown above).

With that, the configuration of the three VMs VM1/2/3 is complete.
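For reference, the same import can be scripted with virt-install instead of clicking through virt-manager. A sketch under the assumptions of this setup (names, 2C/2G sizing, the default NAT network plus the br0 bridge); the exact --os-variant string may differ on your host:

# Boot the existing overlay disk directly (no installer), with the NAT network
# as NIC 1 and the br0 bridge as NIC 2.
virt-install \
  --name vm1 \
  --vcpus 2 --memory 2048 \
  --disk path=/var/lib/libvirt/images/vm1.qcow2,format=qcow2 \
  --import \
  --os-variant centos7.0 \
  --network network=default \
  --network bridge=br0 \
  --noautoconsole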

Run the following on each VM (automation tools such as ansible or pssh are recommended).

4. OS initialization on the three VMs

Disable SELinux

setenforce 0 # disable SELinux immediately, at runtime
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config # keep it disabled after a reboot

Disable swap

swapoff -a # disable swap immediately, at runtime
sed -i '/ swap / s/^/#/' /etc/fstab # keep it disabled after a reboot

Network-related kernel parameters (sysctl):

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF

modprobe br_netfilter # load br_netfilter first, otherwise applying k8s.conf fails with unknown keys
sysctl -p /etc/sysctl.d/k8s.conf # apply the configuration
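modprobe only loads the module for the current boot; to make sure br_netfilter comes back after a reboot, it can also be listed in modules-load.d (standard systemd mechanism, not part of the original write-up):

cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF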

Switch yum to a mirror inside China

cd /etc/yum.repos.d  && \
sudo mv CentOS-Base.repo CentOS-Base.repo.bak && \
sudo wget -O CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo && \
yum clean all && \
yum makecache

Configure the download source for the k8s packages
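The kubelet/kubeadm/kubectl packages are not in the base CentOS repos, so a Kubernetes repo has to be added first. A commonly used configuration pointing at the Aliyun mirror (treat the exact URL and gpgcheck setting as assumptions and adjust to your mirror of choice):

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

To pin the exact version used in this article, the install line can also name it explicitly, e.g. yum install -y kubelet-1.21.1 kubeadm-1.21.1 kubectl-1.21.1.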

Install the dependencies
yum install -y docker kubelet kubeadm kubectl
Point docker at a registry mirror inside China
mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF

service docker restart

Enable services at boot

systemctl disable firewalld.service  && systemctl stop firewalld.service 
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet

Download the images k8s depends on
List the required images:

kubeadm config images list

Inside China, the k8s components can be pulled through the Aliyun mirror instead:

kubeadm config images list |sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g' |sh -x

docker images |grep registry.cn-hangzhou.aliyuncs.com/google_containers |awk '{print "docker tag ",$1":"$2,$1":"$2}' |sed -e 's#registry.cn-hangzhou.aliyuncs.com/google_containers#k8s.gcr.io#2' |sh -x

docker images |grep registry.cn-hangzhou.aliyuncs.com/google_containers |awk '{print "docker rmi ", $1":"$2}' |sh -x

Pulling k8s.gcr.io/coredns/coredns:v1.8.0 fails in this step: the Aliyun mirror reports that it has no such image.
You have to obtain that image yourself.

Here it was pulled on a server outside China and transferred back to the local machines.
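The failure happens because the Aliyun google_containers mirror typically publishes CoreDNS under a flat name (coredns:1.8.0) rather than the nested coredns/coredns:v1.8.0 path used by k8s.gcr.io, so the automatic sed rewrite above misses it. Two possible workarounds; the first is a commonly used re-tag (verify that the tag exists on your mirror), the second matches the save/load approach used here:

# Option 1: pull the flat-named image from the mirror and re-tag it
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.0
docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0

# Option 2: pull on a server outside China, then copy the image over
# (run the first two lines abroad, the last one on each VM)
docker pull k8s.gcr.io/coredns/coredns:v1.8.0
docker save k8s.gcr.io/coredns/coredns:v1.8.0 -o coredns-v1.8.0.tar
docker load -i coredns-v1.8.0.tar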

Cgroup Driver in Docker: cgroupfs vs. systemd
During the Kubernetes installation, the following error can show up:

failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"

The kubelet and docker must use the same cgroup driver; here the two ended up on different drivers (cgroupfs vs. systemd), and this mismatch keeps the kubelet and its containers from starting.

docker info |grep Cgroup

The output is:

 Cgroup Driver: cgroupfs
 Cgroup Version: 1

There are two ways to fix this: change docker, or change the kubelet. Here docker is changed.

Modify docker:
Edit or create /etc/docker/daemon.json and add the following:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
Restart docker so the change takes effect, then verify:

systemctl restart docker
docker info | grep Cgroup
 Cgroup Driver: systemd
 Cgroup Version: 1

5. Initializing K8S on the control-plane node

Notes:
1. With two NICs, kubeadm will by default pick the NIC that holds the default route as the apiserver-advertise-address. In this environment eth0 (the NIC with the gateway) is on DHCP and its IP can change, so the apiserver-advertise-address must be set manually to eth1's IP (a static IP that is only reachable within the cluster network; it cannot reach the Internet, since no gateway is configured on it).
2. The commonly used default pod-network-cidr of 192.168.0.0/16 conflicts with the VM1/2/3 subnet here, so a different range must be chosen, e.g. something in 172.16.0.0/16-172.31.0.0/16 or a 10.x.0.0/16 range.

Kubernetes v1.21.1

 kubeadm init   --apiserver-advertise-address=192.168.100.131  --pod-network-cidr=172.16.0.0/16

On success, the output ends with:

eck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 13.004339 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 3crujf.gi4mgoo4u2qr7zjw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.131:6443 --token 3crujf.gi4mgoo4u2qr7zjw \
	--discovery-token-ca-cert-hash sha256:67d219b97bf929dc4dc7c6611fe3c52e0852c74691a1defa622fe8ca932887d8

The kubeadm join command above is the last thing printed by the init.

On the control-plane node, run:

[root@localhost ~]# kubectl get nodes

NAME                    STATUS     ROLES    AGE     VERSION
localhost.localdomain   NotReady   master   40m     v1.14.3
miwifi-r3-srv           NotReady   <none>   3m48s   v1.14.3

The status is still NotReady.

Network model: see the docs at https://kubernetes.io/zh/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model

The Weave Net add-on was chosen here; plugin docs: https://www.weave.works/docs/…
Run:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

The cluster pulls the add-on images; once the pods have initialized, the nodes show up as Ready.
Wait a few minutes and everything looks normal:

[root@localhost ~]# kubectl get nodes

NAME                    STATUS   ROLES    AGE   VERSION
localhost.localdomain   Ready    master   49m   v1.14.3
miwifi-r3-srv           Ready    <none>   12m   v1.14.3

6. Adding the K8S worker nodes

[root@k8s1 tmp]# kubectl get nodes
NAME   STATUS   ROLES                  AGE    VERSION
k8s1   Ready    control-plane,master   149m   v1.21.1
k8s2   Ready    node                   145m   v1.21.1
k8s3   Ready    node                   115m   v1.21.1
[root@k8s1 tmp]# 

At this point, a k8s cluster has been successfully set up on my own laptop.

7. Miscellaneous

When the kubeadm token has expired

The token used by kubeadm join is valid for 24 hours.

Create a token, list tokens:

$ kubeadm token create

rugi2c.bb97e7ney91bogbg

$ kubeadm token list

TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION   EXTRA GROUPS
rugi2c.bb97e7ney91bogbg   23h   2019-06-18T22:28:11+08:00   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token

Compute the discovery-token-ca-cert-hash:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

Join with the new token:

kubeadm join 192.168.31.120:6443 --token rugi2c.bb97e7ney91bogbg \
    --discovery-token-ca-cert-hash sha256:c55a113114d664133685430a86f2e39f40e9df6b12ad3f4d65462fd372079e97
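A simpler alternative that avoids the separate openssl step: kubeadm can print a complete, ready-to-run join command with a fresh token:

kubeadm token create --print-join-command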

Deploying the dashboard
On the control-plane node:

wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
The only change needed is to point the image at the Aliyun mirror, otherwise the pull will not succeed without a way around the firewall: registry.cn-beijing.aliyuncs.com/minminmsn/kubernetes-dashboard:v1.10.1

For NodePort access, the image address needs to be changed and the Service type set to NodePort.

vim kubernetes-dashboard.yaml

In the Deployment, replace the official image with the Aliyun mirror:

spec:
  containers:
  - name: kubernetes-dashboard
    image: registry.cn-beijing.aliyuncs.com/minminmsn/kubernetes-dashboard:v1.10.1

In the Service, set type: NodePort and add a nodePort (k8s only allows NodePorts of 30000 and above; 31620 is used in the snippet below, while the deployment later in this article used 31000):

spec:
  type: NodePort                      # add type: NodePort
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 31620                   # add the port exposed on the VM
  selector:
    k8s-app: kubernetes-dashboard
There are several ways to access the dashboard:

NodePort: change the Service type to NodePort
LoadBalancer: change the Service type to LoadBalancer
Ingress
Through the API server
Through kubectl proxy
Official reference:
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/#accessing-the-dashboard-ui
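As an example of the last option, the dashboard deployed into kube-system can be reached through kubectl proxy from a machine where kubectl is configured (the service name/namespace below match the v1.10.1 manifest used here; adjust if yours differ):

kubectl proxy   # listens on 127.0.0.1:8001 by default
# then open in a browser on the same machine:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/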
Once the edits are done, create the service and pods:

[root@node03 bin]# kubectl create -f kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
Check the running state:

[root@node03 bin]# kubectl get pods --all-namespaces -o wide | grep dashboard
kube-system kubernetes-dashboard-77fd78f978-bkm9r 1/1 Running 0 37m 10.244.1.4 node04
Common problems:
If the pod is stuck in Terminating or Pending, delete the current pod:

kubectl delete pod kubernetes-dashboard-57df4db6b-lcj24 -n kube-system
If errors like the following appear:

Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": secrets "kubernetes-dashboard-certs" already exists
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": serviceaccounts "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": roles.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" already exists
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": rolebindings.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" already exists
handle it by deleting what was created before:

kubectl delete -f kubernetes-dashboard.yaml
Continue, then check the Service: its TYPE has changed to NodePort on port 31000:

kubectl get service -n kube-system | grep dashboard
kubernetes-dashboard   NodePort   10.98.190.246   <none>   443:31000/TCP   99s
https://192.168.111.128:31620/
If the browser reports a certificate error (NET::ERR_CERT_INVALID):
the reason is that the dashboard's certificate is not trusted by the browser on the physical machine. You can generate a private (self-signed) certificate or use a public one; the certificate setup follows.

Find out which node the kubernetes-dashboard container is running on:
kubectl get pod -n kube-system -o wide
Find the kubernetes-dashboard container ID:
docker ps | grep dashboard
Find the host directory the container's certs volume is mounted from:
docker inspect -f '{{.Mounts}}' 384d9dc0170b
Private certificate setup: generate the dashboard certificate:
openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
openssl req -new -key dashboard.key -out dashboard.csr
openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
Put the generated dashboard.crt and dashboard.key into the host source directory backing the certs volume, e.g.:

/var/lib/kubelet/pods/966bda12-95f2-4605-b295-e9ac0e3294dc/volumes/kubernetes.io~secret/kubernetes-dashboard-certs
Restart the kubernetes-dashboard container:
docker restart xxxxx
Get the login token:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Copy the token from the output and paste it into the kubernetes-dashboard login page to authorize.
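The grep for admin-user above assumes such a ServiceAccount exists; the v1.10.1 manifest does not create it. A minimal sketch that creates it with cluster-admin rights (the name and binding are conventional, not from the original article; cluster-admin is very permissive and only appropriate for a lab):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF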

Setting up k8s with kubeadm on CentOS 7 inside China - single node (1)

Commands

kubeadm init --kubernetes-version=1.16.2

kubectl get nodes

kubectl create -f kubernetes-dashboard.yaml

kubectl apply -f hack/kubernetes --clusterrole=cluster-admin --group=system:serviceaccounts

kubectl delete -f hack/kubernetes

kubectl get pods --all-namespaces -o wide | grep dashboard

kubectl get service -n default | grep wayne*

kubectl get services --all-namespaces

kubectl describe pod mysql-wayne-77bbcf9bf9-ngpqd -n default

kubectl get svc -n kube-system

Dashboard-related:

docker ps | grep dashboard

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

kubectl get secret -n kube-system | grep dashboard-serviceaccount-token

kubectl describe secret dashboard-serviceaccount-token-6z42h -n kube-system

Check the kubelet log output:

tail -f /var/log/messages

journalctl -f -u kubelet

