Install Kubernetes Quickly with KubeKey

Hardware Requirements

  • Official recommendations (a quick way to check a host's resources follows this list)
    • all-in-one (deploy a Kubernetes cluster plus the KubeSphere platform on a single Linux host)
      • Minimal: 2 CPU cores / 4 GB RAM
      • All components enabled: 8 CPU cores / 16 GB RAM
    • On an existing Kubernetes cluster
      • Minimal: 1 CPU core / 2 GB RAM (for the Kubernetes cluster itself, the master node needs at least 2 CPU cores / 2 GB RAM; worker nodes can run with 1 CPU core / 2 GB RAM)
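
To check whether a host meets these numbers, standard Linux commands are enough:

nproc     # number of CPU cores
free -h   # total and available memory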

Resource requirements for pluggable components

https://kubesphere.io/zh/docs/pluggable-components/overview/

Although you can install Kubernetes with kubeadm, you still have to run kubeadm init, kubeadm join and similar steps yourself. As developers, most of the time we do not want to spend it on tedious installation work; the KubeKey installer provided by KubeSphere can stand up a Kubernetes cluster quickly.
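
For comparison, the manual kubeadm route looks roughly like the sketch below; the pod CIDR, CNI manifest and join token/hash are placeholders that you would substitute with your own values:

# on the control-plane node
kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube && sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
kubectl apply -f <your-cni-manifest>.yaml

# on each worker node, using the token printed by kubeadm init
kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>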

  • Note: for the convenience of development and learning, you can simply switch to minikube instead (a minimal sketch follows); in that case there is no need to read further.
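
For reference, a minimal minikube sketch (the flags are standard minikube options; the Kubernetes version simply mirrors the one installed below):

minikube start --driver=docker --kubernetes-version=v1.21.5
kubectl get nodes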

What follows installs a single-node Kubernetes v1.21.5, suitable for development work.

Basic Setup

# set the hostname
[root@VM-4-9-centos ~]# hostnamectl set-hostname master
# set up passwordless SSH between hosts
[root@VM-4-9-centos ~]# ssh-keygen
[root@VM-4-9-centos ~]# ssh-copy-id 192.168.72.78
[root@VM-4-9-centos ~]# ssh-copy-id 192.168.72.79
[root@VM-4-9-centos ~]# systemctl disable firewalld
[root@VM-4-9-centos ~]# systemctl stop firewalld
[root@VM-4-9-centos ~]# sudo setenforce 0
setenforce: SELinux is disabled
[root@VM-4-9-centos ~]# sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
[root@VM-4-9-centos ~]# getenforce
Disabled
[root@VM-4-9-centos ~]# swapoff -a
[root@VM-4-9-centos ~]# 
[root@VM-4-9-centos ~]# yum -y install chrony
[root@VM-4-9-centos ~]# # change the time sync server to ntp1.aliyun.com
[root@VM-4-9-centos ~]# sed -i.bak '3,6d' /etc/chrony.conf && sed -i '3cserver ntp1.aliyun.com iburst' /etc/chrony.conf 
[root@VM-4-9-centos ~]# systemctl start chronyd && systemctl enable chronyd
[root@VM-4-9-centos ~]# # check the sync status
[root@VM-4-9-centos ~]# chronyc sources
[root@VM-4-9-centos ~]# cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
> br_netfilter
> EOF

[root@VM-4-9-centos ~]# 
[root@VM-4-9-centos ~]# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF

[root@VM-4-9-centos ~]# sudo sysctl --system
[root@VM-4-9-centos ~]# yum -y install socat
[root@VM-4-9-centos ~]# yum -y install conntrack
[root@VM-4-9-centos ~]# yum -y install ebtables
[root@VM-4-9-centos ~]# yum -y install ipset
[root@VM-4-9-centos ~]# yum remove docker*
[root@VM-4-9-centos ~]# yum install -y yum-utils
[root@VM-4-9-centos ~]# # configure the Docker yum repo
[root@VM-4-9-centos ~]# wget -O /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

[root@VM-4-9-centos ~]# 
[root@VM-4-9-centos ~]# # install a specific version
[root@VM-4-9-centos ~]# yum install -y docker-ce-20.10.12 docker-ce-cli-20.10.12 containerd.io-1.4.12

[root@VM-4-9-centos ~]# # start Docker and enable it at boot
[root@VM-4-9-centos ~]# systemctl enable docker --now
[root@VM-4-9-centos ~]# 
[root@VM-4-9-centos ~]# # configure the Docker registry mirror and cgroup driver
[root@VM-4-9-centos ~]# sudo mkdir -p /etc/docker
[root@VM-4-9-centos ~]# sudo tee /etc/docker/daemon.json <<-'EOF'
> {
>   "registry-mirrors": ["https://ke9h1pt4.mirror.aliyuncs.com"],
>   "exec-opts": ["native.cgroupdriver=systemd"],
>   "log-driver": "json-file",
>   "log-opts": {
>     "max-size": "100m"
>   },
>   "storage-driver": "overlay2"
> }
> EOF

[root@VM-4-9-centos ~]# sudo systemctl daemon-reload
[root@VM-4-9-centos ~]# sudo systemctl restart docker
[root@VM-4-9-centos ~]# cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
>    http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
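
Before running KubeKey, a quick sanity check of the prerequisites configured above can save a failed run (exact output varies by system; note that swapoff -a only lasts until reboot, so also comment out the swap entry in /etc/fstab if you want it to persist):

lsmod | grep br_netfilter                          # kernel module is loaded
sysctl net.bridge.bridge-nf-call-iptables          # should print 1
docker info 2>/dev/null | grep -i 'cgroup driver'  # should report systemd
swapon --show                                      # should print nothing
chronyc sources                                    # time sync peers are reachable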

Download KubeKey

[root@VM-4-9-centos ~]# export KKZONE=cn
[root@VM-4-9-centos ~]# curl -sfL https://get-kk.kubesphere.io | VERSION=v1.2.1 sh -

[root@VM-4-9-centos ~]# 
[root@VM-4-9-centos ~]# ls
[root@VM-4-9-centos ~]# ls -l
total 0
[root@VM-4-9-centos ~]# curl -sfL https://get-kk.kubesphere.io | VERSION=v1.2.1 sh -
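
As the transcript shows, the first download can exit without actually producing the binary, so rerun curl until kk appears. Once it is there, a quick check confirms it is usable (kk version is a standard KubeKey subcommand):

ls -lh kk
chmod +x kk && ./kk version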

Install Kubernetes v1.21.5

[root@VM-4-9-centos ~]# chmod +x kk
[root@VM-4-9-centos ~]# ./kk create cluster --with-kubernetes v1.21.5
+--------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+
| name   | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker   | nfs client | ceph client | glusterfs client | time         |
+--------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+
| master | y    | y    | y       | y        | y     | y     | y         | 20.10.12 |            |             |                  | CST 13:26:10 |
+--------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
INFO[13:26:15 CST] Downloading Installation Files               
INFO[13:26:15 CST] Downloading kubeadm ...                      
INFO[13:26:57 CST] Downloading kubelet ...                      
INFO[13:28:49 CST] Downloading kubectl ...                      
INFO[13:29:31 CST] Downloading helm ...                         
INFO[13:30:13 CST] Downloading kubecni ...                      
INFO[13:30:50 CST] Downloading etcd ...                         
INFO[13:31:04 CST] Downloading docker ...                       
INFO[13:31:09 CST] Downloading crictl ...                       
INFO[13:31:28 CST] Configuring operating system ...             
[master 10.0.4.9] MSG:
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
net.ipv4.conf.all.promote_secondaries = 1
net.ipv4.conf.default.promote_secondaries = 1
net.ipv6.neigh.default.gc_thresh3 = 4096
net.ipv4.neigh.default.gc_thresh3 = 4096
kernel.softlockup_panic = 1
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
kernel.shmmax = 68719476736
kernel.printk = 5
kernel.sysrq = 1
kernel.numa_balancing = 0
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
INFO[13:31:30 CST] Get cluster status                           
INFO[13:31:30 CST] Installing Container Runtime ...             
INFO[13:31:30 CST] Start to download images on all nodes        
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.4.1
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.21.5
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.21.5
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.21.5
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.21.5
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.20.0
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.20.0
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.20.0
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.20.0
INFO[13:32:13 CST] Getting etcd status                          
[master 10.0.4.9] MSG:
Configuration file will be created
INFO[13:32:14 CST] Generating etcd certs                        
INFO[13:32:14 CST] Synchronizing etcd certs                     
INFO[13:32:14 CST] Creating etcd service                        
Push /root/kubekey/v1.21.5/amd64/etcd-v3.4.13-linux-amd64.tar.gz to 10.0.4.9:/tmp/kubekey/etcd-v3.4.13-linux-amd64.tar.gz   Done
INFO[13:32:15 CST] Starting etcd cluster                        
INFO[13:32:15 CST] Refreshing etcd configuration                
[master 10.0.4.9] MSG:
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /etc/systemd/system/etcd.service.
INFO[13:32:18 CST] Backup etcd data regularly                   
INFO[13:32:24 CST] Installing kube binaries                     
Push /root/kubekey/v1.21.5/amd64/kubeadm to 10.0.4.9:/tmp/kubekey/kubeadm   Done
Push /root/kubekey/v1.21.5/amd64/kubelet to 10.0.4.9:/tmp/kubekey/kubelet   Done
Push /root/kubekey/v1.21.5/amd64/kubectl to 10.0.4.9:/tmp/kubekey/kubectl   Done
Push /root/kubekey/v1.21.5/amd64/helm to 10.0.4.9:/tmp/kubekey/helm   Done
Push /root/kubekey/v1.21.5/amd64/cni-plugins-linux-amd64-v0.9.1.tgz to 10.0.4.9:/tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz   Done
INFO[13:32:27 CST] Initializing kubernetes cluster              
[master 10.0.4.9] MSG:
W0209 13:32:28.885629   19527 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.21.5
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost master master.cluster.local] and IPs [10.233.0.1 10.0.4.9 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.501817 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: wborow.hx6d7x0dib4zafi7
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join lb.kubesphere.local:6443 --token wborow.hx6d7x0dib4zafi7 \
        --discovery-token-ca-cert-hash sha256:ec78c9a060dcf45a6fb8c67886491b9d05dd240df315b93ac8321b59e54c1e06 \
        --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join lb.kubesphere.local:6443 --token wborow.hx6d7x0dib4zafi7 \
        --discovery-token-ca-cert-hash sha256:ec78c9a060dcf45a6fb8c67886491b9d05dd240df315b93ac8321b59e54c1e06
[master 10.0.4.9] MSG:
node/master untainted
[master 10.0.4.9] MSG:
node/master labeled
[master 10.0.4.9] MSG:
service "kube-dns" deleted
[master 10.0.4.9] MSG:
service/coredns created
Warning: resource clusterroles/system:coredns is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/system:coredns configured
[master 10.0.4.9] MSG:
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
[master 10.0.4.9] MSG:
configmap/nodelocaldns created
INFO[13:33:09 CST] Get cluster status                           
INFO[13:33:10 CST] Joining nodes to cluster                     
INFO[13:33:10 CST] Deploying network plugin ...                 
[master 10.0.4.9] MSG:
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
INFO[13:33:11 CST] Congratulations! Installation is successful. 
[root@VM-4-9-centos ~]# 

Common errors and how to fix them

If you keep getting the error: The connection to the server localhost:8080 was refused - did you specify the right host or port?: Process exited with status 1

you can try the command below to tear down the cluster and then recreate it:

# ./kk delete cluster -f config-sample.yaml
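
If the cluster was created without a configuration file (as in this walkthrough), ./kk delete cluster with no -f flag should also work. To work from a config file explicitly, you can generate one first with kk create config, which in KubeKey v1.x writes config-sample.yaml to the current directory (a sketch; adjust versions to match your install):

./kk create config --with-kubernetes v1.21.5
./kk delete cluster -f config-sample.yaml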

If the error only appeared after the images had already been downloaded, deleting the cluster and then rerunning your original create command is usually enough, for example:

# ./kk create cluster --with-kubernetes v1.20.4 --with-kubesphere v3.1.0 -f config-sample.yaml -y

If you see the following error, check whether the failing node is missing dependencies:

Failed to join node: interrupted by error

On every node, run the following command again and make sure it returns "Nothing to do":

# yum install -y socat conntrack ebtables ipset
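
To confirm the dependencies are actually present, you can also query the packages directly (package names assume CentOS; the conntrack command is provided by the conntrack-tools package):

rpm -q socat conntrack-tools ebtables ipset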

Verification

[root@VM-4-9-centos ~]# kubectl get nodes
NAME     STATUS   ROLES                         AGE   VERSION
master   Ready    control-plane,master,worker   17m   v1.21.5
[root@master ~]# kubectl cluster-info
Kubernetes control plane is running at https://lb.kubesphere.local:6443
coredns is running at https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/services/coredns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
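
Beyond the node list, it is worth confirming that the system pods are all Running before deploying anything (standard kubectl commands; note that KubeKey runs etcd as a systemd service, so it does not show up as a pod):

kubectl get pods -A
kubectl get pods -n kube-system -o wide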

Installing other applications on k8s is simple

Install ingress-nginx (see "Installation with Manifests" in the NGINX Ingress Controller docs):

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/cloud/deploy.yaml
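
To check that the controller came up, look at its namespace (the ingress-nginx namespace below is what the upstream manifest creates; adjust if you changed it):

kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx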

Install Argo CD:

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
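
Once the argocd pods are Running, the commands below (from the Argo CD getting-started guide) fetch the initial admin password and expose the UI locally on port 8080:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
kubectl port-forward svc/argocd-server -n argocd 8080:443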
