A Simple Kubernetes Installation


1. Add the Aliyun repo
If kubelet, kubeadm, and kubectl are not at the same version, the cluster will run into many problems; in that case it is best to reinstall and keep the versions consistent.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
(if GPG check errors occur: yum install -y --nogpgcheck kubelet kubeadm kubectl)
systemctl daemon-reload
systemctl enable kubelet && systemctl start kubelet
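
Since the three packages must be at the same version (see the note above), a quick sanity check after installing:

# confirm kubelet, kubeadm, and kubectl were installed at matching versions
rpm -q kubelet kubeadm kubectl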

2. Install docker-ce

yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2

yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo


yum install -y docker-ce docker-ce-cli containerd.io

systemctl start docker
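
The kubeadm preflight check later warns that Docker's default cgroup driver is "cgroupfs" while "systemd" is recommended. A minimal sketch of switching it via Docker's standard daemon.json:

cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl enable docker
systemctl restart docker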

3. Docker image mirrors

Azure China provides a mirror proxy service for the gcr.io and k8s.gcr.io container registries.
In my tests the service was stable and reasonably fast, generally above 1 MB/s (Hangzhou Telecom); recommended.
docker pull gcr.azk8s.cn/google_containers/<imagename>:<version>

The Azure open-source mirror site provides many mirror services. It is operated by Azure China, i.e. 21Vianet, so it should continue to be maintained.
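
Because kubeadm looks for images under the k8s.gcr.io name, a common pattern with this proxy is pull-then-retag. A minimal sketch following the pull pattern above (the image name and tag are only examples):

# pull through the Azure China proxy, then retag so k8s finds it under k8s.gcr.io
docker pull gcr.azk8s.cn/google_containers/kube-apiserver:v1.14.0
docker tag gcr.azk8s.cn/google_containers/kube-apiserver:v1.14.0 k8s.gcr.io/kube-apiserver:v1.14.0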

4. Check which image versions are needed

[root@iZ2ze0l6m7p7elorvlav2lZ ~]# kubeadm config images list --kubernetes-version v1.14.0
k8s.gcr.io/kube-apiserver:v1.14.0
k8s.gcr.io/kube-controller-manager:v1.14.0
k8s.gcr.io/kube-scheduler:v1.14.0
k8s.gcr.io/kube-proxy:v1.14.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
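
Because the init command in step 5 passes --image-repository registry.aliyuncs.com/google_containers, kubeadm will pull these for you; still, you can pre-pull them through the Aliyun mirror, e.g. (assuming the mirror carries the same image names as the list above):

for img in kube-apiserver:v1.14.0 kube-controller-manager:v1.14.0 \
           kube-scheduler:v1.14.0 kube-proxy:v1.14.0 \
           pause:3.1 etcd:3.3.10 coredns:1.3.1; do
    docker pull registry.aliyuncs.com/google_containers/$img
done

Alternatively, 'kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers' (referenced in the preflight output below) does the same.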
To check error logs:
journalctl -xeu kubelet
tail /var/log/messages
After running kubeadm reset, you can re-run the init command below.

5. Start k8s. When starting the cluster you need to deploy a network add-on, otherwise all nodes stay NotReady; here we use Flannel.
The init command below is run on the master node only; it does not need to be run on the other nodes.

[root@iZ2ze0l6m7p7elorvlav2lZ ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers



[init] Using Kubernetes version: v1.14.0
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [iz2ze0l6m7p7elorvlav2lz kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.17.7.140]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [iz2ze0l6m7p7elorvlav2lz localhost] and IPs [172.17.7.140 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [iz2ze0l6m7p7elorvlav2lz localhost] and IPs [172.17.7.140 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.002246 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node iz2ze0l6m7p7elorvlav2lz as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node iz2ze0l6m7p7elorvlav2lz as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: wqte6n.02rt3z1sofenowyr
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy


Your Kubernetes control-plane has initialized successfully!


To start using your cluster, you need to run the following as a regular user:


  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config


You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/


Then you can join any number of worker nodes by running the following on each as root:


kubeadm join 172.17.7.140:6443 --token wqte6n.02rt3z1sofenowyr \
    --discovery-token-ca-cert-hash sha256:ad4257975ac5fd8986188308ee84fec6e0a7873797bf12c443146fcf6ffdf304

6. kubectl

Running kubectl get nodes directly will fail with: The connection to the server localhost:8080 was refused - did you specify the right host or port?
You first need to run:
export KUBECONFIG=/etc/kubernetes/admin.conf
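
To make this survive new shell sessions, one option (assuming a bash login shell) is:

echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc
source ~/.bashrc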

7. If you are a regular (non-root) user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

8. Deploy the Flannel network to the Kubernetes cluster using kubectl.

[root@iZ2ze0l6m7p7elorvlav2lZ ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
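
Once the Flannel pods are running, the node should move to Ready; you can verify with:

kubectl get pods -n kube-system -o wide
kubectl get nodes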

9. Add nodes: run the following on each machine you want to add to the cluster

kubeadm join 172.17.7.140:6443 --token ipxvq2.1w30tnha1komnxzi \
    --discovery-token-ca-cert-hash sha256:22a3b9da1c824ecfadd5c6a4e4b8a08b042424173ae0cd1c8c2c9dee394b7849

Checking GPU resources

kubectl get nodes "-o=custom-columns=NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu"

Adding a node

1. Get a token:
kubeadm token list
2. If all tokens have expired, create a new one:
kubeadm token create
3. Get the SHA256 hash of the Kubernetes CA certificate:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
4. On the node to be added, run:
kubeadm join --token [TOKEN] [kubernetes master node ip]:6443 --discovery-token-ca-cert-hash sha256:[SHA256]
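
Alternatively, recent kubeadm versions can collapse steps 1-3 into one command that prints a ready-to-use join command:

kubeadm token create --print-join-command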
Reinstalling k8s
kubeadm reset
systemctl stop kubelet

systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl start docker
Port forwarding. Note that the DNAT target is only valid in the nat table's PREROUTING and OUTPUT chains, so these rules belong in PREROUTING:
iptables -t nat -A PREROUTING -p tcp -d 10.244.13.32 --dport 7001 -j DNAT --to-destination 10.218.0.65:32347
iptables -t nat -A PREROUTING -p tcp -d 10.244.13.32 --dport 7007 -j DNAT --to-destination 10.218.0.65:32348
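
To inspect or clean up these rules later, a minimal sketch:

# list the nat-table PREROUTING rules with line numbers
iptables -t nat -L PREROUTING -n --line-numbers
# delete a rule by its line number (the "1" here is just an example)
iptables -t nat -D PREROUTING 1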
By default, the cluster's master node does not participate in scheduling. You can run kubectl taint nodes <master_node> node-role.kubernetes.io/master- to allow workloads to be scheduled on the master, as sketched below.
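
A minimal sketch, with <master_node> as a placeholder for your master's node name:

# remove the NoSchedule taint; the trailing "-" tells kubectl to delete it
kubectl taint nodes <master_node> node-role.kubernetes.io/master-
# verify that the taint is gone
kubectl describe node <master_node> | grep -i taint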