A Docker installation reference:

https://blog.csdn.net/judyjie/article/details/85008996

 

Installing Kubernetes 1.13 with kubeadm on CentOS 7

December 14, 2018, 21:11 · 三月泡

 

Installation overview:

kubeadm
1. master and nodes: install kubelet, kubeadm, docker
2. master: kubeadm init
3. nodes: kubeadm join

1. Kubernetes can be installed with kubeadm in two ways:

One is to download the offline installation packages from Google's site.

The other is to configure the Aliyun mirror and install from it. This article uses the latter.

2. Configure the Kubernetes yum repository

 
vi /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1

Set up the docker-ce repository (skip this if Docker is already installed):

wget -P /etc/yum.repos.d/ https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
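
A quick optional sanity check that both repositories are now visible to yum (the repo ids should contain "kubernetes" and "docker-ce", matching the files above):

yum repolist | grep -E 'kubernetes|docker-ce'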

3. Disable swap

swapoff -a

Also edit /etc/fstab:

vi /etc/fstab

and comment out the swap line:

#UUID=7dac6afd-57ad-432c-8736-5a3ba67340ad swap                    swap    defaults        0 0

Check that swap is now off with free -m.
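
The fstab edit can also be done non-interactively; a minimal sketch, assuming the swap entry is the only uncommented line containing "swap":

# prepend # to any uncommented fstab line that contains a swap entry
sed -ri '/^[^#].*\sswap\s/s/^/#/' /etc/fstab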

4. Install packages on both the master and the worker nodes

yum install docker-ce kubelet kubeadm kubectl

If Docker is already installed, just run yum install kubelet kubeadm kubectl.
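
To pin the exact version this article targets and make sure the services start on boot, a variant of the install step (assuming the 1.13.0 packages are available in the Aliyun mirror) is:

yum install -y docker-ce kubelet-1.13.0 kubeadm-1.13.0 kubectl-1.13.0
systemctl enable docker && systemctl start docker
systemctl enable kubelet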

5. Master node only: initialize the cluster

kubeadm init --kubernetes-version=v1.13.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12

If kubeadm init reports an error like the following:

 
[root@yanfa2 bridge]# kubeadm init --kubernetes-version=v1.13.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
[init] Using Kubernetes version: v1.13.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.0: output: Trying to pull repository k8s.gcr.io/kube-apiserver ...
Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.125.82:443: i/o timeout
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.13.0: output: Trying to pull repository k8s.gcr.io/kube-controller-manager ...
Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.125.82:443: i/o timeout
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.13.0: output: Trying to pull repository k8s.gcr.io/kube-scheduler ...
Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.125.82:443: i/o timeout
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.13.0: output: Trying to pull repository k8s.gcr.io/kube-proxy ...
Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.97.82:443: i/o timeout
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Trying to pull repository k8s.gcr.io/pause ...
Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.97.82:443: i/o timeout
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.2.24: output: Trying to pull repository k8s.gcr.io/etcd ...
Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.97.82:443: i/o timeout
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.2.6: output: Trying to pull repository k8s.gcr.io/coredns ...
Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.97.82:443: i/o timeout
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

This happens because the images cannot be pulled from k8s.gcr.io. Pull them from a mirror instead (note the version numbers and adjust them to match your own error output):

docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.13.0
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.13.0
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.13.0
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.13.0
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.24
docker pull coredns/coredns:1.2.6

Then use docker tag to rename the images to the k8s.gcr.io names:
docker tag  docker.io/mirrorgooglecontainers/kube-apiserver-amd64:v1.13.0 k8s.gcr.io/kube-apiserver:v1.13.0
docker tag  docker.io/mirrorgooglecontainers/kube-controller-manager-amd64:v1.13.0 k8s.gcr.io/kube-controller-manager:v1.13.0
docker tag  docker.io/mirrorgooglecontainers/kube-scheduler-amd64:v1.13.0 k8s.gcr.io/kube-scheduler:v1.13.0
docker tag  docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.13.0 k8s.gcr.io/kube-proxy:v1.13.0
docker tag  docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag  docker.io/mirrorgooglecontainers/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag  docker.io/coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
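
The pulls and retags above can also be written as one short bash loop; this is just a compact equivalent of the commands listed, so adjust the versions to whatever your error output shows:

images=(
  kube-apiserver-amd64:v1.13.0
  kube-controller-manager-amd64:v1.13.0
  kube-scheduler-amd64:v1.13.0
  kube-proxy-amd64:v1.13.0
  pause:3.1
  etcd-amd64:3.2.24
)
for img in "${images[@]}"; do
  docker pull "mirrorgooglecontainers/${img}"
  # the k8s.gcr.io names drop the -amd64 suffix
  docker tag "mirrorgooglecontainers/${img}" "k8s.gcr.io/${img/-amd64/}"
done
docker pull coredns/coredns:1.2.6
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6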

6. Re-run the command; the master node now installs successfully

Then run the following to configure kubectl access:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

  export KUBECONFIG=/etc/kubernetes/admin.conf

Install the pod network add-on (flannel):

  kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

 
For reference, a successful kubeadm init ends with output like this (the kubeadm join line at the end is needed for the worker nodes):

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 192.168.7.216:6443 --token 16lrz8.amk86wpd1yd3mqqg --discovery-token-ca-cert-hash sha256:241adaf533f030e95ae606ddeaa71b4f7f93b443bb12e2470ae918a62e9cf214
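
After the network add-on is applied, you can check that the master is coming up cleanly:

kubectl get nodes
kubectl get pods -n kube-system -o wide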

7. Worker node installation

Repeat steps 2, 3, and 4 on each worker node.

7.1 Join the worker node to the cluster

Run:

systemctl enable kubelet.service

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables

echo 1 > /proc/sys/net/ipv4/ip_forward

kubeadm join 192.168.7.216:6443 --token 16lrz8.amk86wpd1yd3mqqg --discovery-token-ca-cert-hash sha256:241adaf533f030e95ae606ddeaa71b4f7f93b443bb12e2470ae918a62e9cf214
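
The two echo settings above only last until the next reboot. One way to make them persistent (a sketch using a sysctl drop-in file; the file name k8s.conf is arbitrary):

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system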

If kubeadm join reports an error like this:

 
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
unable to fetch the kubeadm-config ConfigMap: failed to get config map: Unauthorized

This happened because I did not join the worker node on the same day the master was set up; the token expires after 24 hours, so a new one has to be generated:

kubeadm token create    # generate a new token
kubeadm token list      # list existing tokens

Then re-run the join with the new token:

kubeadm join 192.168.7.216:6443 --token 37dday.nnlp7wwq7ac2enjy --discovery-token-ca-cert-hash sha256:241adaf533f030e95ae606ddeaa71b4f7f93b443bb12e2470ae918a62e9cf214
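
Alternatively, kubeadm (including 1.13) can print a complete, ready-to-paste join command with a fresh token and the CA cert hash in one step:

kubeadm token create --print-join-command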

If a node shows NotReady status, fix it as follows:

Run: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If it is a worker node that shows this problem, you also need to pull the images on it: run the same docker pull and docker tag commands from step 5 on that node.


To investigate node and kubelet problems:

kubectl describe nodes yanfa2

journalctl -f -u kubelet.service

Confirm the restart succeeded:

kubectl get node

kubectl get pod --all-namespaces
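
To quickly spot any pods that are not yet healthy across all namespaces:

kubectl get pod --all-namespaces -o wide | grep -vE 'Running|Completed'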

Reposted from: https://my.oschina.net/innovation/blog/3045803
