Kubernetes 1.9 Deployment Notes

Some components were downloaded and installed locally. Version: 1.9. The deployment process draws on many experts' blog posts; thanks to everyone for sharing!

 

I. System Environment Preparation:

Environment      Version
-----------      -------
CentOS           CentOS Linux release 7.3
Kernel           Linux 3.10.0-514.26.2.el7.x86_64
yum base repo    http://mirrors.aliyun.com/repo/Centos-7.repo
yum epel repo    http://mirrors.aliyun.com/repo/epel-7.repo
kubectl          v1.9.2
kubeadm          v1.9.2
docker           1.12 or CE version

 

Server name        Hostname     IP address   Notes
----------------   ----------   ----------   -----
jnclustertest-01   etcd-host1   10.0.0.35
jnclustertest-02   etcd-host2   10.0.0.36
jnclustertest-03   etcd-host3   10.0.0.37

 

1. Disable firewalld, SELinux (permanent change takes effect after reboot), and swap (permanent change takes effect after reboot)

systemctl disable firewalld&&systemctl stop firewalld&&setenforce 0&&swapoff -a

 

Permanently disable swap and SELinux:

sed -i 's/.*swap.*/#&/' /etc/fstab

sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config

2. Set kernel parameters

cat <<EOF >  /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

EOF

sysctl --system
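
On some CentOS 7 kernels the net.bridge.* keys only exist once the br_netfilter module is loaded; if sysctl --system reports them as missing, loading the module first should help (an addition, not in the original notes):

modprobe br_netfilter                                       # load the bridge netfilter module
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # persist the module across reboots
sysctl --system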

3. Install Docker

cat <<EOF >  /etc/yum.repos.d/docker.repo

[dockerrepo]

name=Docker Repository

baseurl=https://yum.dockerproject.org/repo/main/centos/7/

enabled=1

gpgcheck=1

gpgkey=https://yum.dockerproject.org/gpg

EOF

[root@jnclustertest-03 ]#yum -y remove docker docker-common container-selinux

[root@jnclustertest-03 ]#yum install docker -y&&systemctl enable docker&&systemctl start docker
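
kubelet and Docker must agree on the cgroup driver, a common source of kubeadm startup failures; after installing Docker it is worth noting which driver it uses (a check added here, not in the original notes):

docker info | grep -i 'cgroup driver'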

 

II. Master Deployment Steps:

1. Download the images/RPMs to every machine (master and nodes); here they are uploaded via sftp from a Windows workstation:

put -r c:/kubeadm/.

 

Link: https://pan.baidu.com/s/1yKeeuq-uKTYfzxySobtcbQ  Password: vesf

 

2. Extract the archives

 

[root@jnclustertest-03 images]#

tar -xzvf cni.tar.gz&&tar -xzvf etcd.tar.gz&&tar -xzvf etcd-amd64.tar.gz&&tar -xzvf flannel.tar.gz&&tar -xzvf k8s-dns-dnsmasq-nanny-amd64.tar.gz&&tar -xzvf k8s-dns-kube-dns-amd64.tar.gz&&tar -xzvf k8s-dns-sidecar-amd64.tar.gz&&tar -xzvf kube-apiserver-amd64.tar.gz&&tar -xzvf kube-controller-manager-amd64.tar.gz&&tar -xzvf kube-controllers.tar.gz&&tar -xzvf kube-proxy-amd64.tar.gz&&tar -xzvf kubernetes-dashboard-amd64.tar.gz&&tar -xzvf kube-scheduler-amd64.tar.gz&&tar -xzvf node.tar.gz&&tar -xzvf pause-amd64.tar.gz
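
Since every .tar.gz in the directory needs extracting, an equivalent loop is shorter (assuming the directory holds only these archives):

for f in *.tar.gz; do tar -xzvf "$f"; done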

3. Import the local images:

[root@jnclustertest-03 rpm]#

docker load -i cni.tar&&docker load -i etcd.tar&&docker load -i etcd-amd64.tar&&docker load -i flannel.tar&&docker load -i k8s-dns-dnsmasq-nanny-amd64.tar&&docker load -i k8s-dns-kube-dns-amd64.tar&&docker load -i k8s-dns-sidecar-amd64.tar&&docker load -i kube-apiserver-amd64.tar&&docker load -i kube-controller-manager-amd64.tar&&docker load -i kube-controllers.tar&&docker load -i kube-proxy-amd64.tar&&docker load -i kubernetes-dashboard-amd64.tar&&docker load -i kube-scheduler-amd64.tar&&docker load -i node.tar&&docker load -i pause-amd64.tar
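
Likewise, the load commands collapse into a loop, followed by a quick listing to confirm the images landed (again assuming only these .tar files are present):

for f in *.tar; do docker load -i "$f"; done
docker images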

4. Install the RPMs (master + node nodes)

[root@jnclustertest-03 rpm]#yum install -y *.rpm&&systemctl enable kubelet&&systemctl start kubelet&&systemctl status kubelet

Note: until kubeadm init/join writes the kubelet configuration, systemctl status kubelet will typically show the service restart-looping; this is expected at this point.

Removal command (to uninstall later): yum -y erase kubelet kubectl kubeadm kubernetes-cni socat

 

5. Initialize the master node

[root@jnclustertest-03 rpm]#

kubeadm init --kubernetes-version=v1.9.2 --pod-network-cidr=172.16.0.0/16 --apiserver-advertise-address=10.0.0.37

[init] Using Kubernetes version: v1.9.2

[init] Using Authorization modes: [Node RBAC]

[preflight] Running pre-flight checks.

        [WARNING FileExisting-crictl]: crictl not found in system path

[certificates] Generated ca certificate and key.

[certificates] Generated apiserver certificate and key.

[certificates] apiserver serving cert is signed for DNS names [jnclustertest-03 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.37]

[certificates] Generated apiserver-kubelet-client certificate and key.

[certificates] Generated sa key and public key.

[certificates] Generated front-proxy-ca certificate and key.

[certificates] Generated front-proxy-client certificate and key.

[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"

[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"

[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"

[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"

[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"

[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"

[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"

[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"

[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"

[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".

[init] This might take a minute or longer if the control plane images have to be pulled.

[apiclient] All control plane components are healthy after 24.503227 seconds

[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace

[markmaster] Will mark node jnclustertest-03 as master by adding a label and a taint

[markmaster] Master jnclustertest-03 tainted and labelled with key/value: node-role.kubernetes.io/master=""

[bootstraptoken] Using token: 3580e6.4acf5504e230ffc9

[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials

[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token

[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster

[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace

[addons] Applied essential addon: kube-dns

[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

 

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

 

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token 3580e6.4acf5504e230ffc9 10.0.0.37:6443 --discovery-token-ca-cert-hash sha256:d5e10c6667bf622711788287d6db16c04b04f1d0b6aecca716a8fb1bcf538bfb

6. Set up the kubectl environment

[root@jnclustertest-03 rpm]# HOME=~

[root@jnclustertest-03 rpm]# mkdir -p $HOME/.kube

[root@jnclustertest-03 rpm]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

[root@jnclustertest-03 rpm]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
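
Since these steps are run as root anyway, an equivalent documented by kubeadm is to point KUBECONFIG at the admin config directly:

export KUBECONFIG=/etc/kubernetes/admin.conf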

7. Check that the Kubernetes components on the master node are running

[root@jnclustertest-03 rpm]# kubectl get cs

NAME                 STATUS    MESSAGE              ERROR

scheduler            Healthy   ok                   

etcd-0               Healthy   {"health": "true"}   

controller-manager   Healthy   ok

8. Install Calico

[root@jnclustertest-03 yml]#

wget https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
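
Before applying, note that this calico.yaml typically defaults its IP pool (CALICO_IPV4POOL_CIDR) to 192.168.0.0/16; if yours does, edit it to match the --pod-network-cidr passed to kubeadm init above (a check added here, not in the original notes):

grep -n CALICO_IPV4POOL_CIDR calico.yaml
sed -i 's#192.168.0.0/16#172.16.0.0/16#g' calico.yaml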

 

[root@jnclustertest-03 yml]# kubectl apply -f calico.yaml

configmap "calico-config" created

daemonset "calico-etcd" created

service "calico-etcd" created

daemonset "calico-node" created

deployment "calico-kube-controllers" created

deployment "calico-policy-controller" created

clusterrolebinding "calico-cni-plugin" created

clusterrole "calico-cni-plugin" created

serviceaccount "calico-cni-plugin" created

clusterrolebinding "calico-kube-controllers" created

clusterrole "calico-kube-controllers" created

serviceaccount "calico-kube-controllers" created

 

[root@jnclustertest-03 yml]# kubectl get pods -n kube-system

NAME                                       READY     STATUS    RESTARTS   AGE

calico-etcd-gf6sc                          1/1       Running   0          4m

calico-kube-controllers-d554689d5-vsrhf    1/1       Running   0          4m

calico-node-bx7zn                          2/2       Running   0          4m

etcd-jnclustertest-03                      1/1       Running   0          51m

kube-apiserver-jnclustertest-03            1/1       Running   0          51m

kube-controller-manager-jnclustertest-03   1/1       Running   0          51m

kube-dns-6f4fd4bdf-rp59r                   3/3       Running   0          52m

kube-proxy-9vjxz                           1/1       Running   0          52m

kube-scheduler-jnclustertest-03            1/1       Running   0          51m

 

Note: it takes roughly 5 minutes for everything to start; DNS takes comparatively longer.
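
To watch the pods come up in real time rather than polling (a convenience, not in the original notes):

kubectl get pods -n kube-system -w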

III. Node Deployment Steps:

1. Join the nodes (run on each node)

[root@jnclustertest-03 yml]# kubeadm reset

[root@jnclustertest-03 yml]# kubeadm join --token 3580e6.4acf5504e230ffc9 10.0.0.37:6443 --discovery-token-ca-cert-hash sha256:d5e10c6667bf622711788287d6db16c04b04f1d0b6aecca716a8fb1bcf538bfb
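
If the token from kubeadm init has expired (the default TTL is 24 hours), a fresh join command can be generated on the master, assuming this kubeadm version supports the --print-join-command flag:

kubeadm token create --print-join-command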

2. Verify the join

[root@jnclustertest-03 yml]# kubectl get nodes

NAME               STATUS    ROLES     AGE       VERSION

jnclustertest-01   Ready     <none>    4m        v1.9.2

jnclustertest-02   Ready     <none>    1h        v1.9.2

jnclustertest-03   Ready     master    2h        v1.9.2

                                                                                                                                      #Time:20180303  GGJSYYZ

 

Appendix: Deploying the Dashboard (for reference)

[root@jnclustertest-03 yml]# wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

[root@jnclustertest-03 yml]# wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard-rbac-admin.yml

[root@jnclustertest-03 yml]# vim kubernetes-dashboard-rbac-admin.yml

[root@jnclustertest-03 yml]# kubectl apply -f kubernetes-dashboard.yaml

[root@jnclustertest-03 yml]# kubectl apply -f kubernetes-dashboard-rbac-admin.yml
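
One common way to reach the dashboard afterwards (not covered in the original notes) is kubectl proxy on a machine with admin credentials, then browsing to the service proxy URL:

kubectl proxy
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/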

 
