(1) Quickly Deploying a Kubernetes Cluster with kubeadm

Preliminary Preparation

Environment Overview

Three machines are prepared: one master and two nodes. Hostnames and IP addresses are as follows:

Hostname        IP address
k8s-master      192.168.88.121
k8s-node1       192.168.88.122

System Settings

1. Set the hostname on all three machines

# hostnamectl set-hostname XXXX

2. Configure local name resolution

Edit the hosts file on all three machines and add the following entries:

# vim /etc/hosts

192.168.88.121 k8s-master
192.168.88.122 k8s-node1

3. Disable the firewall

# systemctl stop firewalld
# systemctl disable firewalld

4. Disable SELinux

# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

5. Disable swap (temporary; kubeadm requires swap to be off)

# swapoff -a
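swapoff -a only disables swap until the next reboot, and the hosts are rebooted at the end of this section. A minimal sketch for disabling swap persistently as well, assuming swap is mounted through /etc/fstab:

# sed -ri 's/.*swap.*/#&/' /etc/fstab
# free -m

The sed command comments out every fstab line that mentions swap; after swapoff -a (or the reboot), the Swap line in free -m should read 0.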

 

6. Disable NetworkManager

# systemctl disable NetworkManager

7. Set the time zone

# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
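The cp command above only sets the time zone; it does not keep the clocks in sync. If clock synchronization across the three machines is also wanted, a minimal sketch using chrony (the default NTP client shipped with CentOS 7; the time servers used are whatever the distribution defaults to):

# yum install -y chrony
# systemctl enable chronyd && systemctl start chronyd
# chronyc sources

chronyc sources should list at least one reachable time source.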

8. Reboot all hosts

# reboot

Deploying Kubernetes

Install docker-ce (all hosts)

1. Download the docker-ce repository

# yum install wget

# wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo

2. Point the docker-ce repository at a domestic mirror

# sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo

3. Install docker-ce

Kubernetes 1.13.1 has only been validated with docker-ce 18.06 and earlier, so install a pinned docker-ce version:

# yum install docker-ce-18.06.1.ce-3.el7

4. Start Docker and enable it at boot

# systemctl enable docker.service && systemctl start docker.service
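As a quick sanity check, confirm that Docker is running and note which cgroup driver it uses. kubelet 1.13 defaults to cgroupfs, which matches Docker's default, so normally nothing needs to change here:

# docker version
# docker info | grep -i cgroup

The second command should report "Cgroup Driver: cgroupfs".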

Install kubeadm, kubelet, and kubectl (all hosts)

1. Configure the Kubernetes yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Install kubeadm, kubelet, and kubectl

# yum install -y kubelet-1.13.1
# yum install -y kubeadm-1.13.1
# yum install -y kubectl-1.13.1

3. Enable kubelet at boot and start it

# systemctl enable kubelet && systemctl start kubelet

Until kubeadm init (or kubeadm join) has been run, kubelet restarts in a loop because it has no configuration yet; this is expected.

4. Adjust kernel parameters

# cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# sysctl --system
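If sysctl --system complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet. A sketch for loading it now and on every boot:

# modprobe br_netfilter
# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf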

Initialize Kubernetes (master node)

1. Import the image archive

Because the k8s.gcr.io images cannot be pulled directly from within China, I prepared an offline image archive that needs to be imported on every node. Download: kube.tar

# docker load -i kube.tar

Check the imported images:

# docker images

REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.13.1             fdb321fd30a0        2 weeks ago         80.2MB
k8s.gcr.io/kube-controller-manager   v1.13.1             26e6f1db2a52        2 weeks ago         146MB
k8s.gcr.io/kube-apiserver            v1.13.1             40a63db91ef8        2 weeks ago         181MB
k8s.gcr.io/kube-scheduler            v1.13.1             ab81d7360408        2 weeks ago         79.6MB
k8s.gcr.io/coredns                   1.2.6               f59dcacceff4        7 weeks ago         40MB
k8s.gcr.io/etcd                      3.2.24              3cab8e1b9802        3 months ago        220MB
quay.io/coreos/flannel               v0.10.0-amd64       f0fad859c909        11 months ago       44.6MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        12 months ago       742kB
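If the offline archive is unavailable, a commonly used alternative is to pull equivalent images from a domestic mirror registry and retag them with the k8s.gcr.io names kubeadm expects. This is only a sketch; the registry path registry.aliyuncs.com/google_containers and the exact tags are assumptions that should be verified first (the flannel image in the list above is pulled from quay.io by the manifest in step 4):

for img in kube-apiserver:v1.13.1 kube-controller-manager:v1.13.1 kube-scheduler:v1.13.1 \
           kube-proxy:v1.13.1 pause:3.1 etcd:3.2.24 coredns:1.2.6; do
  docker pull registry.aliyuncs.com/google_containers/$img
  docker tag  registry.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
done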

2. Initialize the master node

# kubeadm init --pod-network-cidr=10.244.0.0/16

**********

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.20.6.116:6443 --token lyycbq.uogsx4a9h7ponmg5 --discovery-token-ca-cert-hash sha256:60d0338c4927907cf56d9697bcdb261cd2fe2dac0f36a9901b254253516177ed

The master node initialized successfully. Be sure to save the kubeadm join command printed at the end.

3. Load the Kubernetes environment variable

# export KUBECONFIG=/etc/kubernetes/admin.conf
# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile

4. Install a network add-on

Pods on different hosts need a pod network in order to reach each other; Flannel is used here.

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

5. Confirm the cluster status

# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-lhb7w             1/1     Running   0          95m
kube-system   coredns-86c58d9df4-zprwr             1/1     Running   0          95m
kube-system   etcd-k8s-master                      1/1     Running   0          100m
kube-system   kube-apiserver-k8s-master            1/1     Running   0          100m
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          100m
kube-system   kube-flannel-ds-amd64-jjdmz          1/1     Running   0          91m
kube-system   kube-proxy-lfhbs                     1/1     Running   0          101m
kube-system   kube-scheduler-k8s-master            1/1     Running   0          100m

Confirm that the CoreDNS pods are in the Running state.

Join the cluster (worker nodes)

1. Join the worker nodes to the cluster

Run the following command on k8s-node1 and k8s-node2 (the join command saved during initialization):

# kubeadm join 172.20.6.116:6443 --token lyycbq.uogsx4a9h7ponmg5 --discovery-token-ca-cert-hash sha256:60d0338c4927907cf56d9697bcdb261cd2fe2dac0f36a9901b254253516177ed

******

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
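The token printed by kubeadm init is only valid for 24 hours. If a node is joined later and the token has expired, a fresh join command can be generated on the master:

# kubeadm token create --print-join-command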

 

Run the following on the master node (this configures kubectl for a regular user and is equivalent to the KUBECONFIG export above):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

2. Check the cluster

# kubectl get nodes

NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   2m23s   v1.13.1
k8s-node1    Ready    <none>   39s     v1.13.1
k8s-node2    Ready    <none>   16s     v1.13.1

Both worker nodes have joined and are in the Ready state. The cluster setup is now complete and ready to use.
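As an optional smoke test, deploy a throwaway workload and confirm it gets scheduled onto one of the workers; the nginx image here is just an example and can be replaced with any image the nodes can pull:

# kubectl create deployment nginx --image=nginx
# kubectl get pods -o wide

Clean up afterwards with kubectl delete deployment nginx.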

Configuring the Dashboard

Service Configuration

There is no web UI by default; the dashboard can be deployed with the following steps.

1. Import the dashboard-ui image (all nodes)

Download: dashboard-ui.tar

# docker load -i dashboard-ui.tar

2. Download the configuration file

# wget https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml

Edit kubernetes-dashboard.yaml and add type: NodePort to the dashboard Service so it is exposed and reachable from outside the cluster. Only the type: NodePort line needs to be added; leave the rest of the configuration unchanged.

spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443

3. Deploy the Dashboard UI

# kubectl create -f  kubernetes-dashboard.yaml
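To confirm the NodePort edit took effect, check the Service that was just created (this manifest deploys the dashboard into the kube-system namespace):

# kubectl -n kube-system get svc kubernetes-dashboard

The PORT(S) column should show 443 mapped to a high node port, as in the "Viewing the Dashboard" section below.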

4. Check the dashboard service status

# kubectl get pods --all-namespaces
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-8zhr5               1/1     Running   0          2d22h
kube-system   coredns-86c58d9df4-jqn7r               1/1     Running   0          2d22h
kube-system   etcd-k8s-master                        1/1     Running   0          2d22h
kube-system   kube-apiserver-k8s-master              1/1     Running   0          2d22h
kube-system   kube-controller-manager-k8s-master     1/1     Running   0          2d22h
kube-system   kube-flannel-ds-amd64-krf6t            1/1     Running   0          2d22h
kube-system   kube-flannel-ds-amd64-tkftg            1/1     Running   0          2d22h
kube-system   kube-flannel-ds-amd64-zxzld            1/1     Running   0          2d22h
kube-system   kube-proxy-5znt7                       1/1     Running   0          2d22h
kube-system   kube-proxy-gl9sl                       1/1     Running   0          2d22h
kube-system   kube-proxy-q7j7m                       1/1     Running   0          2d22h
kube-system   kube-scheduler-k8s-master              1/1     Running   0          2d22h
kube-system   kubernetes-dashboard-57df4db6b-pghk8   1/1     Running   0          19h

The kubernetes-dashboard pod is in the Running state.

Creating an Admin User

1. Create the service account and cluster role binding manifest

Create a dashboard-adminuser.yaml file with the following content:

# vim dashboard-adminuser.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system

2. Create the user and role binding

# kubectl apply -f dashboard-adminuser.yaml
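A quick way to confirm both objects were created (the names match the manifest above):

# kubectl -n kube-system get serviceaccount kubernetes-dashboard-admin
# kubectl get clusterrolebinding kubernetes-dashboard-admin

Note that binding the account to cluster-admin gives the dashboard full control of the cluster; this is fine for a lab setup but too broad for production use.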

3. Retrieve the token

# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-admin-token | awk '{print $1}')

Name:         kubernetes-dashboard-admin-token-dzkk6
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard-admin
              kubernetes.io/service-account.uid: cd2f4f37-60ea-11e9-9889-000c291ecd86

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi1kemtrNiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImNkMmY0ZjM3LTYwZWEtMTFlOS05ODg5LTAwMGMyOTFlY2Q4NiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.4ClqBysAyWAPwHZVsqc9fiw1zE6rGlkMI_UjPpaDruPBPETnxwxyHwJUBtFC5Smb7HVyJXIoaUj-ADSft7faPN_UPPQ9agQ91HqkFZC0eNnY6Rt8OeaDrEN-_AaT3cTHUP4CpdTZdhN1TQ8OWtoimQg0M3M1jlYmoQr3DSKuMoR8x_QUIwo-LOXerrrwRTVxaU_vfzzXE8h23csJt1h-PbvatRPzS0uKjf66MM5VhkYzyI2OrxqJ8dNRax5jCX5VhTFbA8q5N828050T1p3vTND9uhMYW8H2tliGGztud97-PDpQBuU4tgSQzmtvtl9f1qfVSWUxF77FLu3DYGDZkA

Save the token value.

Deploying the Metrics Server

Heapster is being retired as of Kubernetes 1.13 (https://github.com/kubernetes/heapster/blob/master/docs/deprecation.md); metrics-server and Prometheus are the recommended replacements.

1. Import the metrics-server image

Download: metrics-server.tar

# docker load -i metrics-server.tar

2. Save the configuration files

# mkdir metrics-server
# cd metrics-server
# wget https://github.com/kubernetes-incubator/metrics-server/raw/master/deploy/1.8%2B/aggregated-metrics-reader.yaml
# wget https://github.com/kubernetes-incubator/metrics-server/raw/master/deploy/1.8%2B/auth-delegator.yaml
# wget https://github.com/kubernetes-incubator/metrics-server/raw/master/deploy/1.8%2B/auth-reader.yaml
# wget https://github.com/kubernetes-incubator/metrics-server/raw/master/deploy/1.8%2B/metrics-apiservice.yaml
# wget https://github.com/kubernetes-incubator/metrics-server/raw/master/deploy/1.8%2B/metrics-server-deployment.yaml
# wget https://github.com/kubernetes-incubator/metrics-server/raw/master/deploy/1.8%2B/metrics-server-service.yaml
# wget https://github.com/kubernetes-incubator/metrics-server/raw/master/deploy/1.8%2B/resource-reader.yaml

Edit metrics-server-deployment.yaml and change the default image pull policy to IfNotPresent:

# vim metrics-server-deployment.yaml

containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.1
        #imagePullPolicy: Always
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp

Also configure metrics-server to connect to the kubelets by IP address and to skip TLS certificate verification:

# vim metrics-server-deployment.yaml

containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.1
        imagePullPolicy: IfNotPresent
        command:
        - /metrics-server
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp

3. Deploy

# kubectl apply -f ./
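metrics-server registers itself as an aggregated API, and it can take a minute or two before kubectl top starts returning data. To check that the API is registered and the pod is up:

# kubectl get apiservice v1beta1.metrics.k8s.io
# kubectl -n kube-system get pods | grep metrics-server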

4. View the metrics

# kubectl top nodes
NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master   196m         4%     1101Mi          14%       
k8s-node1    44m          1%     2426Mi          31%       
k8s-node2    38m          0%     2198Mi          28% 

# kubectl top pod --all-namespaces
NAMESPACE     NAME                                   CPU(cores)   MEMORY(bytes)   
kube-system   coredns-86c58d9df4-8zhr5               3m           13Mi            
kube-system   coredns-86c58d9df4-jqn7r               2m           13Mi            
kube-system   etcd-k8s-master                        17m          76Mi            
kube-system   kube-apiserver-k8s-master              30m          402Mi           
kube-system   kube-controller-manager-k8s-master     36m          63Mi            
kube-system   kube-flannel-ds-amd64-krf6t            2m           13Mi            
kube-system   kube-flannel-ds-amd64-tkftg            3m           15Mi            
kube-system   kube-flannel-ds-amd64-zxzld            2m           12Mi            
kube-system   kube-proxy-5znt7                       2m           14Mi            
kube-system   kube-proxy-gl9sl                       2m           18Mi            
kube-system   kube-proxy-q7j7m                       2m           16Mi            
kube-system   kube-scheduler-k8s-master              9m           16Mi            
kube-system   kubernetes-dashboard-57df4db6b-wtmkt   1m           16Mi            
kube-system   metrics-server-879f5ff6d-9q5xw         1m           13Mi

Viewing the Dashboard

1. Find the dashboard service port

# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   3d19h
kubernetes-dashboard   NodePort    10.109.196.92   <none>        443:30678/TCP   17h
metrics-server         ClusterIP   10.109.23.19    <none>        443/TCP         6m16s

The NodePort is 30678.

2. Access the dashboard

If the browser reports NET::ERR_CERT_INVALID for kubernetes-dashboard, regenerate the dashboard certificate and replace the secret:

mkdir certs
openssl req -nodes -newkey rsa:2048 -keyout certs/dashboard.key -out certs/dashboard.csr -subj "/C=/ST=/L=/O=/OU=/CN=kubernetes-dashboard"
openssl x509 -req -sha256 -days 365 -in certs/dashboard.csr -signkey certs/dashboard.key -out certs/dashboard.crt
kubectl delete secret kubernetes-dashboard-certs -n kube-system
kubectl create secret generic kubernetes-dashboard-certs --from-file=certs -n kube-system
kubectl delete pods $(kubectl get pods -n kube-system|grep kubernetes-dashboard|awk '{print $1}') -n kube-system

After refreshing the browser, click Advanced and choose to proceed anyway to open the page.

 

The dashboard address is https://192.168.88.121:30678 (the NodePort found above). Choose "Token" as the sign-in method and enter the token saved earlier to log in.

 

 


