Kubernetes: Building a Kubernetes Cluster with kubeadm

Goals

  • Use kubeadm to set up a three-node k8s test cluster (one master, two worker nodes)
  • Use Harbor to set up a single-node private image registry
  • Integrate the k8s cluster with the private image registry
  • Deploy the Dashboard
  • Deploy Heapster for monitoring and statistics

Preparation

Prepare the following four nodes: one as the k8s master, two as worker nodes, and the last as the private image registry. The OS is CentOS 7.2:

hostname      ip              description
k8s.master    192.168.2.130   k8s master node
k8s.node1     192.168.2.131   k8s worker node
k8s.node2     192.168.2.132   k8s worker node
k8s.harbor    192.168.2.139   private image registry node
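
If these hostnames are not resolvable via DNS, the usual fix is /etc/hosts entries on every node; a minimal sketch (an assumption, adjust to your environment):

192.168.2.130  k8s.master
192.168.2.131  k8s.node1
192.168.2.132  k8s.node2
192.168.2.139  k8s.harbor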

K8S Installation

1. Steps on all nodes

Update yum

sudo yum update -y

Run the following to stop the firewall and switch SELinux to permissive mode:

systemctl stop firewalld && systemctl disable firewalld
setenforce 0

Then set SELINUX to disabled:

vim /etc/selinux/config
SELINUX=disabled

Reboot CentOS for the change above to take effect.
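
After the reboot you can verify the SELinux state, e.g.:

getenforce    # should print Disabled
sestatus      # full SELinux status report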

Install Docker
Official documentation: https://docs.docker.com/engine/installation/linux/docker-ce/centos/

#1. Set up the repository
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
 
#2. Optionally enable or disable the edge and test repositories
sudo yum-config-manager --enable docker-ce-edge
sudo yum-config-manager --enable docker-ce-test
sudo yum-config-manager --disable docker-ce-edge
sudo yum-config-manager --disable docker-ce-test
 
#3. Install docker-ce
sudo yum install docker-ce     # only the stable repo is enabled by default, so this installs the latest stable release (17.09)
 
#4. Optionally list every Docker version in the repositories and install a specific one
yum list docker-ce --showduplicates | sort -r
 
docker-ce.x86_64            17.09.0.ce-1.el7.centos            docker-ce-stable
docker-ce.x86_64            17.09.0.ce-1.el7.centos            @docker-ce-stable
docker-ce.x86_64            17.06.2.ce-1.el7.centos            docker-ce-stable
docker-ce.x86_64            17.06.1.ce-1.el7.centos            docker-ce-stable
docker-ce.x86_64            17.06.0.ce-1.el7.centos            docker-ce-stable
docker-ce.x86_64            17.03.2.ce-1.el7.centos            docker-ce-stable
docker-ce.x86_64            17.03.1.ce-1.el7.centos            docker-ce-stable
docker-ce.x86_64            17.03.0.ce-1.el7.centos            docker-ce-stable
 
sudo yum install <FQPN>  # e.g.: sudo yum install docker-ce-17.09.0.ce
 
#5. Start Docker and enable it at boot
sudo systemctl start docker
sudo systemctl enable docker
 
#6. To stop and disable the Docker daemon (only if you ever need to)
sudo systemctl stop docker
sudo systemctl disable docker

Set kernel parameters so that traffic crossing the Linux bridge is processed by iptables (required by kube-proxy and most network plugins):

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Disable swap

swapoff -a

This only disables swap temporarily; it will come back after a reboot.

To disable it permanently, edit /etc/fstab and comment out the swap mount line, then use free -m to confirm that swap is off. Also tune the swappiness parameter by adding the following line to /etc/sysctl.d/k8s.conf:

vm.swappiness=0

Run sysctl -p /etc/sysctl.d/k8s.conf to apply the change.
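
The fstab edit and verification can be scripted, e.g. (a sketch; inspect /etc/fstab afterwards before rebooting):

# comment out every line that mounts a swap device
sed -i '/ swap / s/^\(.*\)$/#\1/' /etc/fstab
free -m   # the Swap row should now show 0 everywhere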

If the server can get past the firewall, you can install directly via yum. Create the repo file with the following content:
vim /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

Install the Kubernetes packages

yum update
yum install -y kubelet kubeadm kubectl

If you cannot get past the firewall, download the rpm packages first and install them locally. The URLs of the required rpms can be found in this page:

curl https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64/repodata/primary.xml

Baidu net-disk download link for the 1.9.2 packages:

https://pan.baidu.com/s/1i6Qbo4d

We only need four rpm packages: kubectl, kubeadm, kubelet, and kubernetes-cni; search the page above for the appropriate version of each. We install 1.9.2 here (the kubelet rpm below is 1.9.3; a patch-level skew like that is compatible with the v1.9.2 cluster created later). The packages are:

kubeadm-1.9.2-0.x86_64.rpm
kubelet-1.9.3-0.x86_64.rpm
kubernetes-cni-0.6.0-0.x86_64.rpm
kubectl-1.9.2-0.x86_64.rpm

socat-1.7.3.2-2.el7.x86_64.rpm

yum install -y *.rpm
systemctl enable kubelet && systemctl start kubelet

If kubelet fails to start, use journalctl -xeu kubelet to view the error messages.

Adjust the kubelet cgroup driver:
Edit the file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and find the following line:

Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"

Change systemd to cgroupfs; the kubelet's cgroup driver must match the one Docker uses, otherwise kubelet will refuse to start.
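
You can confirm which driver Docker is actually using first, e.g.:

docker info 2>/dev/null | grep -i 'cgroup driver'
# Cgroup Driver: cgroupfs   <- expected with a default Docker install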

Then restart kubelet and check its status:

systemctl restart kubelet
systemctl status kubelet

Next we create the k8s cluster with kubeadm. Before starting, prepare the images it needs: in a kubeadm-built cluster, kube-apiserver, kube-scheduler, kube-proxy, kube-controller-manager, etcd and the other components all run as containers pulled from a registry, so downloading the images in advance avoids wasting time during installation.
We use 1.9.2 here. If the server can get past the firewall, pull the images below directly (a pull loop is sketched after the list):

# Required images
gcr.io/google_containers/kube-apiserver-amd64:v1.9.2
gcr.io/google_containers/kube-proxy-amd64:v1.9.2
gcr.io/google_containers/kube-controller-manager-amd64:v1.9.2
gcr.io/google_containers/kube-scheduler-amd64:v1.9.2
gcr.io/google_containers/etcd-amd64:3.1.11
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
gcr.io/google_containers/pause-amd64:3.0

#calico
quay.io/calico/node:v2.6.7
quay.io/calico/kube-controllers:v1.0.3
quay.io/coreos/etcd:v3.1.10

#flannel
quay.io/coreos/flannel:v0.9.1-amd64

#Dashboard
k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.2
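
With direct access to gcr.io and quay.io, a small shell loop can pre-pull the required images (a sketch; extend the list with the images for the network plugin you actually use):

for img in \
  gcr.io/google_containers/kube-apiserver-amd64:v1.9.2 \
  gcr.io/google_containers/kube-proxy-amd64:v1.9.2 \
  gcr.io/google_containers/kube-controller-manager-amd64:v1.9.2 \
  gcr.io/google_containers/kube-scheduler-amd64:v1.9.2 \
  gcr.io/google_containers/etcd-amd64:3.1.11 \
  gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7 \
  gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7 \
  gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7 \
  gcr.io/google_containers/pause-amd64:3.0
do
  docker pull "$img"
done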

Import the images

### Import the images ###
[root@k8s-master images]# docker load -i cni.tar
                  .
                  .
                  .
[root@k8s-master images]# docker images
REPOSITORY                                                       TAG                 IMAGE ID            CREATED             SIZE
quay.io/calico/node                                              v2.6.7              7c694b9cac81        8 days ago          281.6 MB
gcr.io/google_containers/kube-controller-manager-amd64           v1.9.2              769d889083b6        3 weeks ago         137.8 MB
gcr.io/google_containers/kube-proxy-amd64                        v1.9.2              e6754bb0a529        3 weeks ago         109.1 MB
gcr.io/google_containers/kube-apiserver-amd64                    v1.9.2              7109112be2c7        3 weeks ago         210.4 MB
gcr.io/google_containers/kube-scheduler-amd64                    v1.9.2              2bf081517538        3 weeks ago         62.71 MB
quay.io/calico/kube-controllers                                  v1.0.3              34aebe64326d        3 weeks ago         52.25 MB
k8s.gcr.io/kubernetes-dashboard-amd64                            v1.8.2              c87ea0497294        3 weeks ago         102.3 MB
quay.io/calico/cni                                               v1.11.2             6f0a76fc7dd2        7 weeks ago         70.78 MB
gcr.io/google_containers/etcd-amd64                              3.1.11              59d36f27cceb        9 weeks ago         193.9 MB
quay.io/coreos/flannel                                           v0.9.1-amd64        2b736d06ca4c        12 weeks ago        51.31 MB
gcr.io/google_containers/k8s-dns-sidecar-amd64                   1.14.7              db76ee297b85        3 months ago        42.03 MB
gcr.io/google_containers/k8s-dns-kube-dns-amd64                  1.14.7              5d049a8c4eec        3 months ago        50.27 MB
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64             1.14.7              5feec37454f4        3 months ago        40.95 MB
quay.io/coreos/etcd                                              v3.1.10             47bb9dd99916        6 months ago        34.56 MB
gcr.io/google_containers/pause-amd64                             3.0                 99e59f495ffa        21 months ago       746.9 kB

Load the ip_vs kernel module on every node, because otherwise you may run into errors like the following:

[root@k8s-master /etc/modprobe.d]$ docker logs 71a413c7b015
I0307 03:32:57.394549       1 feature_gate.go:184] feature gates: map[]
time="2018-03-07T03:32:57Z" level=warning msg="Running modprobe ip_vs failed with message: `modprobe: ERROR: could not insert 'ip_vs': Exec format error\ninsmod /lib/modules/3.10.0-693.2.2.el7.x86_64/kernel/net/netfilter/ipvs/ip_vs.ko.xz`, error: exit status 1"
time="2018-03-07T03:32:57Z" level=error msg="Could not get ipvs family information from the kernel. It is possible that ipvs is not enabled in your kernel. Native loadbalancing will not work until this is fixed."

Load ip_vs before kube-proxy starts:

[root@k8s-master ~]$ modprobe -r ip_vs
[root@k8s-master ~]$ lsmod|grep ip_vs
[root@k8s-master ~]$ modprobe ip_vs
[root@k8s-master ~]$ lsmod|grep ip_vs
ip_vs                 141092  0
nf_conntrack          133387  9 ip_vs,nf_nat,nf_nat_ipv4,nf_nat_ipv6,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4,nf_conntrack_ipv6
libcrc32c              12644  3 ip_vs,nf_nat,nf_conntrack
[root@k8s-master ~]$ modprobe -r ip_vs
[root@k8s-master ~]$ insmod /lib/modules/3.10.0-693.2.2.el7.x86_64/kernel/net/netfilter/ipvs/ip_vs.ko.xz
[root@k8s-master ~]$ lsmod|grep ip_vs
ip_vs                 141092  0
nf_conntrack          133387  9 ip_vs,nf_nat,nf_nat_ipv4,nf_nat_ipv6,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4,nf_conntrack_ipv6
libcrc32c              12644  3 ip_vs,nf_nat,nf_conntrack
[root@k8s-master ~]$
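
To make sure the module is loaded again after a reboot, a systemd modules-load drop-in can be used; a minimal sketch (the extra ip_vs scheduler modules are an assumption for completeness):

cat <<EOF > /etc/modules-load.d/ip_vs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
EOF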

2. Master node setup

Initialize the master node with kubeadm:

kubeadm init --kubernetes-version=v1.9.2 --pod-network-cidr=172.16.0.0/16 --apiserver-advertise-address=192.168.2.130

Note the last line of the command's output: kubeadm join --token d8b040.0aa73933666c2865 192.168.2.130:6443 --discovery-token-ca-cert-hash sha256:c24a3a8404036a9e96e30dcada132eceb35e5db3e036c9e4ea8809cb9c623531. We will use exactly this line later to join the worker nodes to the cluster.

kubeadm automatically checks the environment for "leftovers" from a previous run. If any exist, they must be cleaned up before init can be executed again; run "kubeadm reset" to clean the environment and start over.

Create the kube directory and add the kubectl configuration:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the status of the k8s components:

kubectl get cs

Add Calico (the network plugin)

Note: change the CALICO_IPV4POOL_CIDR parameter so that it matches the --pod-network-cidr passed to kubeadm init (172.16.0.0/16 here); a sketch of the edit follows the commands below.

wget https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
kubectl apply -f calico.yaml 
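
The CIDR edit can be done between the download and the apply step, e.g. (a sketch; it assumes the manifest's default pool is 192.168.0.0/16, so verify with grep before applying):

# point the Calico IP pool at the cluster's pod CIDR
sed -i 's#192.168.0.0/16#172.16.0.0/16#g' calico.yaml
grep -A1 CALICO_IPV4POOL_CIDR calico.yaml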

3. Worker node setup

kubeadm join --token d8b040.0aa73933666c2865 192.168.2.130:6443 --discovery-token-ca-cert-hash sha256:c24a3a8404036a9e96e30dcada132eceb35e5db3e036c9e4ea8809cb9c623531

Check on the master:

kubectl get nodes
kubectl get pods --all-namespaces

kubectl describe node ******

Resetting / removing a node

### Evict the pods on k8s.node1 ###
[root@k8s-master ~]# kubectl drain k8s.node1 --delete-local-data --force --ignore-daemonsets

### Delete the node ###
[root@k8s-master ~]# kubectl delete node k8s.node1

### Reset the node (run on the node itself) ###
[root@k8s-node-1 ~]# kubeadm reset

Regenerating the token after it expires

# generate a new token
[root@k8s-master ~]# kubeadm token create
[root@k8s-master ~]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION   EXTRA GROUPS
aa78f6.8b4cafc8ed26c34f   23h       2018-03-26T16:36:29+08:00   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token

# get the sha256 hash of the CA certificate
[root@k8s-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
0fd95a9bc67a7bf0ef42da968a0d55d92e52898ec37c971bd77ee501d845b538

# join the node to the cluster
kubeadm join --token aa78f6.8b4cafc8ed26c34f 192.168.2.130:6443 --discovery-token-ca-cert-hash sha256:0fd95a9bc67a7bf0ef42da968a0d55d92e52898ec37c971bd77ee501d845b538 --skip-preflight-checks

--skip-preflight-checks prevents the preflight checks from running on every init; without it, repeated init/join attempts may fail with errors such as [etcd already in use, kubelet already in use].
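
Newer kubeadm releases can also print a ready-made join command, which saves computing the hash by hand (availability depends on your kubeadm version, so treat this as an assumption):

# prints a complete "kubeadm join ..." line including token and CA cert hash
kubeadm token create --print-join-command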

4. Deploying the Dashboard
Run on the master node:

wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

kubectl apply -f kubernetes-dashboard.yaml

The kubernetes-dashboard service account deployed by the default manifest is bound to the kubernetes-dashboard-minimal role, so we need to configure a user with higher privileges.
The Dashboard Service type has been changed to NodePort so that it is reachable from outside the cluster; a sketch of that change follows.
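
One way to make that change is to edit the Service in place (a sketch; in this manifest version the dashboard serves HTTPS on container port 8443):

kubectl -n kube-system edit svc kubernetes-dashboard
# then, in the editor, change spec.type:
#   spec:
#     type: NodePort

Now create the higher-privileged account: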

vi kubernetes-dashboard-rbac-admin.yml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system

[root@k8s-master yml]# kubectl apply -f kubernetes-dashboard-rbac-admin.yml 

We log in to the dashboard as the kubernetes-dashboard-admin user we just created, using token authentication.

### Find the secret for kubernetes-dashboard-admin
[root@k8s-master yml]# kubectl get secret --all-namespaces | grep kubernetes-dashboard-admin
kube-system   kubernetes-dashboard-admin-token-2w8mg           kubernetes.io/service-account-token   3         22h

## View the token value of the secret
[root@master images]# kubectl describe secret kubernetes-dashboard-admin-token-2w8mg -n kube-system
Name:         kubernetes-dashboard-admin-token-2w8mg
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=kubernetes-dashboard-admin
              kubernetes.io/service-account.uid=b89b863f-1d24-11e8-9bec-080027fdd465

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi0ydzhtZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImI4OWI4NjNmLTFkMjQtMTFlOC05YmVjLTA4MDAyN2ZkZDQ2NSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.VNxhEYW_QXUMJC-M4H4Mbe_ziDz8g-SbmYtDzCGAN-n6Cpbu0xsle9ZN-uAzMaZIE35aPGhs3AncnnuhBNzdXp3PDYtZCuepaskLQXc73hcXYoRTmNjHM2y23oQQliatBl_ICsujiWI7cbmkvZKU0YVmicn3TYgHKVODFxexvxN5kS9Pirv-vT6g03SnvFCktAteV2nO6pQele0Qyt6UGPZX1lnJDrmAGr3_jzo9YGSDdcZePchiJnBOVh4P0ufRqPylNij5-Upxi3uCU10sOuySeaUuPMerHffW0rafrRPeD_6cyB0cAqawj3CTwFl1IbR4-UOsTZlId1LkCoaCCg

Find the dashboard's NodePort and log in with the token above:

kubectl get svc --all-namespaces

kubectl describe svc kubernetes-dashboard -n kube-system

https://192.168.2.130:31879/

5. Deploying Heapster for monitoring and statistics

wget https://github.com/kubernetes/heapster/archive/v1.3.0.tar.gz

tar -zxvf v1.3.0.tar.gz
cd heapster-1.3.0/deploy/kube-config/influxdb
kubectl create -f ./
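
Once the Heapster pods are running, resource usage should start to appear (kubectl top relies on Heapster in this Kubernetes version; it can take a minute or two before data shows up):

kubectl top node
kubectl top pod --all-namespaces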

6. Adding a private image registry to k8s
Create a docker-registry secret with kubectl:

kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
kubectl get secret --all-namespaces   # view the created secret
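
Pods then reference the secret via imagePullSecrets; a minimal sketch (the image path is a hypothetical project in the Harbor registry):

apiVersion: v1
kind: Pod
metadata:
  name: private-image-demo
spec:
  containers:
  - name: app
    # hypothetical image hosted in the private Harbor registry
    image: k8s.harbor/library/myapp:latest
  imagePullSecrets:
  - name: myregistrykey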