Installing k8s

Environment:

Two virtual machines were created with VirtualBox:

Node     OS        Cores   Memory   Storage
master   centos:7  2       4G       20G
node1    centos:7  2       4G       20G

Versions:

  • Kubernetes v1.15.1
  • Docker 18.09.7

Install Docker

Uninstall old versions

# Run on both the master and worker nodes
sudo yum remove -y docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine

Set up the yum repository

# Run on both the master and worker nodes
sudo yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo

Install and start Docker

# Run on both the master and worker nodes
sudo yum install -y docker-ce-18.09.7 docker-ce-cli-18.09.7 containerd.io
sudo systemctl enable docker
sudo systemctl start docker

Check the Docker version

# Run on both the master and worker nodes
docker version

Install nfs-utils

Run the install command

# Run on both the master and worker nodes
sudo yum install -y nfs-utils

Basic K8S configuration

Configure the K8S yum repository

# Run on both the master and worker nodes
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
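
To confirm the repository is reachable and to see which package versions it provides, you can list the available kubelet builds (an optional check):

yum list kubelet --showduplicates | sort -r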

Disable the firewall, SELinux, and swap

# Run on both the master and worker nodes
systemctl stop firewalld
systemctl disable firewalld

setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab
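
To confirm that swap is fully disabled, check the output of free; the Swap line should report 0 (a quick check):

free -h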

Edit /etc/sysctl.conf

# Run on both the master and worker nodes
vi /etc/sysctl.conf

Add the following:

net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Example:

# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Run the following command to apply the settings

# Run on both the master and worker nodes
sysctl -p
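
If sysctl -p complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet. A minimal sketch to load it and re-apply:

# Load the bridge netfilter module so the net.bridge.* keys become available
modprobe br_netfilter
# Optionally load it automatically at boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl -p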

Install kubelet, kubeadm, and kubectl

# Run on both the master and worker nodes
yum install -y kubelet-1.15.1 kubeadm-1.15.1 kubectl-1.15.1

Change the Docker cgroup driver to systemd

# Run on both the master and worker nodes

vi /usr/lib/systemd/system/docker.service

Append the following to the ExecStart line:

--exec-opt native.cgroupdriver=systemd
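
If you prefer not to edit the unit file by hand, the flag can be appended with sed instead; either way, once Docker has been restarted in a later step, the effective driver can be verified with docker info. A sketch, assuming the default docker-ce unit file path:

# Append the flag right after the dockerd binary on the ExecStart line
sed -i 's|^ExecStart=/usr/bin/dockerd|& --exec-opt native.cgroupdriver=systemd|' /usr/lib/systemd/system/docker.service
# After Docker is restarted, confirm the driver in use
docker info | grep -i "cgroup driver"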

Configure a Docker registry mirror

Run the following command to use a China-based Docker registry mirror, which improves image download speed and stability

# Run on both the master and worker nodes
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
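
If that script is unreachable, the same mirror can be configured directly in /etc/docker/daemon.json instead (a sketch; note this overwrites any existing daemon.json, and Docker is restarted in the next step anyway):

cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["http://f1361db2.m.daocloud.io"]
}
EOF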

Restart Docker and start kubelet

# Run on both the master and worker nodes
systemctl daemon-reload
systemctl restart docker
systemctl enable kubelet && systemctl start kubelet

Initialize the master node

Run as root on the demo-master-a-1 machine.

Configure the apiserver.demo domain name

# Run only on the master node
echo "x.x.x.x apiserver.demo" >> /etc/hosts

Replace x.x.x.x with the actual IP address of your demo-master-a-1 machine.
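
A quick way to confirm that the name now resolves to the master's IP:

ping -c 1 apiserver.demo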

Create ./kubeadm-config.yaml

# Run only on the master node
cat <<EOF > ./kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
controlPlaneEndpoint: "apiserver.demo:6443"
networking:
  podSubnet: "10.100.0.1/20"
EOF

The podSubnet CIDR must not overlap with the subnet the nodes themselves are on.
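
To see which subnet the nodes are using before choosing podSubnet (a quick check):

ip -4 addr show | grep inet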

Initialize the apiserver

# Run only on the master node
kubeadm init --config=kubeadm-config.yaml --upload-certs

The output looks like this:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join apiserver.demo:6443 --token u0xlp4.h495np8khukryo4y \
    --discovery-token-ca-cert-hash sha256:ac8ed3cd588f0ffa2ff4a6724b97545f6c551cae5f7102f0e5ed99d5488b224b \
    --control-plane --certificate-key 791cb26f5adef97ed50917a96c18c227a6a02d823184a2024510bf13f31555b9

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use 
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join apiserver.demo:6443 --token u0xlp4.h495np8khukryo4y \
    --discovery-token-ca-cert-hash sha256:ac8ed3cd588f0ffa2ff4a6724b97545f6c551cae5f7102f0e5ed99d5488b224b 

Initialize the kubectl configuration for the root user

# Run only on the master node
rm -rf /root/.kube/
mkdir /root/.kube/
cp -i /etc/kubernetes/admin.conf /root/.kube/config
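
A quick way to confirm that kubectl can now reach the cluster:

kubectl cluster-info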

Install Calico

# Run only on the master node
kubectl apply -f https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

Wait for Calico to become ready:

Run the following command and wait 3 to 10 minutes, until all pods are in the Running state.

# Run only on the master node
watch kubectl get pod -n kube-system

Press Ctrl+C to exit.
Alternatively:

[root@apiserver ~]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-7b4657785d-vfgfr   1/1     Running   0          2m58s
kube-system   calico-node-bh6dl                          1/1     Running   0          2m58s
kube-system   coredns-6967fb4995-5fg4q                   1/1     Running   0          5m17s
kube-system   coredns-6967fb4995-8v5kb                   1/1     Running   0          5m17s
kube-system   etcd-apiserver.demo                        1/1     Running   0          4m27s
kube-system   kube-apiserver-apiserver.demo              1/1     Running   0          4m20s
kube-system   kube-controller-manager-apiserver.demo     1/1     Running   0          4m19s
kube-system   kube-proxy-nhqd7                           1/1     Running   0          5m16s
kube-system   kube-scheduler-apiserver.demo              1/1     Running   0          4m23s

Check the result of master initialization

Run on the master node demo-master-a-1.

# Run only on the master node
kubectl get nodes
[root@apiserver ~]# kubectl get nodes
NAME             STATUS   ROLES    AGE     VERSION
apiserver.demo   Ready    master   3m35s   v1.15.1

Initialize the worker nodes

Get the join command parameters

Run on the master node demo-master-a-1.

# Run only on the master node
kubeadm token create --print-join-command
[root@apiserver ~]# kubeadm token create --print-join-command
kubeadm join apiserver.demo:6443 --token 7fzhfs.0erm02gl2ycf5wuw     --discovery-token-ca-cert-hash sha256:ac8ed3cd588f0ffa2ff4a6724b97545f6c551cae5f7102f0e5ed99d5488b224b 

Initialize the workers

Run on all worker nodes.

# Run only on the worker nodes
echo "x.x.x.x  apiserver.demo" >> /etc/host
kubeadm join apiserver.demo:6443 --token 7fzhfs.0erm02gl2ycf5wuw     --discovery-token-ca-cert-hash sha256:ac8ed3cd588f0ffa2ff4a6724b97545f6c551cae5f7102f0e5ed99d5488b224b 
  • Replace x.x.x.x with the actual IP of demo-master-a-1

  • Replace the parameters of the kubeadm join command with the ones actually obtained from the demo-master-a-1 node in the previous step

Check the initialization result

Run on the master node demo-master-a-1.

# Run only on the master node
kubectl get nodes
[root@apiserver ~]# kubectl get nodes
NAME                    STATUS   ROLES    AGE     VERSION
apiserver.demo          Ready    master   6m46s   v1.15.1
localhost.localdomain   Ready    <none>   30s     v1.15.1

At this point the new node's ROLES column shows <none>; set the node's role by labeling it:

[root@apiserver ~]# kubectl label node localhost.localdomain node-role.kubernetes.io/worker=worker
node/localhost.localdomain labeled
[root@apiserver ~]# kubectl get nodes
NAME                    STATUS   ROLES    AGE   VERSION
apiserver.demo          Ready    master   40m   v1.15.1
localhost.localdomain   Ready    worker   34m   v1.15.1

Check that all pods are running normally:

[root@apiserver ~]# kubectl get pods -A
NAMESPACE       NAME                                       READY   STATUS    RESTARTS   AGE
kube-system     calico-kube-controllers-7b4657785d-vfgfr   1/1     Running   0          12m
kube-system     calico-node-bh6dl                          1/1     Running   0          12m
kube-system     calico-node-zn6pz                          1/1     Running   0          9m16s
kube-system     coredns-6967fb4995-5fg4q                   1/1     Running   0          15m
kube-system     coredns-6967fb4995-8v5kb                   1/1     Running   0          15m
kube-system     etcd-apiserver.demo                        1/1     Running   0          14m
kube-system     kube-apiserver-apiserver.demo              1/1     Running   0          14m
kube-system     kube-controller-manager-apiserver.demo     1/1     Running   0          14m
kube-system     kube-proxy-gpcvm                           1/1     Running   0          9m16s
kube-system     kube-proxy-nhqd7                           1/1     Running   0          15m
kube-system     kube-scheduler-apiserver.demo              1/1     Running   0          14m
nginx-ingress   nginx-ingress-nshwh                        1/1     Running   0          5m28s

The installation of the Kubernetes cluster is finally complete.

Reference: https://www.kubernetes.org.cn/5650.html

Next, install Kuboard.

k8s command completion:

$ yum install -y bash-completion
$ locate bash_completion
/usr/share/bash-completion/bash_completion
$ source /usr/share/bash-completion/bash_completion
$ source <(kubectl completion bash)
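
To make the completion persistent for new shells, append it to the root user's ~/.bashrc (a minimal sketch):

$ echo 'source <(kubectl completion bash)' >> ~/.bashrc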