Installing Docker & Kubernetes on CentOS

References: https://developer.aliyun.com/article/748412
https://my.oschina.net/u/4197945/blog/3134071
https://blog.csdn.net/qq_26897321/article/details/124127198

I. Install Docker


1. Base packages

yum install -y yum-utils device-mapper-persistent-data lvm2


2. Set up the Docker repository

yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo


3. Install Docker Engine - Community (this command installs the latest version)

yum install docker-ce docker-ce-cli containerd.io


4. Start Docker

sudo systemctl start docker

systemctl enable docker.service
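Optionally, you can verify the daemon is up before moving on (these assume Docker installed cleanly and require network access for the test image):

```shell
# Check the service state and run a throwaway container as a smoke test
systemctl is-active docker        # should print: active
sudo docker run --rm hello-world  # pulls and runs the Docker test image
```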


5. Configure a Docker registry mirror: add a daemon.json file under /etc/docker/

cd /etc/docker/

Edit the file /etc/docker/daemon.json:

{
   "registry-mirrors" : ["https://mj9kvemk.mirror.aliyuncs.com"],
   "exec-opts":["native.cgroupdriver=systemd"]
}
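The same file can be written in one step with a heredoc (content identical to the block above):

```shell
# Create /etc/docker/daemon.json non-interactively
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
{
  "registry-mirrors": ["https://mj9kvemk.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
```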


6. Restart Docker

systemctl daemon-reload
systemctl restart docker
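After the restart you can confirm both settings took effect (these queries need the daemon to be running):

```shell
# Inspect the live daemon configuration via Go templates
docker info --format '{{.CgroupDriver}}'           # should print: systemd
docker info --format '{{.RegistryConfig.Mirrors}}' # lists the configured mirrors
```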

II. Environment configuration

# Check the hostname

hostnamectl 

# Set each machine's own hostname (optional)

hostnamectl set-hostname xxx


 
# Set SELinux to permissive mode (effectively disabling it)

sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

 
# Disable swap

swapoff -a  
sed -ri 's/.*swap.*/#&/' /etc/fstab
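The sed above comments out every fstab line that mentions swap. Here is what it does on a throwaway copy (sample content, not your real fstab):

```shell
# Build a sample fstab and apply the same sed to it
printf '/dev/sda1 / ext4 defaults 0 0\n/dev/sda2 swap swap defaults 0 0\n' > /tmp/fstab.demo
sed -ri 's/.*swap.*/#&/' /tmp/fstab.demo
cat /tmp/fstab.demo   # the swap line is now prefixed with '#'
```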


 
# Allow iptables to inspect bridged traffic

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
 
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF


# Load parameters from all system configuration files

sysctl --system
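To verify the module and sysctls are active, run the following as root. Note that the modules-load.d file only takes effect at boot, so load the module now as well:

```shell
sudo modprobe br_netfilter                 # load the module immediately
lsmod | grep br_netfilter                  # the module should be listed
sysctl net.bridge.bridge-nf-call-iptables  # should print "= 1"
```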

Disable the firewall:

systemctl stop firewalld.service
systemctl disable firewalld.service

Configure the base package repository. Note: the CentOS 8 mirror has been removed from Aliyun, so use the vault repo instead:

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo

If that repo has problems, switch to this one:

curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum clean all
yum makecache 

Add the Kubernetes repository by creating the file /etc/yum.repos.d/kubernetes.repo:

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

III. Install kubectl, kubelet, and kubeadm

1. Install kubectl, kubelet, and kubeadm

Be sure to pin the version. I use 1.23.1 here; with 1.24.2 the control-plane images would never pull, which cost me a lot of time. (Kubernetes 1.24 also removed dockershim, so it no longer works with the Docker runtime out of the box.)

yum install -y kubelet-1.23.1 kubeadm-1.23.1 kubectl-1.23.1
systemctl enable kubelet
systemctl start kubelet

2. Check the versions

kubeadm version
kubectl version --client
kubelet --version

3. Initialize the Kubernetes cluster

Because of network restrictions in mainland China, kubeadm init will hang pulling images, so pull them from a domestic mirror first:
 

kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers

# 10.206.0.13 is this machine's IP; the other values need no changes
kubeadm init \
  --apiserver-advertise-address=10.206.0.13 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.23.1 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16

Troubleshooting initialization errors:


[init] Using Kubernetes version: v1.24.2
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [ERROR CRI]: container runtime is not running: output: E0623 15:54:44.777054 1088530 remote_runtime.go:925] "Status from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
time="2022-06-23T15:54:44+08:00" level=fatal msg="getting status of runtime: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
, error: exit status 1
        [ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
 

The warning shows the kubelet service is not enabled, so run:

systemctl enable kubelet.service

Then run the initialization command again: kubeadm init


[init] Using Kubernetes version: v1.24.2
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [ERROR CRI]: container runtime is not running: output: E0623 15:55:37.984948 1090481 remote_runtime.go:925] "Status from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
time="2022-06-23T15:55:37+08:00" level=fatal msg="getting status of runtime: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
, error: exit status 1
        [ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Clearly kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml, and etcd.yaml are left over from the failed attempt, so reset first. Moving containerd's config.toml aside also fixes the CRI error: the default config shipped with the containerd.io package disables the CRI plugin, and restarting containerd without that file re-enables it.

kubeadm reset
mv /etc/containerd/config.toml /tmp/
systemctl restart containerd

# Run init again
kubeadm init

If image pulls still fail, see: "The simplest and fastest fix for k8s image pull failures: [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver"

4. Configure the kubeconfig (master node only)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

5. Add worker nodes to the cluster

# On the master node, generate the join token:
kubeadm token create --print-join-command --ttl 0
# On node1 and node2, run the following to join them to the cluster (it is the output of the command above; just copy it):
kubeadm join 192.168.40.136:6443 --token yruyio.n4hal2qdb5iweknf     --discovery-token-ca-cert-hash sha256:0ac7ed632224e1e07cef223b1159d03b2231dfc0456817db7eaf3c8651eef49c
# List all nodes (normally you should see master, node1, and node2)
kubectl get nodes
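If you ever need to rebuild the join command by hand, the --discovery-token-ca-cert-hash value can be recomputed on the master. This is the standard kubeadm procedure; the path below is the kubeadm default:

```shell
# SHA-256 hash of the cluster CA's public key, as used in kubeadm join
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```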

6. Deploy the CNI network plugin on the master node:


# If this link cannot be downloaded, use the address provided below
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
# Check deployment progress (image pulls may fail at first; just wait a while)
kubectl get pods -n kube-system

K8s deployment resource files (kube-flannel + kubernetes-dashboard + ingress), available as a CSDN download

Check that the cluster is healthy:

kubectl get cs
kubectl cluster-info

IV. Deploy the Dashboard

Download the kubernetes-dashboard.yaml file via the link given above.

1. Run the following on the master:

kubectl apply -f kubernetes-dashboard.yaml
# Start the proxy (replace the IP with your own)
kubectl proxy --address=192.168.100.36 --disable-filter=true &

Then visit https://192.168.100.36:30002

2. Access the dashboard via a domain name

In your DNS provider's console, configure resolution for the domain and request an SSL certificate.

Create a certs directory dedicated to SSL certificates:

mkdir certs && cd certs

Put the issued SSL certificate into the certs directory, then continue:

# Delete the old certificate secret
kubectl delete secret kubernetes-dashboard-certs -n kube-system

# Create a secret from the new certificate
kubectl -n kube-system create secret tls kubernetes-dashboard-certs --key k8s-dashboard.ipwe.net.cn.key --cert k8s-dashboard.ipwe.net.cn_bundle.crt

# List the pods and copy the dashboard pod's name
kubectl get pod -n kube-system

# Restart the pod (deleting it triggers an automatic restart); <pod name> is the kubernetes-dashboard-XXXXXXXXXXX found above
kubectl delete pod <pod name> -n kube-system
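Alternatively, newer kubectl versions can bounce the workload without looking up the pod name, assuming the dashboard runs as a Deployment named kubernetes-dashboard (as in the standard manifest):

```shell
# Trigger a rolling restart of the dashboard deployment
kubectl -n kube-system rollout restart deployment kubernetes-dashboard
```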

Get the login token:

kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/cluster-admin/{print $1}')

Log in using the token. Then apply the dashboard Ingress:

kubectl apply -f kubernetes-dashboard-ingress.yaml

The following error appears:

 error when creating "kubernetes-dashboard-ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": dial tcp 10.107.233.56:443: connect: connection refused
 

Solution:

1. Inspect the webhook with the command below:

kubectl get validatingwebhookconfigurations ingress-nginx-admission

2. Delete ingress-nginx-admission:

kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
