Deploying Kubernetes from Scratch

Preface

Our company recently needed a Ceph distributed storage solution, and the official recommendation is to deploy it on Kubernetes, so I studied and tested a simple Kubernetes deployment.

Reference: Getting Started and Cluster Installation & Deployment

Official Kubernetes documentation: Kubernetes

I. Cluster Setup

1. Cluster Planning

Since this is a test environment, the deployment uses 1 master node and 2 worker nodes.

Role    IP            Hostname    Components
master  192.168.2.74  k8s-master  kube-controller-manager, kube-apiserver, kube-scheduler, docker, etcd
node-1  192.168.2.75  k8s-node-1  kubelet, kube-proxy, cri-dockerd, docker
node-2  192.168.2.67  k8s-node-2  kubelet, kube-proxy, cri-dockerd, docker

2. Operating System Initialization

Kubernetes has certain requirements for the servers, so initialize the operating system first.
Run the following commands on every server.

2.1 Disable the firewall

# Stop the firewall
systemctl stop firewalld
# Disable the firewall at boot
systemctl disable firewalld

2.2 Disable SELinux

The official explanation for disabling SELinux:

  • Setting SELinux to permissive mode by running setenforce 0 and sed … effectively disables it. This is required to allow containers to access the host filesystem, which is needed, for example, for pod networking to work correctly. You have to do this until SELinux support is improved in the kubelet.
  • You can leave SELinux enabled if you know how to configure it, but it may require settings that are not supported by kubeadm.
# Temporarily disable SELinux
setenforce 0
# Permanently disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
# Check SELinux status
# Enforcing: the default; SELinux is running and all policies are enforced
# Permissive: SELinux is running but only logs warnings instead of blocking access to files and resources
# Disabled: SELinux is not running
# After 'setenforce 0' the status is Permissive
getenforce

2.3 Disable swap

With swap enabled, swapping data between memory and disk performs poorly and slows programs down; in addition, the kubelet refuses to start by default while swap is on, so disable it.

# Disable swap temporarily
swapoff -a
# Disable swap permanently
sed -ri 's/.*swap.*/#&/' /etc/fstab
# Check the result
free -h

2.4 Add hosts entries

# Edit the hosts file
vim /etc/hosts
# Add the host entries at the bottom
192.168.2.74 k8s-master
192.168.2.75 k8s-node-1
192.168.2.67 k8s-node-2
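To confirm the entries resolve, a quick sanity check (assuming the other nodes are already up and reachable):

# Verify that the hostnames resolve and the nodes are reachable
ping -c 1 k8s-node-1
ping -c 1 k8s-node-2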

2.5 Time synchronization

ntpdate time.windows.com
# If ntp is not installed, first run: yum -y install ntp
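kubeadm's preflight checks (visible in the init log later) also validate /proc/sys/net/bridge/bridge-nf-call-iptables and /proc/sys/net/ipv4/ip_forward. A minimal sketch of setting them on all nodes, assuming the br_netfilter module is available:

# Load br_netfilter and make it persistent across reboots
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/k8s.conf
# Let bridged traffic pass through iptables and enable IP forwarding
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system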

At this point the servers are basically configured; next, install the container runtime.

3. Install Docker

3.1 Install Docker

# Configure the Aliyun docker-ce repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# If yum-config-manager is missing, install it with: yum -y install yum-utils
# Refresh the cache
yum makecache
# Install
yum install -y docker-ce
# Start
systemctl start docker
# Enable at boot
systemctl enable docker
# Check the version
docker --version

Configure a domestic registry mirror

# Configure a domestic registry mirror and the systemd cgroup driver
cat > /etc/docker/daemon.json <<EOF
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "registry-mirrors": ["https://c3qpbwyr.mirror.aliyuncs.com"]
}
EOF
# Restart docker
systemctl restart docker
# Check docker info
docker info
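kubeadm expects the kubelet and the container runtime to share the same cgroup driver (systemd here); a quick check:

# Confirm Docker now reports the systemd cgroup driver
docker info | grep -i "cgroup driver"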

3.2 Install cri-dockerd

Kubernetes removed its built-in Docker support (dockershim) in version 1.24, so to keep using Docker Engine as the runtime an additional service, cri-dockerd, has to be installed.

Official statement:

Docker Engine does not implement the CRI, which is a requirement for a container runtime to work with Kubernetes. For this reason, an additional service, cri-dockerd, has to be installed. cri-dockerd is a project based on the legacy built-in Docker Engine support that was removed from the kubelet in version 1.24.

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.3/cri-dockerd-0.3.3.amd64.tgz
# If the download is not reachable, download it manually from GitHub and upload it to the server
# Extract
tar -zxvf cri-dockerd-0.3.3.amd64.tgz
cp cri-dockerd/cri-dockerd /usr/bin/
chmod +x /usr/bin/cri-dockerd 
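To confirm the binary is in place and executable (the exact version string depends on the release you downloaded):

cri-dockerd --version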

cri-dockerd source repository
cri-dockerd releases (downloads)

Configure the systemd unit files
The unit files can be found in the GitHub repository: cri-dockerd unit files

Create the cri-docker.service file

cat <<"EOF" > /usr/lib/systemd/system/cri-docker.service
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket
[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF

Create the cri-docker.socket file

cat <<"EOF" > /usr/lib/systemd/system/cri-docker.socket
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service
[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
EOF

Start cri-docker and enable it at boot

systemctl daemon-reload
systemctl enable cri-docker --now
# Check the service status
systemctl status cri-docker
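If the service is active, the CRI socket that kubeadm will use later should exist; a quick check:

# This path must match the criSocket in the kubeadm configuration below
ls -l /var/run/cri-dockerd.sock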

3.3 Deploy Kubernetes with kubeadm

3.3.1 Configure the Aliyun yum repository for Kubernetes
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
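To confirm yum can see the new repository (output varies with mirror state):

yum repolist | grep -i kubernetes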
3.3.2 Install kubeadm, kubelet, and kubectl
# Install
yum install -y kubelet-1.27.2 kubeadm-1.27.2 kubectl-1.27.2
# Enable kubelet at boot
systemctl enable kubelet
# Check the version
kubeadm version
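Optionally, crictl (pulled in as a dependency of kubeadm via cri-tools) can be pointed at the cri-dockerd socket for debugging. A minimal sketch, assuming the default /etc/crictl.yaml config path:

# Point crictl at the cri-dockerd socket configured earlier
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
EOF
# List running containers through the CRI (empty for now)
crictl ps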
3.3.3 Pull the required images (run on the master node)

Creating the cluster pulls quite a few images. To speed up cluster creation and avoid timeouts, pull them in advance.

# List the required images
kubeadm config images list
# Create a script to pull the images
vim images-download.sh
# Copy the following content into the script
#!/bin/bash
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.27.2
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.27.2
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.27.2
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.27.2
docker pull registry.aliyuncs.com/google_containers/pause:3.9
docker pull registry.aliyuncs.com/google_containers/etcd:3.5.7-0
docker pull registry.aliyuncs.com/google_containers/coredns:v1.10.1
# Run the script
sh images-download.sh
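After the script finishes, the pulled images should be visible locally:

# Confirm the control-plane images are present
docker images | grep google_containers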
3.3.4 Initialize the cluster (run on the master node)
# Generate a default init configuration file
kubeadm config print init-defaults > init-default.yaml
# Edit the configuration file as follows
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.2.74 # IP of the master node
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock # use cri-dockerd as the container runtime
  imagePullPolicy: IfNotPresent
  name: k8s-master # hostname of this node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # image repository (Aliyun mirror)
kind: ClusterConfiguration
kubernetesVersion: 1.27.2 # Kubernetes version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12 # service network CIDR
  podSubnet: 10.1.0.0/16 # pod network CIDR
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

# Initialize with the configuration file; this is where most errors occur
kubeadm init --config=init-default.yaml
# If initialization fails, reset before retrying
kubeadm reset --cri-socket unix:///var/run/cri-dockerd.sock
# For common errors, check docker first: 'docker info' should show the registry mirror and the systemd cgroupdriver; restart docker after changing its config
# Check kubelet status: systemctl status kubelet
# View the error log
journalctl -xeu kubelet
# If everything looks fine but the log shows "node master not found", a changed hostname may not have taken effect; reboot the machine

If everything goes well, the output looks like this:

# Output
[init] Using Kubernetes version: v1.27.2
[preflight] Running pre-flight checks
I0615 17:01:49.985161   13129 checks.go:563] validating Kubernetes and kubeadm version
I0615 17:01:49.985241   13129 checks.go:168] validating if the firewall is enabled and active
I0615 17:01:50.054928   13129 checks.go:203] validating availability of port 6443
I0615 17:01:50.055285   13129 checks.go:203] validating availability of port 10259
I0615 17:01:50.055366   13129 checks.go:203] validating availability of port 10257
I0615 17:01:50.055441   13129 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0615 17:01:50.055473   13129 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0615 17:01:50.055500   13129 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0615 17:01:50.055527   13129 checks.go:280] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0615 17:01:50.055560   13129 checks.go:430] validating if the connectivity type is via proxy or direct
I0615 17:01:50.055711   13129 checks.go:469] validating http connectivity to first IP address in the CIDR
I0615 17:01:50.055814   13129 checks.go:469] validating http connectivity to first IP address in the CIDR
I0615 17:01:50.055848   13129 checks.go:104] validating the container runtime
I0615 17:01:50.129578   13129 checks.go:639] validating whether swap is enabled or not
I0615 17:01:50.129677   13129 checks.go:370] validating the presence of executable crictl
I0615 17:01:50.129732   13129 checks.go:370] validating the presence of executable conntrack
I0615 17:01:50.129771   13129 checks.go:370] validating the presence of executable ip
I0615 17:01:50.129808   13129 checks.go:370] validating the presence of executable iptables
I0615 17:01:50.129853   13129 checks.go:370] validating the presence of executable mount
I0615 17:01:50.129914   13129 checks.go:370] validating the presence of executable nsenter
I0615 17:01:50.129954   13129 checks.go:370] validating the presence of executable ebtables
I0615 17:01:50.129993   13129 checks.go:370] validating the presence of executable ethtool
I0615 17:01:50.130054   13129 checks.go:370] validating the presence of executable socat
I0615 17:01:50.130118   13129 checks.go:370] validating the presence of executable tc
I0615 17:01:50.130167   13129 checks.go:370] validating the presence of executable touch
I0615 17:01:50.130216   13129 checks.go:516] running all checks
I0615 17:01:50.163080   13129 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost
        [WARNING Hostname]: hostname "master" could not be reached
        [WARNING Hostname]: hostname "master": lookup master on 114.114.114.114:53: no such host
I0615 17:01:50.174544   13129 checks.go:605] validating kubelet version
I0615 17:01:50.267666   13129 checks.go:130] validating if the "kubelet" service is enabled and active
I0615 17:01:50.289127   13129 checks.go:203] validating availability of port 10250
I0615 17:01:50.289232   13129 checks.go:329] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0615 17:01:50.289311   13129 checks.go:329] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0615 17:01:50.289373   13129 checks.go:203] validating availability of port 2379
I0615 17:01:50.289489   13129 checks.go:203] validating availability of port 2380
I0615 17:01:50.289601   13129 checks.go:243] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0615 17:01:50.289919   13129 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
I0615 17:01:50.289957   13129 checks.go:828] using image pull policy: IfNotPresent
I0615 17:01:50.335655   13129 checks.go:846] image exists: registry.aliyuncs.com/google_containers/kube-apiserver:v1.27.2
I0615 17:01:50.369433   13129 checks.go:846] image exists: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.27.2
I0615 17:01:50.403591   13129 checks.go:846] image exists: registry.aliyuncs.com/google_containers/kube-scheduler:v1.27.2
I0615 17:01:50.443438   13129 checks.go:846] image exists: registry.aliyuncs.com/google_containers/kube-proxy:v1.27.2
I0615 17:01:50.513298   13129 checks.go:846] image exists: registry.aliyuncs.com/google_containers/pause:3.9
I0615 17:01:50.547460   13129 checks.go:846] image exists: registry.aliyuncs.com/google_containers/etcd:3.5.7-0
I0615 17:01:50.596719   13129 checks.go:846] image exists: registry.aliyuncs.com/google_containers/coredns:v1.10.1
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0615 17:01:50.596782   13129 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0615 17:01:50.988491   13129 certs.go:519] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.2.74]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0615 17:01:51.699331   13129 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0615 17:01:51.845572   13129 certs.go:519] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0615 17:01:51.930014   13129 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0615 17:01:52.129704   13129 certs.go:519] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.2.74 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.2.74 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0615 17:01:52.963693   13129 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0615 17:01:53.185714   13129 kubeconfig.go:103] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0615 17:01:53.325105   13129 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0615 17:01:53.467470   13129 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0615 17:01:53.641995   13129 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I0615 17:01:53.746803   13129 kubelet.go:67] Stopping the kubelet
I0615 17:01:53.764478   13129 flags.go:102] setting kubelet hostname-override to "master"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0615 17:01:53.866973   13129 manifests.go:99] [control-plane] getting StaticPodSpecs
I0615 17:01:53.867223   13129 certs.go:519] validating certificate period for CA certificate
I0615 17:01:53.867292   13129 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0615 17:01:53.867301   13129 manifests.go:125] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I0615 17:01:53.867308   13129 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0615 17:01:53.870267   13129 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0615 17:01:53.870290   13129 manifests.go:99] [control-plane] getting StaticPodSpecs
I0615 17:01:53.870518   13129 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0615 17:01:53.870529   13129 manifests.go:125] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I0615 17:01:53.870536   13129 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0615 17:01:53.870543   13129 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0615 17:01:53.870551   13129 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0615 17:01:53.871205   13129 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0615 17:01:53.871221   13129 manifests.go:99] [control-plane] getting StaticPodSpecs
I0615 17:01:53.871412   13129 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0615 17:01:53.871807   13129 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
W0615 17:01:53.871900   13129 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
I0615 17:01:53.872672   13129 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0615 17:01:53.872694   13129 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.528166 seconds
I0615 17:02:03.402334   13129 uploadconfig.go:112] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0615 17:02:03.525402   13129 uploadconfig.go:126] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0615 17:02:03.666273   13129 uploadconfig.go:131] [upload-config] Preserving the CRISocket information for the control-plane node
I0615 17:02:03.666322   13129 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///var/run/cri-dockerd.sock" to the Node API object "master" as an annotation
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0615 17:02:05.037629   13129 clusterinfo.go:47] [bootstrap-token] loading admin kubeconfig
I0615 17:02:05.038328   13129 clusterinfo.go:58] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
I0615 17:02:05.038640   13129 clusterinfo.go:70] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
I0615 17:02:05.092174   13129 clusterinfo.go:84] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I0615 17:02:05.120199   13129 kubeletfinalize.go:90] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0615 17:02:05.121556   13129 kubeletfinalize.go:134] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.2.74:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:fde6a1a064fb50a557318a561e6c5e969b5fba9b9bfb1f61d76b50e10e1dcc86

# Then run the following, as prompted
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Verify
kubectl get nodes

At this point the master node is basically set up; next, join the worker nodes to the cluster.

3.3.5 Join the worker nodes to the cluster (run on the worker nodes)
# Join the cluster; the cri-socket must be specified
kubeadm join 192.168.2.74:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:8a1ab06f48cb546045e80a914acd8a59e5c9d98a511b47104463686fc40c8140 --cri-socket=unix:///var/run/cri-dockerd.sock
# If you lose the join command, run 'kubeadm token create --ttl 0 --print-join-command' on the master node
# --ttl 0 means the token never expires; note: append --cri-socket=unix:///var/run/cri-dockerd.sock to the generated join command
# Output
 [preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
# Check on the master node
kubectl get nodes

NAME         STATUS     ROLES           AGE   VERSION
k8s-node-1   NotReady   <none>          11s   v1.27.2
master       NotReady   control-plane   23m   v1.27.2

Then run the same join command on each remaining node.

3.3.6 Install the network plugin

After all nodes have joined, every node still shows NotReady because no network plugin has been installed yet; install one now.
Calico versions and the Kubernetes versions they support: Calico/Kubernetes compatibility

# 1. First, install the Tigera operator on the cluster.
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
# 2. Download the custom resources needed to configure Calico
curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml -O
# If you get 'curl: (35) Encountered end of file', raw.githubusercontent.com may not be resolving
# Add the following entry to the end of /etc/hosts
185.199.108.133 raw.githubusercontent.com
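# Note (assumption): in the calico v3.26.1 custom-resources.yaml the default IP pool cidr is 192.168.0.0/16;
# since this cluster was initialized with podSubnet 10.1.0.0/16, adjust the Installation resource to match
# before creating it, for example:
sed -i 's#cidr: 192.168.0.0/16#cidr: 10.1.0.0/16#' custom-resources.yaml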
# 3. Create the manifests to install Calico.
kubectl create -f custom-resources.yaml
# Check pod status; eventually every pod should be Running
[root@k8s-master home]# kubectl get pods -A
NAMESPACE          NAME                                       READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-699f6d579b-5vh84          1/1     Running   0          3m22s
calico-apiserver   calico-apiserver-699f6d579b-vhl7c          1/1     Running   0          3m22s
calico-system      calico-kube-controllers-5b648f7946-hxq8v   1/1     Running   0          6m56s
calico-system      calico-node-ksl8w                          1/1     Running   0          6m56s
calico-system      calico-node-zvctd                          1/1     Running   0          6m56s
calico-system      calico-typha-69945ddf9f-tk7gr              1/1     Running   0          6m56s
calico-system      csi-node-driver-7qctm                      2/2     Running   0          6m56s
calico-system      csi-node-driver-rm7r4                      2/2     Running   0          6m56s
kube-system        coredns-7bdc4cb885-2kb8k                   1/1     Running   0          8m25s
kube-system        coredns-7bdc4cb885-kfq7w                   1/1     Running   0          8m25s
kube-system        etcd-k8s-master                            1/1     Running   0          8m39s
kube-system        kube-apiserver-k8s-master                  1/1     Running   0          8m40s
kube-system        kube-controller-manager-k8s-master         1/1     Running   0          8m39s
kube-system        kube-proxy-nhvjd                           1/1     Running   0          7m25s
kube-system        kube-proxy-s6km2                           1/1     Running   0          8m25s
kube-system        kube-scheduler-k8s-master                  1/1     Running   0          8m39s
tigera-operator    tigera-operator-5f4668786-fblzh            1/1     Running   0          7m6s
# Wait for the installation to finish, then check node status; all nodes should be Ready
[root@k8s-master home]# kubectl get node
NAME         STATUS   ROLES           AGE     VERSION
k8s-master   Ready    control-plane   9m44s   v1.27.2
k8s-node-1   Ready    <none>          8m26s   v1.27.2

At this point the cluster deployment is basically complete.

3.4 Install the dashboard

To get a more intuitive view of the cluster, it is common to install the Kubernetes Dashboard.

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
# Edit the configuration
vim recommended.yaml
# Only the two marked fields need to change
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # expose the service outside the cluster
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001  # external port
  selector:
    k8s-app: kubernetes-dashboard
# Then apply it
kubectl create -f recommended.yaml
# The dashboard page is now reachable at https://<node-ip>:30001, but logging in requires a token; create a user
vim dashboard-admin.yaml
# with the following content

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: admin-user
type: kubernetes.io/service-account-token

# Apply it
kubectl create -f dashboard-admin.yaml
# Retrieve the token; paste its value into the login page to enter the dashboard

kubectl -n kubernetes-dashboard describe secret admin-user
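On Kubernetes 1.24 and later, a short-lived login token can also be generated directly instead of reading the Secret (it expires after a default TTL):

# Generate a login token for the admin-user ServiceAccount
kubectl -n kubernetes-dashboard create token admin-user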