[Kubernetes] Installing and Deploying a k8s Cluster

Table of Contents

I. Environment Information

1. System Information

2. Virtual Machine Information

3. Virtual Machine Resources

II. Detailed Installation Steps

1. Basic setup on all three hosts

(1) Disable the firewall

(2) Disable SELinux

(3) Disable the swap partition

(4) Configure time synchronization

(5) Add host name mappings

(6) Set up passwordless SSH authentication

(7) Set bridge networking parameters

(8) Install docker, kubeadm, and kubelet

2. Deploy Kubernetes on the master host

(1) Run the init command

(2) Set an environment variable to use the kubectl tool

(3) Configure kubectl for non-root users

(4) View the current images and containers

(5) View the cluster nodes

(6) Install the Pod network plugin (CNI)

3. Join the worker nodes to the cluster

(1) Run the previously saved join command on each worker node

(2) View the cluster information from the master host


I. Environment Information

1. System Information

Item        Version
Linux OS    CentOS Linux release 7.9.2009 (Core)

2. Virtual Machine Information

Host IP           Hostname   Role
192.168.230.21    master     master node
192.168.230.22    node01     worker node
192.168.230.23    node02     worker node

3. Virtual Machine Resources

Host     CPU   Memory
master   2C    2G
node01   2C    2G
node02   2C    2G

II. Detailed Installation Steps

1. Basic setup on all three hosts

        Note: run the following steps on every host.

(1) Disable the firewall

#stop temporarily

systemctl stop firewalld

#disable permanently (do not start on boot)

systemctl disable firewalld
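
An optional check that the firewall is stopped and will not start on boot:

#verify (optional)
systemctl is-active firewalld
systemctl is-enabled firewalld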

(2) Disable SELinux

#disable permanently (takes effect after a reboot)

sed -i 's/enforcing/disabled/' /etc/selinux/config

#disable temporarily

setenforce 0
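
An optional check of the SELinux state; it should report Permissive after setenforce 0, and Disabled after rebooting with the updated config:

#verify (optional)
getenforce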

(3) Disable the swap partition

        Kubernetes requires swap to be off: the kubelet will not start with swap enabled by default, and disabling it avoids performance degradation from swapping.

#disable permanently (comment out the swap entry in /etc/fstab)

sed -ri 's/.*swap.*/#&/' /etc/fstab

#disable temporarily

swapoff -a
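
An optional check that swap is now off; the Swap line should show 0:

#verify (optional)
free -h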

(4) Configure time synchronization

yum -y install chrony
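
Installing the package does not start the service. Assuming the default chrony configuration (which points at the CentOS NTP pool), enable and start chronyd, then confirm the time sources are reachable:

systemctl enable --now chronyd

#check the configured time sources
chronyc sources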

(5) Add host name mappings

cat >> /etc/hosts << EOF
192.168.230.21 master
192.168.230.22 node01
192.168.230.23 node02
EOF

(6) Set up passwordless SSH authentication

(See: Linux SSH passwordless login)

ssh-keygen

ssh-copy-id master

ssh-copy-id node01

ssh-copy-id node02
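
An optional check that passwordless login works; each command should print the remote hostname without asking for a password:

ssh node01 hostname

ssh node02 hostname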

(7) Set bridge networking parameters

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

#apply the settings

sysctl --system
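
On a minimal CentOS 7 install the br_netfilter kernel module may not be loaded, and the two bridge sysctls above cannot take effect without it; loading the module explicitly (and persisting it across reboots) is a safe extra step:

#load the bridge netfilter module
modprobe br_netfilter

#optional: load the module automatically at boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf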

(8) Install docker, kubeadm, and kubelet

a. Install Docker first

Kubernetes uses Docker as its default CRI (container runtime) here, so install Docker first.

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

yum -y install docker-ce

systemctl enable --now docker

#check the docker version

docker --version

cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://lngv2rof.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
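
Restart Docker so the daemon.json settings take effect, in particular the systemd cgroup driver (which matches the kubelet default); then confirm the active driver:

systemctl restart docker

#the output should show "Cgroup Driver: systemd"
docker info | grep -i cgroup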

b. Add the Aliyun Kubernetes YUM repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

c. Install kubeadm, kubelet, and kubectl

yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0

#do not start kubelet after installation; just enable it to start on boot
systemctl enable kubelet
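
An optional sanity check that all three components were installed at the pinned 1.20.0 version:

kubeadm version -o short

kubelet --version

kubectl version --client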

2. Deploy Kubernetes on the master host

        Note: run this only on the master host.

(1) Run the init command

[root@master ~]# kubeadm init --apiserver-advertise-address=192.168.230.21 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.20.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.230.21]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.230.21 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.230.21 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.016110 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: agylrf.iu6n421bqvm6bvc4
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.230.21:6443 --token agylrf.iu6n421bqvm6bvc4 \
    --discovery-token-ca-cert-hash sha256:25a22171490198da4c3627716c9c80cc64ee93233e5c627c0def513e8dc195b3 

Save this command; it will be used later when adding the worker nodes:

kubeadm join 192.168.230.21:6443 --token agylrf.iu6n421bqvm6bvc4 \
    --discovery-token-ca-cert-hash sha256:25a22171490198da4c3627716c9c80cc64ee93233e5c627c0def513e8dc195b3 
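
Note: the bootstrap token in this command is valid for 24 hours by default. If it expires before a worker joins, generate a fresh join command on the master:

kubeadm token create --print-join-command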

(2) Set an environment variable to use the kubectl tool

echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' > /etc/profile.d/k8s.sh
source /etc/profile.d/k8s.sh

(3) Configure kubectl for non-root users

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes

(4) View the current images and containers

[root@master ~]#  docker images 
REPOSITORY                                                        TAG        IMAGE ID       CREATED       SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.20.0    10cc881966cf   2 years ago   118MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.20.0    ca9843d3b545   2 years ago   122MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.20.0    b9fa1895dcaa   2 years ago   116MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.20.0    3138b6e3d471   2 years ago   46.4MB
registry.aliyuncs.com/google_containers/etcd                      3.4.13-0   0369cf4303ff   2 years ago   253MB
registry.aliyuncs.com/google_containers/coredns                   1.7.0      bfe3a36ebd25   2 years ago   45.2MB
registry.aliyuncs.com/google_containers/pause                     3.2        80d28bedfe5d   2 years ago   683kB

[root@master ~]# docker ps
CONTAINER ID   IMAGE                                               COMMAND                  CREATED         STATUS         PORTS     NAMES
4db51a6a72ae   10cc881966cf                                        "/usr/local/bin/kube…"   3 minutes ago   Up 3 minutes             k8s_kube-proxy_kube-proxy-k6dgr_kube-system_1188158c-64b3-454f-a9aa-5a1c2100c101_0
bfd351d964ab   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 3 minutes ago   Up 3 minutes             k8s_POD_kube-proxy-k6dgr_kube-system_1188158c-64b3-454f-a9aa-5a1c2100c101_0
2c666baf0d57   b9fa1895dcaa                                        "kube-controller-man…"   4 minutes ago   Up 4 minutes             k8s_kube-controller-manager_kube-controller-manager-master_kube-system_5c575d17517839b576ab4817fd06353f_0
8c099e43b023   3138b6e3d471                                        "kube-scheduler --au…"   4 minutes ago   Up 4 minutes             k8s_kube-scheduler_kube-scheduler-master_kube-system_0378cf280f805e38b5448a1eceeedfc4_0
6ad12ff997b5   ca9843d3b545                                        "kube-apiserver --ad…"   4 minutes ago   Up 4 minutes             k8s_kube-apiserver_kube-apiserver-master_kube-system_5714a27152f7d4a8a1e6655759ad2204_0
da55405670a7   0369cf4303ff                                        "etcd --advertise-cl…"   4 minutes ago   Up 4 minutes             k8s_etcd_etcd-master_kube-system_bcfd2c30df426a8872a94df4d057316d_0
89b80ccabb6b   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 4 minutes ago   Up 4 minutes             k8s_POD_kube-controller-manager-master_kube-system_5c575d17517839b576ab4817fd06353f_0
a0dde947949b   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 4 minutes ago   Up 4 minutes             k8s_POD_etcd-master_kube-system_bcfd2c30df426a8872a94df4d057316d_0
9c17ae8840d0   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 4 minutes ago   Up 4 minutes             k8s_POD_kube-scheduler-master_kube-system_0378cf280f805e38b5448a1eceeedfc4_0
4ad98ebbe552   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 4 minutes ago   Up 4 minutes             k8s_POD_kube-apiserver-master_kube-system_5714a27152f7d4a8a1e6655759ad2204_0
[root@master ~]# 

(5) View the cluster nodes

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES                  AGE     VERSION
master   NotReady   control-plane,master   4m26s   v1.20.0

Note: "NotReady" means the node is not ready yet; it will change to Ready once the Pod network plugin is installed in the next step.

(6) Install the Pod network plugin (CNI)

[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Note: if the URL cannot be reached, open the link in a browser, copy its contents into a local file (for example flannel.yaml), and apply that file instead:

kubectl apply -f flannel.yaml
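
Either way, after applying the manifest you can watch the flannel and CoreDNS Pods start up (depending on the manifest version, the flannel DaemonSet lands in the kube-system or kube-flannel namespace):

kubectl get pods -A -o wide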

3. Join the worker nodes to the cluster

(1) Run the previously saved join command on each worker node

kubeadm join 192.168.230.21:6443 --token agylrf.iu6n421bqvm6bvc4 \
    --discovery-token-ca-cert-hash sha256:25a22171490198da4c3627716c9c80cc64ee93233e5c627c0def513e8dc195b3 

(2) View the cluster information from the master host

[root@master ~]#  kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   78m   v1.20.0
node01   Ready    <none>                 71m   v1.20.0
node02   Ready    <none>                 71m   v1.20.0
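
The worker nodes show <none> under ROLES; this is cosmetic only, but they can optionally be given the conventional worker role label:

kubectl label node node01 node-role.kubernetes.io/worker=

kubectl label node node02 node-role.kubernetes.io/worker=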