1. Introduction to Kubernetes

1.1 Evolution of Application Deployment

Application deployment has gone through three major eras:

  • Traditional deployment: in the early days of the internet, applications were deployed directly on physical machines.

    Pros: simple; no other technology is involved.

    Cons: resource usage boundaries cannot be defined for applications, making it hard to allocate computing resources sensibly, and programs easily interfere with one another.

  • Virtualized deployment: multiple virtual machines run on one physical machine, each VM being an independent environment.

    Pros: program environments do not interfere with each other, providing a degree of security.

    Cons: each VM adds a full operating system, wasting some resources.

  • Containerized deployment: similar to virtualization, but containers share the host operating system.

    Pros:

    Each container gets its own filesystem, CPU, memory, process space, and so on.

    The resources an application needs are packaged with the container and decoupled from the underlying infrastructure.

    Containerized applications can be deployed across cloud providers and across Linux distributions.


Containerized deployment brings many conveniences, but it also raises new problems, for example:

  • When one container crashes and stops, how do you immediately start another container to take its place?
  • When concurrent traffic grows, how do you scale out the number of containers?

These container-management problems are collectively called container orchestration problems, and several orchestration tools emerged to solve them:

  • Swarm: Docker's own container orchestration tool
  • Mesos: an Apache tool for unified resource management, used together with Marathon
  • Kubernetes: Google's open-source container orchestration tool

1.2 Kubernetes Overview


Kubernetes is a leading distributed-architecture solution built on container technology. It is an open-source version of Borg, the system Google kept as a closely guarded secret for more than a decade. Its first version was released in September 2014, and the first official release followed in July 2015.

In essence, Kubernetes is a cluster of servers that runs specific programs on each node to manage the containers on that node. Its goal is to automate resource management, and it mainly provides the following features:

  • Self-healing: once a container crashes, a new container is started in about a second to replace it
  • Elastic scaling: the number of running containers in the cluster can be adjusted automatically as needed (see the sketch after this list)
  • Service discovery: a service can automatically find the services it depends on
  • Load balancing: if a service runs multiple containers, requests are automatically load-balanced across them
  • Version rollback: if a newly released version turns out to be broken, you can immediately roll back to the previous version
  • Storage orchestration: storage volumes can be created automatically based on a container's own needs
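As a quick, hypothetical taste of elastic scaling and version rollback (the Deployment name web, container name nginx, and image tag are all assumptions, not from this cluster):

# Scale the hypothetical Deployment "web" out to 5 replicas
kubectl scale deployment web --replicas=5

# Roll out a new image, then roll back if the new version is broken
kubectl set image deployment/web nginx=nginx:1.25   # "nginx" is the assumed container name
kubectl rollout undo deployment/web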

1.3 Kubernetes Components

A Kubernetes cluster consists of **control-plane nodes (master)** and **worker nodes (node)**, and different components are installed on each kind of node.

master: the cluster's control plane, responsible for cluster decisions (management)

ApiServer: the single entry point for resource operations; it receives user commands and provides authentication, authorization, API registration and discovery, and similar mechanisms

Scheduler: responsible for cluster resource scheduling; it places Pods onto the appropriate node according to the configured scheduling policy

ControllerManager: responsible for maintaining cluster state, e.g. application deployment, failure detection, automatic scaling, and rolling updates

Etcd: responsible for storing information about the various resource objects in the cluster

node: the cluster's data plane, responsible for providing the runtime environment for containers (the workers)

Kubelet: responsible for managing the container lifecycle, i.e. creating, updating, and destroying containers by driving Docker

KubeProxy: responsible for service discovery and load balancing inside the cluster

Docker: responsible for the actual container operations on the node


The following walk-through of deploying an nginx service illustrates how the Kubernetes components interact:

  1. First, note that once the Kubernetes environment starts, both master and node store their own information in the etcd database.

  2. A request to install an nginx service is first sent to the apiServer component on the master node.

  3. The apiServer calls the scheduler component to decide which node the service should be installed on.

    At this point the scheduler reads the information about each node from etcd, chooses one according to its algorithm, and reports the result back to the apiServer.

  4. The apiServer calls the controller-manager to have the chosen node install the nginx service.

  5. The kubelet on that node receives the instruction and tells Docker to start an nginx pod.

    A pod is the smallest unit of operation in Kubernetes; containers must run inside pods.

  6. At this point the nginx service is running. To access nginx, kube-proxy provides a proxy for traffic to the pod.

External users can then access the nginx service in the cluster.

1.4 Kubernetes Concepts

Master: a cluster control node; each cluster needs at least one master node to manage the cluster

Node: a workload node; the master assigns containers to these worker nodes, and Docker on each node runs the containers

Pod: the smallest unit Kubernetes controls; containers always run inside pods, and one pod can hold one or more containers (see the sketch below)

Controller: controllers implement pod management, e.g. starting pods, stopping pods, scaling the number of pods, and so on

Service: the unified entry point through which pods serve the outside world; underneath, a service tracks a set of pods of the same kind

Label: labels are used to classify pods; pods of the same kind carry the same labels

NameSpace: namespaces isolate the runtime environments of pods
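A minimal sketch tying these concepts together, assuming a reachable cluster: a hypothetical Pod carrying a Label, created with kubectl (the names and image are illustrative only):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: web-pod          # a Pod, the smallest unit Kubernetes controls
  namespace: default     # the NameSpace that isolates this pod's environment
  labels:
    app: web             # a Label; Services and Controllers select pods by labels
spec:
  containers:            # one pod can hold one or more containers
  - name: nginx
    image: nginx
EOF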

2. Kubernetes Quick Deployment

kubeadm is a tool released by the official community for quickly deploying a Kubernetes cluster.

With this tool, a Kubernetes cluster can be deployed with two commands:

# Create a Master node
$ kubeadm init

# Join a Node to the current cluster
$ kubeadm join <master-ip>:<port>

1. Installation Requirements

Before starting, the machines used to deploy the Kubernetes cluster must meet the following requirements (a quick verification sketch follows this list):

  • At least 3 machines, running CentOS 7 or later
  • Hardware: 2 GB RAM or more, 2 or more CPUs, 20 GB of disk or more
  • Full network connectivity between all machines in the cluster
  • Outbound internet access, needed for pulling images
  • Swap disabled
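A small pre-flight check (a sketch using standard coreutils) you can run on each host to confirm CPU, memory, swap, and disk:

nproc                                   # CPU count, want >= 2
free -m | awk '/^Mem:/{print $2}'       # total RAM in MB, want >= 2048
free -m | awk '/^Swap:/{print $2}'      # should print 0 once swap is disabled
df -h /                                 # root filesystem size, want >= 20G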

2. Learning Objectives

  1. Install Docker and kubeadm on all nodes
  2. Deploy the Kubernetes Master
  3. Deploy a container network plugin
  4. Deploy the Kubernetes Nodes and join them to the Kubernetes cluster
  5. Deploy the Dashboard web UI to view Kubernetes resources visually

3. Prepare the Environment


| Role       | IP            |
| ---------- | ------------- |
| k8s-master | 192.168.200.5 |
| k8s-node1  | 192.168.200.6 |
| k8s-node2  | 192.168.200.7 |

Preparation

//Disable the firewall, SELinux, and the swap partition on all three hosts
[root@k8s-master ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@k8s-master ~]# getenforce 
Disabled
[root@k8s-master ~]# vi /etc/fstab   // comment out or remove the swap entry
#/dev/mapper/cs-swap     none                    swap    defaults        0 0
[root@k8s-node1 ~]# systemctl disable --now firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@k8s-node1 ~]# setenforce 0
[root@k8s-node1 ~]# vi /etc/selinux/config 
SELINUX=disabled
[root@k8s-node1 ~]# vi /etc/fstab 
#/dev/mapper/cs-swap     none                    swap    defaults        0 0
[root@k8s-node2 ~]# systemctl disable --now firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@k8s-node2 ~]# setenforce 0
[root@k8s-node2 ~]# vi /etc/selinux/config
SELINUX=disabled
[root@k8s-node2 ~]# vi /etc/fstab 
#/dev/mapper/cs-swap     none                    swap    defaults        0 0
//Configure hostname resolution, and on the master enable passing bridged IPv4 traffic to iptables chains.
[root@k8s-master ~]# vi /etc/hosts 
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.200.5 k8s-master
192.168.200.6 k8s-node1
192.168.200.7 k8s-node2
[root@k8s-master ~]# cat > /etc/sysctl.d/k8s.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@k8s-master ~]# sysctl --system   // apply the settings just added
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-coredump.conf ...
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.promote_secondaries = 1
net.core.default_qdisc = fq_codel
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/50-libkcapi-optmem_max.conf ...
net.core.optmem_max = 81920
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...  // seeing this line means it took effect
* Applying /etc/sysctl.conf ...

[root@k8s-node1 ~]# vi /etc/hosts 
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.200.5 k8s-master
192.168.200.6 k8s-node1
192.168.200.7 k8s-node2
[root@k8s-node2 ~]# vi /etc/hosts 
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.200.5 k8s-master
192.168.200.6 k8s-node1
192.168.200.7 k8s-node2
//Time synchronization: install and configure chrony, and enable it at boot. Passwordless SSH: key-based login is set up on the master only; the nodes need no action.
[root@k8s-master ~]# yum -y install chrony
[root@k8s-master ~]# vi /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
pool time1.aliyun.com iburst
[root@k8s-master ~]# systemctl enable --now chronyd 

[root@k8s-node1 ~]# yum -y install chrony
[root@k8s-node1 ~]# vi /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
pool time1.aliyun.com iburst
[root@k8s-node1 ~]# systemctl enable --now chronyd

[root@k8s-node2 ~]# yum -y install chrony
[root@k8s-node2 ~]# vi /etc/chrony.conf 
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
pool time1.aliyun.com iburst
[root@k8s-node2 ~]# systemctl enable --now chronyd
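
Optionally, confirm that each host is actually syncing from the Aliyun pool; chronyc ships with the chrony package:

chronyc sources    # a line starting with ^* marks the currently selected time source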

[root@k8s-master ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.

Enter file in which to save the key (/root/.ssh/id_rsa): Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:K0+Qg6ELCOCncFua8XRWcF395Dc0DxNExpFPcWhVPRs root@k8s-master
The key's randomart image is:
+---[RSA 3072]----+
|.     ..o. ..=*BB|
|o      o  .  .OE=|
|o.o.+ o      ..O*|
|+.oX = .       o*|
|o.= o + S       o|
| . .   o .       |
|  .   . o        |
|       +         |
|        .        |
+----[SHA256]-----+
[root@k8s-master ~]# ssh-copy-id k8s-master
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-master's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-master'"
and check to make sure that only the key(s) you wanted were added.

[root@k8s-master ~]# ssh-copy-id k8s-node1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'k8s-node1 (192.168.200.6)' can't be established.
ECDSA key fingerprint is SHA256:mOMwnT9zILPNfJzswjsbIU0EPLhik6GUW5U09LevHTk.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-node1's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-node1'"
and check to make sure that only the key(s) you wanted were added.

[root@k8s-master ~]# ssh-copy-id k8s-node2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'k8s-node2 (192.168.200.7)' can't be established.
ECDSA key fingerprint is SHA256:IaOph55OGkL9jzNXiGY1yNthXCU1RENqznlx+SRDDF8.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-node2's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-node2'"
and check to make sure that only the key(s) you wanted were added.

After the preparation is complete, reboot all of the hosts.

Install on all nodes: Docker/kubeadm/kubelet

This part must be done on every host; the master is used as the example throughout, and anything host-specific is called out separately.

Kubernetes defaults to Docker as its CRI (container runtime), so install Docker first.

Install Docker on all hosts (including the master); the master is shown here as the example.

//Download the docker-ce repository file
[root@k8s-master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
--2022-09-06 22:33:58--  https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 183.204.47.241, 183.204.47.248, 183.204.47.240, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|183.204.47.241|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2081 (2.0K) [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/docker-ce.repo’

/etc/yum.repos.d/docker- 100%[===============================>]   2.03K  --.-KB/s    in 0s      

2022-09-06 22:33:58 (48.2 MB/s) - ‘/etc/yum.repos.d/docker-ce.repo’ saved [2081/2081]
//Install Docker
[root@k8s-master ~]# yum -y install docker-ce
[root@k8s-master ~]# systemctl enable --now docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.

//Check the version number
[root@k8s-master ~]# docker version
Version:           20.10.17
[root@k8s-node1 ~]# docker version
Version:          20.10.17
[root@k8s-node2 ~]# docker version
Version:           20.10.17
//Configure Docker's registry mirror accelerator
[root@k8s-master ~]# cat > /etc/docker/daemon.json << EOF
> {
>   "registry-mirrors": ["https://erutzc6v.mirror.aliyuncs.com"],
>   "exec-opts": ["native.cgroupdriver=systemd"],
>   "log-driver": "json-file",
>   "log-opts": {
>     "max-size": "100m"
>   },
>   "storage-driver": "overlay2"
> }
> EOF
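
The new daemon.json only takes effect after a restart; a quick follow-up (standard Docker commands) to apply and verify it:

systemctl restart docker        # reload Docker with the new daemon.json
docker info | grep -i cgroup    # should now report the systemd cgroup driver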

2. Add the Aliyun YUM repository for Kubernetes (the k8s tooling)
Adding Aliyun's Kubernetes repository makes the tools easy to install.
All hosts must do this step.

[root@k8s-master ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
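
Optionally, sanity-check the new repo before installing by listing the available kubelet builds:

yum clean all && yum makecache                   # refresh metadata for the new repo
yum list kubelet --showduplicates | tail -5      # confirm the target version (e.g. 1.25.0) is offered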

Install kubeadm, kubelet, and kubectl.
All hosts must do this step.

[root@k8s-master ~]# yum install -y kubelet-1.25.0 kubeadm-1.25.0 kubectl-1.25.0
[root@k8s-master ~]# systemctl enable kubelet  // only enable at boot; don't start it yet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
# Installing version 1.28.0, with the corresponding Docker version:
[root@k8s-master inventory]# docker version
Client: Docker Engine - Community
 Version:           24.0.7

yum install -y kubelet-1.28.0 kubeadm-1.28.0 kubectl-1.28.0

4. Deploy the Kubernetes Master

//Generate the default containerd config. The default image registry is unreachable from mainland China, so point it at the Aliyun mirror instead.
[root@k8s-master ~]# containerd config default > /etc/containerd/config.toml
//Replace registry.k8s.io in the config file with registry.aliyuncs.com/google_containers
[root@k8s-master ~]# vi /etc/containerd/config.toml
:%s#registry.k8s.io#registry.aliyuncs.com/google_containers#
//node1 and node2 need this same generated config file as well (see the sketch below)
//Restart containerd
[root@k8s-master ~]# systemctl restart containerd
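
A non-interactive equivalent of the vi substitution, plus copying the config to both nodes over the passwordless SSH set up earlier (hostnames as defined in /etc/hosts):

sed -i 's#registry.k8s.io#registry.aliyuncs.com/google_containers#g' /etc/containerd/config.toml
for host in k8s-node1 k8s-node2; do
  scp /etc/containerd/config.toml $host:/etc/containerd/config.toml
  ssh $host 'systemctl restart containerd'
done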
[root@k8s-master ~]# kubeadm init \
>   --apiserver-advertise-address=192.168.200.5 \
>   --image-repository registry.aliyuncs.com/google_containers \
>   --kubernetes-version v1.25.0 \
>   --service-cidr=10.96.0.0/12 \
>   --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.25.0
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.200.5]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.200.5 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.200.5 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.503380 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: gfmgjz.cwzmf3gr9cua51m6
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:
//If you want to start using the cluster, run the commands below; this is the procedure for a regular user
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:
//If you are the root user you can do the following instead, but we generally don't,
//because this export is only temporary; we want it permanent (done further below).
  export KUBECONFIG=/etc/kubernetes/admin.conf
//This environment variable tells the system which kubeconfig file to use
You should now deploy a pod network to the cluster.
//You need to deploy a pod network to the cluster using the command below
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
//The command below is to be run on the managed hosts; once executed there, that host joins the cluster.
//Be sure to record this command, e.g. save it to a file or somewhere you'll remember.
kubeadm join 192.168.200.5:6443 --token gfmgjz.cwzmf3gr9cua51m6 \
        --discovery-token-ca-cert-hash sha256:026d1d9daeff9169a196567b59bab6707bcfa307e44eeb5868161a9d74c88d22 
        
[root@k8s-master ~]# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' > /etc/profile.d/k8s.sh
[root@k8s-master ~]# source /etc/profile.d/k8s.sh 
[root@k8s-master ~]# echo $KUBECONFIG
/etc/kubernetes/admin.conf
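
With KUBECONFIG set permanently, verify that kubectl can reach the control plane (standard kubectl commands):

kubectl cluster-info    # prints the control-plane and CoreDNS endpoints
kubectl get nodes       # only k8s-master so far; it stays NotReady until the CNI is installed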

# Deploying the Kubernetes Master for 1.28.0
[root@k8s-master ~]# kubeadm init \
>   --apiserver-advertise-address=192.168.200.5 \
>   --image-repository registry.aliyuncs.com/google_containers \
>   --kubernetes-version v1.28.0 \
>   --service-cidr=10.96.0.0/12 \
>   --pod-network-cidr=10.244.0.0/16

5. Install the Pod Network Plugin (CNI)

//Only the master does this step; the other hosts do not. Before it, make sure the host can reach the quay.io registry, since the images are pulled from Red Hat's official quay.io.
//The content of kube-flannel.yml comes from https://github.com/flannel-io/flannel/blob/master/Documentation/kube-flannel.yml
[root@k8s-master ~]# vi kube-flannel.yml
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
       #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
       #image: flannelcni/flannel:v0.19.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
       #image: flannelcni/flannel:v0.19.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml 
namespace/kube-flannel created  // "created" on each line means that object was created successfully
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
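
Before moving on, you can watch the flannel DaemonSet come up (Ctrl-C stops the watch):

kubectl get pods -n kube-flannel -w    # wait until the kube-flannel-ds pods reach Running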

6. Join the Kubernetes Nodes

Initializing the cluster prints a lot of output, and the last line is the most important: it is a command to be run on the node side, which adds that node host to the k8s cluster.

Run this command on all the nodes, node1 and node2. If the command was lost or its token has expired, it can be regenerated, as shown below.
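
The bootstrap token printed by kubeadm init expires after 24 hours by default; a fresh join command can be generated on the master at any time:

kubeadm token create --print-join-command    # prints a complete "kubeadm join ..." line with a new token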

[root@k8s-node1 ~]# kubeadm join 192.168.200.5:6443 --token gfmgjz.cwzmf3gr9cua51m6 \
>         --discovery-token-ca-cert-hash sha256:026d1d9daeff9169a196567b59bab6707bcfa307e44eeb5868161a9d74c88d22 
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s-node2 ~]# kubeadm join 192.168.200.5:6443 --token gfmgjz.cwzmf3gr9cua51m6 \
>         --discovery-token-ca-cert-hash sha256:026d1d9daeff9169a196567b59bab6707bcfa307e44eeb5868161a9d74c88d22 
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

//Then check the status of the managed nodes on the master
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES           AGE     VERSION
k8s-master   NotReady   control-plane   26m     v1.25.0
k8s-node1    NotReady   <none>          4m19s   v1.25.0
k8s-node2    NotReady   <none>          4m15s   v1.25.0
//Once the flannel network is up, the nodes transition to Ready
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   58m   v1.25.0
k8s-node1    Ready    <none>          36m   v1.25.0
k8s-node2    Ready    <none>          36m   v1.25.0
//Managing containers
Run these on the master.
[root@k8s-master ~]# kubectl get ns  // list all the namespaces
NAME              STATUS   AGE
default           Active   59m
kube-flannel      Active   40m
kube-node-lease   Active   59m
kube-public       Active   59m
kube-system       Active   59m  // the automatically-run system containers all belong here
//Check the status of the existing containers.
//A namespace must be specified when checking container status: "pods" means the pods; -n specifies the namespace
[root@k8s-master ~]# kubectl get pods -n kube-system    
NAME                                 READY   STATUS    RESTARTS        AGE
coredns-c676cc86f-djrc5              1/1     Running   0               6m56s
coredns-c676cc86f-m54hk              1/1     Running   0               7m16s
etcd-k8s-master                      1/1     Running   0               59m
kube-apiserver-k8s-master            1/1     Running   0               59m
kube-controller-manager-k8s-master   1/1     Running   6 (9m45s ago)   59m
kube-proxy-4z8b7                     1/1     Running   0               59m
kube-proxy-gwrzc                     1/1     Running   0               37m
kube-proxy-zl42r                     1/1     Running   0               37m
kube-scheduler-k8s-master            1/1     Running   6 (9m45s ago)   59m
//Check which host each container runs on and what its IP is
[root@k8s-master ~]#  kubectl get pods -n kube-system -o wide
NAME                                 READY   STATUS    RESTARTS      AGE     IP              NODE         NOMINATED NODE   READINESS GATES
coredns-c676cc86f-djrc5              1/1     Running   0             8m20s   10.244.0.5      k8s-master   <none>           <none>
coredns-c676cc86f-m54hk              1/1     Running   0             8m40s   10.244.0.4      k8s-master   <none>           <none>
etcd-k8s-master                      1/1     Running   0             61m     192.168.200.5   k8s-master   <none>           <none>
kube-apiserver-k8s-master            1/1     Running   0             61m     192.168.200.5   k8s-master   <none>           <none>
kube-controller-manager-k8s-master   1/1     Running   7 (57s ago)   61m     192.168.200.5   k8s-master   <none>           <none>
kube-proxy-4z8b7                     1/1     Running   0             60m     192.168.200.5   k8s-master   <none>           <none>
kube-proxy-gwrzc                     1/1     Running   0             39m     192.168.200.6   k8s-node1    <none>           <none>
kube-proxy-zl42r                     1/1     Running   0             39m     192.168.200.7   k8s-node2    <none>           <none>
kube-scheduler-k8s-master            1/1     Running   7 (54s ago)   61m     192.168.200.5   k8s-master   <none>           <none>

7. Test the Kubernetes Cluster

Create a pod in the Kubernetes cluster and verify that it runs normally:

//Create a deployment named nginx using the nginx image
[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
//Expose port 80 of the nginx deployment as a NodePort service.
//What is exposed is the service's port, and the service has its own IP: when we access the container, we actually hit the service's IP address.
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort

[root@k8s-master ~]# kubectl get pod,svc
NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-76d6c9b8c-8dswc   1/1     Running   0          83s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        5m2s
service/nginx        NodePort    10.102.103.165   <none>        80:32236/TCP   77s

//Access test
[root@k8s-master ~]# curl http://10.102.103.165
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
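
Because the service type is NodePort, nginx is also reachable from outside the cluster via any node's IP and the mapped port (32236 in this run; yours will differ):

curl http://192.168.200.5:32236    # the same nginx welcome page, via the master's node IP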