[k8s] Building a Kubernetes Cluster on Virtual Machines

This article walks through configuring a static IP, disabling the firewall and SELinux, and installing Docker and Kubernetes (k8s) on CentOS 7 virtual machines, covering the Aliyun package mirrors, a Docker registry accelerator, kubeadm installation, and network plugin configuration, and ends by testing the k8s environment with a sample deployment.


Preface

When learning k8s, setting up the environment is one of the most troublesome steps. This article describes how to build a k8s cluster on virtual machines, with one master and one worker node. Feel free to use it as a reference; if you would like a copy of the pre-built virtual machine, just leave a comment.

I. Environment Configuration

1. System image

The image I use is CentOS Linux release 7.9.2009 (Core). You can download it from the official site, or leave a comment and ask me for a virtual machine with it already installed.

[root@k8smaster ~]# cat /etc/redhat-release 
CentOS Linux release 7.9.2009 (Core)
2. Environment requirements
  • One or more machines running CentOS 7;
  • Hardware: at least 2 GB of RAM and 2 CPU cores per machine;
  • All machines in the cluster can communicate with each other;
  • All machines can reach the Internet (images need to be pulled);
  • Swap must be disabled.
3. Static IP configuration

My virtual machines use NAT mode; the configuration below is what I use, for reference.

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="49c13795-9323-433a-af08-caacd886f189"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="192.168.16.135"
NETMASK="255.255.255.0"
DNS1="192.168.16.2"

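The file above lives at /etc/sysconfig/network-scripts/ifcfg-ens33, matching the ens33 device name. If outbound access fails with this configuration, you may also need a GATEWAY=192.168.16.2 line, since the VMware NAT gateway usually sits at .2 of the subnet. A minimal sketch for applying and checking the change, assuming the classic CentOS 7 network service is in use:

# apply the new configuration
systemctl restart network
# confirm the address and default route
ip addr show ens33
ip route
# confirm DNS and outbound connectivity
ping -c 3 mirrors.aliyun.com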

4. Disable the firewall

k8s is usually run on an internal network, and this is a learning environment, so simply disable the firewall to avoid networking headaches.

systemctl stop firewalld
systemctl disable firewalld
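A quick check, if you want to confirm the firewall is stopped and will stay off after a reboot:

# should print "inactive" and "disabled" respectively
systemctl is-active firewalld
systemctl is-enabled firewalld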
5. Disable SELinux
# permanent (takes effect after a reboot)
sed -i 's/enforcing/disabled/' /etc/selinux/config
# temporary (current session)
setenforce 0
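To verify: the permanent change only applies after a reboot, while setenforce 0 covers the current session.

# prints "Permissive" now, "Disabled" after a reboot
getenforce
# confirm the persistent setting
grep '^SELINUX=' /etc/selinux/config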
6. Disable swap (kubelet will not start by default while swap is on)
# permanent (comments out the swap line in fstab)
sed -ri 's/.*swap.*/#&/' /etc/fstab
# temporary (current session)
swapoff -a
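To confirm swap is off now and will stay off after a reboot:

# the Swap line should show 0 total
free -h
# should list no active swap devices
swapon -s
# the swap entry in fstab should now be commented out
grep swap /etc/fstab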
7. Configure hosts

Adjust this to your actual environment, mapping the master and worker node IP addresses to their hostnames.

cat >> /etc/hosts << EOF
192.168.16.135 k8smaster
192.168.16.137 k8snode
EOF
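A quick check that the names resolve on each machine:

ping -c 2 k8smaster
ping -c 2 k8snode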
8. Set bridge parameters
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# apply
sysctl --system
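If sysctl --system complains that these two keys do not exist, the br_netfilter kernel module is probably not loaded yet; a sketch for loading it now and at every boot:

# load the bridge netfilter module
modprobe br_netfilter
# load it automatically at boot
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF
# re-apply and verify
sysctl --system
sysctl net.bridge.bridge-nf-call-iptables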
9. Sync the system time
yum install ntpdate -y
ntpdate time.windows.com

II. Docker Installation

10. Add the Docker yum repo (Aliyun mirror)
# optional, only if wget is not installed yet
yum install wget -y

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
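Optionally, you can list the Docker CE versions this repo provides before pinning 19.03.13 in the next step:

yum list docker-ce --showduplicates | sort -r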
11. Install Docker
yum install docker-ce-19.03.13 -y
12. Enable Docker on boot
systemctl enable docker.service
13. Configure a registry mirror (accelerator)

By default Docker pulls images quite slowly; configuring an Aliyun registry mirror (accelerator) speeds this up considerably. A sketch of the configuration is shown below.
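A minimal sketch of the mirror configuration in /etc/docker/daemon.json; <your-id> is a placeholder for the personal accelerator address shown in your Aliyun Container Registry console:

# write the registry mirror config (replace <your-id> with your own accelerator address)
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}
EOF

Then restart Docker so the change takes effect: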

systemctl restart docker.service
14. Basic Docker commands

Check Docker's status:

systemctl status docker.service 

List the images that have been downloaded:

docker images

Pull an image:

docker pull hello-world

Run the image:

[root@k8snode ~]# docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

If you see this output, Docker has been installed successfully.

III. Kubernetes Installation

15. Add the Aliyun YUM repo for Kubernetes
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
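To confirm that the 1.19.4 packages installed in the next step are actually available from this repo, you can list the versions it offers:

yum list kubelet --showduplicates | grep 1.19.4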
16. Install kubeadm, kubelet, and kubectl
yum install kubelet-1.19.4 kubeadm-1.19.4 kubectl-1.19.4 -y
17. Enable kubelet on boot
systemctl enable kubelet.service
18. Check that the installation succeeded
yum list installed | grep kubelet
yum list installed | grep kubeadm
yum list installed | grep kubectl

Check the version:

[root@k8snode ~]# kubelet --version
Kubernetes v1.19.4
19. Mid-way checkpoint

If everything has worked so far, most of the installation is already done. Note that all of the steps above must be performed on every node; a tool such as Xshell can broadcast the same commands to all virtual machines at once.
In addition, some of the settings above only take effect after a reboot, so it is worth rebooting the machines at this point.

20. Initialize the master node
kubeadm init --apiserver-advertise-address=192.168.16.135 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.19.4 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16

192.168.16.135 is the master node's address and must be changed to match your environment; the remaining flags can be left as they are. After init completes, set up kubectl access:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Finally, check the node status:

kubectl get nodes
21. Join the worker node
kubeadm join 192.168.16.135:6443 --token a0n3bj.8o7dhcphidtid5fk \
    --discovery-token-ca-cert-hash sha256:00b608e1314662953a52975c2b5c6c2f4440d2abb255434e459935ba373fa4e8

The command above is only an example; the actual join command (with a valid token and hash) is printed at the end of kubeadm init on the master node.
At this point kubectl get nodes shows every node as NotReady, because no network plugin has been installed yet.
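If the original join command has been lost, or its token (valid for 24 hours by default) has expired, a fresh one can be printed on the master node:

kubeadm token create --print-join-command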

22. Network plugin (flannel)

Save the manifest below as kube-flannel.yml and apply it on the master node:

kubectl apply -f kube-flannel.yml

Wait about a minute after applying it, then check kubectl get nodes again; a couple of verification commands are sketched right after the manifest.

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

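Once the manifest has been applied, you can watch the flannel pods come up and the nodes turn Ready (this manifest places the DaemonSet in the kube-system namespace):

kubectl get pods -n kube-system -l app=flannel
kubectl get nodes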
23. Test

Now that the k8s environment is up, pull the nginx image to test it.

# create a deployment
kubectl create deployment nginx --image=nginx
# expose it through a NodePort service
kubectl expose deployment nginx --port=80 --type=NodePort

Check the assigned port:

[root@k8smaster ~]# kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        2d17h
nginx        NodePort    10.109.58.243   <none>        80:32024/TCP   4s

32024 is the NodePort that can be reached from outside the cluster.
The service is reachable at <node ip>:32024; 192.168.16.137 is the IP of my worker node.
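To see which node the nginx pod landed on and to test the NodePort from the command line (32024 and 192.168.16.137 are the values from my environment; substitute your own):

kubectl get pods -o wide
curl http://192.168.16.137:32024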
