k8s in Practice (1): Installing the Kubernetes Environment

Introduction

The k8s era is in full swing, so starting today I'm going to learn k8s step by step. Before the hands-on part, two practical issues I ran into while installing k8s deserve a mention.
The first is versioning. The bluntest way to understand k8s is as a tool for managing docker containers, so keeping the docker and k8s versions compatible matters a great deal; overlook this and it can leave you tearing your hair out. As for where to look compatibility up, I haven't found an authoritative source. Some people suggest the Changelog in the kubernetes GitHub repo, but some versions appear there and others don't.
The second is the deployment method. I do not deploy k8s from binary files here; if you'd rather deploy from binaries, please see that expert's dedicated article instead.

Basic Kubernetes Concepts

What is Kubernetes?

Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

Kubernetes Components

A Kubernetes cluster consists of the components that represent the control plane and a set of machines called nodes.

The Kubernetes API

The Kubernetes API lets you query and manipulate the state of objects in Kubernetes. The core of Kubernetes’ control plane is the API server and the HTTP API that it exposes. Users, the different parts of your cluster, and external components all communicate with one another through the API server.
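As a concrete illustration once the cluster built below is running, you can talk to the API server directly through kubectl; a minimal sketch:

# raw access to the HTTP API (authenticated via your kubeconfig)
kubectl get --raw='/api/v1/namespaces'
# or run a local authenticated proxy and use plain HTTP against it
kubectl proxy --port=8001 &
curl http://localhost:8001/api/v1/nodes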

Node Architecture

[Figure: node architecture diagram]

Container Runtime

The container runtime is responsible for managing the life cycle of each container running in the node. After a pod is scheduled on the node, the runtime pulls the images specified by the pod from the registry. When a pod is terminated, the runtime kills the containers that belong to the pod. Kubernetes may communicate with any Open Container Initiative (OCI)-compliant container runtime, including Docker and CRI-O.
The OCI is a standard that defines the runtime specification and image specification, with a goal to drive standardization of container runtimes and image formats.

Kubelet

The Kubelet is the Kubernetes agent whose responsibility is to interact with the container runtime to perform operations such as starting, stopping and maintaining containers.
Each kubelet also monitors the state of the pods. When a pod does not meet the desired state as defined by the deployment, it may be restarted on the same node. The node’s status is transmitted to the head every few seconds via heartbeat messages. If the head detects a node failure, the replication controller observes this state change and schedules the pods on other healthy nodes.

Kube-Proxy

The kube-proxy component is implemented as a network proxy and a load balancer that orchestrates the network to route requests to the appropriate pods. It routes traffic to the appropriate pod based on the associated service name and the port number of an incoming request. It also takes advantage of OS-specific networking capabilities by manipulating the policies and rules defined through iptables. Each kube-proxy component may be integrated with network layers such as Calico and Flannel.
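Once the cluster below is up, you can see this at work by listing the NAT chains kube-proxy programs on a node (iptables mode; KUBE-SERVICES is the entry chain kube-proxy creates):

sudo iptables -t nat -L KUBE-SERVICES -n | head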

Logging Layer

The orchestrator makes frequent use of logging as a means for gathering resource usage and performance metrics for containers on each node, such as CPU, memory, file and network usage. The Cloud Native Computing Foundation hosts a software component that provides a unified logging layer for use with Kubernetes or other orchestrators, called Fluentd. This component generates metrics that the Kubernetes head controller needs in order to keep track of available cluster resources, as well as the health of the overall infrastructure.

Simplified Diagram of the k8s Cluster Architecture

[Figure: simplified k8s cluster architecture]

Pods

Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.
A Pod (as in a pod of whales or pea pod) is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers. A Pod’s contents are always co-located and co-scheduled, and run in a shared context. A Pod models an application-specific “logical host”: it contains one or more application containers which are relatively tightly coupled. In non-cloud contexts, applications executed on the same physical or virtual machine are analogous to cloud applications executed on the same logical host.

As well as application containers, a Pod can contain init containers that run during Pod startup. You can also inject ephemeral containers for debugging if your cluster offers this.
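As a quick taste of what a Pod spec looks like (the cluster itself is built later in this article; the pod name and image here are just examples), a minimal single-container Pod applied inline:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: web
    image: nginx:1.17
EOF
kubectl get pod demo-pod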

Deployments

A Deployment provides declarative updates for Pods and ReplicaSets.

You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
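For example, a minimal Deployment that keeps two nginx replicas running (names and image are illustrative); its ReplicaSet recreates a pod if you delete one:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deploy
  labels:
    app: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: web
        image: nginx:1.17
EOF
kubectl get deployments,replicasets,pods -l app=demo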


DNS

While the other addons are not strictly required, all Kubernetes clusters should have cluster DNS, as many examples rely on it.

Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services.

Containers started by Kubernetes automatically include this DNS server in their DNS searches.
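You can verify cluster DNS from inside a pod once the cluster below is running (busybox:1.28 is commonly used here because nslookup in newer busybox images is unreliable):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default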

Service

While the control plane and the worker nodes form the core cluster infrastructure, the workloads are the containerized applications deployed in Kubernetes.
After developing and testing a microservice, the developers package it as a container, which is the smallest unit of deployment packaged as a pod. A set of containers belonging to the same application is grouped, packaged, deployed and managed within Kubernetes.
Kubernetes exposes primitives for deployment, while constantly scaling, discovering, and monitoring the health of these microservices. Namespaces are typically used to logically separate one application from the other. They act as a logical cluster by providing a well-defined boundary and scope for all resources and services belonging to an application.
Within a namespace, the following Kubernetes primitives are deployed:

[Figure: Kubernetes primitives deployed within a namespace]
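A minimal sketch of that boundary in practice: create a namespace, run a deployment inside it, and expose it as a service (all names below are examples):

kubectl create namespace demo-app
kubectl create deployment web --image=nginx:1.17 -n demo-app
kubectl expose deployment web --port=80 -n demo-app
kubectl get all -n demo-app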

Controller

In Kubernetes, controllers augment pods by adding additional capabilities, such as desired configuration state and runtime characteristics.
A deployment brings declarative updates to pods. It guarantees that the desired state is always maintained by tracking the health of the pods participating in the deployment. Each deployment manages a ReplicaSet, which maintains a stable set of replica pods running at any given time, as defined by the desired state.
Deployments bring PaaS-like capabilities to pods through scaling, deployment history and rollback features. When a deployment is configured with a minimum replica count of two, Kubernetes ensures that at least two pods are always running, which brings fault tolerance. Even when deploying the pod with just one replica, it is highly recommended to use a deployment controller instead of a plain vanilla pod specification.
A statefulset is similar to a deployment, but is meant for pods that need persistence and a well-defined identifier and guaranteed order of creation. For workloads such as database clusters, a statefulset controller will create a highly available set of pods in a given order that have a predictable naming convention. Stateful workloads that need to be highly available, such as Cassandra, Kafka, ZooKeeper, and SQL Server, are deployed as statefulsets in Kubernetes.
To force a pod to run on every node of the cluster, a DaemonSet controller can be used. Since Kubernetes automatically schedules a DaemonSet in newly provisioned worker nodes, it becomes an ideal candidate to configure and prepare the nodes for the workload. For example, if an existing network file system (NFS) or Gluster file share has to be mounted on the nodes before deploying the workload, it is recommended to package and deploy the pod as a DaemonSet. Monitoring agents are good candidates to be used as a DaemonSet, to ensure that each node runs the monitoring agent.
For batch processing and scheduling jobs, pods can be packaged for a run-to-completion job or a cron job. A job creates one or more pods and ensures that a specified number of them successfully terminate. Pods configured for run to completion execute the job and exit, while a cron job will run a job based on the schedule defined in the crontab format.
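For instance, a one-shot run-to-completion Job can be created straight from the command line (name, image and command are illustrative):

kubectl create job demo-job --image=busybox:1.28 -- /bin/sh -c 'echo processing batch; sleep 5'
kubectl get jobs
kubectl logs job/demo-job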
Controllers define the life cycle of pods based on the workload characteristics and their execution context.
Now that we understand the basics of the Kubernetes control plane and how applications run on Kubernetes, it’s time to talk about service discovery to better understand how production workloads run in Kubernetes.

Preparation

Three CentOS 7 virtual machines, all running the same OS version.

Network Settings

All three virtual machines use Network Address Translation (NAT) as their network mode.
In the network configuration file, both the IP address and the MAC address need to be modified; a typical configuration file looks like this:

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
#BOOTPROTO=dhcp
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=a32c82c0-4b1a-489c-bc08-1ffbed671f3f
DEVICE=ens33
ONBOOT=yes
GATEWAY=192.168.177.2
IPADDR=192.168.177.180
NETMASK=255.255.255.0
MACADDR=00:07:E9:05:E8:B4
#set DNS servers, otherwise docker cannot fetch resources from the network
DNS1=8.8.8.8
DNS2=114.114.114.114
PEERDNS=no

ZONE=public
PREFIX=24
CONNECTION_METERED=no

Configure the other two machines the same way: change MACADDR and IPADDR, and it is best to change the UUID as well. A sketch of applying and checking the change follows below.
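After editing the file, restart networking and verify on CentOS 7 (interface name ens33 as in the file above):

# reload the configuration and restart the legacy network service
systemctl restart network
# confirm the address took effect
ip addr show ens33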

Firewall Settings

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux

sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

Disable Swap

# temporary
swapoff -a
# permanent
sed -ri 's/.*swap.*/#&/' /etc/fstab
# verify
free -g
## swap must be 0

Map Hostnames to IP Addresses

192.168.177.180 k8smaster
192.168.177.181 k8snode1
192.168.177.182 k8snode2
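These mappings belong in /etc/hosts on all three machines, for example:

cat >> /etc/hosts << EOF
192.168.177.180 k8smaster
192.168.177.181 k8snode1
192.168.177.182 k8snode2
EOF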

Taking the master node as an example:

hostnamectl set-hostname k8smaster
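And on the other two machines respectively:

hostnamectl set-hostname k8snode1
hostnamectl set-hostname k8snode2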

Pass Bridged IPv4 Traffic to iptables Chains

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
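If sysctl reports that these net.bridge keys do not exist, the br_netfilter kernel module is likely not loaded yet; load it and re-run sysctl:

modprobe br_netfilter
sysctl --system
# verify the setting took effect
sysctl net.bridge.bridge-nf-call-iptables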

Synchronize Time

### check the current time
date
yum install -y ntpdate
ntpdate time.windows.com

Installing k8s

It is said that k8s drops Docker support entirely (the dockershim was deprecated around v1.20 and removed in a later release), standardizing on the CRI (Container Runtime Interface), with containerd (a graduated CNCF project) as the replacement. This article nevertheless sticks with docker for the walkthrough. First, pin the versions.
Docker version:

yum list docker-ce --showduplicates | sort -r

Docker version: 19.03.6
Kubernetes version: this walkthrough uses kubernetes 1.17.3 (matching the packages installed below)

Installing Docker

Remove previous docker versions:
sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
Install the required dependencies:
sudo yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
Point yum at the docker repo:
 sudo yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Specify the version when installing docker:
yum list docker-ce --showduplicates | sort -r
# install docker pinned to a specific version
#sudo yum install docker-ce-3:19.03.6-3.el7 docker-ce-cli-3:19.03.6-3.el7 containerd.io
# note: the standalone docker-ce-selinux package was discontinued after 17.03; containerd.io replaces it here
sudo yum install -y --setopt=obsoletes=0 docker-ce-19.03.6 docker-ce-cli-19.03.6 containerd.io
Configure a docker registry mirror:
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://a61nkhlq.mirror.aliyuncs.com"]
}
EOF
# reload the systemd daemon
sudo systemctl daemon-reload
# restart docker and enable it at boot
sudo systemctl restart docker
sudo systemctl enable docker

Installing Kubernetes

Remove old kubernetes versions

Uninstall:

kubeadm reset -f
modprobe -r ipip
lsmod
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
rm -rf /var/etcd
yum clean all
yum remove kube*
Check the versions available in the yum repo:
yum list|grep kube
Add the Aliyun yum repo:
 cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install:
yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3
systemctl enable kubelet
systemctl start kubelet


Deploying the k8s Master

Before Initialization

k8s needs to pull a number of images during initialization, so run the pull commands below first. The loop can be saved as an executable shell script (see the usage sketch after it); note that it only needs to run on the master node.

images=(
        kube-apiserver:v1.17.3
        kube-proxy:v1.17.3
        kube-controller-manager:v1.17.3
        kube-scheduler:v1.17.3
        coredns:1.6.5
        etcd:3.4.3-0
        pause:3.1
)

for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
#   docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName  k8s.gcr.io/$imageName
done
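A sketch of running it, assuming the loop above was saved as pull-images.sh (the file name is arbitrary):

chmod +x pull-images.sh
./pull-images.sh
# confirm the images are present locally
docker images | grep registry.cn-hangzhou.aliyuncs.com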

All the required images are now downloaded.

Initialize the master node:
kubeadm init \
--apiserver-advertise-address=192.168.177.180 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version v1.17.3 \
--service-cidr=192.96.0.0/16 \
--pod-network-cidr=192.244.0.0/16

Follow the prompts printed at the end of kubeadm init:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
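kubectl can now reach the cluster. Note that the master will report NotReady until the pod network in the next step is installed:

kubectl get nodes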

Deploy the Pod Network

There are many pod network options; we use the Flannel network plugin. It is best to download its configuration file first and then apply it.

The configuration file (save it as kube-flannel.yml):
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "192.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        #image: flannelcni/flannel-cni-plugin:v1.0.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        #image: flannelcni/flannel:v0.17.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel:v0.17.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        #image: flannelcni/flannel:v0.17.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel:v0.17.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

Once the configuration file is ready, copy it to the master node; the master uses it to set up the pod network.

Apply the configuration file:
## apply flannel
kubectl apply -f kube-flannel.yml
## to tear it down again
kubectl delete -f kube-flannel.yml
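To confirm the flannel DaemonSet defined above has a pod running on each node:

kubectl get pods -n kube-system -l app=flannel -o wide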

If kubectl reports a warning or an error that it cannot reach the API server, run:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile

The apply succeeded; at this point the pod network configuration is basically complete.
List all namespaces and their pods:

kubectl get ns
kubectl get pods --all-namespaces

Make sure the pods in all namespaces reach the Running state, then check the node status with the command below.
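kubectl get nodes
# the master shows Ready once the flannel pods are running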

Join the Worker Nodes

Run the following command on the two worker nodes (it is printed at the end of the kubeadm init output; the token and hash are specific to your cluster):

kubeadm join 192.168.177.180:6443 --token q5esd5.ysl9kvp55hbedai5 \
    --discovery-token-ca-cert-hash sha256:0c2ba2e9024dfbef1a221750c5527a9ef2c664f19797aba65f113b9512605e4f 
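If the token has expired (kubeadm tokens are valid for 24 hours by default), generate a fresh join command on the master:

kubeadm token create --print-join-command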

Check the node list on the master again; the two worker nodes now show up.

Joined successfully.

Modify Node Role Labels

# add a role label
kubectl label nodes k8snode1 node-role.kubernetes.io/node1=node1
# remove the role label
kubectl label nodes k8snode1 node-role.kubernetes.io/node1-


Watch Pod Progress

watch kubectl get pod -n kube-system -o wide

With that, kubernetes is installed successfully. Next up is deploying real projects on it; see you in the next installment.
