Installing Kubernetes with kubeadm

Node planning

Version: 1.20.2

Installation method: kubeadm.

Requirements: the master node needs at least 2 CPUs and 2 GB of RAM.

Node     IP           Components
master   10.0.0.100   docker, kubeadm, kubelet, kubectl, flannel, kube-dashboard
node01   10.0.0.101   docker, kubeadm, kubelet, kubectl, flannel
node02   10.0.0.102   docker, kubeadm, kubelet, kubectl, flannel

master:

kubectl: all Kubernetes operations are issued through kubectl commands.

REST API: the interface Kubernetes exposes to external clients; a graphical UI or kubectl controls Kubernetes by sending requests through the REST API.

scheduler: the scheduler; when a pod is created, for example, the scheduler decides which node the pod is placed on.

controller-manager: watches the health of pods and nodes and keeps pods running as declared; if a pod fails, the controller-manager repairs the situation automatically, for example by starting another pod replica.

kubelet: the agent on each node; instructions issued from the master to a node are relayed to the local components by the kubelet.

etcd: the cluster database; all configuration data is stored in etcd.

kube-proxy: runs on every node; later, when a Service (svc) is created to map pods to the outside world, traffic that arrives at the svc IP is forwarded to the pods by kube-proxy.

node:

kubeadm, kubectl, kubelet, docker

pod: the smallest unit that runs in a Kubernetes cluster; a pod contains one or more containers.
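
For illustration, a minimal pod manifest (the names and image here are arbitrary examples, not part of this install):

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: web            # one container; a pod may list several
    image: nginx:1.19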

Start the installation

Initialize every node

Install common tools

yum -y install wget telnet net-tools lrzsz vim zip unzip

Change the hostnames

Rename the master node to k8s-master and the worker nodes to k8s-node01 and k8s-node02 (commands below), then disable the firewall and SELinux.
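
The rename itself, run on each machine with its own name (names taken from the node plan above):

hostnamectl set-hostname k8s-master   # on 10.0.0.100
hostnamectl set-hostname k8s-node01   # on 10.0.0.101
hostnamectl set-hostname k8s-node02   # on 10.0.0.102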

systemctl stop firewalld && systemctl disable firewalld
setenforce 0
# the stock value in /etc/selinux/config is SELINUX=enforcing
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

Configure host entries

Append the following to /etc/hosts on every node:

cat >> /etc/hosts << EOF
10.0.0.100  k8s-master
10.0.0.101  k8s-node01
10.0.0.102  k8s-node02
EOF

Turn swap off for the current boot

swapoff -a

Disable swap permanently

vim /etc/fstab
# delete (or comment out) the swap auto-mount entry:
#UUID=7dca0bd5-a4c3-4cf9-a22d-3dec16859738 swap     swap    defaults        0 0
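
Equivalently, without opening an editor (a sketch assuming the swap mount is the only uncommented fstab line containing the word swap):

sed -ri '/\bswap\b/s/^([^#])/#\1/' /etc/fstab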

Configure the yum repos (the kubernetes packages in the stock repos are too old)

rm -rf /etc/yum.repos.d/*
cd /etc/yum.repos.d/
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

Configure k8s.repo

vim k8s.repo

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

Clean and rebuild the yum cache

yum clean all && yum makecache

Install docker

yum install -y yum-utils device-mapper-persistent-data lvm2
yum -y install docker

The kernel may not support overlay2, in which case you must either upgrade the kernel or disable the feature that conflicts with it (we disable it here; after installing docker, try starting it first, and skip this step if it starts without errors):

Edit /etc/sysconfig/docker and change --selinux-enabled to --selinux-enabled=false in the OPTIONS line (SELinux support is what breaks the overlay2 storage driver on older kernels).
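
To check whether your setup is affected, start docker once and look at the storage driver actually in use:

systemctl start docker && docker info | grep -i 'storage driver'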

Start the docker service

systemctl start docker && systemctl enable docker

Set the server time zone

timedatectl set-timezone Asia/Shanghai

Set Kubernetes-related kernel parameters

Enable iptables processing of bridged traffic, which Kubernetes networking requires:

# the bridge-nf-call keys only exist once the br_netfilter module is loaded;
# also make the module load on every boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
modprobe br_netfilter

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# note: plain "sysctl -p" only reads /etc/sysctl.conf; --system also loads /etc/sysctl.d/
sysctl --system

Install the Kubernetes packages. Version 1.20.2 is installed here; you can pin whatever version you need, provided the configured yum repo carries it. Run yum list kube* to see the newest versions the repo offers.

yum list kube*


Install kubeadm, kubelet and kubectl

yum -y install kubeadm-1.20.2 kubelet-1.20.2 kubectl-1.20.2

Start and enable the kubelet service (kubelet will keep restarting until kubeadm init runs; that is expected)

systemctl restart kubelet && systemctl enable kubelet

Reboot so that all of the above takes effect

reboot

Initialize the Kubernetes cluster

Run the control-plane initialization on the master node

kubeadm init --apiserver-advertise-address=10.0.0.100   --image-repository registry.aliyuncs.com/google_containers  --kubernetes-version v1.20.2 --service-cidr=10.2.0.0/16  --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem

Explanation:

--apiserver-advertise-address=10.0.0.100  # the IP address kube-apiserver advertises, i.e. the master's own IP.

--image-repository registry.aliyuncs.com/google_containers  # pull the control-plane images from the Aliyun mirror.

--kubernetes-version v1.20.2  # the Kubernetes version to deploy.

--service-cidr=10.2.0.0/16  # the IP range from which Service (svc) virtual IPs are allocated.

--pod-network-cidr=10.244.0.0/16  # the IP range used by pods, the smallest unit Kubernetes manages (must match the flannel Network below).

During execution there are preflight checks on CPU and memory:

if the machine does not meet 2 CPUs and >2 GB of RAM, the checks can be skipped with

--ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem

Output like the following means initialization succeeded:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.100:6443 --token nu08d0.4mnf1yc0ctrvuncq \
    --discovery-token-ca-cert-hash sha256:7c6daf601ea593c87ee6e0ca475cf28a2b5a82cab98dab98a9dcbd7634b0bf6c 

Set up the kubeconfig

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
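
kubectl works from this point on. Until a pod network add-on is installed, it is normal for the master to report NotReady:

kubectl get nodes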

Install the flannel network

The flannel network is created by applying a YAML manifest with kubectl.

Before creating the flannel pods, first pull the flannel image manually on the master node:

docker pull quay.io/coreos/flannel:v0.11.0-amd64

vim kube-flannel.yaml

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      tier: node
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      tier: node
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      tier: node
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      tier: node
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: ppc64le
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      tier: node
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: s390x
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

Pay attention to the apiVersion values:

rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated since v1.17 and no longer served in v1.22+; use rbac.authorization.k8s.io/v1 instead. Likewise, on this release the DaemonSets must use apps/v1 (the old extensions group no longer serves DaemonSet), which is also why each DaemonSet spec carries a selector.

Create the flannel pods

kubectl apply -f  kube-flannel.yaml

Check the pods in the kube-system namespace

kubectl get pods -n kube-system

Join the worker nodes to the cluster

Install the flannel network on the worker nodes

The flannel image is hosted on quay.io, which is often unreachable from mainland China, so pull it from an Aliyun mirror instead, retag it to the quay.io name the manifest expects, and then create the flannel pods.

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# pick the one matching your architecture and download just that
docker pull registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-amd64
docker pull registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-arm64
docker pull registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-arm
docker pull registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-ppc64le
docker pull registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-s390x

# then retag
docker tag registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-amd64 quay.io/coreos/flannel:v0.12.0-amd64
docker tag registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-arm64 quay.io/coreos/flannel:v0.12.0-arm64
docker tag registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-arm quay.io/coreos/flannel:v0.12.0-arm
docker tag registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-ppc64le quay.io/coreos/flannel:v0.12.0-ppc64le
docker tag registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-s390x quay.io/coreos/flannel:v0.12.0-s390x

kubectl apply -f kube-flannel.yml

Run kubeadm join to add the nodes to the cluster:

When the master was initialized, kubeadm printed a join command containing a token and the CA cert hash; run it on each worker node to join that node to the cluster.

kubeadm join 10.0.0.100:6443 --token nu08d0.4mnf1yc0ctrvuncq \
    --discovery-token-ca-cert-hash sha256:7c6daf601ea593c87ee6e0ca475cf28a2b5a82cab98dab98a9dcbd7634b0bf6c 

If you have lost the token and hash, regenerate the join command with:

kubeadm token create --print-join-command

kubeadm join 10.0.0.100:6443 --token dgx841.of3v7qs5wtw0f745     --discovery-token-ca-cert-hash sha256:7c6daf601ea593c87ee6e0ca475cf28a2b5a82cab98dab98a9dcbd7634b0bf6c 
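
Back on the master, confirm the workers registered; they switch to Ready once flannel is running on them:

kubectl get nodes -o wide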

Deploy kubernetes-dashboard

cd $HOME/kube-yml/

vim kubernetes-dashboard.yml

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

kubectl apply -f  kubernetes-dashboard.yml

[root@k8s-master ~/kube-yaml]$kubectl get svc kubernetes-dashboard -n kube-system
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.2.119.135   <none>        443:31913/TCP   2m1s
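
The Service is exposed on NodePort 31913 here, so the dashboard would be served at https://<node-ip>:31913. This dashboard version asks for a bearer token at login; one common way to obtain one is sketched below (the account name dashboard-admin is an arbitrary choice, and binding cluster-admin is deliberately broad, acceptable for a lab only):

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin \
  --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# print the token from the secret auto-generated for the new service account
kubectl describe secret -n kube-system \
  $(kubectl get secret -n kube-system | grep dashboard-admin-token | awk '{print $1}')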

Test access

Access was refused because of networking problems, so let's dig into how Kubernetes networking works.

Kubernetes networking

Kubernetes currently offers four ways to expose a port:

hostport: maps a port on the host the container runs on; no load balancing across multiple pods.

hostnetwork: the pod uses the host's network, so container and host share the same interfaces; no load balancing across multiple pods.

nodeport: Service-level, driven by kube-proxy; the rules are identical on every node, so it is logically cluster-wide. The port is picked from the range --service-node-port-range '30000-32767'.

externalIPs: Service-level; a designated node serves the given port, with load balancing across multiple pods.

The four Service types in a Kubernetes cluster

1. ClusterIP:

Allocates an internal cluster IP that is reachable only from inside the cluster (e.g. pods within the same namespace); this is the default ServiceType. A ClusterIP Service gives your pods a stable virtual IP (VIP), accessible only within the cluster.

apiVersion: v1
kind: Service
metadata:
  name: service-python
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 443
  selector:
    run: pod-python
  type: ClusterIP

2. NodePort:

Allocates an internal cluster IP and additionally opens a port on every node to expose the Service, so it can be reached from outside the cluster. Access address: <NodeIP>:<NodePort>, where the node port comes from the 30000-32767 range; the container port mapped on the nodes is then reachable from the outside.

apiVersion: v1
kind: Service
metadata:
  name: service-python
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 443
    nodePort: 30080
  selector:
    run: pod-python
  type: NodePort

3. LoadBalancer:

Allocates an internal cluster IP and opens a port on every node, like NodePort. In addition, Kubernetes asks the underlying cloud platform for a load balancer and adds every node (<NodeIP>:<NodePort>) as a backend. Requires load-balancer support from the underlying cloud platform.

apiVersion: v1
kind: Service
metadata:
  name: service-python
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 443
    nodePort: 30080
  selector:
    run: pod-python
  type: LoadBalancer

The Service's external IP will be a load-balancer address reachable from the internet.

4. ExternalName:

A Service of type ExternalName maps the Service to a DNS name rather than to pods via a typical selector such as my-service or cassandra. The target name is given in the spec.externalName field.

CoreDNS 1.7+ supports this type.

kind: Service
apiVersion: v1
metadata:
  name: service-python
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 443
  type: ExternalName
  externalName: remote.server.url.com

The externally hosted service can then be reached through the Service name, which DNS resolves to remote.server.url.com.

ingress:

A Service only provides layer-4 load balancing. NodePort Services can expose applications, but as the number of services grows, more and more ports are opened on the hosts and management becomes messy.

So consider another way to expose services: run an nginx service with N replicas, configure inside it each service's domain name and its in-cluster service IP, and expose only that nginx via NodePort. External clients reach nginx at domain:NodePort, and nginx reverse-proxies by domain name to the real services.

That workflow is exactly what ingress does. Ingress splits into the ingress controller and the ingress configuration: the ingress controller is the reverse-proxy server, exposed externally via NodePort (or some other mechanism); the ingress configuration is the abstracted domain-based proxy rules.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-test
  namespace: test
spec:
  rules:
  - host: my.ingress.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-clusterip
            port:
              number: 80
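
With an ingress controller (for example ingress-nginx) running in the cluster, a request that reaches the controller carrying Host: my.ingress.com is reverse-proxied to service-clusterip on port 80.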
