Kubernetes: a simple installation walkthrough

kubernetes

  • kubernetes-1.16 + docker-18.09.6
  • Official site
    • Kubernetes, abbreviated K8s (the "8" stands in for the eight letters "ubernete"), is an open-source system for managing containerized applications across multiple hosts in a cloud platform. Its goal is to make deploying containerized applications simple and powerful, and it provides mechanisms for application deployment, scheduling, updating, and maintenance.
  • APIs change between k8s versions; follow the official docs for the version you run
  • It has been a long time since the last post, so here is an updated deployment document…

1 Overview

1.1 k8s (a container cluster management system)

  • Self-healing
  • Elastic scaling
  • Automated deployment and rollback
  • Service discovery and load balancing
  • Secret and configuration management
  • Storage orchestration
  • Batch execution

1.2 Architecture and Components

  • kubectl
  • etcd
  • master
    • API server (auth)
    • scheduler (selects a node for each pod)
    • controller (controller manager)
  • node
    • kubelet (manages containers)
    • kube-proxy (layer-4 load balancing)
    • plugins (networking, …)
    • container (docker)

1.3 Core Concepts

  • Pod
    • smallest deployable unit
    • a group of one or more containers
    • containers in a Pod share a network namespace
    • Pods are ephemeral
  • controllers
    • replicaset
    • deployment (stateless)
    • statefulset (stateful)
    • daemonset (runs one copy on every node)
    • job
    • cronjob
  • service
    • keeps Pods reachable as they come and go
    • defines the access policy
      • clusterIP, nodeport, loadbalancer
  • label: key/value pairs attached to a resource (node, pod)
  • namespace (logical isolation of objects)

2 k8s Setup

  • kubeadm
  • binary (the approach used in this document)

2.1 Production Environment Planning

  • Test cluster: a single master (etcd still needs HA guarantees)
  • A multi-master cluster improves control-plane availability
    • HA for the master's api-server
Role         IP                                     Components
k8s-master1  10.199.204.175                         api-server, controller-manager, scheduler, etcd
k8s-master2  10.199.204.178                         api-server, controller-manager, scheduler, etcd
k8s-node1    10.199.204.176                         kubelet, kube-proxy, docker, etcd
k8s-node2    10.199.204.177                         kubelet, kube-proxy, docker
LB-master    10.199.204.180 (VIP=10.199.204.181)    nginx L4
LB-backup    10.199.204.179                         nginx L4

2.2 Environment Initialization

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config

# Disable swap (comment out the swap entry in /etc/fstab to make it permanent)
swapoff -a
vi /etc/fstab

# Set the hostname and add name resolution
hostnamectl set-hostname master01
.
.
.

# Sync system time
ntpdate ntp.aliyun.com
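For the name-resolution step, /etc/hosts entries covering the plan in 2.1 are enough; a minimal sketch (master01/node01 match the shell prompts seen later in this document, master02/node02 are assumed names):

cat >> /etc/hosts <<EOF
10.199.204.175 master01
10.199.204.178 master02
10.199.204.176 node01
10.199.204.177 node02
EOF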

2.3 etcd Deployment

  • Use the cfssl tool to self-sign certificates (similar to openssl)

    • Self-signed certificates
      • a root CA issues the crt (PEM format) and key (for normal access)
        • add every trusted IP to the certificate (remember to reserve spares)
        • clients must carry the CA certificate (inconvenient to carry over plain HTTP forwarding)
    • Or buy from a commercial authority
    # Install and use cfssl
    [root@master01 TLS]# cat cfssl.sh
    #curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
    #curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
    #curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
    cp -rf cfssl cfssl-certinfo cfssljson /usr/local/bin
    chmod +x /usr/local/bin/cfssl*
    
  • Certificate usage

    • Set 1
      Node
      LB
      apiserver
      etcd
    • Set 2 (dedicated)
      etcd
2.3.1 etcd Self-Signed Certificates
  • Decide which host will act as the CA and issue certificates
# Directory layout
- TLS
  - etcd
  - k8s

# Certificates are generated by the script below

# Configuration files
[root@master01 etcd]# cat  ca-csr.json ca-config.json  server-csr.json
---
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
---
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
---
{
    "CN": "etcd",
    "hosts": [
        "10.199.204.175",
        "10.199.204.176",
        "10.199.204.177"
        ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
---
[root@master01 etcd]# cat generate_etcd_cert.sh
# generate the CA files
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# generate the server certificate files (output prefixed "server")
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
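The generated certificate can be inspected with the cfssl-certinfo tool installed earlier, e.g. to confirm that all etcd node IPs made it into the SANs:

cfssl-certinfo -cert server.pem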
2.3.2 etcd Binary Installation
  • a key/value store from CoreOS
  • 3, 5, or 7 members is the recommended deployment (7 is also the practical maximum; etcd has no sharding, every member replicates the full data set)
# Download the binary package from GitHub
[root@master01 etcd]# tree
.
├── bin
│   ├── etcd
│   └── etcdctl
├── cfg
│   └── etcd.conf
└── ssl
    ├── ca.pem
    ├── server-key.pem
    └── server.pem

# Configuration file
[root@master01 cfg]# cat etcd.conf

#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# communication between peer nodes requires certificates (the same set can be reused)
ETCD_LISTEN_PEER_URLS="https://10.199.204.175:2380"
# client connections require certificates
ETCD_LISTEN_CLIENT_URLS="https://10.199.204.175:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.199.204.175:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.199.204.175:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://10.199.204.175:2380,etcd-2=https://10.199.204.176:2380,etcd-3=https://10.199.204.177:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"


# Manage the service with systemd (for self-signed certs, the CA file is also passed in)
[root@master01 k8s]# cat etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
        --name=${ETCD_NAME} \
        --data-dir=${ETCD_DATA_DIR} \
        --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
        --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
        --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
        --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
        --initial-cluster=${ETCD_INITIAL_CLUSTER} \
        --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
        --initial-cluster-state=new \
        --cert-file=/opt/etcd/ssl/server.pem \
        --key-file=/opt/etcd/ssl/server-key.pem \
        --peer-cert-file=/opt/etcd/ssl/server.pem \
        --peer-key-file=/opt/etcd/ssl/server-key.pem \
        --trusted-ca-file=/opt/etcd/ssl/ca.pem \
        --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# Build the cluster
Distribute the etcd directory and the service file to every machine in the cluster (a sketch follows after this block), then edit the config file on each node:
- ETCD_NAME="etcd-1" (must be unique per member)
- the IPs of the node in question
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

# Check cluster health
/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://10.199.204.175:2379,https://10.199.204.176:2379,https://10.199.204.177:2379" cluster-health
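A minimal distribution sketch for the step above, assuming root SSH access and the node names from section 2.2:

for host in node01 node02; do
  scp -r /opt/etcd root@$host:/opt/
  scp /usr/lib/systemd/system/etcd.service root@$host:/usr/lib/systemd/system/
done
# then adjust ETCD_NAME and the IPs in /opt/etcd/cfg/etcd.conf on each node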

2.4 Master Deployment

2.4.1 api-server Certificate Creation
  • listens on 8080 by default (the insecure local port; 6443 is the secure port)
  • the LB VIP and every master IP used to reach the api-server must be added to the certificate's trusted hosts
[root@master01 k8s]# cat ca-csr.json ca-config.json kube-proxy-csr.json server-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
---
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
---
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
---
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local",
      "10.199.204.175",
      "10.199.204.176",
      "10.199.204.177",
      "10.199.204.178",
      "10.199.204.179",
      "10.199.204.180",
      "10.199.204.181"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

# Generate the certificates
[root@master01 k8s]# cat generate_k8s_cert.sh
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
# used by kube-proxy
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

2.4.2 Master Binary Installation
# download the binary package from GitHub
server binaries x86 (master + node)

# Directory layout
[root@master01 kubernetes]# tree
.
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   ├── kubectl
│   └── kube-scheduler
├── cfg
│   ├── kube-apiserver.conf
│   ├── kube-controller-manager.conf
│   ├── kube-scheduler.conf
│   └── token.csv
├── logs
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── kube-proxy.pem # not needed on the master
    ├── server-key.pem
    └── server.pem

# Configuration files
[root@master01 cfg]# cat kube-apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://10.199.204.175:2379,https://10.199.204.176:2379,https://10.199.204.177:2379 \
--bind-address=10.199.204.175 \
--secure-port=6443 \
--advertise-address=10.199.204.175 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"

[root@master01 cfg]# cat kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect=true \
--master=127.0.0.1:8080 \
--address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"

[root@master01 cfg]# cat kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect \
--master=127.0.0.1:8080 \
--address=127.0.0.1"

[root@master01 cfg]# cat token.csv
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"


# systemd unit files
[root@master01 k8s]# cat kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

---
[root@master01 k8s]# cat kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

---
[root@master01 k8s]# cat kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

# Start the services
Move the unit files into /usr/lib/systemd/system/, then:
for i in $(ls /opt/kubernetes/bin | grep -v kubectl); do systemctl start $i; systemctl enable $i; done

# Enable bootstrapping so kubelet certificates are issued automatically (bind the bootstrap account)
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

# Inspect the role
[root@master01 k8s]#  kubectl get clusterrole | grep boot
system:node-bootstrapper        ---                                     

# dump it as YAML
[root@master01 k8s]#  kubectl get clusterrole   system:node-bootstrapper -o yaml

[root@master01 k8s]#  kubectl get clusterrolebinding kubelet-bootstrap
NAME                AGE
kubelet-bootstrap   ---d

# Generating your own token
- the token configured in the apiserver's token.csv must match the one in each node's bootstrap.kubeconfig

head -c 16 /dev/urandom | od -An -t x | tr -d ' '
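A sketch wiring the two together, using the paths from this document:

TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
# apiserver side (same format as token.csv above)
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /opt/kubernetes/cfg/token.csv
# node side: bootstrap.kubeconfig must carry the same token
sed -i "s/token: .*/token: ${TOKEN}/" /opt/kubernetes/cfg/bootstrap.kubeconfig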

2.5 Node Deployment

2.5.1 Deploying Docker from Binaries

Download the Docker binaries

# File layout
[root@node01 docker]# tree
.
├── containerd
├── containerd-shim
├── ctr
├── daemon.json
├── docker
├── dockerd
├── docker-init
├── docker-proxy
└── runc


---
# systemd configuration
[root@node01 opt]# cat docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target


---
# Install the files
cp /opt/docker/* /usr/bin
cp docker.service /usr/lib/systemd/system/
mkdir -p /etc/docker
cp /opt/docker/daemon.json /etc/docker/daemon.json

# Change the Docker cgroup driver: native.cgroupdriver=systemd (if docker info already reports systemd, no change is needed)
cat > /etc/docker/daemon.json <<EOF
{
  "graph": "/data/docker",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["http://f1361db2.m.daocloud.io"],
  "insecure-registries": ["iregistry.wanda.cn"]

}
EOF

systemctl restart docker  # restart so the config takes effect
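A quick check that the driver change took effect:

docker info | grep -i cgroup   # expect: Cgroup Driver: systemd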
2.5.2 Deploying kubelet and kube-proxy
# Directory layout
[root@node01 kubernetes]# tree
.
├── bin
│  ├── kubelet
│  └── kube-proxy
├── cfg
│  ├── bootstrap.kubeconfig
│  ├── kubelet.conf
│  ├── kubelet-config.yml
│  ├── kubelet.kubeconfig # generated automatically when the node joins the cluster
│  ├── kube-proxy.conf
│  ├── kube-proxy-config.yml
│  └── kube-proxy.kubeconfig
├── logs
└── ssl # certificates here are issued automatically
    ├── ca.pem # the CA file must be copied over
    ├── kubelet-client-2019-12-10-18-22-45.pem
    ├── kubelet-client-current.pem -> /opt/kubernetes/ssl/kubelet-client-20xx-1x-1x-1x-22-45.pem
    ├── kubelet.crt
    ├── kubelet.key
    ├── kube-proxy-key.pem # copied kube-proxy cert
    └── kube-proxy.pem  # copied kube-proxy cert

# Configuration files
- *.conf: basic flag files
- *.kubeconfig: configs for connecting to the apiserver
- *.yml: structured configs (absent in early releases; support dynamic updates)

[root@node01 cfg]# cat bootstrap.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: https://10.199.204.181:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: c47ffb939f5ca36231d9e3121a252940
---
[root@node01 cfg]# cat kubelet.conf
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=node01 \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=k8s.gcr.io/pause:3.1"
---
[root@node01 cfg]# cat kubelet-config.yml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs  # must match Docker's cgroup driver; use systemd if daemon.json sets native.cgroupdriver=systemd
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110

---
[root@node01 cfg]# cat kubelet.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: https://10.199.204.181:6443
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    namespace: default
    user: default-auth
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: default-auth
  user:
    client-certificate: /opt/kubernetes/ssl/kubelet-client-current.pem
    client-key: /opt/kubernetes/ssl/kubelet-client-current.pem

---
[root@node01 cfg]# cat kube-proxy.conf
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"

---
[root@node01 cfg]# cat kube-proxy-config.yml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
address: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: node01
clusterCIDR: 10.0.0.0/24
mode: ipvs
ipvs:
  scheduler: "rr"
iptables:
  masqueradeAll: true

---
[root@node01 cfg]# cat kube-proxy.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: https://10.199.204.181:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-proxy
  user:
    client-certificate: /opt/kubernetes/ssl/kube-proxy.pem
    client-key: /opt/kubernetes/ssl/kube-proxy-key.pem
    
# systemd
[root@node01 opt]# cat kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

---
[root@node01 opt]# cat kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# Copy the unit files
cp *.service /usr/lib/systemd/system/

# Start the services
- remember to copy the kube-proxy and CA certificates first
systemctl start kubelet
systemctl start kube-proxy

# Verify (on the master: inspect and approve)

# list pending CSRs
 kubectl get csr
# approve (substitute the CSR name printed above)
 kubectl certificate approve <csr-name>
# check
kubectl get node

# Adding more nodes (a bulk-approval sketch follows below)
- change the node name accordingly (--hostname-override / hostnameOverride)
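When several nodes register at once, pending CSRs can be approved in bulk; a one-liner sketch:

kubectl get csr -o name | xargs kubectl certificate approve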
2.5.3 Bootstrap Workflow
kubelet starts
→ reads bootstrap.kubeconfig
→ contacts the apiserver
→ the token is validated
→ the certificate is validated
→ a CSR is submitted
→ a certificate is issued (after approval)
→ kubelet comes up
2.5.4 Node Plugins: CNI Network Deployment

Download the binary release

# Default directory
[root@node01 cni]# pwd
/opt/cni
[root@node01 cni]# ll
total 8
drwxr-xr-x. 2 root root 4096 Dec 10 18:34 bin
drwxr-xr-x. 2 root root 4096 Dec 10 18:32 net.d

# Extract
tar xfvz cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin

# the pod network here must match the controller-manager's --cluster-cidr (see the excerpt below)
 kubectl apply -f kube-flannel.yaml
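The part of kube-flannel.yaml that has to agree with --cluster-cidr=10.244.0.0/16 from the controller-manager config is the net-conf.json entry of its ConfigMap; an excerpt of the usual upstream layout:

net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }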

# Check the plugin deployment
 kubectl get pods -n kube-system

# Inspect pod events
 kubectl describe pod kube-flannel-ds-amd64-dch78 -n kube-system

# View pod logs (if access is denied, check authorization)
kubectl logs  kube-flannel-ds-amd64-dch78 -n kube-system

# Pod creation test
# create a pod
kubectl create deployment web --image=nginx
# create a service
kubectl expose deployment web --port=80 --type=NodePort
# once the pod is Running, access any node's IP + NodePort
kubectl get pods -o wide
2.5.5 Node Plugins: Web UI (Dashboard) Deployment
# see the GitHub repo for the manifest (mind version compatibility)
kubectl apply -f dashboard.yaml

# check the deployed dashboard
kubectl get pods,svc -n kubernetes-dashboard

# browser access (Firefox: there is no redirect, type the https:// URL yourself)
https://10.199.204.176:30001/

# working around Chrome refusing access (ultimately solved by the nginx proxy in 2.5.6)
# inspect the existing certificate secret
 kubectl get secret kubernetes-dashboard-certs -n  kubernetes-dashboard -o yaml


# create an authorized user
 kubectl apply -f dashboard-adminuser.yaml

# get the token (for token-based login)
 kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
2.5.6 Node Plugins: nginx Proxy for TLS Termination
# sign a certificate with the existing CA
(umask 077; openssl genrsa -out dashboard.k8s.com.key 2048)
openssl req -new -key dashboard.k8s.com.key -out dashboard.k8s.com.csr -subj "/CN=dashboard.k8s.com/C=CN/ST=BJ/L=Beijing/O=SRE/OU=ops"
openssl x509 -req -in dashboard.k8s.com.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out dashboard.k8s.com.crt -days 3650

# nginx configuration
[root@lb_backup conf.d]# cat dashborad.conf
server {
    listen       80;
    server_name  dashboard.k8s.com;
    rewrite ^(.*)$ https://${server_name}$1 permanent;
}

server {
    listen       443 ssl;
    server_name  dashboard.k8s.com;

    ssl_certificate "/etc/nginx/conf.d/ssl/dashboard.k8s.com.crt";
    ssl_certificate_key "/etc/nginx/conf.d/ssl/dashboard.k8s.com.key";
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout  10m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass https://10.199.204.176:30001;
        proxy_set_header Host       $http_host;
        proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
    }
}

# this resolves the Chrome browser access problem

2.5.7 Node Plugins: CoreDNS Deployment
# DNS provides name resolution for Services
# the names shown by kubectl get svc become resolvable in-cluster

# a binary deployment has to install CoreDNS separately
- CoreDNS itself needs a clusterIP; mind the configuration
- it must match the clusterDNS IP in each node's kubelet-config.yml (10.0.0.2 above)
# deploy
 kubectl apply -f coredns.yaml

# check
kubectl get pods -n kube-system

# test
once a container is running, service names resolve from inside it (see the sketch below)
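A quick in-cluster check, assuming the busybox image can be pulled:

kubectl run -it --rm --restart=Never dns-test --image=busybox:1.28 -- nslookup kubernetes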
 

2.6 Multi-Master Expansion

  • LB: nginx L4 provides api-server high availability (stream module)
  • LB: keepalived provides HA for the LB pair itself (serves the VIP)
2.6.1 Configuring Additional Masters
# copy from the existing master:
- binaries + config files (remember to change the IPs)
- unit files
- ssl files (etcd + apiserver)

# start the services
- kubectl now works on every master
2.6.2 Configuring the LB Pair
  • implemented with nginx + keepalived
# nginx configuration

stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
                server 10.199.204.175:6443;
                server 10.199.204.178:6443;
            }

    server {
       listen 6443;
       proxy_pass k8s-apiserver;
    }
}

# keepalived configuration (the backup node is shown)

[root@lb_backup keepalived]# cat keepalived.conf

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51 # VRRP router ID; unique per instance
    priority 90    # priority; the backup is set to 90
    advert_int 1    # VRRP advertisement (heartbeat) interval, default 1s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.199.204.181/24
    }
    track_script {
        check_nginx
    }
}
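The master LB uses the same file; going by the comments above, only these fields should differ (a sketch, assuming the conventional master-side values):

router_id NGINX_MASTER
state MASTER
priority 100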

# keepalived check script (failover takes roughly two to three seconds)
[root@lb_backup HA]# cat check_nginx.sh
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi

# Node-side configuration: point the nodes at the VIP (run in /opt/kubernetes/cfg)
sed -i 's/10.199.204.175/10.199.204.181/g' * 

# Verify
[root@node01 cfg]# grep 10.199 *
bootstrap.kubeconfig:    server: https://10.199.204.181:6443
kubelet.kubeconfig:    server: https://10.199.204.181:6443
kube-proxy.kubeconfig:    server: https://10.199.204.181:6443

# Restart the node services (and watch the nginx access log)
systemctl restart kubelet kube-proxy

# Verify (requests are forwarded round-robin)
curl -k --header "Authorization: Bearer c47ffb939f5ca36231d9e3121a252940" https://10.199.204.181:6443/version
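To watch the round-robin forwarding live, tail the access log configured in the stream block:

tail -f /var/log/nginx/k8s-access.log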