A Detailed Guide to Installing Kubernetes

I: Introduction

II: Base Environment Setup

1. System environment

OS         Role       IP              Memory
CentOS 7   master01   192.168.25.30   4G
CentOS 7   node01     192.168.25.31   4G
CentOS 7   node02     192.168.25.32   4G

2. Disable SELinux

sed -i "s/SELINUX\=.*/SELINUX=disabled/g" /etc/selinux/config

3. Disable the firewall

systemctl disable firewalld && systemctl stop firewalld

4. Set the hostname

hostnamectl set-hostname <role_name>   # run on each machine with its own name: master01, node01, node02

5. Add hosts entries

echo -e "192.168.25.30 master01\n192.168.25.31 node01\n192.168.25.32 node02" >> /etc/hosts

6. Set Kubernetes kernel parameters

Set the kernel parameters:

cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF

Load the kernel module:

modprobe br_netfilter
echo "modprobe br_netfilter" >> /etc/rc.local

Apply the kernel parameters:

sysctl -p /etc/sysctl.d/k8s.conf

7. Disable system swap

Turn off swap and comment out the swap entry in /etc/fstab so it is no longer mounted automatically.
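
A minimal sketch of one common way to do this, assuming the swap line in /etc/fstab contains the word "swap":

swapoff -a                              # turn off all active swap immediately
sed -i '/ swap / s/^/#/' /etc/fstab     # comment out the swap entry so it stays off after reboot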

8. Adjust the firewall FORWARD policy

/sbin/iptables -P FORWARD ACCEPT
echo "sleep 60 && /sbin/iptables -P FORWARD ACCEPT" >> /etc/rc.local

9. Install dependency packages

yum install -y epel-release 
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget

10. Time synchronization

yum -y install ntpdate
/usr/sbin/ntpdate -u ntp1.aliyun.com

11. Install docker-ce

Note: the master node does not need Docker installed.

  1. Remove any pre-installed Docker packages

    yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-selinux \
                  docker-engine-selinux \
                  docker-engine
    
  2. Install dependencies

    yum install -y yum-utils \
    device-mapper-persistent-data \
    lvm2
    
  3. Add the Docker yum repository

    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    
  4. Install docker-ce

    yum install -y docker-ce

  5. Enable the service at boot (do not start Docker until flanneld is installed and configured; see Part V)

    systemctl start docker && systemctl enable docker
    

12. Install CFSSL


export CFSSL_URL="https://pkg.cfssl.org/R1.2"
wget "${CFSSL_URL}/cfssl_linux-amd64" -O /usr/local/bin/cfssl
wget "${CFSSL_URL}/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson

III: Creating the CA Certificate and Keys

The Kubernetes components encrypt their communication with TLS certificates. This document uses CloudFlare's PKI toolkit, cfssl, to generate the Certificate Authority (CA) certificate and key files. The CA certificate is self-signed and is used to sign all other TLS certificates created later.

Run all of the following on the master node. The certificates only need to be created once; when a new node is added later, just copy the certificates under /etc/kubernetes/ to the new node.

1. Create the CA config file

mkdir /root/ssl
cd /root/ssl
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF
  • ca-config.json: multiple profiles can be defined, each with its own expiry time, usage scenario, and other parameters; a specific profile is referenced later when signing certificates;
  • signing: the certificate can be used to sign other certificates; this produces CA=TRUE in the generated ca.pem;
  • server auth: a client can use this CA to verify certificates presented by servers;
  • client auth: a server can use this CA to verify certificates presented by clients.

2. Create the CA certificate signing request

cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
  • "CN": Common Name. kube-apiserver extracts this field from a certificate as the requesting user name (User Name); browsers check this field to validate a website;
  • "O": Organization. kube-apiserver extracts this field from a certificate as the group (Group) the requesting user belongs to.

3. Generate the CA certificate and private key

# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2018/03/29 14:38:31 [INFO] generating a new CA key and certificate from CSR
2018/03/29 14:38:31 [INFO] generate received request
2018/03/29 14:38:31 [INFO] received CSR
2018/03/29 14:38:31 [INFO] generating key: rsa-2048
2018/03/29 14:38:31 [INFO] encoded CSR
2018/03/29 14:38:31 [INFO] signed certificate with serial number 438768005817886692243142700194592359153651905696
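
As a quick sanity check (not part of the original steps), openssl can print the subject and validity period of the new CA certificate:

openssl x509 -in ca.pem -noout -subject -issuer -dates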

4. Create the kubernetes certificate signing request file

cat > kubernetes-csr.json << EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.25.30",
    "192.168.25.31",
    "192.168.25.32",
    "10.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
  • The hosts field may be left empty, in which case new nodes can later be added to the cluster without regenerating the certificate. If hosts is not empty, it must list the IPs or domain names authorized to use the certificate. Since this certificate is later used by both the etcd cluster and the Kubernetes master, the list above includes the host IPs of the etcd and master nodes as well as the Kubernetes service IP.

5. Generate the kubernetes certificate and private key

# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
2018/03/29 14:46:12 [INFO] generate received request
2018/03/29 14:46:12 [INFO] received CSR
2018/03/29 14:46:12 [INFO] generating key: rsa-2048
2018/03/29 14:46:12 [INFO] encoded CSR
2018/03/29 14:46:12 [INFO] signed certificate with serial number 6955479006214073693226115919937339031303355422
2018/03/29 14:46:12 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
# ls kubernetes*
kubernetes.csr  kubernetes-csr.json  kubernetes-key.pem  kubernetes.pem
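
Since the hosts list matters here (see the note under step 4), it is worth confirming that the signed certificate really contains the expected IPs as subject alternative names:

openssl x509 -in kubernetes.pem -noout -text | grep -A1 'Subject Alternative Name'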

6. Create the admin certificate signing request file

cat > admin-csr.json << EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
  • kube-apiserver uses RBAC to authorize client requests (from kubelet, kube-proxy, Pods, etc.);
  • kube-apiserver predefines some RoleBindings used by RBAC; for example, cluster-admin binds the group system:masters to the role cluster-admin, which grants access to all kube-apiserver APIs;
  • O sets this certificate's group to system:masters. When the certificate is used to access kube-apiserver, authentication succeeds because it is signed by our CA, and since its group is the pre-authorized system:masters, the client is granted access to all APIs.

7. Generate the admin certificate and private key

# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
2018/03/29 14:57:01 [INFO] generate received request
2018/03/29 14:57:01 [INFO] received CSR
2018/03/29 14:57:01 [INFO] generating key: rsa-2048
2018/03/29 14:57:02 [INFO] encoded CSR
2018/03/29 14:57:02 [INFO] signed certificate with serial number 356467939883849041935828635530693821955945645537
2018/03/29 14:57:02 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements")
# ls admin*
admin.csr  admin-csr.json  admin-key.pem  admin.pem

8. Create the kube-proxy certificate signing request file

cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
  • CN sets this certificate's user to system:kube-proxy;
  • The predefined ClusterRoleBinding system:node-proxier binds the user system:kube-proxy to the role system:node-proxier, which grants permission to call the proxy-related kube-apiserver APIs.

9. Generate the kube-proxy client certificate and private key

# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2018/03/29 15:09:36 [INFO] generate received request
2018/03/29 15:09:36 [INFO] received CSR
2018/03/29 15:09:36 [INFO] generating key: rsa-2048
2018/03/29 15:09:36 [INFO] encoded CSR
2018/03/29 15:09:36 [INFO] signed certificate with serial number 225974417080991591210780916866547658424323006961
2018/03/29 15:09:36 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
# ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem

10. Distribute the certificates

Copy the generated certificates and keys (the .pem files) to /etc/kubernetes/ssl on every machine:

mkdir -p /etc/kubernetes/ssl
cp *.pem /etc/kubernetes/ssl
ssh node01 "mkdir -p /etc/kubernetes/ssl"
scp *.pem node01:/etc/kubernetes/ssl
ssh node02 "mkdir -p /etc/kubernetes/ssl"
scp *.pem node02:/etc/kubernetes/ssl

IV: Deploying the etcd Cluster

etcd must be installed on all three nodes; repeat the steps below on each machine.

1. Download the etcd release and install the binaries

wget https://github.com/coreos/etcd/releases/download/v3.2.12/etcd-v3.2.12-linux-amd64.tar.gz
tar -xvf etcd-v3.2.12-linux-amd64.tar.gz
mv etcd-v3.2.12-linux-amd64/etcd* /usr/local/bin
# this installs the following two binaries:
# etcd etcdctl

2. Create the working directory

mkdir -p /var/lib/etcd

3. Create the systemd service files

master01

cat > etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \\
  --name master01 \\
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --initial-advertise-peer-urls https://192.168.25.30:2380 \\
  --listen-peer-urls https://192.168.25.30:2380 \\
  --listen-client-urls https://192.168.25.30:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls https://192.168.25.30:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster master01=https://192.168.25.30:2380,node01=https://192.168.25.31:2380,node02=https://192.168.25.32:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

node01

cat > etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \\
  --name node01 \\
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --initial-advertise-peer-urls https://192.168.25.31:2380 \\
  --listen-peer-urls https://192.168.25.31:2380 \\
  --listen-client-urls https://192.168.25.31:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls https://192.168.25.31:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster master01=https://192.168.25.30:2380,node01=https://192.168.25.31:2380,node02=https://192.168.25.32:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

node02

cat > etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \\
  --name node02 \\
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --initial-advertise-peer-urls https://192.168.25.32:2380 \\
  --listen-peer-urls https://192.168.25.32:2380 \\
  --listen-client-urls https://192.168.25.32:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls https://192.168.25.32:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster master01=https://192.168.25.30:2380,node01=https://192.168.25.31:2380,node02=https://192.168.25.32:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • etcd's working directory and data directory are both /var/lib/etcd; create this directory before starting the service, otherwise startup fails with "Failed at step CHDIR spawning /usr/bin/etcd: No such file or directory";
  • To secure communication, specify etcd's certificate and key (cert-file and key-file), the peer-communication certificate, key and CA certificate (peer-cert-file, peer-key-file, peer-trusted-ca-file), and the CA used to verify clients (trusted-ca-file);
  • The hosts field of the kubernetes-csr.json used to create kubernetes.pem must contain the IPs of all etcd nodes, otherwise certificate validation fails;
  • When --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list.

4. Start the etcd service

cp etcd.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd

5. Verify the etcd cluster

# etcdctl \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  cluster-health
member 2ea4d6efe7f32da is healthy: got healthy result from https://192.168.25.32:2379
member 5246473f59267039 is healthy: got healthy result from https://192.168.25.31:2379
member be723b813b44392b is healthy: got healthy result from https://192.168.25.30:2379
cluster is healthy
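
Besides cluster-health, the member list can be printed with the same TLS flags; a small companion check:

etcdctl \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  member list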

V: Deploying Flannel

Flannel must be deployed and installed on every node.

1. Download and install Flannel

wget https://github.com/coreos/flannel/releases/download/v0.9.1/flannel-v0.9.1-linux-amd64.tar.gz
mkdir flannel
tar -xzvf flannel-v0.9.1-linux-amd64.tar.gz -C flannel
cp flannel/{flanneld,mk-docker-opts.sh} /usr/local/bin

2. Write the network configuration into etcd (run on one machine only)

etcdctl --endpoints=https://192.168.25.30:2379,https://192.168.25.31:2379,https://192.168.25.32:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mkdir /kubernetes/network
etcdctl --endpoints=https://192.168.25.30:2379,https://192.168.25.31:2379,https://192.168.25.32:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mk /kubernetes/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'

3. Create the service unit file

cat > flanneld.service << EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/usr/local/bin/flanneld \\
  -etcd-cafile=/etc/kubernetes/ssl/ca.pem \\
  -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \\
  -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \\
  -etcd-endpoints=https://192.168.25.30:2379,https://192.168.25.31:2379,https://192.168.25.32:2379 \\
  -etcd-prefix=/kubernetes/network
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF
  • mk-docker-opts.sh writes the pod subnet allocated to flanneld into /run/flannel/docker; when Docker starts later, the parameters in this file are used to configure the docker0 bridge (see the example below);
  • flanneld uses the interface of the system's default route to communicate with other nodes; on machines with multiple interfaces (e.g. internal and public networks), the interface can be chosen with the -iface=enpxx option.
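
After flanneld starts, /run/flannel/docker should contain the docker0 options. The exact values depend on the subnet flanneld was allocated, but the content looks roughly like this (illustrative values, not captured from a real run):

# cat /run/flannel/docker
DOCKER_NETWORK_OPTIONS=" --bip=172.30.82.1/24 --ip-masq=true --mtu=1450"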

4. Start the flanneld service

mv flanneld.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld

5. Check the flanneld service status

# /usr/local/bin/etcdctl \
  --endpoints=https://192.168.25.30:2379,https://192.168.25.31:2379,https://192.168.25.32:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  ls /kubernetes/network/subnets
/kubernetes/network/subnets/172.30.82.0-24
/kubernetes/network/subnets/172.30.1.0-24
/kubernetes/network/subnets/172.30.73.0-24

6. Configure Docker to use the flannel network

Edit /usr/lib/systemd/system/docker.service:

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
# modified
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
# added
EnvironmentFile=/run/flannel/docker
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
  • At startup, flanneld writes the network configuration into the DOCKER_NETWORK_OPTIONS variable in /run/flannel/docker; dockerd picks this variable up on its command line to configure the docker0 bridge;
  • If multiple EnvironmentFile options are specified, /run/flannel/docker must come last (to ensure docker0 uses the bip parameter generated by flanneld);
  • The --iptables and --ip-masq options, which are on by default, must not be disabled;
  • On newer kernels, the overlay storage driver is recommended;
  • The --exec-opt native.cgroupdriver=systemd parameter can be set to "cgroupfs" or "systemd".

7. Start Docker

systemctl daemon-reload && systemctl start docker && systemctl enable docker

VI: Deploying the kubectl Tool

kubectl is the Kubernetes cluster management tool; any node with kubectl can manage the entire cluster. This document deploys it on master01. Deployment generates the file /root/.kube/config, from which kubectl reads the kube-apiserver address, certificates, user name, and other information.

1. Download the client package

wget https://dl.k8s.io/v1.8.6/kubernetes-client-linux-amd64.tar.gz
tar -xzvf kubernetes-client-linux-amd64.tar.gz
sudo cp kubernetes/client/bin/kube* /usr/local/bin/
chmod a+x /usr/local/bin/kube*
export PATH=/usr/local/bin:$PATH

2. Create the /root/.kube/config file

# set cluster parameters; --server is the master node IP
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.25.30:6443
# set client authentication parameters
kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --embed-certs=true \
  --client-key=/etc/kubernetes/ssl/admin-key.pem
# set context parameters
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin
# set the default context
kubectl config use-context kubernetes
  • admin.pem: the certificate's O field is system:masters; the predefined RoleBinding cluster-admin binds the group system:masters to the role cluster-admin, which grants access to all kube-apiserver APIs.
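
To confirm the file was generated as expected, print the merged configuration (embedded certificate data is shown as REDACTED):

kubectl config view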

3. Create the bootstrap.kubeconfig file

# generate the token variable
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
mv token.csv /etc/kubernetes/
# set cluster parameters; --server is the master node IP
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.25.30:6443 \
  --kubeconfig=bootstrap.kubeconfig
# set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
# set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
# set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
mv bootstrap.kubeconfig /etc/kubernetes/

4. Create kube-proxy.kubeconfig

# set cluster parameters; --server is the master IP
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.25.30:6443 \
  --kubeconfig=kube-proxy.kubeconfig
# set client authentication parameters
kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
# set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
# set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
mv kube-proxy.kubeconfig /etc/kubernetes/
  • Both set-cluster and set-credentials are called with --embed-certs=true, which embeds the contents of the files referenced by certificate-authority, client-certificate and client-key into the generated kube-proxy.kubeconfig;
  • The CN of kube-proxy.pem is system:kube-proxy; the predefined ClusterRoleBinding system:node-proxier binds the user system:kube-proxy to the role system:node-proxier, which grants permission to call the proxy-related kube-apiserver APIs.

5. Copy the generated config files to the other nodes

scp /etc/kubernetes/kube-proxy.kubeconfig node01:/etc/kubernetes/
scp /etc/kubernetes/kube-proxy.kubeconfig node02:/etc/kubernetes/
scp /etc/kubernetes/bootstrap.kubeconfig node01:/etc/kubernetes/
scp /etc/kubernetes/bootstrap.kubeconfig node02:/etc/kubernetes/

VII: Deploying the Master Node

1. Download the server package

wget https://dl.k8s.io/v1.8.6/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cp -r kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/local/bin/

2. Deploy the kube-apiserver service

Create the kube-apiserver unit file:

cat > kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --logtostderr=true \\
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \\
  --advertise-address=192.168.25.30 \\
  --bind-address=192.168.25.30 \\
  --insecure-bind-address=127.0.0.1 \\
  --authorization-mode=Node,RBAC \\
  --runtime-config=rbac.authorization.k8s.io/v1alpha1 \\
  --kubelet-https=true \\
  --enable-bootstrap-token-auth \\
  --token-auth-file=/etc/kubernetes/token.csv \\
  --service-cluster-ip-range=10.254.0.0/16 \\
  --service-node-port-range=8400-10000 \\
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \\
  --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \\
  --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --etcd-servers=https://192.168.25.30:2379,https://192.168.25.31:2379,https://192.168.25.32:2379 \\
  --enable-swagger-ui=true \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/lib/audit.log \\
  --event-ttl=1h \\
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • --authorization-mode=RBAC enables RBAC on the secure port, rejecting unauthorized requests;
  • kube-scheduler and kube-controller-manager are normally deployed on the same machine as kube-apiserver and talk to it over the insecure port;
  • kubelet, kube-proxy and kubectl are deployed on other nodes; when they access kube-apiserver over the secure port they must first pass TLS authentication and then RBAC authorization;
  • kube-proxy and kubectl obtain RBAC authorization through the User and Group embedded in the certificates they use;
  • Bootstrap: if kubelet TLS bootstrapping is used, the --kubelet-certificate-authority, --kubelet-client-certificate and --kubelet-client-key options must not be set, otherwise kube-apiserver later fails to verify the kubelet certificate with "x509: certificate signed by unknown authority";
  • The --admission-control value must include ServiceAccount, otherwise cluster add-ons fail to deploy;
  • --bind-address must not be 127.0.0.1;
  • --runtime-config: set to rbac.authorization.k8s.io/v1alpha1, the apiVersion enabled at runtime;
  • --service-cluster-ip-range specifies the Service cluster IP range; these addresses must not be routable;
  • --service-node-port-range specifies the NodePort port range.

By default, Kubernetes objects are stored under the /registry path in etcd; this prefix can be changed with the --etcd-prefix flag.

Start the service and enable it at boot:

cp kube-apiserver.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
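
A quick way to confirm the API server is answering on the secure port is to call /version with the admin client certificate generated earlier; a hedged sketch:

curl --cacert /etc/kubernetes/ssl/ca.pem \
     --cert /etc/kubernetes/ssl/admin.pem \
     --key /etc/kubernetes/ssl/admin-key.pem \
     https://192.168.25.30:6443/version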

3. Deploy the kube-controller-manager service

Create the unit file:

cat > kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --logtostderr=true \\
  --address=127.0.0.1 \\
  --master=http://127.0.0.1:8080 \\
  --allocate-node-cidrs=true \\
  --service-cluster-ip-range=10.254.0.0/16 \\
  --cluster-cidr=172.30.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --leader-elect=true \\
  --v=2
Restart=on-failure
LimitNOFILE=65536
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
  • --address must be 127.0.0.1, because kube-apiserver expects the scheduler and controller-manager to run on the same machine;
  • --master=http://{master_ip}:8080: communicate with kube-apiserver over the insecure port 8080;
  • --cluster-cidr specifies the CIDR range of the cluster's Pods; this network must be routable between all nodes (flanneld guarantees this);
  • --service-cluster-ip-range specifies the CIDR range of the cluster's Services; this network must not be routable between nodes, and the value must match the corresponding kube-apiserver parameter;
  • The certificate and key given by --cluster-signing-* are used to sign the certificates and keys created through TLS bootstrap;
  • --root-ca-file is used to verify the kube-apiserver certificate; when set, this CA certificate is also placed into the ServiceAccount of Pod containers;
  • --leader-elect=true: when multiple masters are deployed, elect a single continuously active kube-controller-manager process.

Start the service:

cp kube-controller-manager.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager

4. Deploy the kube-scheduler service

Create the unit file:

cat > kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --logtostderr=true \\
  --address=127.0.0.1 \\
  --master=http://127.0.0.1:8080 \\
  --leader-elect=true \\
  --v=2
Restart=on-failure
LimitNOFILE=65536
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
  • --address must be 127.0.0.1, because kube-apiserver expects the scheduler and controller-manager to run on the same host;
  • --master=http://{MASTER_IP}:8080: communicate with kube-apiserver over the insecure port 8080;
  • --leader-elect=true: when multiple masters are deployed, elect a single active kube-scheduler process.

Start kube-scheduler:

cp kube-scheduler.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler

5. Verify the master node

# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}

VIII: Deploying the Worker Nodes

1. Deploy the kubelet service

At startup, kubelet sends a TLS bootstrapping request to kube-apiserver. The kubelet-bootstrap user from the bootstrap token file must first be granted the system:node-bootstrapper role; only then does kubelet have permission to create a certificate signing request.

Grant the role binding (run once on the master):

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

Download and install kubelet and kube-proxy:

wget https://dl.k8s.io/v1.8.6/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cp -r kubernetes/server/bin/{kube-proxy,kubelet} /usr/local/bin/

Create the kubelet working directory:

mkdir -p /var/lib/kubelet

Configure kubelet:

master01

cat > kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \\
  --address=192.168.25.30 \\
  --hostname-override=192.168.25.30 \\
  --pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest \\
  --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --require-kubeconfig \\
  --cert-dir=/etc/kubernetes/ssl \\
  --container-runtime=docker \\
  --cluster-dns=10.254.0.2 \\
  --cluster-domain=cluster.local \\
  --hairpin-mode promiscuous-bridge \\
  --allow-privileged=true \\
  --serialize-image-pulls=false \\
  --register-node=true \\
  --logtostderr=true \\
  --cgroup-driver=cgroupfs \\
  --v=2
Restart=on-failure
KillMode=process
LimitNOFILE=65536
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
  • --address: the local IP. It must not be 127.0.0.1, otherwise Pods calling the kubelet API will fail, because inside a Pod 127.0.0.1 points to the Pod itself, not to the kubelet;
  • --hostname-override: the local IP;
  • --cgroup-driver is set to cgroupfs (it just has to match the cgroup driver configured in Docker);
  • --experimental-bootstrap-kubeconfig points to the bootstrap kubeconfig; kubelet uses the user name and token in this file to send the TLS bootstrapping request to kube-apiserver;
  • After the administrator approves the CSR, kubelet automatically creates the certificate and key (kubelet-client.crt and kubelet-client.key) in the --cert-dir directory, then writes the --kubeconfig file (created automatically);
  • It is recommended to put the kube-apiserver address in the --kubeconfig file. If --api-servers is not given, --require-kubeconfig must be set so the address is read from the config file; otherwise kubelet cannot find kube-apiserver after startup (the log reports no API server found) and kubectl get nodes returns no corresponding node;
  • --cluster-dns specifies the kubedns Service IP (it can be allocated now and assigned when the kubedns service is created later); --cluster-domain specifies the domain suffix; both parameters must be set together to take effect;
  • --cluster-domain determines the search domain in a Pod's /etc/resolv.conf. We initially set it to cluster.local., which resolved Service DNS names correctly but failed to resolve the FQDN pod names of headless services; changing it to cluster.local (dropping the trailing dot) fixed the problem;
  • The kubelet.kubeconfig file named by --kubeconfig=/etc/kubernetes/kubelet.kubeconfig does not exist before kubelet starts for the first time; it is generated automatically once the CSR request is approved. If ~/.kube/config has already been generated on the node, you can copy it to this path and rename it kubelet.kubeconfig (see the sketch below). All nodes can then share the same config file, so newly added nodes join the cluster without creating CSR requests. Likewise, on any host that can reach the cluster, kubectl --kubeconfig with ~/.kube/config passes authentication, because the credentials in this file belong to admin, which has full access to the cluster.
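
A sketch of the copy described in the last bullet, assuming /root/.kube/config has already been created on the node:

cp /root/.kube/config /etc/kubernetes/kubelet.kubeconfig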

Start the kubelet service:

cp kubelet.service /etc/systemd/system/kubelet.service
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

2. Approve the TLS certificate signing requests

When kubelet starts for the first time, it sends a certificate signing request to kube-apiserver; only after the request is approved is the node added to the cluster.

List the pending requests:

# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-450A0zCMYrGxWozsNukv6vh2NdBspA-hr6Rsz-LA9ro   3m        kubelet-bootstrap   Pending
node-csr-5t_AUkaEhT98xX1g7zTpzaNzRB9rXh453i2Fu_yxvvs   3m        kubelet-bootstrap   Pending
node-csr-p9r9gusX2kTGpyYFPlkoaSGyatLQmtDmL8NBee2D_s8   3m        kubelet-bootstrap   Pending

Approve them:

# kubectl certificate approve node-csr-450A0zCMYrGxWozsNukv6vh2NdBspA-hr6Rsz-LA9ro
certificatesigningrequest "node-csr-450A0zCMYrGxWozsNukv6vh2NdBspA-hr6Rsz-LA9ro" approved
# kubectl certificate approve node-csr-5t_AUkaEhT98xX1g7zTpzaNzRB9rXh453i2Fu_yxvvs
certificatesigningrequest "node-csr-5t_AUkaEhT98xX1g7zTpzaNzRB9rXh453i2Fu_yxvvs" approved
# kubectl certificate approve node-csr-p9r9gusX2kTGpyYFPlkoaSGyatLQmtDmL8NBee2D_s8
certificatesigningrequest "node-csr-p9r9gusX2kTGpyYFPlkoaSGyatLQmtDmL8NBee2D_s8" approved
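
With many nodes, approving requests one at a time gets tedious. One possible shortcut, only sensible when every pending CSR is expected, is to pipe the listed names into the approve command:

kubectl get csr -o name | xargs -n 1 kubectl certificate approve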

List all cluster nodes:

# kubectl get nodes
NAME            STATUS    ROLES     AGE       VERSION
192.168.25.30   Ready     <none>    15m       v1.8.6
192.168.25.31   Ready     <none>    15m       v1.8.6
192.168.25.32   Ready     <none>    15m       v1.8.6

3. Deploy the kube-proxy service

Create the working directory:

mkdir -p /var/lib/kube-proxy

Create the kube-proxy unit file:

cat > kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \\
  --bind-address=192.168.25.30 \\
  --hostname-override=192.168.25.30 \\
  --cluster-cidr=10.254.0.0/16 \\
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • --bind-address: the local IP;
  • --hostname-override: the local IP; it must match the kubelet's value, otherwise kube-proxy cannot find its node after starting and no iptables rules get written;
  • --cluster-cidr must match kube-apiserver's --service-cluster-ip-range option; kube-proxy uses --cluster-cidr to distinguish traffic inside the cluster from traffic outside it, and it only applies SNAT to requests to service IPs when --cluster-cidr or --masquerade-all is set;
  • The config file given by --kubeconfig embeds the kube-apiserver address, user name, certificate, and key used for requests and authentication;
  • The predefined ClusterRoleBinding system:node-proxier binds the user system:kube-proxy to the role system:node-proxier, which grants permission to call the proxy-related kube-apiserver APIs.

Start the kube-proxy service:

cp kube-proxy.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
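
As noted above, kube-proxy only writes iptables rules once it has found its node; a quick check is to look for the KUBE-SERVICES chain in the nat table:

iptables -t nat -nL KUBE-SERVICES | head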

IX: Installing Add-ons

The default manifests reference images on Google's registry, which must be changed, so Docker Hub is used as a relay. The modified yaml files can be downloaded here:

Baidu Netdisk (extraction code: o3z9)

1. DNS add-on

wget https://github.com/kubernetes/kubernetes/releases/download/v1.8.6/kubernetes.tar.gz
tar xzvf kubernetes.tar.gz
cd /root/kubernetes/cluster/addons/dns
mv kubedns-svc.yaml.sed kubedns-svc.yaml
# replace $DNS_SERVER_IP in the file with 10.254.0.2
sed -i 's/$DNS_SERVER_IP/10.254.0.2/g' ./kubedns-svc.yaml
mv ./kubedns-controller.yaml.sed ./kubedns-controller.yaml
# replace $DNS_DOMAIN with cluster.local
sed -i 's/$DNS_DOMAIN/cluster.local/g' ./kubedns-controller.yaml
ls *.yaml
kubedns-cm.yaml  kubedns-controller.yaml  kubedns-sa.yaml  kubedns-svc.yaml
kubectl create -f .
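
To verify that cluster DNS works once the kube-dns pods are running, one option is a disposable busybox pod that resolves the kubernetes service name (kubectl run syntax as in the v1.8 client; adjust for other versions):

kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup kubernetes.default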

2. Dashboard add-on

Download the deployment file:

wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.8.1/src/deploy/recommended/kubernetes-dashboard.yaml

Modify the Service section of the deployment file:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  # added
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      # added
      nodePort: 8510
  selector:
    k8s-app: kubernetes-dashboard

Create the pod:

kubectl create -f kubernetes-dashboard.yaml

Deploy the authentication binding:

cat > ./kubernetes-dashboard-admin.rbac.yaml << EOF
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system
EOF
kubectl create -f kubernetes-dashboard-admin.rbac.yaml

Access URL (currently only reachable with Firefox):

https://192.168.25.30:8510

3. Heapster add-on

Download the installation files:

wget https://github.com/kubernetes/heapster/archive/v1.5.0.tar.gz
tar xzvf ./v1.5.0.tar.gz
cd ./heapster-1.5.0/
kubectl create -f deploy/kube-config/influxdb/
kubectl create -f deploy/kube-config/rbac/heapster-rbac.yaml

Confirm that all pods start normally:

kubectl get pods --all-namespaces
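
Once the heapster pods are running, resource metrics should become available; in this release kubectl top is backed by heapster, so after a minute or two the following should return per-node CPU and memory usage:

kubectl top node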

X: Deploying Common Services

1. nginx

Deployment file:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: registry.cn-qingdao.aliyuncs.com/k8/nginx:1.9.0
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
  restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  sessionAffinity: ClientIP
  selector:
    app: nginx
  ports:
    # map the container's port 80 to port 8888 on the master host
    - port: 80
      nodePort: 8888
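
Assuming the manifest above is saved as nginx.yaml, it can be created and tested from any machine that can reach a node (8888 lies inside the --service-node-port-range of 8400-10000 configured earlier):

kubectl create -f nginx.yaml
curl http://192.168.25.30:8888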

2. MySQL

Deployment file:

apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  containers:
    - name: mysql
      image: mysql
      # environment variables
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"
      imagePullPolicy: IfNotPresent
      # port exposed by the container
      ports:
        - containerPort: 3306
  restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  type: NodePort
  sessionAffinity: ClientIP
  selector:
    app: mysql
  ports:
    - port: 3306
      nodePort: 9306
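
Assuming the manifest is saved as mysql.yaml, the pod can be created and then reached through the NodePort with any mysql client; the password matches the MYSQL_ROOT_PASSWORD set above:

kubectl create -f mysql.yaml
mysql -h 192.168.25.30 -P 9306 -uroot -p123456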

XI: Common Commands

1. View kubelet logs

journalctl -u kubelet -f

  • -u: select the systemd unit whose logs to show
  • -f: follow the log output as new entries arrive

2. View pod information

kubectl get pods --all-namespaces

3. View service information

kubectl get service --all-namespaces

4. View detailed information for a service

kubectl describe service zksvc