K8s Deployment Manual

1 Installation overview

Where each service is installed:

Master node (cluster master; runs the service-environment control modules, the API gateway control modules, and the platform management console):
docker, etcd, api-server, scheduler, controller-manager, kubelet, flannel, docker-compose, harbor, kube-dns, kube-dashboard

Worker node (deploys and runs the containerized services):
docker, etcd, kubelet, proxy, flannel
This document describes a single-machine deployment on 64-bit CentOS 7.3.
If the installation reports missing dependency packages, configure a yum repository by hand (see 1.1) and install them with yum install <package>.

1.1 Configure the yum repository

Mount the installation ISO: mount /dev/cdrom /mnt
Create the repository directory: mkdir -p /usr/local/yum
Copy the rpm packages from the mounted image into the yum directory:

cp -r /mnt/Packages /usr/local/yum
yum clean all
createrepo /usr/local/yum

Configure the yum .repo file (e.g. /etc/yum.repos.d/local.repo):

```ini
[local-resource]                # repository id
name=myrepo                     # repository description
baseurl=file:///usr/local/yum   # repository location; supported schemes: http://, ftp://, file:///
enabled=1                       # 1 enables the repository, 0 disables it
gpgcheck=0                      # whether to verify package signatures with GPG
gpgkey=                         # path to the GPG key file (not needed here)
```

List the available packages and install:

```bash
yum list               # list available packages
yum install <package>  # install an rpm package
```
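To confirm the local repository is usable before going further, a quick sanity check (the repo id `local-resource` is the one defined above):

```bash
yum clean all         # drop any stale metadata
yum repolist enabled  # local-resource should appear with a non-zero package count
```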

1.2 Disable the firewall and swap

```bash
systemctl stop firewalld.service     # stop the firewall
systemctl disable firewalld.service  # keep the firewall from starting on boot
swapoff -a                           # turn off all swap devices
```
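`swapoff -a` only lasts until the next reboot. To keep swap off permanently you can also comment out the swap entry in /etc/fstab (a suggested extra step, not part of the original procedure):

```bash
# comment out every active swap line so it is not remounted on boot
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
```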

2 Install docker (every node)

Upload the required docker rpm packages to the server.

Force-install docker and the packages it needs:
rpm -ivh --nodeps *.rpm
Start docker and enable it on boot:
systemctl daemon-reload
systemctl enable docker
systemctl start docker
systemctl status docker
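A quick check that the daemon actually came up (exact output varies with the docker version):

```bash
docker info   # prints daemon status; fails if the daemon is not running
docker ps     # should print an empty container list on a fresh install
```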

3 Create the TLS certificates and keys needed by the kubernetes components

(the keys are distributed to every node)

3.1 Files to generate

ca-key.pem、ca.pem、kubernetes-key.pem、kubernetes.pem、kube-proxy.pem、kube-proxy-key.pem、admin.pem、admin-key.pem

Certificates used by each component:

etcd:ca.pem / kubernetes-key.pem / kubernetes.pem;
kube-apiserver:ca.pem / kubernetes-key.pem / kubernetes.pem;
kubelet:ca.pem;
kube-proxy:ca.pem / kube-proxy-key.pem / kube-proxy.pem; 
kubectl:ca.pem / admin-key.pem / admin.pem

3.2 Install CFSSL

Move cfssl_linux-amd64.bin, cfssl-certinfo_linux-amd64.bin and cfssljson_linux-amd64.bin to /usr/local/bin and configure the PATH.
Run the following:
[root@localhost cfssl]# mv cfssl_linux-amd64.bin /usr/local/bin/cfssl
[root@localhost cfssl]# mv cfssljson_linux-amd64.bin /usr/local/bin/cfssljson
[root@localhost cfssl]# mv cfssl-certinfo_linux-amd64.bin /usr/local/bin/cfssl-certinfo
[root@localhost cfssl]# vi /etc/profile
Append: export PATH=/usr/local/bin:$PATH
[root@localhost ssl]# source /etc/profile	### take effect immediately
[root@localhost ssl]# chmod +x /usr/local/bin/cfssl*
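Verify the binaries are on the PATH and executable:

```bash
cfssl version   # prints the cfssl release
cfssljson -h    # should print usage without an error
```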

3.3 Create the CA config file

[root@localhost /]# mkdir -p /root/ssl
[root@localhost /]# cd /root/ssl/
[root@localhost ssl]# cfssl print-defaults config > config.json
[root@localhost ssl]# cfssl print-defaults csr > csr.json
[root@localhost ssl]# vi ca-config.json
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}

3.3.1 Create the CA certificate signing request

[root@localhost ssl]# vi  ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

3.3.2 Generate the CA certificate and key

[root@localhost ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
[root@localhost ssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  config.json  csr.json

3.4 Create the kubernetes certificate signing request

The hosts list must contain every address the apiserver is reached by: the node IP and the first IP of the service CIDR (the service-cluster-ip-range configured later is 10.254.0.0/16, so its first IP is 10.254.0.1).

[root@localhost ssl]# vi kubernetes-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "169.254.6.20",
    "10.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

3.4.1 Generate the kubernetes certificate and key

[root@localhost ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem  -config=ca-config.json  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
2019/04/12 20:28:07 [INFO] generate received request
2019/04/12 20:28:07 [INFO] received CSR
2019/04/12 20:28:07 [INFO] generating key: rsa-2048
2019/04/12 20:28:07 [INFO] encoded CSR
2019/04/12 20:28:08 [INFO] signed certificate with serial number 341419041177557148191513412510722414927398891570
2019/04/12 20:28:08 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

3.5 Create the admin certificate

[root@localhost ssl]# cat admin-csr.json 
{
        "CN": "admin",
        "hosts": [],
        "key": {
                "algo": "rsa",
                "size": 2048
        },
        "names": [
         {
                "C": "CN",
                "ST": "BeiJing",
                "L": "BeiJing",
                "O": "system:masters",
                "OU": "System"
         }
        ]
}

3.5.1 Generate the admin certificate and key

[root@localhost ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
2019/04/12 20:33:19 [INFO] generate received request
2019/04/12 20:33:19 [INFO] received CSR
2019/04/12 20:33:19 [INFO] generating key: rsa-2048
2019/04/12 20:33:19 [INFO] encoded CSR
2019/04/12 20:33:19 [INFO] signed certificate with serial number 523319795135016522528190714496735510305838417368
2019/04/12 20:33:19 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

3.6 Create the kube-proxy certificate

[root@localhost ssl]# cat kube-proxy-csr.json 
{
        "CN": "system:kube-proxy",
        "hosts": [],
        "key": {
                "algo": "rsa",
                "size": 2048
        },
        "names": [
                {
                        "C": "CN",
                        "ST": "BeiJing",
                        "L": "BeiJing",
                        "O": "k8s",
                        "OU": "System"
                }
        ]
}

3.6.1 Generate the kube-proxy certificate and key

[root@localhost ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2019/04/12 20:36:40 [INFO] generate received request
2019/04/12 20:36:40 [INFO] received CSR
2019/04/12 20:36:40 [INFO] generating key: rsa-2048
2019/04/12 20:36:40 [INFO] encoded CSR
2019/04/12 20:36:40 [INFO] signed certificate with serial number 322993585988121148066141818038794839711351512333
2019/04/12 20:36:40 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

3.7 Verify the certificates with openssl

[root@localhost ssl]# openssl x509 -noout -text -in kubernetes.pem
Verify as follows:
Issuer matches ca-csr.json
Subject matches kubernetes-csr.json
X509v3 Subject Alternative Name matches kubernetes-csr.json
X509v3 Key Usage / Extended Key Usage match the kubernetes profile in ca-config.json
View the certificate with cfssl-certinfo:
[root@localhost ssl]# cfssl-certinfo -cert kubernetes.pem

3.8 Distribute the certificates

[root@localhost ssl]# mkdir -p /etc/kubernetes/ssl
[root@localhost ssl]# cp *.pem /etc/kubernetes/ssl/
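The keys are needed on every node. A hedged example of pushing them out (the worker IPs here are placeholders; substitute your own):

```bash
# copy the PEM files to each node, creating the target directory first
for node in 169.254.6.21 169.254.6.22; do
  ssh root@${node} "mkdir -p /etc/kubernetes/ssl"
  scp /etc/kubernetes/ssl/*.pem root@${node}:/etc/kubernetes/ssl/
done
```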

4 Create the kubeconfig files (distribute to every node)

4.1 Create the TLS bootstrapping token

Add the environment variable:
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
[root@localhost ssl]# vi /etc/profile
[root@localhost ssl]# touch token.csv
[root@localhost ssl]# vi token.csv
Add the following line, substituting the actual token value for ${BOOTSTRAP_TOKEN}:
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
Distribute to all nodes:
[root@localhost ssl]# cp token.csv /etc/kubernetes/
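The same thing in one shot; echo expands the variable, so the file ends up holding the literal token rather than the variable name:

```bash
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /etc/kubernetes/token.csv
cat /etc/kubernetes/token.csv   # confirm the expanded token landed in the file
```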

4.2 Create the kubeconfig files

4.2.1 Generate bootstrap.kubeconfig

Set the cluster parameters (export KUBE_APISERVER first; the value used throughout this manual is shown below):
[root@localhost kubernetes]# export KUBE_APISERVER="https://169.254.6.20:6443"
[root@localhost kubernetes]# kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig

Set the client authentication parameters:
[root@localhost kubernetes]# kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig

Set the context parameters:
[root@localhost kubernetes]# kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig

Set the default context:
[root@localhost kubernetes]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

4.2.2 Generate kube-proxy.kubeconfig

--embed-certs=true embeds the certificate-authority certificate in the generated kubeconfig file.
No key or certificate was specified when the client authentication parameters were set above; kube-apiserver generates them later during TLS bootstrapping.

[root@localhost kubernetes]# export KUBE_APISERVER="https://169.254.6.20:6443"
Set the cluster parameters:
[root@localhost kubernetes]#kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
Set the client authentication parameters:
[root@localhost kubernetes]#kubectl config set-credentials kube-proxy \
--client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
--client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
Set the context parameters:
[root@localhost kubernetes]#kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
Set the default context:
[root@localhost kubernetes]#kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

4.2.3 Create the kubectl kubeconfig file

[root@localhost /]# chmod a+x /usr/local/bin/kube*
[root@localhost /]# export KUBE_APISERVER="https://169.254.6.20:6443"
Set the cluster parameters:
[root@localhost /]# kubectl config set-cluster kubernetes \
> --certificate-authority=/etc/kubernetes/ssl/ca.pem \
> --embed-certs=true \
> --server=${KUBE_APISERVER}
Output: Cluster "kubernetes" set.
Set the client authentication parameters:
[root@localhost /]# kubectl config set-credentials admin \
> --client-certificate=/etc/kubernetes/ssl/admin.pem \
> --embed-certs=true \
> --client-key=/etc/kubernetes/ssl/admin-key.pem
Output: User "admin" set.
Set the context parameters:
[root@localhost /]# kubectl config set-context kubernetes \
> --cluster=kubernetes \
> --user=admin
Output: Context "kubernetes" set.
Set the default context:
[root@localhost /]# kubectl config use-context kubernetes
Output: Switched to context "kubernetes".
The resulting configuration can be inspected with:
[root@localhost /]# cat ~/.kube/config

5 Deploy the etcd cluster

5.1 Upload and unpack

[root@localhost etcd]# tar -zxvf etcd-v3.1.7-linux-amd64.tar.gz 
[root@localhost etcd-v3.1.7-linux-amd64]# mv etcd* /usr/local/bin/
export ETCD_NAME=sz-pg-oam-docker-test-001.tendcloud.com
export INTERNAL_IP=169.254.6.20

5.2 Edit the etcd.service unit file and the conf file
5.2.1 Edit the unit file

[root@localhost etcd]# vi /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd \
--name=sz-pg-oam-docker-test-001.tendcloud.com \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem  \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--initial-advertise-peer-urls   https://169.254.6.20:2380 \
--listen-peer-urls      https://169.254.6.20:2380 \
--listen-client-urls    https://169.254.6.20:2379,https://127.0.0.1:2379 \
--advertise-client-urls https://169.254.6.20:2379 \
--initial-cluster-token etcd-cluster-0 \
--initial-cluster       sz-pg-oam-docker-test-001.tendcloud.com=https://169.254.6.20:2380 \
--initial-cluster-state new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

5.2.2 Edit the config file

[root@localhost etcd]#mkdir -p /etc/etcd/
[root@localhost etcd]# vi /etc/etcd/etcd.conf
[member]
ETCD_NAME="sz-pg-oam-docker-test-001.tendcloud.com"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="https://0.0.0.0:2379"

[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://169.254.6.20:2380"
ETCD_INITIAL_CLUSTER="sz-pg-oam-docker-test-001.tendcloud.com=https://169.254.6.20:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-0"
ETCD_ADVERTISE_CLIENT_URLS="https://169.254.6.20:2379"

5.3 Start etcd and enable it on boot

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd

5.4 Check the cluster

etcdctl --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/kubernetes.pem --key-file=/etc/kubernetes/ssl/kubernetes-key.pem cluster-health

etcdctl --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/kubernetes.pem --key-file=/etc/kubernetes/ssl/kubernetes-key.pem  member list 
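To avoid repeating the TLS flags on every call, etcdctl (v2 API) can also read them from the environment; a hedged convenience setup:

```bash
export ETCDCTL_CA_FILE=/etc/kubernetes/ssl/ca.pem
export ETCDCTL_CERT_FILE=/etc/kubernetes/ssl/kubernetes.pem
export ETCDCTL_KEY_FILE=/etc/kubernetes/ssl/kubernetes-key.pem
etcdctl --endpoints=https://169.254.6.20:2379 cluster-health
```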

6 Install Harbor (master node)

Upload harbor-v1.5.2.tgz and docker-compose-Linux-x86_64.bin.

6.1 Install docker-compose

[root@localhost harbor]# cp docker-compose-Linux-x86_64 /usr/local/kubernetes/bin/docker-compose
[root@localhost harbor]# chmod +x /usr/local/kubernetes/bin/docker-compose

6.2 Install and configure harbor

[root@localhost harbor]#tar xf harbor-offline-installer-v1.5.0.tgz

6.2.1 Configure harbor.cfg

Set the hostname to the local IP:
hostname = 10.10.10.1
Set the admin login password:
harbor_admin_password = Harbor12345

6.2.2 Install

### run these from inside the harbor directory; logs are written to /var/log/harbor/
[root@localhost harbor]# cd /usr/local/kubernetes/harbor/
[root@localhost harbor]# ./prepare
[root@localhost harbor]# ./install.sh
Check that the generated containers are running:
[root@localhost harbor]# docker ps        ### every harbor container should show status Up

6.2.3 Trust the insecure registry

Add --insecure-registry=<registry IP> to the docker startup file,
or add {"insecure-registries": ["<registry IP>"]} to /etc/docker/daemon.json.
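A concrete sketch using the harbor address configured above; docker has to be restarted afterwards:

```bash
cat > /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["10.10.10.1"]
}
EOF
systemctl restart docker
```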

6.2.4 Test
6.2.4.1 Log in with the docker CLI
[root@localhost harbor]# docker login 10.10.10.1 ### account: admin, password: Harbor12345
Username: admin
Password:
Login Succeeded

7 Deploy the kubernetes master node

The master node runs:
kube-apiserver
kube-scheduler
kube-controller-manager
Upload and unpack kubernetes-server-linux-amd64.tar.gz:

[root@localhost /]#tar -zxvf  kubernetes-server-linux-amd64.tar.gz
Copy the binaries to /usr/local/bin/:
[root@localhost bin]# cp -r {kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/local/bin/

7.1 Configure and start kube-apiserver
7.1.1 Configure the unit file

[root@localhost /]# vi /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/local/bin/kube-apiserver \
				$KUBE_LOGTOSTDERR \
				$KUBE_LOG_LEVEL \
				$KUBE_ETCD_SERVERS \
				$KUBE_API_ADDRESS \
				$KUBE_API_PORT \
				$KUBELET_PORT \
				$KUBE_ALLOW_PRIV \
				$KUBE_SERVICE_ADDRESSES \
				$KUBE_ADMISSION_CONTROL \
				$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

7.1.2 Configure the shared config file

This file is shared by kube-apiserver, kube-controller-manager, kube-scheduler, kubelet and kube-proxy.
[root@localhost /]# vi /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=true"
KUBE_MASTER="--master=http://169.254.6.20:8080"

7.1.3 Configure apiserver

[root@localhost /]# vi /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--advertise-address=169.254.6.20 --bind-address=0.0.0.0 --insecure-bind-address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd-servers=http://169.254.6.20:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
KUBE_API_ARGS="--authorization-mode=RBAC --runtime-config=rbac.authorization.k8s.io/v1beta1 --kubelet-https=true --experimental-bootstrap-token-auth --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-32767 --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem --enable-swagger-ui=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/lib/audit.log --event-ttl=1h"

7.1.4 Start and enable on boot

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
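Because the insecure port 8080 is enabled through KUBE_MASTER above, a quick liveness check is possible:

```bash
curl http://169.254.6.20:8080/healthz   # expect the literal response: ok
kubectl get ns                          # the default namespaces should be listed
```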

7.2 Configure and start kube-controller-manager
7.2.1 Configure the unit file

[root@localhost /]# vi /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/local/bin/kube-controller-manager \
    $KUBE_LOGTOSTDERR \
    $KUBE_LOG_LEVEL \
    $KUBE_MASTER \
    $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

7.2.2 Edit the controller-manager config file

[root@localhost /]# vi /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 --service-cluster-ip-range=10.254.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem --root-ca-file=/etc/kubernetes/ssl/ca.pem --leader-elect=true"

7.2.3 Start and enable on boot

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager

7.3 Configure and start kube-scheduler
7.3.1 Configure the kube-scheduler unit file

[root@localhost /]# vi /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/local/bin/kube-scheduler \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

7.3.2 Edit the scheduler config file

[root@localhost /]# vi /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=127.0.0.1"

7.3.3 Start and enable on boot

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler

7.4 Check the master components

[root@localhost kubernetes]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
etcd-0               Healthy   {"health": "true"}   
scheduler            Healthy   ok                   
controller-manager   Healthy   ok  

8 Deploy the kubernetes node

8.1 Configure and start flanneld
8.1.1 Preparation

[root@localhost flannel]# ls
flannel-v0.7.1-linux-amd64.tar.gz
[root@localhost flannel]# tar -zxvf flannel-v0.7.1-linux-amd64.tar.gz 
[root@localhost k8s]# mv flannel-v0.7.1-linux-amd64.tar.gz flannel/
[root@localhost k8s]# cd flannel/
Copy the files into place (the paths must match the unit file below):
[root@localhost bin]# cp -r flanneld /usr/local/bin/
[root@localhost bin]# cp -r mk-docker-opts.sh /usr/libexec/flannel/

8.1.2 Configure the unit file

[root@localhost bin]# vi /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/local/bin/flanneld \
  -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \
  -etcd-prefix=${FLANNEL_ETCD_PREFIX} \
  $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

8.1.3 Edit the flanneld config file

[root@localhost sysconfig]# vi /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://169.254.6.20:2379"
FLANNEL_ETCD_PREFIX="/kube-centos/network"
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"

8.1.4 Set the etcd key

[root@localhost sysconfig]# etcdctl mkdir /kube-centos/network
[root@localhost sysconfig]# etcdctl mk /kube-centos/network/config  "{ \"Network\": \"172.17.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"
The key path must match FLANNEL_ETCD_PREFIX above, and the Network range must be in the same subnet as docker's own address range.
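Confirm the key was written where flanneld will look for it:

```bash
etcdctl get /kube-centos/network/config   # should echo the JSON written above
```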

8.1.5 Adjust the docker configuration

Because docker has to run on the flannel network, add the following to the docker unit file:
EnvironmentFile=-/etc/sysconfig/flanneld
EnvironmentFile=-/run/flannel/subnet.env
and append --bip=${FLANNEL_SUBNET} to the dockerd command line, as sketched below.
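A hedged sketch of that change as a systemd drop-in (the drop-in file name is illustrative; /run/flannel/subnet.env is written by flanneld and defines FLANNEL_SUBNET; adjust the dockerd path to match your unit file):

```bash
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/flannel.conf <<'EOF'
[Service]
EnvironmentFile=-/etc/sysconfig/flanneld
EnvironmentFile=-/run/flannel/subnet.env
ExecStart=
ExecStart=/usr/bin/dockerd --bip=${FLANNEL_SUBNET}
EOF
```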

8.1.6 Start flanneld and restart docker

systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld
systemctl daemon-reload 
systemctl restart docker.service

8.1.7 Verify

Check the network configuration with ifconfig.
The flannel interface and docker0 should be in the same network range.
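For example (the interface name depends on the backend; with vxlan it is typically flannel.1):

```bash
ip -4 addr show docker0     # docker0 gets a /24 handed out by flannel
ip -4 addr show flannel.1   # both addresses should fall inside the flannel Network range
```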

8.2 Install and configure kubelet
8.2.1 Preparation

When kubelet starts it sends a TLS bootstrapping request to kube-apiserver. The kubelet-bootstrap user from the bootstrap token file must first be granted the system:node-bootstrapper role; only then does kubelet have permission to create certificate signing requests.

Bind the role (kubelet-bootstrap is the user named in /etc/kubernetes/token.csv and written into /etc/kubernetes/bootstrap.kubeconfig):
[root@localhost sysconfig]# kubectl create clusterrolebinding kubelet-bootstrap \
> --clusterrole=system:node-bootstrapper \
> --user=kubelet-bootstrap
clusterrolebinding "kubelet-bootstrap" created

Copy the binaries to /usr/local/bin/:
[root@localhost bin]# cp -r {kube-proxy,kubelet} /usr/local/bin/
Create the kubelet working directory:
mkdir -p /var/lib/kubelet
8.2.2 Create the kubelet.service unit file
[root@localhost sysconfig]# vi /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target

8.2.3 Edit the kubelet config file

[root@localhost sysconfig]# vi /etc/kubernetes/kubelet   ### must match the EnvironmentFile path in the unit file
KUBELET_ADDRESS="--address=169.254.6.20"
KUBELET_HOSTNAME="--hostname-override=169.254.6.20"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=k8s.gcr.io/pause-amd64:3.1"
KUBELET_ARGS="--cgroup-driver=systemd --cluster-dns=10.254.0.2 --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --require-kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false --allow-privileged=true --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"

8.2.4 Start and enable on boot

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
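With TLS bootstrapping, the node's client certificate is only issued after an administrator approves its CSR, so the node does not register until then. A hedged approval flow (the CSR name is whatever `kubectl get csr` prints on your cluster):

```bash
kubectl get csr                          # the new node's request shows up as Pending
kubectl certificate approve <csr-name>   # substitute the real CSR name
kubectl get nodes                        # the node should now register and go Ready
```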

8.3 Install and configure kube-proxy
Install conntrack-tools.x86_64:

[root@localhost cfssl]# yum install conntrack-tools.x86_64

8.3.1 Create the kube-proxy.service unit file

[root@localhost sysconfig]# vi /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

8.3.2 Create the proxy config file

[root@localhost sysconfig]# vi /etc/kubernetes/proxy   ### must match the EnvironmentFile path in the unit file
KUBE_PROXY_ARGS="--bind-address=169.254.6.20 --hostname-override=169.254.6.20 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16"

8.3.3 Start and enable on boot

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy

8.4 Deploy an nginx service as a test
8.4.1 Push the docker image to harbor
Tag the image:
docker tag <image> 169.254.6.20/library/<image>
Push the image:
docker push 169.254.6.20/library/<image>
8.4.2 Create nginx

Create and run it with:
[root@localhost cfssl]# kubectl run nginx --replicas=2 --labels="run=load-balancer-example" --image=169.254.6.20/library/vmware/nginx-photon  --port=80
[root@localhost cfssl]# kubectl expose deployment nginx --type=NodePort --name=example-service
Inspect the service:
[root@localhost cfssl]#  kubectl describe svc example-service
Name:                     example-service
Namespace:                default
Labels:                   run=load-balancer-example
Annotations:              <none>
Selector:                 run=load-balancer-example
Type:                     NodePort
IP:                       10.254.21.167
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  32262/TCP
Endpoints:                172.17.86.2:80,172.17.86.3:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
Use kubectl to confirm that the service's pod, service and deployment all report running.

8.4.3 Verify access
curl 172.17.86.2:80, or browse to <node IP>:32262 to verify.
8.4.4 Common kubectl commands

View services
kubectl get svc
View deployments
kubectl get deployment
Delete a resource
kubectl delete <type> <name>
View details and events
kubectl describe <type> <name>
Edit a resource in place
kubectl edit <type>/<name> -n default
where <type> is one of pod, svc, rc, deployment

9 Install and configure kubedns (master node)

9.1 Push the images to harbor

Tag the images:
docker tag k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8  169.254.6.20/library/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8  169.254.6.20/library/k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
docker tag k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8  169.254.6.20/library/k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
Push them to harbor:
docker push 169.254.6.20/library/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker push 169.254.6.20/library/k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
docker push 169.254.6.20/library/k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8

9.2 Configure the yaml files

KubeDNS is defined by kubedns-cm.yaml, kubedns-sa.yaml, kubedns-controller.yaml and kubedns-svc.yaml; change the images referenced in the yaml files to the local harbor copies.
vi kubedns-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
vi kubedns-sa.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile

vi kubedns-controller.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: 169.254.6.20/library/k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        - --kube-master-url=http://169.254.6.20:8080  ### added line; without it connections to the apiserver are refused
        #__PILLAR__FEDERATIONS__DOMAIN__MAP__
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: 169.254.6.20/library/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --log-facility=-
        - --server=/cluster.local./127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: 169.254.6.20/library/k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local.,5,A
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local.,5,A
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns

vi kubedns-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

9.3 Apply all definition files

Command and output:
[root@localhost kubedns]# kubectl create -f .
configmap "kube-dns" created
deployment "kube-dns" created
serviceaccount "kube-dns" created
service "kube-dns" created
To delete them again:
kubectl delete -f .

9.4 Check the kubedns status with kubectl
kubectl get svc -n kube-system
kubectl get deployment -n kube-system
kubectl get pods -n kube-system
If everything the commands above report is in the running state, kubedns is healthy.
Check the events for errors:
kubectl describe pod -n kube-system <pod name>
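An end-to-end DNS check from inside the cluster (a hedged sketch; it assumes a busybox image the nodes can pull, e.g. one pushed to the local harbor first):

```bash
kubectl run dns-test -it --rm --restart=Never \
  --image=169.254.6.20/library/busybox -- nslookup kubernetes.default
# a working kube-dns resolves the name to the service IP 10.254.0.1
```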

10 Install the dashboard add-on (master node)

10.1 Push the image to harbor

Load the image into docker:
docker load -i kubernetes-dashboard-amd64-v1.8.3.tar.gz
Tag it and push it to Harbor:
docker tag k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3 169.254.6.20/library/k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
docker push 169.254.6.20/library/k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3

10.2 Edit the yaml files

The dashboard image in the file must point at the local registry; change it to the corresponding image address and version:

vi dashboard-controller.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: dashboard
      containers:
      - name: kubernetes-dashboard
        image: 169.254.6.20/library/k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        args:
          - --apiserver-host=http://169.254.6.20:8080    ### added line; without it connections to the apiserver are refused
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
vi dashboard-service.yaml
The service type is NodePort, so the dashboard is reachable from outside at nodeIP:nodePort.
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  type: NodePort 
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090

vi dashboard-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard
  namespace: kube-system

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dashboard
subjects:
  - kind: ServiceAccount
    name: dashboard
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

10.3 Apply all definition files

Command:
[root@localhost kubedns]# kubectl create -f .

10.4 Check the dashboard status with kubectl

kubectl get svc -n kube-system
kubectl get deployment -n kube-system
kubectl get pods -n kube-system
If everything the commands above report is in the running state, the dashboard is healthy.
Check the events for errors:
kubectl describe pod -n kube-system <pod name>

10.5 Verify
Open http://169.254.6.20:<NodePort>; if the dashboard loads, the deployment succeeded.
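The assigned NodePort can be read off the service before browsing to it:

```bash
kubectl get svc kubernetes-dashboard -n kube-system
# the PORT(S) column shows 80:<NodePort>/TCP; browse to http://169.254.6.20:<NodePort>
```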

11 All services and configuration files

11.1 Services

Start order (docker appears twice because it is restarted after flanneld comes up):

systemctl start docker
systemctl start etcd
systemctl start kube-apiserver
systemctl start kube-scheduler
systemctl start kube-controller-manager
systemctl start flanneld
systemctl start docker
systemctl start kubelet
systemctl start kube-proxy

12 Common k8s commands:

Describe a specific pod
kubectl describe pod dlsupd-0

List all pods with details
kubectl get pod -o wide

View a specific pod's logs
kubectl logs kernel-0 kernel

cd into the pod's project directory, then:
Delete the pods
kubectl delete -f .
Recreate the pods
kubectl apply -f .

List pods in all namespaces
kubectl get po --all-namespaces
View a statefulset
kubectl get sts front-reverse
Describe a statefulset
kubectl describe sts front-reverse

Exec into a pod
kubectl exec -it paas-oss-0 -c paas-oss sh
View the generated configuration
kubectl edit pod web-0   ### web-0 is the pod name

When the pod's yaml file is not available, restart it with:
kubectl get pod -n apm filebeat-xpdqr -o yaml | kubectl replace --force -f -

List all nodes
kubectl get nodes
kubectl config use-context context-WL0005-admin
kubectl config get-clusters
kubectl config delete-cluster AC0013
kubectl config get-contexts