Kubernetes v1.10.0 Installation Guide (Part 1)

00. Installation Preparation

This guide is best followed on machines with a freshly reinstalled operating system.

Operating system

CentOS 7 or later, with kernel version 3.10 or later

[root@master /]# lsb_release -a
LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: CentOS
Description:    CentOS Linux release 7.5.1804 (Core) 
Release:        7.5.1804
Codename:       Core

Recommendation: when reinstalling the system, choose the image version centos7.1_64bit_en_basic.

Linux version

uname -a
Linux master 3.10.0-229.el7.x86_64 #1 SMP Fri Mar 6 11:36:42 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Component versions

Before installing, check against your yum repository configuration that each component below is available and that the version numbers are correct (a sample check follows the list).

kubernetes: 1.10.3
docker: 1.13.1 (docker-1.13.1-75.git8633870.el7.centos.x86_64, linux/amd64)
etcd: 3.2.22 (etcdctl API version: 2)
flannel: 0.7.1
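
A quick optional way to see what the repositories actually provide, assuming the package names used later in this guide (adjust them to match your yum sources):

yum --showduplicates list kubernetes-master docker etcd flannel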

01. Environment Planning and Common Setup

Environment planning

Version to install: Kubernetes v1.10.0

Node              IP address            Components
master            10.66.77.221          kube-apiserver; kube-controller-manager; kube-scheduler; kube-proxy; docker; flannel; etcd
node1             10.66.77.222          kubelet; kube-proxy; docker; flannel
node2             10.66.77.223          kubelet; kube-proxy; docker; flannel
image registry    10.66.77.227:10050    None

Common setup

The steps in this part must be performed on every machine (master, etcd, and all nodes).

Disable the firewall

systemctl stop firewalld.service 
systemctl disable firewalld.service

Note: the first command stops the firewall; the second keeps it from starting at boot (permanently disabled).

Permanently disable SELinux

vi /etc/selinux/config 
SELINUX=disabled
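
The setting above only takes effect after a reboot. If you also want SELinux to stop enforcing in the current session, you can optionally run:

setenforce 0
getenforce     # should now report Permissive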

Set the hostname of each host accordingly

vi /etc/hostname

Note: on each machine, set the file content to that machine's name. For example, on the master machine the file content should be 'master'.

Add HOSTS entries

vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain
10.66.77.221  master etcd
10.66.77.222 node1
10.66.77.223 node2
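
Optionally, confirm that the names resolve to the addresses planned above:

getent hosts master node1 node2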

Disable swap

swapoff -a && sysctl -w vm.swappiness=0            
rm -rf $(swapon -s | awk 'NR>1{print $1}')              
sed -i 's/.*swap.*/#&/' /etc/fstab               
free -m          
systemctl daemon-reload               

The commands above are explained in order:

  • Turn swap off
  • Remove the swap content
  • Comment out the swap entry in /etc/fstab
  • Run free -m to confirm swap is now off
  • Reload so the configuration change takes effect
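
Two optional extra checks to confirm swap is gone now and will stay off after a reboot:

swapon -s               # should print nothing (or only the header line)
grep swap /etc/fstab    # the swap entry should now be commented out with '#'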

HTTP proxy settings for CentOS (optional)

vi /etc/profile
https_proxy=test@test.com
http_proxy=test@test.com
export https_proxy
export http_proxy
source /etc/profile

Reboot the machine

reboot

02. Install Docker

Install Docker

yum install docker -y

Configure a dedicated proxy for Docker

Reference document for Docker proxy configuration

Create the directory and configuration file

mkdir /etc/systemd/system/docker.service.d
vi /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://test@test.com/"
Environment="NO_PROXY=localhost,127.0.0.1,registry.docker-cn.com,hub-mirror.c.163.com,10.66.77.227:10050"

The HTTPS proxy is optional; adding it carries a risk of Docker runtime errors. To add it, include the following line:

Environment="HTTPS_PROXY=https://test@test.com/"

Change the storage driver and disable SELinux

Reference document for the storage driver change

systemctl stop docker                       
rm -rf /var/lib/docker                      
vi /etc/sysconfig/docker-storage             
DOCKER_STORAGE_OPTIONS="--storage-driver devicemapper"
vi /etc/sysconfig/docker                   
OPTIONS='--log-driver=journald --signature-verification=false'                    
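
Once Docker has been restarted later in this section, you can optionally confirm that the storage driver change took effect:

docker info | grep -i 'storage driver'      # should report devicemapper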

Configure the Docker private registry and proxy

Goal: be able to pull images from the public internet, and push images to the internal private registry.

Prerequisite: the private image registry has already been set up at 10.66.77.227:10050;
its web access address is http://10.66.77.227:10080

Note: in the file below, the only flags that need to be added are:

--add-registry=10.66.77.227:10050 --insecure-registry=10.66.77.227:10050

vi /usr/lib/systemd/system/docker.service

ExecStart=/usr/bin/dockerd-current --add-registry=10.66.77.227:10050 --insecure-registry=10.66.77.227:10050 \
          --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
          --default-runtime=docker-runc \
          --exec-opt native.cgroupdriver=systemd \
          --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
          --init-path=/usr/libexec/docker/docker-init-current \
          --seccomp-profile=/etc/docker/seccomp.json \
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY \
          $REGISTRIES

systemctl daemon-reload                    
systemctl restart docker

Test run

docker run busybox echo "Hello world"

Verify the Docker installation

Check whether Docker is running with the following command:

systemctl status docker | grep active
Active: active (running) since Thu 2018-12-13 03:04:32 CST; 1 day 8h ago

Check that the Docker registry and proxy settings are correct with the following commands:

docker info | grep http

docker info | grep Registries

Check that the version is correct with docker version:

docker version

Client:
Version:         1.13.1
API version:     1.26
Package version: docker-1.13.1-75.git8633870.el7.centos.x86_64
Go version:      go1.9.4
Git commit:      8633870/1.13.1
Built:           Fri Sep 28 19:45:08 2018
OS/Arch:         linux/amd64

Server:
Version:         1.13.1
API version:     1.26 (minimum version 1.12)
Package version: docker-1.13.1-75.git8633870.el7.centos.x86_64
Go version:      go1.9.4
Git commit:      8633870/1.13.1
Built:           Fri Sep 28 19:45:08 2018
OS/Arch:         linux/amd64
Experimental:    false

03. Self-Signed TLS Certificates

Unless otherwise noted, the steps in this section are performed only on the master.

Install the certificate generation tool cfssl

If the CentOS 7 proxy is not configured (or does not work), you can download the target files from a Windows environment via the links and then copy them onto the master.

mkdir /data/ssl -p

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64

mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

cd /data/ssl/
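
A quick optional check that the binaries are executable and on the PATH:

cfssl version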

Create the certificate script file

When creating the shell script below, adjust the hosts section of server-csr.json.
The hosts are the master IP, the etcd IP, and the first IP of the cluster service network, as follows:

"127.0.0.1",     # local address
"10.66.77.221",  # the planned MASTER IP
"10.66.77.221",  # the planned ETCD IP (the same machine as the master in this plan)
"10.255.0.1"     # the first IP of the cluster service network

vim certificate.sh
cat > ca-config.json <<EOF
{
"signing": {
    "default": {
    "expiry": "87600h"
    },
    "profiles": {
    "kubernetes": {
        "expiry": "87600h",
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
    }
    }
}
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
    "127.0.0.1",
    "10.66.77.221",
    "10.255.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
{
"CN": "admin",
"hosts": [],
"key": {
    "algo": "rsa",
    "size": 2048
},
"names": [
    {
    "C": "CN",
    "L": "BeiJing",
    "ST": "BeiJing",
    "O": "system:masters",
    "OU": "System"
    }
]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------

cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
    "algo": "rsa",
    "size": 2048
},
"names": [
    {
    "C": "CN",
    "L": "BeiJing",
    "ST": "BeiJing",
    "O": "k8s",
    "OU": "System"
    }
]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Note: remember to modify the IP addresses in the hosts list to match your environment.

Generate the certificates

sh certificate.sh

Check the result; the following files are generated:

admin.csr              # not used for now
admin-csr.json         # not used for now
admin-key.pem          # not used for now
admin.pem              # not used for now
ca-config.json         # temporary file, used to generate the root certificate
ca.csr                 # not used for now
ca-csr.json            # temporary file, used to generate the root certificate
ca-key.pem             # root certificate private key
ca.pem                 # root certificate
kube-proxy.csr         # not used for now
kube-proxy-csr.json    # temporary file, used to generate the kube-proxy client certificate for the NODE machines
kube-proxy-key.pem     # kube-proxy private key for the NODE machines
kube-proxy.pem         # kube-proxy certificate for the NODE machines
server.csr             # not used for now
server-csr.json        # used to generate the NODE, MASTER, and ETCD certificates
server-key.pem         # private key for the NODE, MASTER, and ETCD machines
server.pem             # certificate for the NODE, MASTER, and ETCD machines
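
If you want to verify what was actually signed, cfssl-certinfo can decode a certificate; the sans field of server.pem should list every entry from the hosts section of server-csr.json:

cfssl-certinfo -cert server.pem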

Deployment preparation

Copy the required files to the node machines. First, the target directory must be created on every machine in the cluster:

mkdir /opt/kubernetes/{bin,cfg,ssl} -p    # this must be run on all master, etcd, and node machines

cd /data/ssl
chmod 777 *
cp ca*pem  server*pem  /opt/kubernetes/ssl/

scp -r /opt/kubernetes/*  root@10.66.77.222:/opt/kubernetes
scp -r /opt/kubernetes/*  root@10.66.77.223:/opt/kubernetes
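
An optional quick check that the certificates arrived on both nodes:

ssh root@10.66.77.222 ls /opt/kubernetes/ssl
ssh root@10.66.77.223 ls /opt/kubernetes/ssl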

04. Deploy etcd

Unless otherwise noted, the following steps are performed only on the etcd machine.

Install etcd

yum install etcd -y

Configure etcd

No changes are needed here; the following file is shown only for reference.

vi /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

The following file does need to be configured; the items to modify are:

vi /etc/etcd/etcd.conf
ETCD_LISTEN_CLIENT_URLS="https://localhost:2379,https://10.66.77.221:2379"   # 10.66.77.221 is the etcd host address
ETCD_ADVERTISE_CLIENT_URLS="https://10.66.77.221:2379"

#[Security]
ETCD_CERT_FILE="/opt/kubernetes/ssl/server.pem"
ETCD_KEY_FILE="/opt/kubernetes/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/opt/kubernetes/ssl/ca.pem"

The full /etc/etcd/etcd.conf configuration is archived below for reference:

vi /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_LISTEN_PEER_URLS="https://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="https://localhost:2379,https://10.66.77.221:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="default"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="https://localhost:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.66.77.221:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_INITIAL_CLUSTER="default=https://localhost:2380"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
ETCD_CERT_FILE="/opt/kubernetes/ssl/server.pem"
ETCD_KEY_FILE="/opt/kubernetes/ssl/server-key.pem"
#ETCD_CLIENT_CERT_AUTH="false"
ETCD_TRUSTED_CA_FILE="/opt/kubernetes/ssl/ca.pem"
#ETCD_AUTO_TLS="false"
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
ETCD_DEBUG="true"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"

Start etcd

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
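
An optional quick check, in the same style as the Docker check above:

systemctl status etcd | grep active

If etcd fails to start, journalctl -u etcd usually points at the cause; certificate paths and file permissions are common culprits, since the unit file runs etcd as the etcd user, which must be able to read the certificates under /opt/kubernetes/ssl.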

Check the cluster status

Notes:

  • Because TLS certificates are in use, etcdctl commands over https must be given the certificate and key;
  • When checking, change the address in endpoints to the etcd address from your plan.

cd /opt/kubernetes/ssl

etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://10.66.77.221:2379" \
cluster-health

Configure the internal network information in etcd

Notes:

  • When configuring, change the address in endpoints to the etcd address from your plan.

cd /opt/kubernetes/ssl

etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://10.66.77.221:2379" \
set /k8s/network/config '{"Network": "10.255.0.0/16"}'
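
Optionally, read the key back to confirm it was written correctly; it should return {"Network": "10.255.0.0/16"}:

etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://10.66.77.221:2379" \
get /k8s/network/config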

05. Deploy flannel

The following steps must be performed on the master and on every node.

Install flannel

yum install flannel -y

Configure flannel

No changes are needed here; the following file is shown only for reference.

Note: the unit file must reference the FLANNEL_OPTIONS variable, otherwise the options configured below will not take effect.

    vi /usr/lib/systemd/system/flanneld.service
    [Unit]
    Description=Flanneld overlay address etcd agent
    After=network.target
    After=network-online.target
    Wants=network-online.target
    After=etcd.service
    Before=docker.service

    [Service]
    Type=notify
    EnvironmentFile=/etc/sysconfig/flanneld
    EnvironmentFile=-/etc/sysconfig/docker-network
    ExecStart=/usr/bin/flanneld-start $FLANNEL_OPTIONS
    ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    WantedBy=docker.service

The following file does need to be configured:

vi /etc/sysconfig/flanneld
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="https://10.66.77.221:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/k8s/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS="--etcd-endpoints=https://10.66.77.221:2379 \
                -etcd-cafile=/opt/kubernetes/ssl/ca.pem \
                -etcd-certfile=/opt/kubernetes/ssl/server.pem \
                -etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"

No changes are needed here; the following file is shown only for reference.

    vi /etc/sysconfig/docker-network
    # /etc/sysconfig/docker-network
    DOCKER_NETWORK_OPTIONS=

Start flannel

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
systemctl restart docker

Verify the flannel installation

Check whether the installation succeeded with the following command:

systemctl status flanneld | grep active
Active: active (running) since Thu 2018-12-13 03:04:17 CST; 1 day 8h ago

The ip addr command should now show the flannel0 interface; in this example its address is 10.255.78.0/16:

ip addr
454: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
    link/none 
    inet 10.255.78.0/16 scope global flannel0
    valid_lft forever preferred_lft forever

Copy the configuration to the other nodes

As a shortcut, copy the configuration file directly to the other machines that need it (see the note below):

scp /etc/sysconfig/flanneld root@10.66.77.222:/etc/sysconfig/
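
node2 needs the same file as well, and flanneld (followed by Docker) must be restarted on every machine that received it, as in the "Start flannel" step above. For example:

scp /etc/sysconfig/flanneld root@10.66.77.223:/etc/sysconfig/

# then, on each node:
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
systemctl restart docker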

Inspect the flannel configuration

Notes:

  • Run this check after flannel has been configured on all master and node machines;
  • When checking, change the IP address in endpoints to the planned etcd IP.

cd /opt/kubernetes/ssl
etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://10.66.77.221:2379"  ls /k8s/network/subnets

/k8s/network/subnets/10.255.25.0-24
/k8s/network/subnets/10.255.39.0-24
/k8s/network/subnets/10.255.90.0-24

etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://10.66.77.221:2379"  get /k8s/network/subnets/10.255.25.0-24

{"PublicIP":"10.64.70.227"}

netstat -antp | grep flanneld
tcp        0      0 10.64.70.227:58198      10.64.70.227:2379       ESTABLISHED 79362/flanneld
tcp        0      0 10.64.70.227:58196      10.64.70.227:2379       ESTABLISHED 79362/flanneld
tcp        0      0 10.64.70.227:58197      10.64.70.227:2379       ESTABLISHED 79362/flanneld
tcp        0      0 10.64.70.227:58199      10.64.70.227:2379       ESTABLISHED 79362/flannel
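
As a final optional check of the overlay network, start a throwaway container on two different nodes and ping one from the other. The container IP below is only a placeholder; use whatever address the first container actually reports:

# on node1 (leave this container running)
docker run -it busybox sh
/ # ip addr show eth0      # note the container IP, e.g. 10.255.25.2

# on node2, in a separate session
docker run --rm busybox ping -c 3 10.255.25.2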

06. Create the Node kubeconfig Files and Install the Master

The following steps are performed only on the master.

Kubeconfig preparation script file

These steps are performed on the master node.
Notes:

  • Remember to change the master IP address in KUBE_APISERVER.

cd /data/ssl/
vim kubeconfig.sh

# TLS Bootstrapping Token
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

#----------------------

export KUBE_APISERVER=https://10.66.77.221:6443

kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig


kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig


kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig


kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Install the master

yum install kubernetes-master-1.10.0 -y

Generate the kubeconfig files

sh kubeconfig.sh

The directory should now contain the following files:

admin.csr
admin-csr.json
admin-key.pem
admin.pem
bootstrap.kubeconfig  // newly generated
ca-config.json
ca.csr
ca-csr.json
ca-key.pem
ca.pem
certificate.sh
kubeconfig.sh
kube-proxy.csr
kube-proxy-csr.json
kube-proxy-key.pem
kube-proxy.kubeconfig    // newly generated
kube-proxy.pem
server.csr
server-csr.json
server-key.pem
server.pem
token.csv     // newly generated

Node configuration preparation

cp token.csv /opt/kubernetes/cfg/
chmod +x /opt/kubernetes/cfg/token.csv

scp *kubeconfig root@10.66.77.222:/opt/kubernetes/cfg
scp *kubeconfig root@10.66.77.223:/opt/kubernetes/cfg
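
An optional final check that the kubeconfig files arrived on both nodes:

ssh root@10.66.77.222 ls /opt/kubernetes/cfg
ssh root@10.66.77.223 ls /opt/kubernetes/cfg

Both should list bootstrap.kubeconfig and kube-proxy.kubeconfig.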