Setting Up a Kubernetes v1.18.5 Cluster from Binaries

1. Pre-installation preparation

1-1. Preparing the machine environment

Disable SELinux and firewalld (alternatively, keep firewalld and open the required ports with firewall-cmd):

 sudo systemctl stop firewalld
 sudo systemctl disable firewalld
 setenforce 0
 sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

Disable the swap partition:

swapoff -a
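
To keep swap disabled across reboots, also comment out the swap entry in /etc/fstab. A minimal sketch (adjust the pattern if your fstab layout differs):

sed -ri 's/^([^#].*\sswap\s)/#\1/' /etc/fstab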

Install the multi-threaded download tool mwget (optional):

wget http://jaist.dl.sourceforge.net/project/kmphpfm/mwget/0.1/mwget_0.1.0.orig.tar.bz2

yum install -y bzip2 gcc-c++ openssl-devel.x86_64 intltool

tar -jxvf mwget_0.1.0.orig.tar.bz2

cd mwget_0.1.0.orig

./configure

make && make install

Download the cfssl tools:

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo	
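
Confirm the tools are on the PATH:

cfssl version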

Download the etcd binaries:

mwget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz

tar -zxvf etcd-v3.4.9-linux-amd64.tar.gz

cd etcd-v3.4.9-linux-amd64

cp etcd etcdctl /usr/local/bin/

Download the Kubernetes component binaries:

wget https://dl.k8s.io/v1.18.5/kubernetes-server-linux-amd64.tar.gz
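
After downloading, unpack the archive and copy the binaries used later in this guide into /usr/local/bin (the location the service files below reference):

tar -zxvf kubernetes-server-linux-amd64.tar.gz
cp kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubelet,kube-proxy,kubectl} /usr/local/bin/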

Alternatively, you can run etcd in Docker if you prefer; for example:

docker run \
  -p 2379:2379 \
  -p 2380:2380 \
  --mount type=bind,source=/tmp/etcd-data.tmp,destination=/etcd-data \
  --name etcd-gcr-v3.4.9 \
  gcr.io/etcd-development/etcd:v3.4.9 \
  /usr/local/bin/etcd \
  --name s1 \
  --data-dir /etcd-data \
  --listen-client-urls http://0.0.0.0:2379 \
  --advertise-client-urls http://0.0.0.0:2379 \
  --listen-peer-urls http://0.0.0.0:2380 \
  --initial-advertise-peer-urls http://0.0.0.0:2380 \
  --initial-cluster s1=http://0.0.0.0:2380 \
  --initial-cluster-token tkn \
  --initial-cluster-state new \
  --log-level info \
  --logger zap \
  --log-outputs stderr

Install Docker:

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce -y

Set the Docker cgroup driver to systemd:

mkdir /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
systemctl enable docker	
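
Verify that Docker now reports the systemd cgroup driver:

docker info | grep -i cgroup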

1-2. Issuing the cluster certificates

1-2-1. Issuing the CA certificate

We consolidate the certificates under a single CA, which makes them easier to manage.

Edit the CA config file:

(You can view a sample with: cfssl print-defaults config)

The content is as follows:

cat > ca-config.json <<EOF
	{
	  "signing": {
	    "default": {
	      "expiry": "87600h"
	    },
	    "profiles": {
	      "kubernetes": {
	        "usages": [
	            "signing",
	            "key encipherment",
	            "server auth",
	            "client auth"
	        ],
	        "expiry": "87600h"
	      }
	    }
	  }
	}
EOF

Edit the CA CSR request file:

(You can view a sample with: cfssl print-defaults csr)

The content is as follows:

cat > ca-csr.json <<EOF
{
	  "CN": "kubernetes",
	  "key": {
	    "algo": "rsa",
	    "size": 2048
	  },
	  "names": [
	    {
	      "C": "CN",
	      "ST": "BeiJing",
	      "L": "BeiJing",
	      "O": "k8s",
	      "OU": "System"
	    }
	  ]
}
EOF

Generate the CA certificate and its private key:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
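
This produces ca.pem (the certificate), ca-key.pem (the private key), and ca.csr in the current directory:

ls ca*
# ca-config.json  ca-csr.json  ca.csr  ca-key.pem  ca.pem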

Convert .pem to .crt (optional; since the files are PEM rather than DER (binary) encoded, simply copying and renaming them works too):

openssl x509 -in ca.pem -out ca.crt  

openssl rsa -in ca-key.pem -out ca.key  

You can use vimdiff to confirm that, for PEM input, the converted files are identical to the originals:

vimdiff ca-key.pem ca.key

vimdiff ca.pem ca.crt

You can use openssl to check that the certificates we issued are correct.

Inspect the private key:

openssl rsa -in ca-key.pem -noout -text

Inspect the certificate:

openssl x509 -noout -text -in ca.crt

Inspect the CSR:

openssl req -noout -text -in ca.csr

1-2-2. Issuing the etcd certificate

The etcd startup flags must specify the following certificate files:

  • the server certificate and private key that etcd presents to clients
  • the peer certificate and private key used for communication between etcd cluster members, plus the CA certificate used to verify peers
  • the CA certificate that etcd clients use to verify the server

Here we reuse a single certificate as the server, peer, and client certificate, and use the CA issued in the previous step as etcd's CA. For convenience we write all three node IPs into the hosts field at once, so the same files can simply be copied to the other nodes later.

Create the etcd CSR request file:

mkdir etcd && cd etcd


cat > etcd-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "172.17.216.11",
	"172.17.216.12",
	"172.17.216.13"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "XS",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF	

Issue the combined certificate:

cfssl gencert \
    -ca=../ca.pem \
    -ca-key=../ca-key.pem \
    -config=../ca-config.json \
    -profile=kubernetes etcd-csr.json | cfssljson -bare etcd	

Verify the certificate:

openssl verify  -CAfile ../ca.pem etcd.pem

Convert the format (optional):

openssl x509 -in etcd.pem -out etcd.crt  

openssl rsa -in etcd-key.pem -out etcd.key  

1-2-3. Issuing the kubernetes certificate

mkdir kubernetes && cd kubernetes

Create the CSR request file:

cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "172.17.216.11",
    "172.17.216.12",
    "172.17.216.13",
    "10.96.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

If the hosts field is non-empty, it must list every IP and domain name authorized to use the certificate. Since this certificate may later be used by both the etcd cluster and the kubernetes master cluster, the list above includes the etcd and master host IPs as well as the kubernetes service IP (normally the first IP of the service-cluster-ip-range passed to kube-apiserver, e.g. 10.96.0.1).

Generate the private key and issue the certificate:

cfssl gencert \
        -ca=../ca.pem \
        -ca-key=../ca-key.pem \
        -config=../ca-config.json \
        -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

Convert the format (optional):

openssl x509 -in kubernetes.pem -out kubernetes.crt  

openssl rsa -in kubernetes-key.pem -out kubernetes.key  

1-2-4. Issuing the admin certificate

mkdir admin && cd admin

Create the CSR request file:

cat > admin-csr.json <<EOF
	{
	  "CN": "admin",
	  "hosts": [],
	  "key": {
	    "algo": "rsa",
	    "size": 2048
	  },
	  "names": [
	    {
	      "C": "CN",
	      "ST": "BeiJing",
	      "L": "BeiJing",
	      "O": "system:masters",
	      "OU": "System"
	    }
	  ]
	}
EOF

Issue the certificate and generate the private key:

 cfssl gencert \
        -ca=../ca.pem \
        -ca-key=../ca-key.pem \
        -config=../ca-config.json \
        -profile=kubernetes admin-csr.json | cfssljson -bare admin

Convert the format (optional):

openssl x509 -in admin.pem -out admin.crt  

openssl rsa -in admin-key.pem -out admin.key  

1-2-5. Issuing the kube-proxy certificate

mkdir kube-proxy && cd kube-proxy

Create the certificate request file:

cat > kube-proxy-csr.json <<EOF
	{
	  "CN": "system:kube-proxy",
	  "hosts": [],
	  "key": {
	    "algo": "rsa",
	    "size": 2048
	  },
	  "names": [
	    {
	      "C": "CN",
	      "ST": "BeiJing",
	      "L": "BeiJing",
	      "O": "system:kube-proxy",
	      "OU": "System"
	    }
	  ]
	}
EOF

Issue the kube-proxy certificate and generate the private key:

cfssl gencert \
        -ca=../ca.pem \
        -ca-key=../ca-key.pem \
        -config=../ca-config.json \
        -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Convert the format (optional):

openssl x509 -in kube-proxy.pem -out kube-proxy.crt  

openssl rsa -in kube-proxy-key.pem -out kube-proxy.key  

1-2-6. Issuing the calico certificate

mkdir calico && cd calico

Create the certificate request file:

cat > calico-csr.json <<EOF	
{
  "CN": "calico",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
	  "C": "CN",
	  "ST": "BeiJing",
	  "L": "BeiJing",
	  "O": "k8s",
	  "OU": "System"
    }
  ]
}
EOF

Issue the certificate and generate the private key:

cfssl gencert \
        -ca=../ca.pem \
        -ca-key=../ca-key.pem \
        -config=../ca-config.json \
        -profile=kubernetes calico-csr.json | cfssljson -bare calico

Convert the format (optional):

openssl x509 -in calico.pem -out calico.crt  

openssl rsa -in calico-key.pem -out calico.key  

1-2-7. Issuing the kube-controller-manager certificate

mkdir kube-controller-manager && cd kube-controller-manager

Create the request file:

cat > kube-controller-manager-csr.json <<EOF	
{
  "CN": "system:kube-controller-manager",
  "hosts": [
    "127.0.0.1",
    "172.17.216.11",
    "172.17.216.12",
    "172.17.216.13",
    "localhost"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-controller-manager",
      "OU": "System"
    }
  ]
}
EOF

Issue the certificate and generate the private key:

cfssl gencert \
        -ca=../ca.pem \
        -ca-key=../ca-key.pem \
        -config=../ca-config.json \
        -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

Convert the format (optional):

openssl x509 -in kube-controller-manager.pem -out kube-controller-manager.crt  

openssl rsa -in kube-controller-manager-key.pem -out kube-controller-manager.key 

Copy the certificates to the other nodes:

scp -r ssl/ root@172.17.216.12:/opt/
scp -r ssl/ root@172.17.216.13:/opt/

1-2-8. Issuing the kube-scheduler certificate

mkdir kube-scheduler && cd kube-scheduler

Create the CSR request file:

cat > kube-scheduler-csr.json <<EOF    
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "172.17.216.11",
    "172.17.216.12",
    "172.17.216.13",
    "localhost"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-scheduler",
      "OU": "System"
    }
  ]
}
EOF

Issue the certificate and generate the private key:

cfssl gencert \
        -ca=../ca.pem \
        -ca-key=../ca-key.pem \
        -config=../ca-config.json \
        -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

Convert the format (optional):

openssl x509 -in kube-scheduler.pem -out kube-scheduler.crt  

openssl rsa -in kube-scheduler-key.pem -out kube-scheduler.key 

1-2-9. Generating the ServiceAccount signing key pair

openssl ecparam -name secp521r1 -genkey -noout -out sa.key
openssl ec -in sa.key -outform PEM -pubout -out sa.pub
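
You can sanity-check the generated EC key pair with openssl:

openssl ec -in sa.key -noout -text | head -3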

1-2-10. Issuing the front-proxy certificates

Issue the front-proxy root certificate:

cat <<EOF > front-proxy-ca-csr.json
{
    "CN": "front-proxy-ca",
    "key": {
        "algo": "rsa",
        "size": 2048
    }
}
EOF

Generate the CA certificate and private key:

cfssl gencert \
  -initca front-proxy-ca-csr.json | cfssljson -bare front-proxy-ca

Convert the format (optional):

openssl x509 -in front-proxy-ca.pem -out front-proxy-ca.crt  

openssl rsa -in front-proxy-ca-key.pem -out front-proxy-ca.key 

Issue the front-proxy-client certificate.

Edit the request file:

cat <<EOF > front-proxy-client-csr.json
{
    "CN": "front-proxy-client",
    "key": {
        "algo": "rsa",
        "size": 2048
    }
}
EOF

Generate the front-proxy-client certificate:

cfssl gencert \
  -ca=front-proxy-ca.pem \
  -ca-key=front-proxy-ca-key.pem \
  -config=../ca-config.json \
  -profile=kubernetes \
  front-proxy-client-csr.json | cfssljson -bare front-proxy-client

Convert the format (optional):

openssl x509 -in front-proxy-client.pem -out front-proxy-client.crt  

openssl rsa -in front-proxy-client-key.pem -out front-proxy-client.key 

1-2-11. Issuing the apiserver-kubelet-client certificate

Create the certificate request file:

cat <<EOF > kube-apiserver-kubelet-client-csr.json
{
    "CN": "kube-apiserver-kubelet-client",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "system:masters"
        }
    ]
}
EOF

Sign the certificate:

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-apiserver-kubelet-client-csr.json | cfssljson -bare kube-apiserver-kubelet-client

Convert the format (optional):

openssl x509 -in kube-apiserver-kubelet-client.pem -out kube-apiserver-kubelet-client.crt  

openssl rsa -in kube-apiserver-kubelet-client-key.pem -out kube-apiserver-kubelet-client.key 

1-2-12. Issuing the apiserver-kubelet-client certificate (alternative that does not reuse the master certificate; note O is system:nodes here)

Create the certificate request file:

cat <<EOF > kube-apiserver-kubelet-client-csr.json
{
    "CN": "kube-apiserver-kubelet-client",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "system:nodes"
        }
    ]
}
EOF

Sign the certificate:

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-apiserver-kubelet-client-csr.json | cfssljson -bare kube-apiserver-kubelet-client

Convert the format (optional):

openssl x509 -in kube-apiserver-kubelet-client.pem -out kube-apiserver-kubelet-client.crt  

openssl rsa -in kube-apiserver-kubelet-client-key.pem -out kube-apiserver-kubelet-client.key 

1-3. Installing the etcd database

We need to install etcd to provide the storage backend for Kubernetes.

Create the etcd service user and working directory (run on every etcd node):

groupadd -r etcd
useradd -r -g etcd -s /sbin/nologin etcd
mkdir -p /data/etcd
chown -R etcd:etcd /data/etcd/

Edit the etcd service file:

vi /usr/lib/systemd/system/etcd.service

The content is as follows:

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
User=etcd
Group=etcd
Type=notify
WorkingDirectory=/data/etcd
EnvironmentFile=-/data/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd \
	 --name ${ETCD_INFRA_NAME} \
	 --cert-file=/opt/ssl/etcd/etcd.crt \
	 --key-file=/opt/ssl/etcd/etcd.key \
	 --peer-cert-file=/opt/ssl/etcd/etcd.crt \
	 --peer-key-file=/opt/ssl/etcd/etcd.key \
	 --trusted-ca-file=/opt/ssl/ca.crt \
	 --peer-trusted-ca-file=/opt/ssl/ca.crt \
	 --initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS_BK} \
	 --listen-peer-urls ${ETCD_LISTEN_PEER_URLS_BK} \
	 --listen-client-urls ${ETCD_LISTEN_CLIENT_URLS_BK},http://127.0.0.1:2379 \
	 --advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS_BK} \
	 --initial-cluster-token ${ETCD_INITIAL_CLUSTER_TOKEN_BK} \
	 --initial-cluster etcd1=https://172.17.216.11:2380,etcd2=https://172.17.216.12:2380,etcd3=https://172.17.216.13:2380 \
	 --initial-cluster-state new \
	 --snapshot-count=10000 \
	 --data-dir=/data/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
	
[Install]
WantedBy=multi-user.target

Here we put the settings common to all nodes directly in the service file and factor the per-node settings out into a separate config file.

Create the config file:

vi /data/etcd/etcd.conf

The content is as follows:

# [member]
ETCD_INFRA_NAME="etcd1"
# bind address for etcd peer (cluster-internal) communication
ETCD_LISTEN_PEER_URLS_BK="https://172.17.216.11:2380"
ETCD_LISTEN_CLIENT_URLS_BK="https://172.17.216.11:2379"
	
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS_BK="https://172.17.216.11:2380"
ETCD_INITIAL_CLUSTER_TOKEN_BK="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS_BK="https://172.17.216.11:2379"

Note: because this etcd version is 3.4.0 or newer, the variables in the environment file must not share names with etcd's own flags (hence the _BK suffix); otherwise etcd's flag validation fails and the service will not start.

Reload the systemd configuration:

systemctl daemon-reload

Try to start the etcd cluster:

systemctl start etcd

After starting etcd on each node in turn, try connecting with the client (note: no nested double quotes inside the single-quoted value, or they would be passed literally to etcdctl):

export ETCD_OPTIONS='--cacert=ca.pem --cert=etcd/etcd.pem --key=etcd/etcd.key --endpoints=https://172.17.216.11:2379,https://172.17.216.12:2379,https://172.17.216.13:2379'


etcdctl $ETCD_OPTIONS endpoint health


etcdctl $ETCD_OPTIONS endpoint status
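
You can also list the cluster members in table form:

etcdctl $ETCD_OPTIONS -w table member list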

1-4. Configuring kubectl

Generate a random token and write the token file (the examples below use the fixed token shown; substitute your own):

head -c 16 /dev/urandom | od -An -t x | tr -d ' '
echo "8afdf3c4eb7c74018452423c29433609,kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > token.csv

Generate the admin kubeconfig (these commands write to ~/.kube/config by default):

export KUBE_APISERVER="https://172.17.216.11:6443"

kubectl config set-cluster kubernetes --certificate-authority=ca.crt --embed-certs=true --server=${KUBE_APISERVER}
kubectl config set-credentials admin --client-certificate=admin/admin.crt --embed-certs=true --client-key=admin/admin.key
kubectl config set-context kubernetes --cluster=kubernetes --user=admin
kubectl config use-context kubernetes
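
Once kube-apiserver is running (section 2-1), you can verify this kubeconfig works:

kubectl cluster-info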

2. Setting up the master node

2-1. Installing the kube-apiserver service

Write the kube-apiserver service file:

vi /usr/lib/systemd/system/kube-apiserver.service

The content is as follows:

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/opt/k8s/config
EnvironmentFile=-/opt/k8s/apiserver
ExecStart=/usr/local/bin/kube-apiserver \
	    $KUBE_LOGTOSTDERR \
	    $KUBE_LOG_LEVEL \
	    $KUBE_ETCD_SERVERS \
	    $KUBE_API_ADDRESS \
	    $KUBE_API_PORT \
	    $KUBELET_PORT \
	    $KUBE_ALLOW_PRIV \
	    $KUBE_SERVICE_ADDRESSES \
	    $KUBE_ADMISSION_CONTROL \
	    $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Write the apiserver environment file (/opt/k8s/apiserver, referenced by the unit above) with the following content:

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--advertise-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--secure-port=6443 --insecure-port=0 "

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://172.17.216.11:2379,https://172.17.216.12:2379,https://172.17.216.13:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.96.0.0/12"

# default admission control policies
KUBE_ADMISSION_CONTROL="--enable-admission-plugins=NodeRestriction"

# Add your own!
KUBE_API_ARGS="--authorization-mode=Node,RBAC \
    --client-ca-file=/opt/ssl/ca.crt \
    --enable-bootstrap-token-auth=true \
    --etcd-cafile=/opt/ssl/ca.crt \
    --etcd-certfile=/opt/ssl/etcd/etcd.crt \
    --etcd-keyfile=/opt/ssl/etcd/etcd.key \
    --kubelet-client-certificate=/opt/ssl/kubernetes/kubernetes.crt \
    --kubelet-client-key=/opt/ssl/kubernetes/kubernetes.key \
    --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
    --proxy-client-cert-file=/opt/ssl/front-proxy/front-proxy-client.crt \
    --proxy-client-key-file=/opt/ssl/front-proxy/front-proxy-client.key \
    --requestheader-allowed-names=front-proxy-client \
    --requestheader-client-ca-file=/opt/ssl/front-proxy/front-proxy-ca.crt \
    --requestheader-extra-headers-prefix=X-Remote-Extra- \
    --requestheader-group-headers=X-Remote-Group \
    --requestheader-username-headers=X-Remote-User \
    --service-account-key-file=/opt/ssl/sa.pub \
    --tls-cert-file=/opt/ssl/kubernetes/kubernetes.crt \
    --tls-private-key-file=/opt/ssl/kubernetes/kubernetes.key \
    --token-auth-file=/opt/ssl/token.csv \
    --audit-log-path=/var/lib/audit.log"

Write the shared config file (/opt/k8s/config, referenced by all the units):

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

Try to start kube-apiserver:

systemctl daemon-reload
systemctl start kube-apiserver
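
Enable it at boot and, since the insecure port is disabled, check health through the authenticated endpoint:

systemctl enable kube-apiserver
kubectl get --raw='/healthz'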

2-2. Installing the kube-controller-manager service

Write the service file:

vi /usr/lib/systemd/system/kube-controller-manager.service

The content is as follows:

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/opt/k8s/config
EnvironmentFile=-/opt/k8s/controller-manager
ExecStart=/usr/local/bin/kube-controller-manager \
	    $KUBE_LOGTOSTDERR \
	    $KUBE_LOG_LEVEL \
	    $KUBE_MASTER \
	    $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Write the controller-manager environment file (/opt/k8s/controller-manager):

###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--bind-address=127.0.0.1 \
    --allocate-node-cidrs=true \
    --authentication-kubeconfig=/opt/k8s/auth/controller-manager.conf \
    --authorization-kubeconfig=/opt/k8s/auth/controller-manager.conf \
    --client-ca-file=/opt/ssl/ca.crt \
    --cluster-cidr=10.212.0.0/16 \
    --cluster-signing-cert-file=/opt/ssl/ca.crt \
    --cluster-signing-key-file=/opt/ssl/ca.key \
    --controllers=*,bootstrapsigner,tokencleaner \
    --kubeconfig=/opt/k8s/auth/controller-manager.conf \
    --leader-elect=true \
    --node-cidr-mask-size=24 \
    --requestheader-client-ca-file=/opt/ssl/front-proxy/front-proxy-ca.crt \
    --root-ca-file=/opt/ssl/ca.crt \
    --service-account-private-key-file=/opt/ssl/sa.key \
    --use-service-account-credentials=true \
    --service-cluster-ip-range=10.96.0.0/12"

Generate the controller-manager kubeconfig (the arguments above expect it at /opt/k8s/auth/controller-manager.conf):

cat > /opt/k8s/auth/controller-manager.conf << EOF
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    server: https://172.17.216.11:6443
    certificate-authority-data: $( openssl base64 -A -in /opt/ssl/ca.crt )
users:
- name: system:kube-controller-manager
  user:
    client-certificate-data: $( openssl base64 -A -in /opt/ssl/kube-controller-manager/kube-controller-manager.crt )
    client-key-data: $( openssl base64 -A -in /opt/ssl/kube-controller-manager/kube-controller-manager.key )
contexts:
- context:
    cluster: kubernetes
    user: system:kube-controller-manager
  name: system:kube-controller-manager@kubernetes
current-context: system:kube-controller-manager@kubernetes
EOF

Try to start kube-controller-manager:

systemctl daemon-reload
systemctl start kube-controller-manager
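
Enable it at boot and watch the logs for errors:

systemctl enable kube-controller-manager
journalctl -u kube-controller-manager -f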

2-3. Installing the kube-scheduler service

Edit the service file:

vi /usr/lib/systemd/system/kube-scheduler.service

The content is as follows:

[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/opt/k8s/config
EnvironmentFile=-/opt/k8s/scheduler
ExecStart=/usr/local/bin/kube-scheduler \
	    $KUBE_LOGTOSTDERR \
	    $KUBE_LOG_LEVEL \
	    $KUBE_MASTER \
	    $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Write the scheduler environment file (/opt/k8s/scheduler):

###
# kubernetes scheduler config

# default config should be adequate

# Add your own!
KUBE_SCHEDULER_ARGS="--address=127.0.0.1 \
    --authentication-kubeconfig=/opt/k8s/auth/scheduler.conf \
    --authorization-kubeconfig=/opt/k8s/auth/scheduler.conf \
    --kubeconfig=/opt/k8s/auth/scheduler.conf \
    --leader-elect=true"

Generate the scheduler.conf authentication/authorization kubeconfig:

cat >/opt/k8s/auth/scheduler.conf <<EOF
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    server: https://172.17.216.11:6443
    certificate-authority-data: $( openssl base64 -A -in /opt/ssl/ca.crt ) 
users:
- name: system:kube-scheduler
  user:
    client-certificate-data: $( openssl base64 -A -in /opt/ssl/kube-scheduler/kube-scheduler.crt ) 
    client-key-data: $( openssl base64 -A -in /opt/ssl/kube-scheduler/kube-scheduler.key ) 
contexts:
- context:
    cluster: kubernetes
    user: system:kube-scheduler
  name: system:kube-scheduler@kubernetes
current-context: system:kube-scheduler@kubernetes
EOF
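
Then reload systemd and start the scheduler:

systemctl daemon-reload
systemctl start kube-scheduler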

Check the status of each component:

kubectl get cs

NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"} 

3. Installing the kubelet service

mkdir -p /var/lib/kubelet/

Edit the kubelet service file:

vi /usr/lib/systemd/system/kubelet.service

The content is as follows:

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
        --logtostderr=true \
        --v=0 \
        --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.2 \
        --cni-bin-dir=/opt/cni/bin \
        --network-plugin=cni \
        --config=/var/lib/kubelet/config.yaml \
        --cgroup-driver=systemd \
        --kubeconfig=/opt/k8s/auth/kubelet.conf \
        --bootstrap-kubeconfig=/opt/k8s/auth/bootstrap.kubeconfig
Restart=on-failure
KillMode=process
RestartSec=10

[Install]
WantedBy=multi-user.target

Edit the config.yaml file (/var/lib/kubelet/config.yaml):

mkdir /etc/kubernetes/manifests -p

The content is as follows:

address: 0.0.0.0
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/ssl/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
configMapAndSecretChangeDetectionStrategy: Watch
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuCFSQuotaPeriod: 100ms
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: false
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kind: KubeletConfiguration
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeLeaseDurationSeconds: 40
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s

The bootstrap-token flow requires the client to present a username and token when it first contacts the apiserver, and that user must be bound to the special role system:node-bootstrapper. So we first bind the kubelet-bootstrap user from the bootstrap token file to that role; only then does kubelet have permission to submit a certificate signing request. Run the following on the master node:

kubectl create clusterrolebinding kubelet-bootstrap \
         --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

Create the kubelet bootstrap kubeconfig:

export KUBE_APISERVER="https://172.17.216.11:6443"


kubectl config set-cluster kubernetes \
        --certificate-authority=/opt/ssl/ca.crt  \
        --embed-certs=true \
        --server=${KUBE_APISERVER} \
        --kubeconfig=bootstrap.kubeconfig

kubectl config set-credentials kubelet-bootstrap \
        --token=8afdf3c4eb7c74018452423c29433609 \
        --kubeconfig=bootstrap.kubeconfig

kubectl config set-context default \
        --cluster=kubernetes \
        --user=kubelet-bootstrap \
        --kubeconfig=bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig  

mv bootstrap.kubeconfig /opt/k8s/auth/

Try to start kubelet:

systemctl daemon-reload
systemctl start kubelet 

Approve the pending CSRs:

kubectl get csr|grep 'Pending' | awk '{print $1}'| xargs kubectl certificate approve
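Once the CSRs are approved the node registers itself; it should appear shortly:

kubectl get nodes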

4. Installing the kube-proxy service

Edit the service file:

vi /usr/lib/systemd/system/kube-proxy.service

The content is as follows:

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/bin/kube-proxy \
	    --logtostderr=true \
	    --v=0 \
	    --config=/var/lib/kube-proxy/config.yaml
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Create the config.yaml file (/var/lib/kube-proxy/config.yaml, as referenced by --config above):

apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /opt/k8s/auth/kube-proxy.conf
  qps: 5
clusterCIDR: 10.212.0.0/16
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: ipvs
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""

Create the kube-proxy kubeconfig (the config above expects it at /opt/k8s/auth/kube-proxy.conf):

cat > /opt/k8s/auth/kube-proxy.conf << EOF
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    server: https://172.17.216.11:6443
    certificate-authority-data: $( openssl base64 -A -in /opt/ssl/ca.crt ) 
users:
- name: system:kube-proxy
  user:
    client-certificate-data: $( openssl base64 -A -in /opt/ssl/kube-proxy/kube-proxy.crt ) 
    client-key-data: $( openssl base64 -A -in /opt/ssl/kube-proxy/kube-proxy.key ) 
contexts:
- context:
    cluster: kubernetes
    user: system:kube-proxy
  name: system:kube-proxy@kubernetes
current-context: system:kube-proxy@kubernetes
EOF

Create the IPVS module-loading script:

vi /etc/sysconfig/modules/ipvs.modules

#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir | grep -o "^[^.]*"); do
    /sbin/modinfo -F filename $i  &> /dev/null
    if [ $? -eq 0 ]; then
        /sbin/modprobe $i
    fi
done
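
Make the script executable, run it, and confirm the modules are loaded:

chmod +x /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack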

Start kube-proxy:

systemctl daemon-reload
systemctl start kube-proxy

If it fails with an error like:

Failed to delete stale service IP 10.96.0.10 connections, error: error deleting connecti...ound in $PATH
Hint: Some lines were ellipsized, use -l to show in full.

Solution:

Install conntrack and restart kube-proxy:

yum -y install conntrack
systemctl restart kube-proxy

5. Setting up the network plugins

5-1. Download the CNI plugins

mkdir -p /opt/cni/bin && cd /opt/cni/bin
wget https://github.com/containernetworking/plugins/releases/download/v0.8.1/cni-plugins-linux-amd64-v0.8.1.tgz
tar -zxvf cni-plugins-linux-amd64-v0.8.1.tgz

5-2. Deploy the flannel network plugin

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Note: the Network value in the kube-flannel.yml ConfigMap defaults to 10.244.0.0/16; since this guide uses the cluster CIDR 10.212.0.0/16, download the manifest and edit that value to match before applying.

5-3. Deploy CoreDNS

Pull the image and fetch the deployment scripts:

docker pull registry.cn-hangzhou.aliyuncs.com/aaron89/coredns:1.6.6
docker tag registry.cn-hangzhou.aliyuncs.com/aaron89/coredns:1.6.6 coredns/coredns:1.6.6
wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed
wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/deploy.sh

bash deploy.sh -i 10.96.0.10 -r "10.96.0.0/12" -s -t coredns.yaml.sed | kubectl apply -f -
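
Check that the CoreDNS pods come up:

kubectl get pods -n kube-system -l k8s-app=kube-dns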

Start a busybox pod to check that DNS resolution works:

cat<< EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

Try a lookup:

kubectl exec -ti busybox -- nslookup kubernetes
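
A successful lookup should produce output roughly like:

Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local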

Try deploying a pod:

kubectl run kubernetes-bootcamp --image=nginx --port=80

Try exposing it as a NodePort Service:

kubectl expose pod kubernetes-bootcamp --type="NodePort" --target-port=80 --port=80
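
Fetch the assigned NodePort and curl it from the node (the jsonpath query assumes the Service is named kubernetes-bootcamp, as created above):

NODE_PORT=$(kubectl get svc kubernetes-bootcamp -o jsonpath='{.spec.ports[0].nodePort}')
curl http://127.0.0.1:${NODE_PORT}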

5-4. Granting NetworkPolicy support to flannel (canal)

Download the canal deployment manifest:

curl https://docs.projectcalico.org/v3.10/manifests/canal.yaml -O

Change the network to 10.212.0.0/16, then apply:

kubectl apply -f canal.yaml