Installing a Kubernetes 1.10.5 Cluster from Binaries on CentOS 7

I. Cluster Planning

    The cluster consists of one master and two nodes, as shown in the tables below.

    master: master (192.168.8.201)

    Component     Version      Path
    etcd          3.3.8        /usr/bin
    docker        18.03.1-ce   /usr/bin
    flannel       0.10.0       /opt/flannel/bin
    kubernetes    1.10.5       /usr/bin (kube-apiserver, kube-controller-manager, kube-scheduler)

    node: slave-i (192.168.8.211), slave-ii (192.168.8.221)

    Component     Version      Path
    etcd          3.3.8        /usr/bin
    docker        18.03.1-ce   /usr/bin
    flannel       0.10.0       /opt/flannel/bin
    kubernetes    1.10.5       /usr/bin (kubelet, kube-proxy)

II. Downloading the Packages

    etcd: download version 3.3.8 from https://github.com/coreos/etcd/releases/;

    flannel: download v0.10.0 from https://github.com/coreos/flannel/releases/;

    kubernetes: download version 1.10.5 from https://github.com/kubernetes/kubernetes/releases;

    docker is installed directly from the Aliyun mirror, so no separate package download is needed.

III. Server Setup

1. Changing the Hostnames

    Rename the master host to master and the two workers to slave-i and slave-ii. Run the matching command on master (192.168.8.201), slave-i (192.168.8.211), and slave-ii (192.168.8.221) to change each hostname permanently:

    hostnamectl --static set-hostname master
    hostnamectl --static set-hostname slave-i
    hostnamectl --static set-hostname slave-ii

    Also add the following three entries to /etc/hosts on all three nodes:

    192.168.8.201 master
    192.168.8.211 slave-i
    192.168.8.221 slave-ii

    In addition, add each node's own hostname to a 127.0.0.1 entry: on master add 127.0.0.1 master, on slave-i add 127.0.0.1 slave-i, and on slave-ii add 127.0.0.1 slave-ii.
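For example, on master the resulting additions to /etc/hosts would look like the sketch below (only the loopback line differs per node):

```
127.0.0.1 master
192.168.8.201 master
192.168.8.211 slave-i
192.168.8.221 slave-ii
```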

2. Firewall Settings

    If iptables is not installed, install it on all three hosts:

    yum install iptables-services

    iptables -L -n -v shows the current iptables configuration. Disable iptables at boot on all three hosts:

    chkconfig iptables off

    Then stop both iptables and firewalld on all three hosts and disable them at boot:

    systemctl stop iptables
    systemctl disable iptables
    systemctl stop firewalld
    systemctl disable firewalld

    systemctl status iptables and systemctl status firewalld should now show that both firewalls are stopped.

3. SELinux Configuration

    getenforce or /usr/sbin/sestatus shows the current SELinux state. Disable SELinux by changing the SELINUX option in /etc/selinux/config:

    SELINUX=disabled
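Instead of hand-editing the file, the change can be made with sed. The sketch below exercises the sed expression on a scratch copy; on a real node, point it at /etc/selinux/config and also run `setenforce 0` so the change applies without a reboot:

```shell
# Flip SELINUX to disabled; $cfg stands in for /etc/selinux/config here.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"   # prints: SELINUX=disabled
```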

4. Disabling the Swap Partition

    free -m shows the current swap usage. Edit /etc/fstab and comment out the swap entry.
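Commenting the entry can also be scripted. The sketch below runs against a scratch copy of fstab; on a real node, use /etc/fstab and follow up with `swapoff -a` so the swap space is released immediately:

```shell
# Comment out any uncommented line mentioning swap; $fstab stands in for /etc/fstab.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
sed -i '/swap/ s/^[^#]/#&/' "$fstab"
grep '^#' "$fstab"   # prints the now-commented swap line
```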

5. Clock Synchronization

    Install ntpdate:

    yum install ntpdate

    and synchronize the clock:

    ntpdate us.pool.ntp.org

IV. Installing the etcd Cluster

    Download etcd 3.3.8 from https://github.com/coreos/etcd/releases/, unpack it, and copy the binaries to /usr/bin:

    cp etcd etcdctl /usr/bin

    Create the working directories:

    mkdir -p /var/lib/etcd /etc/etcd

    Run both commands on all three nodes. Next, configure etcd on each node; every etcd server needs only two configuration files: /usr/lib/systemd/system/etcd.service and /etc/etcd/etcd.conf.

1. Node 1 (etcd-i)

    /usr/lib/systemd/system/etcd.service 

[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd

[Install]
WantedBy=multi-user.target

    /etc/etcd/etcd.conf 

# [member]
# node name
ETCD_NAME=etcd-i
# data directory
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# address to listen on for traffic from the other etcd members
ETCD_LISTEN_PEER_URLS="http://192.168.8.201:2380"
# addresses to listen on for client traffic
ETCD_LISTEN_CLIENT_URLS="http://192.168.8.201:2379,http://127.0.0.1:2379"

#[cluster]
# peer address advertised to the other etcd members
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.8.201:2380"
# initial list of the cluster members
ETCD_INITIAL_CLUSTER="etcd-i=http://192.168.8.201:2380,etcd-ii=http://192.168.8.211:2380,etcd-iii=http://192.168.8.221:2380"
# initial cluster state; "new" means a brand-new cluster
ETCD_INITIAL_CLUSTER_STATE="new"
# initial cluster token
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-token"
# client addresses advertised to clients
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.8.201:2379,http://127.0.0.1:2379"

2. Node 2 (etcd-ii)

    /usr/lib/systemd/system/etcd.service

 
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd

[Install]
WantedBy=multi-user.target

    /etc/etcd/etcd.conf

 
# [member]
# node name
ETCD_NAME=etcd-ii
# data directory
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# address to listen on for traffic from the other etcd members
ETCD_LISTEN_PEER_URLS="http://192.168.8.211:2380"
# addresses to listen on for client traffic
ETCD_LISTEN_CLIENT_URLS="http://192.168.8.211:2379,http://127.0.0.1:2379"

#[cluster]
# peer address advertised to the other etcd members
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.8.211:2380"
# initial list of the cluster members
ETCD_INITIAL_CLUSTER="etcd-i=http://192.168.8.201:2380,etcd-ii=http://192.168.8.211:2380,etcd-iii=http://192.168.8.221:2380"
# initial cluster state; "new" means a brand-new cluster
ETCD_INITIAL_CLUSTER_STATE="new"
# initial cluster token
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-token"
# client addresses advertised to clients
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.8.211:2379,http://127.0.0.1:2379"

3. Node 3 (etcd-iii)

   /usr/lib/systemd/system/etcd.service

 
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd

[Install]
WantedBy=multi-user.target

    /etc/etcd/etcd.conf

 
# [member]
# node name
ETCD_NAME=etcd-iii
# data directory
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# address to listen on for traffic from the other etcd members
ETCD_LISTEN_PEER_URLS="http://192.168.8.221:2380"
# addresses to listen on for client traffic
ETCD_LISTEN_CLIENT_URLS="http://192.168.8.221:2379,http://127.0.0.1:2379"

#[cluster]
# peer address advertised to the other etcd members
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.8.221:2380"
# initial list of the cluster members
ETCD_INITIAL_CLUSTER="etcd-i=http://192.168.8.201:2380,etcd-ii=http://192.168.8.211:2380,etcd-iii=http://192.168.8.221:2380"
# initial cluster state; "new" means a brand-new cluster
ETCD_INITIAL_CLUSTER_STATE="new"
# initial cluster token
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-token"
# client addresses advertised to clients
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.8.221:2379,http://127.0.0.1:2379"
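The three etcd.conf files differ only in ETCD_NAME and the node IP, so they can be stamped out from a small template. A sketch, where NODE_NAME and NODE_IP are the per-node values and the output goes to a scratch file rather than /etc/etcd/etcd.conf:

```shell
# Generate an etcd.conf for one member from its name and IP.
NODE_NAME=etcd-i
NODE_IP=192.168.8.201
out=$(mktemp)
cat > "$out" <<EOF
ETCD_NAME=${NODE_NAME}
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://${NODE_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="http://${NODE_IP}:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://${NODE_IP}:2380"
ETCD_INITIAL_CLUSTER="etcd-i=http://192.168.8.201:2380,etcd-ii=http://192.168.8.211:2380,etcd-iii=http://192.168.8.221:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-token"
ETCD_ADVERTISE_CLIENT_URLS="http://${NODE_IP}:2379,http://127.0.0.1:2379"
EOF
grep '^ETCD_NAME' "$out"   # prints: ETCD_NAME=etcd-i
```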

    Note that etcd's leader/follower relationship is unrelated to the Kubernetes master/node roles: the configuration files simply describe three peer members, etcd-i, etcd-ii, and etcd-iii, and the etcd cluster elects a leader at startup and during operation. With all three configuration files in place, start the cluster by running the following on every node:

    systemctl daemon-reload
    systemctl start etcd.service

    Do not wait for one node to start completely before moving on to the next: after startup etcd holds a leader election, and if the members start too far apart the cluster will fail to form. Once started, check the cluster with:

    etcdctl member list
    etcdctl cluster-health

    Run etcdctl member list on any node to list the three members; at this point the etcd cluster is up.

V. Installing Docker

    Install docker on all three hosts following the Aliyun guide (not repeated here): https://yq.aliyun.com/articles/110806?spm=5176.8351553.0.0.44f01991b2jQwh

VI. Installing Flannel

1. Installing flannel

    Download flannel v0.10.0 from https://github.com/coreos/flannel/releases/ and unpack it into /opt/flannel/bin/:

    mkdir -p /opt/flannel/bin/
    tar -xzvf flannel-v0.10.0-linux-amd64.tar.gz -C /opt/flannel/bin/

    The archive contains two executables, flanneld and mk-docker-opts.sh. Next, create the flannel service unit:

    /usr/lib/systemd/system/flannel.service

 
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/opt/flannel/bin/flanneld -etcd-endpoints=http://192.168.8.201:2379,http://192.168.8.211:2379,http://192.168.8.221:2379 -etcd-prefix=coreos.com/network
ExecStartPost=/opt/flannel/bin/mk-docker-opts.sh -d /etc/docker/flannel_net.env -c
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

    The flannel service depends on etcd, so etcd must be installed first; -etcd-endpoints points at the etcd cluster, and -etcd-prefix is the etcd key prefix under which flannel's network configuration is stored. Store the network configuration with:

    etcdctl mk /coreos.com/network/config '{"Network":"172.18.0.0/16", "SubnetMin": "172.18.1.0", "SubnetMax": "172.18.254.0",  "Backend": {"Type": "vxlan"}}'
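Since a typo in this JSON only surfaces later as flanneld startup failures, it is worth validating the string before writing it into etcd. A quick check (assumes python3 is available on the host):

```shell
# Validate the flannel network configuration JSON before `etcdctl mk`.
net='{"Network":"172.18.0.0/16", "SubnetMin": "172.18.1.0", "SubnetMax": "172.18.254.0",  "Backend": {"Type": "vxlan"}}'
echo "$net" | python3 -m json.tool > /dev/null && echo "valid JSON"
```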

    The flannel service also needs the flannel image, so pull it from Aliyun first and tag it:

    docker pull registry.cn-beijing.aliyuncs.com/k8s_images/flannel:v0.10.0-amd64
    docker tag registry.cn-beijing.aliyuncs.com/k8s_images/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0

    Then start the flannel service:

    systemctl daemon-reload
    systemctl start flannel.service

2. Adjusting the Docker Configuration

    The flannel unit file contains this line:

    ExecStartPost=/opt/flannel/bin/mk-docker-opts.sh -d /etc/docker/flannel_net.env -c

    With it in place, mk-docker-opts.sh runs after flannel starts and generates /etc/docker/flannel_net.env. flannel reshapes the docker network, and flannel_net.env holds the docker startup parameters flannel generated, so docker's own configuration must be adjusted to match.

    Docker's unit file is /usr/lib/systemd/system/docker.service; the full unit follows, with these changes:

    After: start docker only after flannel;

    EnvironmentFile: the docker startup parameters generated by flannel;

    ExecStart: pass the extra startup parameters to dockerd;

    ExecStartPost: runs after docker starts and adjusts the host's iptables forwarding rules.

 
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
# After=network-online.target firewalld.service
After=network-online.target flannel.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/etc/docker/flannel_net.env
ExecStart=/usr/bin/dockerd $DOCKER_OPTS
ExecReload=/bin/kill -s HUP $MAINPID
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

    With the configuration in place, start flannel and restart docker:

    systemctl daemon-reload
    systemctl start flannel.service
    systemctl restart docker.service

    After startup, ifconfig shows the additional flannel network interface.

    Install flannel and adjust the docker parameters on all three nodes, then verify both services with:

    systemctl status flannel
    systemctl status docker

    With etcd, flannel, and docker installed, the Kubernetes cluster itself can be set up; but first, create the CA certificates.

VII. Setting Up CA Certificates

    (1) Generate a certificate for kube-apiserver and sign it with the CA certificate.
    (2) Configure the certificate-related startup parameters for kube-apiserver, including the CA certificate (used to verify the signatures on client certificates) and kube-apiserver's own CA-signed certificate and private key.
    (3) Generate a certificate for every client process that accesses the Kubernetes API Server (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, and client programs such as kubectl), sign each with the same CA, and add the CA certificate and the client's own certificate to each program's startup parameters.

    The certificates to generate are listed below:

    Root certificate and private key:                               ca.crt, ca.key
    kube-apiserver certificate and private key:                     server.crt, server.key
    kube-controller-manager/kube-scheduler certificate and key:     cs_client.crt, cs_client.key
    kubelet/kube-proxy certificate and private key:                 kubelet_client.crt, kubelet_client.key

1. master node

    Create the certificate directory:

    mkdir -p /etc/kubernetes/ca

    Then generate the certificates and private keys as follows.

(1) Generate the root certificate and private key

    openssl genrsa -out ca.key 2048
    openssl req -x509 -new -nodes -key ca.key -subj "/CN=master" -days 5000 -out ca.crt

    /CN is the master's hostname.

(2) Generate the kube-apiserver certificate and private key

    Create a master_ssl.conf file with the following contents:

 
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = master #master hostname
IP.1 = 172.18.0.1 #cluster IP of the kubernetes service (shown by kubectl get service)
IP.2 = 192.168.8.201 #master IP

    Then generate the certificate and private key:

 
    openssl genrsa -out server.key 2048
    openssl req -new -key server.key -subj "/CN=master" -config master_ssl.conf -out server.csr
    openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 5000 -extensions v3_req -extfile master_ssl.conf -out server.crt

    /CN is the master's hostname.

(3) Generate the kube-controller-manager/kube-scheduler certificate and private key

    openssl genrsa -out cs_client.key 2048
    openssl req -new -key cs_client.key -subj "/CN=master" -out cs_client.csr
    openssl x509 -req -in cs_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out cs_client.crt -days 5000

    /CN is the master's hostname.

2. node 1 (slave-i)

    Create the certificate directory:

    mkdir -p /etc/kubernetes/ca

    Copy the root certificate and private key from the master into that directory, then generate the node's certificate and private key:

    openssl genrsa -out kubelet_client.key 2048
    openssl req -new -key kubelet_client.key -subj "/CN=192.168.8.211" -out kubelet_client.csr
    openssl x509 -req -in kubelet_client.csr  -CA ca.crt -CAkey ca.key -CAcreateserial -out kubelet_client.crt -days 5000

    /CN is slave-i's IP address.

3. node 2 (slave-ii)

    Create the certificate directory:

    mkdir -p /etc/kubernetes/ca

    Copy the root certificate and private key from the master into that directory, then generate the node's certificate and private key:

    openssl genrsa -out kubelet_client.key 2048
    openssl req -new -key kubelet_client.key -subj "/CN=192.168.8.221" -out kubelet_client.csr
    openssl x509 -req -in kubelet_client.csr  -CA ca.crt -CAkey ca.key -CAcreateserial -out kubelet_client.crt -days 5000

    /CN is slave-ii's IP address.

VIII. Installing the Kubernetes Cluster

    With the CA certificates in place, build the Kubernetes cluster.

    The master node runs kube-apiserver, kube-controller-manager, and kube-scheduler; the nodes run kubelet and kube-proxy.

1. master node

    Download Kubernetes v1.10.5 from https://github.com/kubernetes/kubernetes/releases; the CHANGELOG-1.10.md link leads to the download page.

    Download the server package kubernetes-server-linux-amd64.tar.gz, unpack it, and copy the binaries under kubernetes/server/bin to /usr/bin:

    cp $(ls | egrep -v '\.tar|_tag') /usr/bin/

    Create the log directory:

    mkdir -p /var/log/kubernetes

(1) Configure kube-apiserver

    /usr/lib/systemd/system/kube-apiserver.service

 
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=/etc/kubernetes/apiserver.conf
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

    /etc/kubernetes/apiserver.conf: --etcd-servers points at the etcd cluster; the insecure port 8080 is not used and --secure-port opens the secure port 6443; --client-ca-file, --tls-private-key-file, and --tls-cert-file configure the CA certificates; --enable-admission-plugins enables the admission controllers.

 
KUBE_API_ARGS="\
--storage-backend=etcd3 \
--etcd-servers=http://192.168.8.201:2379,http://192.168.8.211:2379,http://192.168.8.221:2379 \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--service-cluster-ip-range=172.18.0.0/16 \
--service-node-port-range=1-65535 \
--kubelet-port=10250 \
--advertise-address=192.168.8.201 \
--allow-privileged=false \
--client-ca-file=/etc/kubernetes/ca/ca.crt \
--tls-private-key-file=/etc/kubernetes/ca/server.key \
--tls-cert-file=/etc/kubernetes/ca/server.crt \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,NamespaceExists,SecurityContextDeny,ServiceAccount,DefaultStorageClass,ResourceQuota \
--logtostderr=true \
--log-dir=/var/log/kubernetes \
--v=2"

(2) Configure kube-controller-manager

    Create kube-cs-config.yaml, used by both kube-controller-manager and kube-scheduler; it configures the CA certificates:

 
apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ca/cs_client.crt
    client-key: /etc/kubernetes/ca/cs_client.key
clusters:
- name: default-cluster
  cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
contexts:
- context:
    cluster: default-cluster
    user: kubelet
  name: default-context
current-context: default-context

    /usr/lib/systemd/system/kube-controller-manager.service

 
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=/etc/kubernetes/controller-manager.conf
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

    /etc/kubernetes/controller-manager.conf: --master points at the master node; --service-account-private-key-file, --root-ca-file, --cluster-signing-cert-file, and --cluster-signing-key-file configure the CA certificates; --kubeconfig is the config file created above.

 
KUBE_CONTROLLER_MANAGER_ARGS="\
--master=https://192.168.8.201:6443 \
--service-account-private-key-file=/etc/kubernetes/ca/server.key \
--root-ca-file=/etc/kubernetes/ca/ca.crt \
--cluster-signing-cert-file=/etc/kubernetes/ca/ca.crt \
--cluster-signing-key-file=/etc/kubernetes/ca/ca.key \
--kubeconfig=/etc/kubernetes/kube-cs-config.yaml \
--logtostderr=true \
--log-dir=/var/log/kubernetes \
--v=2"

(3) Configure kube-scheduler

    /usr/lib/systemd/system/kube-scheduler.service

 
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
User=root
EnvironmentFile=/etc/kubernetes/scheduler.conf
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

    /etc/kubernetes/scheduler.conf: --master points at the master node and --kubeconfig is the shared config file.

 
KUBE_SCHEDULER_ARGS="\
--master=https://192.168.8.201:6443 \
--kubeconfig=/etc/kubernetes/kube-cs-config.yaml \
--logtostderr=true \
--log-dir=/var/log/kubernetes \
--v=2"

    With the configuration files in place, start the master components:

    systemctl daemon-reload
    systemctl start kube-apiserver.service
    systemctl start kube-controller-manager.service
    systemctl start kube-scheduler.service

    Check the startup logs with:

    journalctl -xeu kube-apiserver --no-pager
    journalctl -xeu kube-controller-manager --no-pager
    journalctl -xeu kube-scheduler --no-pager

2. node nodes

    The server package's bin directory already contains the node binaries; copy them to /usr/bin:

    cp kubectl kubelet kube-proxy /usr/bin/

    Create the log directory:

    mkdir -p /var/log/kubernetes

    Create the /etc/sysctl.d/k8s.conf file:

    touch /etc/sysctl.d/k8s.conf

    with the following contents:

 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
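These settings only take effect once the br_netfilter module is loaded and the file is re-read; on a real node that means running `modprobe br_netfilter && sysctl --system` as root. The sketch below just writes and checks the two lines against a scratch path instead of /etc/sysctl.d/k8s.conf:

```shell
# Write the bridge-netfilter settings; $conf stands in for /etc/sysctl.d/k8s.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
grep -c 'bridge-nf-call' "$conf"   # prints: 2
```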

    Create kube-config.yaml, used by both kubelet and kube-proxy; its settings must match the master's kube-cs-config.yaml, and it configures the CA certificates:

 
apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ca/kubelet_client.crt
    client-key: /etc/kubernetes/ca/kubelet_client.key
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
    server: https://192.168.8.201:6443
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    user: kubelet
  name: default-context
current-context: default-context
preferences: {}

(1) Configure kubelet

    /usr/lib/systemd/system/kubelet.service

 
[Unit]
Description=Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/etc/kubernetes/kubelet.conf
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target

    /etc/kubernetes/kubelet.conf: --hostname-override sets the node name, here the node's IP (192.168.8.211 for slave-i, 192.168.8.221 for slave-ii); --pod-infra-container-image selects the pause image; --kubeconfig is the config file.

 
KUBELET_ARGS="\
--kubeconfig=/etc/kubernetes/kube-config.yaml \
--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0 \
--hostname-override=192.168.8.211 \
--fail-swap-on=false \
--logtostderr=true \
--log-dir=/var/log/kubernetes \
--v=2"

(2) Configure kube-proxy

    /usr/lib/systemd/system/kube-proxy.service

 
[Unit]
Description=Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
Requires=network.service

[Service]
EnvironmentFile=/etc/kubernetes/proxy.conf
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

    /etc/kubernetes/proxy.conf: --hostname-override sets the node name and must match the kubelet's setting; here the node's IP is used (192.168.8.211 for slave-i, 192.168.8.221 for slave-ii); --master connects to the master, and --kubeconfig is the config file.

 
KUBE_PROXY_ARGS="\
--master=https://192.168.8.201:6443 \
--hostname-override=192.168.8.211 \
--kubeconfig=/etc/kubernetes/kube-config.yaml \
--logtostderr=true \
--log-dir=/var/log/kubernetes \
--v=2"

    The kubelet service depends on the pause image, so pull it from Aliyun and tag it before starting kubelet:

 
    docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
    docker tag registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0

    With the configuration files in place, start the node components:

    systemctl daemon-reload
    systemctl start kubelet.service
    systemctl start kube-proxy.service

    Check the startup logs with:

    journalctl -xeu kubelet --no-pager
    journalctl -xeu kube-proxy --no-pager

    After the node components start, the nodes register with the cluster; verify the registration from the master (e.g. with kubectl get nodes).

IX. Testing the Cluster

    Start an nginx service to test the cluster.

1. Create the rc

    Create a nginx-rc.yaml file as follows; imagePullPolicy: IfNotPresent pulls the nginx image only if it is not already present locally:

 
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
  labels:
    name: nginx-rc
spec:
  replicas: 2
  selector:
    name: nginx-pod
  template:
    metadata:
      labels: 
        name: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

2. Create the service

    Create a nginx-svc.yaml file as follows; type NodePort maps the service port onto a port of each host running the pods, so the service can be reached through those hosts:

  
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels: 
    name: nginx-service
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30081
  selector:
    name: nginx-pod

    Create the rc and the service:

    kubectl create -f nginx-rc.yaml
    kubectl create -f nginx-svc.yaml

    On the master, check pod creation with:

    kubectl get pod -o wide

    The pods have been created and scheduled onto the nodes.

    From outside the cluster, nginx can be reached on port 30081 of slave-i and slave-ii:

    http://192.168.8.211:30081/
    http://192.168.8.221:30081/

    Both return the nginx welcome page.

Reposted from: https://blog.csdn.net/sealir/article/details/81070924
