Kubernetes v1.11.0

Environment requirements

Kubernetes version: v1.11.0, with the following components:
  kube-apiserver
  kube-scheduler
  kube-controller-manager
  etcd
  kubectl
  kubelet
  kube-proxy
etcd version: v3.3.8
Docker version: 18.03.1-ce

Kubernetes 1.11.0

Release page:
https://github.com/kubernetes/kubernetes/releases/tag/v1.11.0

Binary download links:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#v1110

wget https://dl.k8s.io/v1.11.0/kubernetes-server-linux-amd64.tar.gz
sha256 b8a8a88afd8a40871749b2362dbb21295c6a9c0a85b6fc87e7febea1688eb99e
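The published hash can be verified with `sha256sum -c`. A self-contained sketch of the mechanism, using a throwaway local file in place of the real tarball (file names here are illustrative):

```shell
# In practice: echo "<published-sha256>  kubernetes-server-linux-amd64.tar.gz" | sha256sum -c
# Demonstrated here on a small local file:
printf 'demo\n' > demo.txt
sha256sum demo.txt > demo.sha256     # record the file's hash in a manifest
sha256sum -c demo.sha256             # re-verify; prints "demo.txt: OK" on a match
```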

tar xvf kubernetes-server-linux-amd64.tar.gz

Note: the generic kubernetes.tar.gz release package does not ship the server binaries; if you downloaded that package instead, extract it and run cluster/get-kube-binaries.sh to fetch them. The kubernetes-server-linux-amd64.tar.gz used above already contains them under kubernetes/server/bin.

System initialization

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld

# Disable SELinux (persistent across reboots; run `setenforce 0` to apply immediately)
vi /etc/sysconfig/selinux
SELINUX=disabled

# Turn off swap (by default the kubelet refuses to start with swap enabled)
swapoff -a
vi /etc/fstab
#/swap none swap sw 0 0
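The fstab edit can also be scripted. A sketch using `sed` on a throwaway copy (the file name `fstab.demo` is illustrative; in practice operate on `/etc/fstab`):

```shell
# Sample fstab with one swap entry
printf '/dev/sda1 / ext4 defaults 0 0\n/swap none swap sw 0 0\n' > fstab.demo
# Comment out every uncommented line that mounts a swap filesystem
sed -i 's|^\([^#].*[[:space:]]swap[[:space:]].*\)$|#\1|' fstab.demo
grep swap fstab.demo    # prints "#/swap none swap sw 0 0"
```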

# Configure sysctl.conf
vi /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
# Run the following to apply the changes
modprobe br_netfilter
sysctl -p
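After `sysctl -p`, each key can be read back from `/proc/sys` (dots in the key become path separators), which confirms the kernel actually accepted the value:

```shell
# net.ipv4.ip_forward lives at this /proc path; prints the current
# value (1 once the setting above has been applied).
cat /proc/sys/net/ipv4/ip_forward
```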


master

etcd download
https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.2.md

wget https://github.com/etcd-io/etcd/releases/download/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz
wget https://github.com/etcd-io/etcd/releases/download/v3.2.11/etcd-v3.2.11-linux-amd64.tar.gz

tar xvf etcd-v3.2.11-linux-amd64.tar.gz
cd etcd-v3.2.11-linux-amd64
cp etcd etcdctl /usr/bin/

vi /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd.service
 
[Service]
Type=notify
TimeoutStartSec=0
Restart=always
WorkingDirectory=/var/lib/etcd
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd
 
[Install]
WantedBy=multi-user.target


mkdir -p /var/lib/etcd && mkdir -p /etc/etcd/
vi /etc/etcd/etcd.conf
ETCD_NAME="ETCD Server"
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.0.101:2379"

# Start etcd
systemctl daemon-reload
systemctl start etcd.service
systemctl enable etcd.service
systemctl status etcd.service

Check that etcd is healthy:
etcdctl cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://127.0.0.1:2379
cluster is healthy

Create the etcd network configuration

etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'
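The value stored under `/atomic.io/network/config` is plain JSON that flannel reads later, so it is worth validating locally before writing it (this sketch assumes `python3` is available):

```shell
cfg='{"Network":"172.17.0.0/16"}'
# json.tool exits non-zero on malformed JSON, so this doubles as a syntax check
echo "$cfg" | python3 -m json.tool
```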

master kube-apiserver

Add the systemd unit file
vi /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
After=etcd.service
Wants=etcd.service
 
[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver  \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_LOG \
        $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target

# Create the configuration file
cp kubernetes/server/bin/kube-apiserver /usr/bin/
vi /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.0.101:2379"	# use 127.0.0.1 first; switch to the real IP once startup succeeds
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.0.0.0/24"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""
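systemd substitutes each `$VAR` in `ExecStart` with the value from the EnvironmentFile, stripping the surrounding quotes. The same expansion can be previewed in a shell to catch quoting mistakes (a sketch with two of the variables above; the file name `apiserver.env.demo` is illustrative):

```shell
cat > apiserver.env.demo <<'EOF'
KUBE_API_PORT="--port=8080"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.0.0.0/24"
EOF
# Source the file and show the command line systemd would run:
# prints "kube-apiserver --port=8080 --service-cluster-ip-range=10.0.0.0/24"
set -a; . ./apiserver.env.demo; set +a
echo kube-apiserver $KUBE_API_PORT $KUBE_SERVICE_ADDRESSES
```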


# Start the service
systemctl daemon-reload
systemctl start kube-apiserver.service
systemctl enable kube-apiserver.service
systemctl status kube-apiserver.service

# Verify that startup succeeded
netstat -tnlp | grep kube
tcp6       0      0 :::6443                 :::*                    LISTEN      27378/kube-apiserve
tcp6       0      0 :::8080                 :::*                    LISTEN      10144/kube-apiserve 

master: install kube-controller-manager

vi /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
After=kube-apiserver.service
Requires=kube-apiserver.service
 
[Service]
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target

# Create the configuration file
cp kubernetes/server/bin/kube-controller-manager /usr/bin/
vi /etc/kubernetes/controller-manager
KUBE_MASTER="--master=http://192.168.0.101:8080"
KUBE_CONTROLLER_MANAGER_ARGS=""

# Start the service
systemctl daemon-reload
systemctl restart kube-controller-manager.service
systemctl enable kube-controller-manager.service
systemctl status kube-controller-manager.service


# Verify the service is listening
netstat -lntp | grep kube-controll
tcp6       0      0 :::10252                :::*                    LISTEN      10163/kube-controll

master: install kube-scheduler

vi /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
User=root
EnvironmentFile=/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
        $KUBE_MASTER \
        $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# Create the configuration file
cp kubernetes/server/bin/kube-scheduler /usr/bin/
vi /etc/kubernetes/scheduler
KUBE_MASTER="--master=http://192.168.0.101:8080"
KUBE_SCHEDULER_ARGS="--logtostderr=true --log-dir=/home/log/kubernetes --v=2"

# Start the service
systemctl daemon-reload
systemctl start kube-scheduler.service
systemctl enable kube-scheduler.service
systemctl status kube-scheduler.service


# Verify the service is listening
netstat -lntp | grep kube-schedule
tcp6       0      0 :::10251                :::*                    LISTEN      10179/kube-schedule 

Check component status

kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}

At this point the master node is fully configured.

Configure the cluster network

Flannel gives every Docker container in the cluster a unique internal IP, and allows the docker0 bridges on different nodes to reach one another.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

Cluster verification

kubectl get nodes
NAME                    STATUS    ROLES     AGE       VERSION
localhost.localdomain   Ready     <none>    37m       v1.11.0

node

node environment

docker-ce 18.03.1-ce
kubelet
kube-proxy

node configuration

1. Copy kubelet and kube-proxy
cp kubernetes/server/bin/kubelet /usr/bin/
cp kubernetes/server/bin/kube-proxy /usr/bin/

2. Install the kube-proxy service
vi /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/config
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
                $KUBE_LOGTOSTDERR \
                $KUBE_LOG_LEVEL \
                $KUBE_MASTER \
                $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target


mkdir -p /etc/kubernetes
vi /etc/kubernetes/proxy
KUBE_PROXY_ARGS=""

vi /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.0.101:8080"

# Start the service
systemctl daemon-reload
systemctl start kube-proxy.service
systemctl enable kube-proxy.service
systemctl status kube-proxy.service
netstat -lntp | grep kube-proxy
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      26899/kube-proxy
tcp6       0      0 :::10256                :::*                    LISTEN      26899/kube-proxy

node kubelet service

vi /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
 
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure
KillMode=process
 
[Install]
WantedBy=multi-user.target


mkdir -p /var/lib/kubelet
vi /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=127.0.0.1-node"
KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.cn-shenzhen.aliyuncs.com/pod-infrastructure-1/pod-infrastructure:latest"
KUBELET_ARGS="--enable-server=true --enable-debugging-handlers=true --fail-swap-on=false --kubeconfig=/var/lib/kubelet/kubeconfig"

# Create the kubeconfig
vi /var/lib/kubelet/kubeconfig
apiVersion: v1
kind: Config
users:
- name: kubelet
clusters:
- name: kubernetes
  cluster:
    server: http://127.0.0.1:8080
contexts:
- context:
    cluster: kubernetes
    user: kubelet
  name: service-account-context
current-context: service-account-context


# Start the kubelet and verify
swapoff -a
systemctl daemon-reload
systemctl start kubelet.service
systemctl enable kubelet.service
systemctl status kubelet.service
netstat -tnlp | grep kubelet

tcp        0      0 127.0.0.1:45094         0.0.0.0:*               LISTEN      27003/kubelet
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      27003/kubelet
tcp6       0      0 :::10250                :::*                    LISTEN      27003/kubelet
tcp6       0      0 :::10255                :::*                    LISTEN      27003/kubelet

Reprinted from: https://my.oschina.net/xyh592/blog/3082026
