Introduction:
Docker: an open-source application container engine that packages an application into a lightweight, portable, self-sufficient container.
Kubernetes: a Docker container cluster management system open-sourced by Google, providing resource scheduling, deployment, service discovery, and scale-up/scale-down for containerized applications.
Etcd: a highly available key-value store developed and maintained by CoreOS, used mainly for shared configuration and service discovery.
Flannel: an overlay-network tool designed by the CoreOS team for Kubernetes, whose purpose is to give every host in a Kubernetes cluster a complete subnet of its own.
Goal:
This article walks through building a Kubernetes (hereafter "k8s") cluster.
It covers:
- building an etcd cluster;
- installing and configuring docker (briefly);
- installing and configuring flannel (briefly);
- deploying the k8s cluster.
Preparation:

| Host | Services | Role |
|---|---|---|
| 172.20.30.19 (CentOS 7.1) | etcd, docker, flannel, kube-apiserver, kube-controller-manager, kube-scheduler | k8s-master |
| 172.20.30.21 (CentOS 7.1) | etcd, docker, flannel, kubelet, kube-proxy | minion |
| 172.20.30.18 (CentOS 7.1) | etcd, docker, flannel, kubelet, kube-proxy | minion |
| 172.20.30.20 (CentOS 7.1) | etcd, docker, flannel, kubelet, kube-proxy | minion |
Installation:
Download the rpm packages for etcd, docker, and flannel, for example:
etcd:
```
etcd-2.2.5-2.el7.0.1.x86_64.rpm
```
flannel:
```
flannel-0.5.3-9.el7.x86_64.rpm
```
docker:
```
device-mapper-1.02.107-5.el7_2.5.x86_64.rpm
device-mapper-event-1.02.107-5.el7_2.5.x86_64.rpm
device-mapper-event-libs-1.02.107-5.el7_2.5.x86_64.rpm
device-mapper-libs-1.02.107-5.el7_2.5.x86_64.rpm
device-mapper-persistent-data-0.5.5-1.el7.x86_64.rpm
docker-1.10.3-44.el7.centos.x86_64.rpm
docker-common-1.10.3-44.el7.centos.x86_64.rpm
docker-forward-journald-1.10.3-44.el7.centos.x86_64.rpm
docker-selinux-1.10.3-44.el7.centos.x86_64.rpm
libseccomp-2.2.1-1.el7.x86_64.rpm
lvm2-2.02.130-5.el7_2.5.x86_64.rpm
lvm2-libs-2.02.130-5.el7_2.5.x86_64.rpm
oci-register-machine-1.10.3-44.el7.centos.x86_64.rpm
oci-systemd-hook-1.10.3-44.el7.centos.x86_64.rpm
yajl-2.0.4-4.el7.x86_64.rpm
```
etcd and flannel are straightforward to install, as they have no dependencies. docker does have dependencies, so its dependency packages must be installed first. Installation is not the focus of this article, so I won't elaborate.
etcd, docker, and flannel must be installed on all four machines.
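For reference, a minimal install sketch, assuming all the rpm files above sit in the current directory:
```
# etcd and flannel have no local dependencies:
rpm -ivh etcd-2.2.5-2.el7.0.1.x86_64.rpm
rpm -ivh flannel-0.5.3-9.el7.x86_64.rpm

# docker does; letting yum resolve the install order among the local rpms is easiest:
yum localinstall -y *.rpm
```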
Download the Kubernetes 1.3 binary release tarball.
After downloading, perform the following steps, using 172.20.30.19 as the example:
```
# tar zxvf kubernetes1.3.tar.gz   # unpack the binary release
# cd kubernetes/server
# tar zxvf kubernetes-server-linux-amd64.tar.gz   # unpack the packages the master needs
# cd kubernetes/server/bin/
# cp kube-apiserver kube-controller-manager kubectl kube-scheduler /usr/bin   # copy the master binaries to /usr/bin (setting PATH works too)
# scp kubelet kube-proxy root@172.20.30.21:~   # send the binaries the minions need to each minion
# scp kubelet kube-proxy root@172.20.30.18:~
# scp kubelet kube-proxy root@172.20.30.20:~
```
Configuration and deployment:
1. Configuring and deploying etcd
Modify the etcd configuration (/etc/etcd/etcd.conf) on all four machines. The example below is taken from 172.20.30.21 (etcd-2); on each machine, ETCD_NAME, ETCD_INITIAL_ADVERTISE_PEER_URLS, and ETCD_ADVERTISE_CLIENT_URLS must be set to that machine's own member name and IP:
```
# [member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/data/etcd/"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"   # the default, commented out
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:7001"
#ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"   # the default, commented out
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.20.30.21:7001"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
ETCD_INITIAL_CLUSTER="etcd-1=http://172.20.30.19:7001,etcd-2=http://172.20.30.21:7001,etcd-3=http://172.20.30.18:7001,etcd-4=http://172.20.30.20:7001"   # declares the etcd cluster of all 4 machines
ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://172.20.30.21:4001"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
```
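For clarity, these are the three lines that differ per machine, shown here as they would read on the master, 172.20.30.19 (etcd-1 in ETCD_INITIAL_CLUSTER above):
```
ETCD_NAME="etcd-1"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.20.30.19:7001"
ETCD_ADVERTISE_CLIENT_URLS="http://172.20.30.19:4001"
```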
Modify the etcd service unit on all four machines: /usr/lib/systemd/system/etcd.service. The modified file reads:
```
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-advertise-peer-urls=\"${ETCD_INITIAL_ADVERTISE_PEER_URLS}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```
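After editing a unit file, systemd must reload its definitions before the change takes effect:
```
# pick up the edited etcd.service
systemctl daemon-reload
```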
On every machine, execute:
```
# systemctl enable etcd.service
# systemctl start etcd.service
```
Then pick one machine and run:
```
# etcdctl set /cluster "example-k8s"
```
Pick another machine and run:
```
# etcdctl get /cluster
```
If it returns "example-k8s", the etcd cluster is deployed successfully.
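Beyond the set/get round trip, etcdctl also has built-in health checks; a quick sketch, assuming this rpm's etcdctl (which speaks the v2 API) and the client port configured above:
```
# ask any member about the health of the whole cluster
etcdctl --peers http://172.20.30.21:4001 cluster-health

# confirm all four peers have joined
etcdctl --peers http://172.20.30.21:4001 member list
```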
2. Configuring and deploying docker
The docker configuration change is simple: add the local registry address. In the docker configuration on each machine (the path is /etc/sysconfig/docker), add the following entries:
```
ADD_REGISTRY="--add-registry docker.midea.registry.hub:10050"
DOCKER_OPTS="--insecure-registry docker.midea.registry.hub:10050"
INSECURE_REGISTRY="--insecure-registry docker.midea.registry.hub:10050"
```
These entries hold the address and service port of the local registry and are consumed in docker's service start-up unit below. For building the registry itself, see the previous article.
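As a quick sanity check that the registry is reachable, its HTTP API can be queried directly; a sketch, assuming the registry at docker.midea.registry.hub:10050 speaks the v2 registry API (the image name below is illustrative):
```
# list the repositories the registry knows about
curl http://docker.midea.registry.hub:10050/v2/_catalog

# then try pulling something through docker itself
docker pull docker.midea.registry.hub:10050/some-image:latest
```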
Modify docker's service start-up unit on all four machines, /usr/lib/systemd/system/docker.service, changing the value of ExecStart under the [Service] section. The modified unit reads:
```
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target
Wants=docker-storage-setup.service

[Service]
Type=notify
NotifyAccess=all
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
# Note: "exec -a docker" is a pitfall on CentOS 7. Because docker is started
# through a shell pipeline here, systemd cannot find the daemon's pid without
# it, which can later keep the flanneld service from starting.
# $ADD_REGISTRY injects the local registry path configured above.
ExecStart=/bin/sh -c 'exec -a docker /usr/bin/docker-current daemon \
          --exec-opt native.cgroupdriver=systemd \
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY \
          2>&1 | /usr/bin/forward-journald -tag docker'
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
MountFlags=slave
Restart=on-abnormal
StandardOutput=null
StandardError=null

[Install]
WantedBy=multi-user.target
```
Reload systemd (`systemctl daemon-reload`) and then, on each machine, execute:
```
# systemctl enable docker.service
# systemctl start docker.service
```
Checking docker's state is simple: run `docker ps` and verify that the metadata columns of running containers are listed normally (no containers are running yet, so only the column headers appear):
```
# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
```
3. Configuring and deploying flannel
Modify flannel's configuration file, /etc/sysconfig/flanneld, adding the etcd service address and port, the etcd key holding flannel's subnet configuration, and the log path. Since etcd runs on every machine, the local machine's etcd address and port are sufficient; etcd automatically replicates the data to the other nodes of the cluster. The modified file reads:
```
# Flanneld configuration options

# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD="http://172.20.30.21:4001"

# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/k8s/network"   # a directory (key prefix) inside etcd

# Any additional options that you want to pass
FLANNEL_OPTIONS="--logtostderr=false --log_dir=/var/log/k8s/flannel/ --etcd-endpoints=http://172.20.30.21:4001"
```
Then execute:
```
# etcdctl mkdir /k8s/network
```
which creates that directory in etcd, followed by:
```
# etcdctl set /k8s/network/config '{"Network":"172.100.0.0/16"}'
```
which declares that the container instances docker runs should all get addresses in the 172.100.0.0/16 range. flanneld reads the config value under /k8s/network, takes over docker's address assignment, and bridges the network between docker and the host machine.
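Once flanneld is running (next step), the per-host subnet leases it hands out can be inspected directly in etcd; a sketch (the lease key below is illustrative, yours will differ):
```
# list the per-host subnet leases flannel wrote under the config key
etcdctl ls /k8s/network/subnets

# inspect one lease; the value records which host owns the subnet
etcdctl get /k8s/network/subnets/172.100.28.0-24
```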
flannel's service unit needs no changes. Execute:
```
# systemctl enable flanneld.service
# systemctl stop docker            # stop docker for now; starting flanneld pulls docker back up automatically
# systemctl start flanneld.service
```
If nothing goes wrong, docker is pulled back up once these commands complete.
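If docker does not come back up, checking both services and flannel's subnet file usually pinpoints the problem (a sketch, assuming flannel's default subnet file location, /run/flannel/subnet.env):
```
# check that flanneld acquired a lease and that docker restarted under it
systemctl status flanneld.service docker.service

# the env file records the subnet flannel assigned to this host
cat /run/flannel/subnet.env
```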
Use ifconfig to inspect the system's network devices: besides the pre-existing interfaces such as eth0 and lo, two new devices, docker0 and flannel0, have appeared:
```
# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1472
        inet 172.100.28.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::42:86ff:fe81:6892  prefixlen 64  scopeid 0x20<link>
        ether 02:42:86:81:68:92  txqueuelen 0  (Ethernet)
        RX packets 29  bytes 2013 (1.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 25  bytes 1994 (1.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.20.30.21  netmask 255.255.255.0  broadcast 172.20.30.255
        inet6 fe80::f816:3eff:fe43:21ac  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:43:21:ac  txqueuelen 1000  (Ethernet)
        RX packets 13790001  bytes 3573763877 (3.3 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13919888  bytes 1320674626 (1.2 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 172.100.28.0  netmask 255.255.0.0  destination 172.100.28.0
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 120 (120.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 65311  bytes 5768287 (5.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 65311  bytes 5768287 (5.5 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
```
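The real test of the overlay is cross-host container connectivity; a minimal sketch, assuming a busybox image is pullable (e.g. from the local registry) and with an illustrative container IP:
```
# on 172.20.30.21: start a throwaway container and note its flannel IP
docker run -it --rm busybox sh
/ # ifconfig eth0            # inside the container, e.g. 172.100.28.2

# on another host: ping that address to confirm cross-host overlay routing
ping -c 3 172.100.28.2
```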
With that, the base environment is in place; next, deploy and start the kubernetes services.
4. Deploying kubernetes
master:
Write the following script and save it as start_k8s_master.sh:
```
#! /bin/sh

# firstly, start etcd
systemctl restart etcd

# secondly, start flanneld
systemctl restart flanneld

# then, start docker
systemctl restart docker

# start the main server of k8s master
nohup kube-apiserver --insecure-bind-address=0.0.0.0 --insecure-port=8080 --cors_allowed_origins=.* --etcd_servers=http://172.20.30.19:4001 --v=1 --logtostderr=false --log_dir=/var/log/k8s/apiserver &

nohup kube-controller-manager --master=172.20.30.19:8080 --enable-hostpath-provisioner=false --v=1 --logtostderr=false --log_dir=/var/log/k8s/controller-manager &

nohup kube-scheduler --master=172.20.30.19:8080 --v=1 --logtostderr=false --log_dir=/var/log/k8s/scheduler &
```
Then grant execute permission:
```
# chmod u+x start_k8s_master.sh
```
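Once the script has been run (see "Running k8s" below), the apiserver can be sanity-checked over its insecure port; /healthz and /version are standard apiserver routes:
```
# should print "ok" when kube-apiserver is up
curl http://172.20.30.19:8080/healthz

# confirms which binary version is actually serving requests
curl http://172.20.30.19:8080/version
```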
Since the installation step already sent kubelet and kube-proxy to the minion machines (quietly defining our k8s cluster in the process), write the following script and save it as start_k8s_minion.sh:
```
#! /bin/sh

# firstly, start etcd
systemctl restart etcd

# secondly, start flanneld
systemctl restart flanneld

# then, start docker
systemctl restart docker

# start the minion (use this minion's own IP for --hostname_override)
nohup kubelet --address=0.0.0.0 --port=10250 --v=1 --log_dir=/var/log/k8s/kubelet --hostname_override=172.20.30.21 --api_servers=http://172.20.30.19:8080 --logtostderr=false &

nohup kube-proxy --master=172.20.30.19:8080 --log_dir=/var/log/k8s/proxy --v=1 --logtostderr=false &
```
Then grant execute permission:
```
# chmod u+x start_k8s_minion.sh
```
and send the script to every minion host.
Running k8s
On the master host, execute:
```
# ./start_k8s_master.sh
```
On every minion host, execute:
```
# ./start_k8s_minion.sh
```
Then, on the master host, list the nodes:
```
# kubectl get node
NAME           STATUS    AGE
172.20.30.18   Ready     5h
172.20.30.20   Ready     5h
172.20.30.21   Ready     5h
```
If the above is listed, the k8s cluster is deployed successfully.
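As a final smoke test, a throwaway deployment shows pods being scheduled onto the minions (a sketch; the image must be pullable, e.g. from the local registry, and the name test-nginx is illustrative):
```
# launch a test deployment with two replicas
kubectl run test-nginx --image=nginx --replicas=2

# watch the pods get scheduled onto the minions
kubectl get pods -o wide
```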
Acknowledgments:
Thanks to 楚哥 for answering my questions along the way.