This article builds on http://www.cnblogs.com/LinuxGo/p/5729788.html and blog.csdn.net/flymu0808/article/details/55505216, with fixes for problems that show up with the newer component versions.
Environment
Versions

| Component | Version |
| --- | --- |
| etcd | 3.1.0 |
| Flannel | 0.6.1 |
| Kubernetes | 1.6.1 |

Hosts

| Host | IP | OS |
| --- | --- | --- |
| k8s-master | 10.235.118.215 | Ubuntu 16.04 |
| k8s-node01 | 10.235.118.215 | Ubuntu 16.04 |
Install Docker
Install the latest Docker Engine on every host: https://docs.docker.com/engine/installation/Linux/ubuntu/
Deploy the etcd cluster
We will install and deploy etcd on a single host.
Download etcd
On the deployment host, download etcd:
ETCD_VERSION=${ETCD_VERSION:-"3.1.0"}
ETCD="etcd-v${ETCD_VERSION}-linux-amd64"
curl -L https://github.com/coreos/etcd/releases/download/v${ETCD_VERSION}/${ETCD}.tar.gz -o etcd.tar.gz
tar xzf etcd.tar.gz -C /tmp
cd /tmp/etcd-v${ETCD_VERSION}-linux-amd64
sudo mkdir -p /opt/bin && sudo mv * /opt/bin
Option notes
--advertise-client-urls: the client URLs this member advertises to clients
--initial-cluster-token etcd-cluster-1: a unique token that prevents this cluster from accidentally merging with another cluster bootstrapping at the same time
--initial-cluster: the initial cluster membership, given as name=peer-URL pairs
--initial-cluster-state new: bootstrap a brand-new cluster (use "existing" when joining an already-running one)
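Combined, these options form the ETCD_OPTS line that the service file below reads from /opt/config/etcd.conf. A minimal single-node sketch (the member name infra0 and the IP are placeholders; substitute each host's real values):

```shell
# Placeholder name/IP for illustration; use each host's real values.
NODE_NAME=infra0
NODE_IP=10.235.118.200

# Assemble the etcd flags described above into one option string.
ETCD_OPTS="--name ${NODE_NAME} \
 --advertise-client-urls http://${NODE_IP}:2379 \
 --listen-client-urls http://0.0.0.0:2379 \
 --listen-peer-urls http://0.0.0.0:2380 \
 --initial-advertise-peer-urls http://${NODE_IP}:2380 \
 --initial-cluster-token etcd-cluster-1 \
 --initial-cluster ${NODE_NAME}=http://${NODE_IP}:2380 \
 --initial-cluster-state new"

echo "$ETCD_OPTS"
```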
Configure the etcd service
On each host, create /opt/config/etcd.conf and /lib/systemd/system/etcd.service as follows (be sure to change the IP addresses to the host's own).
/opt/config/etcd.conf
sudo mkdir -p /var/lib/etcd/
sudo mkdir -p /opt/config/
cat <<'EOF' | sudo tee /opt/config/etcd.conf
ETCD_OPTS="--name infra0 --data-dir /var/lib/etcd --advertise-client-urls http://10.235.118.200:2379 --listen-client-urls http://0.0.0.0:2379 --listen-peer-urls http://0.0.0.0:2380 --initial-advertise-peer-urls http://10.235.118.200:2380 --initial-cluster-token etcd-cluster-1 --initial-cluster infra0=http://10.235.118.200:2380 --initial-cluster-state new"
EOF
(The member name infra0 and the IP are examples; sudo tee is used because a plain shell redirect would not run with sudo's privileges.)
/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
Documentation=https://github.com/coreos/etcd
After=network.target
[Service]
User=root
Type=simple
EnvironmentFile=-/opt/config/etcd.conf
ExecStart=/opt/bin/etcd $ETCD_OPTS
Restart=on-failure
RestartSec=10s
LimitNOFILE=40000
[Install]
WantedBy=multi-user.target
Then run on each host:
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
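Once etcd is up, it can be confirmed over the client port: its /health endpoint returns JSON such as {"health": "true"}. A small sketch that parses such a response (the curl line against the live endpoint is shown as a comment; the IP is a placeholder):

```shell
# Report whether an etcd /health JSON body indicates a healthy member.
check_etcd_health() {
  echo "$1" | grep -q '"health"[": ]*true' && echo healthy || echo unhealthy
}

# Against the live member you would fetch the body first:
#   body=$(curl -s http://10.235.118.200:2379/health)
# Sample response for illustration:
body='{"health": "true"}'
check_etcd_health "$body"   # prints: healthy
```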
Download Flannel
FLANNEL_VERSION=${FLANNEL_VERSION:-"0.6.1"}
curl -L https://github.com/coreos/flannel/releases/download/v${FLANNEL_VERSION}/flannel-${FLANNEL_VERSION}-linux-amd64.tar.gz -o flannel.tar.gz
tar xzf flannel.tar.gz -C /tmp
If the archive fails to download, pick a release at https://github.com/coreos/flannel/releases and download it manually.
Note the version you downloaded, then extract it the same way, running FLANNEL_VERSION=${FLANNEL_VERSION:-"your-version"} before extracting.
Download the Kubernetes binaries
Download the v1.6.1 server binary tarball from https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#downloads-for-v161
tar xzf kubernetes-server-linux-amd64.tar.gz -C /tmp
Deploy the K8s Master
cd /tmp
mkdir -p ~/kube
cp kubernetes/server/bin/kube-apiserver \
kubernetes/server/bin/kube-controller-manager \
kubernetes/server/bin/kube-scheduler kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy ~/kube
cp flannel-${FLANNEL_VERSION}/flanneld ~/kube
sudo mv ~/kube/* /opt/bin/
Create certificates
On the master host, run the following commands to create the certificates:
mkdir -p /srv/kubernetes/
cd /srv/kubernetes
export MASTER_IP=10.235.118.200
echo subjectAltName = IP:${MASTER_IP} > extfile.cnf
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=${MASTER_IP}" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 10000 -extfile extfile.cnf
Configure the kube-apiserver service
We use the following Service and Flannel subnets:
SERVICE_CLUSTER_IP_RANGE=172.18.0.0/16
FLANNEL_NET=192.168.0.0/16
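These two ranges (and the hosts' own network) must not overlap, or routing breaks. A pure-bash sketch for checking two CIDRs against each other:

```shell
# Dotted quad -> 32-bit integer.
ip2int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Two CIDRs overlap iff their network addresses agree under
# the shorter of the two prefix lengths.
cidr_overlap() {
  local net1=${1%/*} len1=${1#*/} net2=${2%/*} len2=${2#*/}
  local len=$(( len1 < len2 ? len1 : len2 ))
  local mask=$(( len == 0 ? 0 : (0xFFFFFFFF << (32 - len)) & 0xFFFFFFFF ))
  if (( ($(ip2int "$net1") & mask) == ($(ip2int "$net2") & mask) )); then
    echo overlap
  else
    echo ok
  fi
}

cidr_overlap 172.18.0.0/16 192.168.0.0/16   # prints: ok
```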
On the master host, create /lib/systemd/system/kube-apiserver.service with the following content:
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
User=root
ExecStart=/opt/bin/kube-apiserver \
--insecure-bind-address=0.0.0.0 \
--insecure-port=8080 \
--etcd-servers=http://10.235.118.200:2379 \
--logtostderr=true \
--allow-privileged=false \
--service-cluster-ip-range=172.18.0.0/16 \
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,SecurityContextDeny,ResourceQuota \
--service-node-port-range=30000-32767 \
--advertise-address=10.235.118.200 \
--client-ca-file=/srv/kubernetes/ca.crt \
--tls-cert-file=/srv/kubernetes/server.crt \
--tls-private-key-file=/srv/kubernetes/server.key
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Configure the kube-controller-manager service
On the master host, create /lib/systemd/system/kube-controller-manager.service with the following content:
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
User=root
ExecStart=/opt/bin/kube-controller-manager \
--master=127.0.0.1:8080 \
--root-ca-file=/srv/kubernetes/ca.crt \
--service-account-private-key-file=/srv/kubernetes/server.key \
--logtostderr=true
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Configure the kube-scheduler service
On the master host, create /lib/systemd/system/kube-scheduler.service with the following content:
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
User=root
ExecStart=/opt/bin/kube-scheduler \
--logtostderr=true \
--master=127.0.0.1:8080
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Configure the flanneld service
On the master host, create /lib/systemd/system/flanneld.service with the following content:
[Unit]
Description=Flanneld
Documentation=https://github.com/coreos/flannel
After=network.target
Before=docker.service
[Service]
User=root
ExecStart=/opt/bin/flanneld \
--etcd-endpoints="http://10.235.118.200:2379" \
--iface=10.235.118.200 \
--ip-masq
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Start the services
First write the Flannel network configuration into etcd, then enable and start everything:
/opt/bin/etcdctl --endpoints="http://10.235.118.200:2379" mk /coreos.com/network/config \
'{"Network":"192.168.0.0/16", "Backend": {"Type": "vxlan"}}'
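The Network value here must match FLANNEL_NET used when configuring kube-apiserver; a small sketch that builds the payload from the variable so the two cannot drift apart (the endpoint IP in the comments is a placeholder):

```shell
FLANNEL_NET=192.168.0.0/16   # same value as in the kube-apiserver setup

# Build the flannel network config from the variable instead of hard-coding it.
config="{\"Network\":\"${FLANNEL_NET}\", \"Backend\": {\"Type\": \"vxlan\"}}"
echo "$config"

# Then write it (and read it back to verify) against the live etcd:
#   /opt/bin/etcdctl --endpoints="http://10.235.118.200:2379" mk /coreos.com/network/config "$config"
#   /opt/bin/etcdctl --endpoints="http://10.235.118.200:2379" get /coreos.com/network/config
```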
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver
sudo systemctl enable kube-controller-manager
sudo systemctl enable kube-scheduler
sudo systemctl enable flanneld
sudo systemctl start kube-apiserver
sudo systemctl start kube-controller-manager
sudo systemctl start kube-scheduler
sudo systemctl start flanneld
Modify the Docker service
source /run/flannel/subnet.env
sudo sed -i "s|^ExecStart=/usr/bin/dockerd -H fd://$|ExecStart=/usr/bin/dockerd -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}|g" /lib/systemd/system/docker.service
rc=0
ip link show docker0 >/dev/null 2>&1 || rc="$?"
if [[ "$rc" -eq "0" ]]; then
ip link set dev docker0 down
ip link delete docker0
fi
sudo systemctl daemon-reload
sudo systemctl enable docker
sudo systemctl restart docker
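The sed above only fires when docker.service contains exactly `ExecStart=/usr/bin/dockerd -H fd://`; on other layouts it silently does nothing, so check the result. A self-contained sketch of the substitution on a sample copy (the FLANNEL_SUBNET/FLANNEL_MTU values are illustrative; on a real host they come from /run/flannel/subnet.env):

```shell
# Illustrative values; on a real host: source /run/flannel/subnet.env
FLANNEL_SUBNET=192.168.91.1/24
FLANNEL_MTU=1450

sample=$(mktemp)
printf 'ExecStart=/usr/bin/dockerd -H fd://\n' > "$sample"

# Same substitution as applied to /lib/systemd/system/docker.service above.
sed -i "s|^ExecStart=/usr/bin/dockerd -H fd://$|ExecStart=/usr/bin/dockerd -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}|g" "$sample"

cat "$sample"
```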
Deploy a K8s Node
Copy the binaries
cd /tmp
cp kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy ~/kube
cp flannel-${FLANNEL_VERSION}/flanneld ~/kube
sudo mkdir -p /opt/bin && sudo mv ~/kube/* /opt/bin/
Configure Flanneld and modify the Docker service
See the corresponding steps in the Master section: configure the flanneld service, start it, and modify the Docker service. Remember to change the --iface address to this node's own IP.
Configure the kubelet service
Create /lib/systemd/system/kubelet.service (remember to change the IP addresses):
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
ExecStart=/opt/bin/kubelet \
--hostname-override=10.235.118.200 \
--api-servers=http://10.235.118.200:8080 \
--logtostderr=true
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
Start the service
sudo systemctl daemon-reload
sudo systemctl enable kubelet
sudo systemctl start kubelet
Configure the kube-proxy service
Create /lib/systemd/system/kube-proxy.service (remember to change the IP addresses), then:
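The unit file body is not given above; a minimal sketch modeled on the kubelet unit (the IPs are placeholders for the node's and the master's addresses):

```ini
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
ExecStart=/opt/bin/kube-proxy \
 --hostname-override=10.235.118.200 \
 --master=http://10.235.118.200:8080 \
 --logtostderr=true
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```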
sudo systemctl daemon-reload
sudo systemctl enable kube-proxy
sudo systemctl start kube-proxy
Configure kubectl
cd /tmp
mv kubernetes/server/bin/kubectl /usr/bin/kubectl
mkdir -p ~/.kube
vi ~/.kube/config
apiVersion: v1
clusters:
- cluster:
certificate-authority: /srv/kubernetes/ca.crt
server: https://10.235.118.200:6443
name: minikube
- cluster:
insecure-skip-tls-verify: true
server: https://10.235.118.200:6443
name: ubuntu
contexts:
- context:
cluster: minikube
user: minikube
name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
user:
client-certificate: /srv/kubernetes/server.crt
client-key: /srv/kubernetes/server.key
done