Kubernetes 1.5 Cluster Installation
System configuration:
Linux 3.10.0-327.36.3.el7.x86_64 #1 SMP Mon Oct 24 16:09:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Preparation before configuring the cluster
System settings
Disable the firewall
systemctl disable firewalld
systemctl stop firewalld
Set the hostname
hostnamectl --static set-hostname centos-master
Disable SELinux
Edit /etc/selinux/config and set:
SELINUX=disabled
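As a minimal sketch (assuming the standard CentOS 7 layout of /etc/selinux/config), the same change can be made from the shell; setenforce 0 stops enforcement immediately for the current session, while the sed edit makes it persistent across reboots:
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config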
If you can already reach gcr.io directly, you can skip this step.
Set up the hosts file by adding the following two lines to /etc/hosts.
61.91.161.217 gcr.io
61.91.161.217 www.gcr.io
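A small shell sketch for appending these entries and checking that the name resolves (61.91.161.217 is the mirror address used above; replace it if you use a different one):
cat >> /etc/hosts <<EOF
61.91.161.217 gcr.io
61.91.161.217 www.gcr.io
EOF
ping -c 1 gcr.io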
Installation steps
The following must be done on every node.
Install the cluster packages
Add the official repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF
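To confirm the repository was registered, a quick check is:
yum repolist | grep -i kubernetes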
If this repository is not reachable, see the following link:
http://blog.csdn.net/wenwst/article/details/54582141
Install with yum
yum install -y socat kubelet kubeadm kubectl kubernetes-cni
Enable and start docker and kubelet
systemctl enable docker
systemctl start docker
systemctl enable kubelet
systemctl start kubelet
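Note that with the kubeadm packages, the kubelet may keep restarting until kubeadm init or kubeadm join has been run, because its kubeconfig (/etc/kubernetes/kubelet.conf) does not exist yet; this is normally not a problem at this stage. You can inspect its state with:
systemctl status kubelet
journalctl -u kubelet -f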
Download the images
If images download too slowly, you can add --registry-mirror="http://b438f72b.m.daocloud.io" to docker.service. The unit file then looks like this:
vi /lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd --registry-mirror="http://b438f72b.m.daocloud.io"
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
Restart docker
systemctl restart docker
systemctl status docker
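If you added the registry mirror, you can check that dockerd picked it up (the exact layout of docker info output varies by Docker version):
docker info | grep -A 1 "Registry Mirrors"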
Pull the images
This step is simple, but it is best to finish it completely before moving on to the next steps.
images=(kube-proxy-amd64:v1.5.1 kube-discovery-amd64:1.0 kubedns-amd64:1.9 kube-scheduler-amd64:v1.5.1 kube-controller-manager-amd64:v1.5.1 kube-apiserver-amd64:v1.5.1 etcd-amd64:3.0.14-kubeadm kube-dnsmasq-amd64:1.4 exechealthz-amd64:1.2 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.0 dnsmasq-metrics-amd64:1.0)
for imageName in ${images[@]} ; do
docker pull jicki/$imageName
docker tag jicki/$imageName gcr.io/google_containers/$imageName
docker rmi jicki/$imageName
done
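After the loop finishes, a quick sanity check that all images were retagged under gcr.io/google_containers:
docker images | grep gcr.io/google_containers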
These two images are for the network:
docker pull weaveworks/weave-kube:1.8.2
docker pull weaveworks/weave-npc:1.8.2
These images are for monitoring:
docker pull kubernetes/heapster:canary
docker pull kubernetes/heapster_influxdb:v0.6
docker pull gcr.io/google_containers/heapster_grafana:v3.1.1
Note:
Although we pull weaveworks/weave-kube:1.8.2 here, make sure this version matches the one referenced in the weaveworks YAML file used later.
This matters especially for DNS: kubeadm installs the DNS add-on automatically, so there is no YAML file for it. To inspect (or adjust) the deployment it created, run:
kubectl --namespace=kube-system edit deployment kube-dns
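If you only want to see which image versions the kube-dns deployment references, rather than opening an editor, a read-only alternative is:
kubectl --namespace=kube-system get deployment kube-dns -o jsonpath='{.spec.template.spec.containers[*].image}'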
All of the steps above must be performed on every server (master and nodes).
Configure the cluster
On the master host
On the master, run the following command to initialize the cluster master:
kubeadm init --pod-network-cidr 10.245.0.0/16
You can also add --api-advertise-addresses=192.168.7.206, where 192.168.7.206 is the master host's IP address.
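Putting the two options together, a full invocation would look like this (substitute your own master IP for 192.168.7.206):
kubeadm init --pod-network-cidr 10.245.0.0/16 --api-advertise-addresses=192.168.7.206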
The output of kubeadm init looks like this:
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[init] Using Kubernetes version: v1.5.1
[tokens] Generated token: "60a95a.93c425347a1695ab"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 81.803134 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 2.002437 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 22.002704 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns
Your Kubernetes master has initialized successfully!
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node:
kubeadm join --token=60a95a.93c425347a1695ab 192.168.7.206
The command printed at the end is used to add nodes to the cluster, so save it:
kubeadm join --token=60a95a.93c425347a1695ab 192.168.7.206
On all node hosts
From the previous step we saved the command kubeadm join --token=60a95a.93c425347a1695ab 192.168.7.206. Run it on every node host. When it finishes, the output looks roughly like this:
[root@centos-minion-1 kubelet]# kubeadm join --token=60a95a.93c425347a1695ab 192.168.7.206
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[tokens] Validating provided token
[discovery] Created cluster info discovery client, requesting info from "http://192.168.7.206:9898/cluster-info/v1/?token-id=60a95a"
[discovery] Cluster info object received, verifying signature using given token
[discovery] Cluster info signature and contents are valid, will use API endpoints [https://192.168.7.206:6443]
[bootstrap] Trying to connect to endpoint https://192.168.7.206:6443
[bootstrap] Detected server version: v1.5.1
[bootstrap] Successfully established connection with endpoint "https://192.168.7.206:6443"
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server:
Issuer: CN=kubernetes | Subject: CN=system:node:centos-minion-1 | CA: false
Not before: 2016-12-23 07:06:00 +0000 UTC Not After: 2017-12-23 07:06:00 +0000 UTC
[csr] Generating kubelet configuration
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
Verification
On the master, run the following command:
kubectl get nodes
The output looks like this:
NAME STATUS AGE
centos-master Ready,master 14m
centos-minion-1 Ready 5m
centos-minion-2 Ready 45s
Then run:
kubectl --namespace=kube-system get pod
The output looks like this:
NAME READY STATUS RESTARTS AGE
dummy-2088944543-9zfjl 1/1 Running 0 2d
etcd-centos-master 1/1 Running 0 2d
kube-apiserver-centos-master 1/1 Running 0 2d
kube-controller-manager-centos-master 1/1 Running 0 2d
kube-discovery-1769846148-6ldk1 1/1 Running 0 2d
kube-proxy-34q7p 1/1 Running 0 2d
kube-proxy-hqkkg 1/1 Running 1 2d
kube-proxy-nbgn3 1/1 Running 0 2d
kube-scheduler-centos-master 1/1 Running 0 2d
weave-net-kkdh9 2/2 Running 0 42m
weave-net-mtd83 2/2 Running 0 2m
weave-net-q91sr 2/2 Running 2 42m
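If any pod is stuck in a state other than Running, describing it usually reveals the cause (failed image pulls are the most common problem at this point). The pod name below is just a placeholder; use a name from the listing above:
kubectl --namespace=kube-system describe pod <pod-name>
kubectl --namespace=kube-system logs <pod-name>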
Install the network
You can install it directly with the following command:
kubectl apply -f https://git.io/weave-kube
Or save the following file as weave-daemonset.yaml and install from that:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: weave-net
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        name: weave-net
      annotations:
        scheduler.alpha.kubernetes.io/tolerations: |
          [
            {
              "key": "dedicated",
              "operator": "Equal",
              "value": "master",
              "effect": "NoSchedule"
            }
          ]
    spec:
      hostNetwork: true
      hostPID: true
      containers:
        - name: weave
          image: weaveworks/weave-kube:1.8.2
          command:
            - /home/weave/launch.sh
          livenessProbe:
            initialDelaySeconds: 30
            httpGet:
              host: 127.0.0.1
              path: /status
              port: 6784
          securityContext:
            privileged: true
          volumeMounts:
            - name: weavedb
              mountPath: /weavedb
            - name: cni-bin
              mountPath: /opt
            - name: cni-bin2
              mountPath: /host_home
            - name: cni-conf
              mountPath: /etc
          resources:
            requests:
              cpu: 10m
        - name: weave-npc
          image: weaveworks/weave-npc:1.8.2
          resources:
            requests:
              cpu: 10m
          securityContext:
            privileged: true
      restartPolicy: Always
      volumes:
        - name: weavedb
          emptyDir: {}
        - name: cni-bin
          hostPath:
            path: /opt
        - name: cni-bin2
          hostPath:
            path: /home
        - name: cni-conf
          hostPath:
            path: /etc
After saving the file, run:
kubectl apply -f weave-daemonset.yaml
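To watch the DaemonSet roll out (one weave-net pod should be created per node), these read-only checks can help:
kubectl --namespace=kube-system get daemonset weave-net
kubectl --namespace=kube-system get pods -o wide | grep weave-net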
Verify the network installation
Run the following command:
kubectl --namespace=kube-system get pod
You should get output like this:
NAME READY STATUS RESTARTS AGE
dummy-2088944543-xjj21 1/1 Running 0 55m
etcd-centos-master 1/1 Running 0 55m
kube-apiserver-centos-master 1/1 Running 0 55m
kube-controller-manager-centos-master 1/1 Running 0 55m
kube-discovery-1769846148-c45gd 1/1 Running 0 55m
kube-dns-2924299975-96xms 4/4 Running 0 55m
kube-proxy-33lsn 1/1 Running 0 55m
kube-proxy-jnz6q 1/1 Running 0 55m
kube-proxy-vfql2 1/1 Running 0 20m
kube-scheduler-centos-master 1/1 Running 0 55m
weave-net-k5tlz 2/2 Running 0 19m
weave-net-q3n89 2/2 Running 0 19m
weave-net-x57k7 2/2 Running 0 19m
If all of the weave-net pods are shown in the Running state, the network installation is complete.
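As an additional, optional check, you can confirm on each node that weave wrote a CNI configuration file; the exact file name may differ between weave releases, so treat this as a rough sketch:
ls /etc/cni/net.d/    # run on each node; a weave CNI config file should appear here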
The next post will cover how to install the K8s UI.