Kubernetes-Docker cluster deployment on CentOS 7

Hosts in the kubernetes-docker cluster:
docmaster : 10.117.130.178
docslave1 : 10.117.130.148
docslave2 : 10.117.130.147

Install and deploy Docker on docslave1 and docslave2
1. Make sure the machines can reach the Internet.
2. Add the yum repository required to install Docker:
[root@docslave1 yum.repos.d]# cat docker.repo 
[docker-main]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg

[docker-testing]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/testing/centos/$releasever/
enabled=0
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg

[docker-beta]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/beta/centos/7/
enabled=0
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg

[docker-nightly]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/nightly/centos/7/
enabled=0
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
[root@docslave1 yum.repos.d]# pwd
/etc/yum.repos.d
[root@docslave1 yum.repos.d]#
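
With the repo in place it can be confirmed and used. Note, though, that the service output later in this walkthrough shows the CentOS-packaged Docker 1.12.6 (dockerd-current), which yum pulls in automatically as a dependency of the kubernetes package, so an explicit install from this repo is optional (commands sketched, not from the original transcript):

yum repolist enabled | grep -i docker    # confirm docker-main is active
yum -y install docker-engine             # package name in the dockerproject.org repo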

3. Install and enable NTP on all nodes:
yum -y install ntp
systemctl start ntpd
systemctl enable ntpd
hwclock --systohc
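
Sync can be verified once ntpd has had a minute to pick a source (quick checks, not part of the original transcript):

ntpq -p       # a '*' in the first column marks the selected time source
timedatectl   # reports "NTP synchronized: yes" once in sync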
4. Edit the /etc/hosts file on all nodes:
[root@docslave1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

10.117.130.178 docmaster
10.117.130.147 docslave2
10.117.130.148 docslave1
[root@docslave1 ~]#

[root@docslave2 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

10.117.130.178 docmaster
10.117.130.147 docslave2
10.117.130.148 docslave1
[root@docslave2 ~]#

[root@docmaster ~]# cat /etc/hosts
10.117.130.178 docmaster
10.117.130.147 docslave2
10.117.130.148 docslave1
[root@docmaster ~]#
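
A quick loop confirms that each name resolves and the hosts can reach one another (a convenience check, assuming ICMP is not blocked):

for h in docmaster docslave1 docslave2; do ping -c 1 $h; done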

5. Disable the firewall on all nodes:
systemctl stop firewalld && systemctl disable firewalld

6. Installation on DOCMASTER:
yum -y install etcd kubernetes

Edit the etcd configuration file:
vi /etc/etcd/etcd.conf 
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd" 
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379" 
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379" 
[root@docmaster ~]# cat /etc/etcd/etcd.conf
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd" 
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379" 
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379" 
[root@docmaster ~]#
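
Note that ETCD_ADVERTISE_CLIENT_URLS stays at http://localhost:2379, so etcd advertises localhost to clients (visible in the "published {Name:default ClientURLs:[http://localhost:2379]}" line of the startup log below); the cluster still works because flanneld and the apiserver are pointed at 10.117.130.178:2379 explicitly. Advertising the real address would be tidier:

ETCD_ADVERTISE_CLIENT_URLS="http://10.117.130.178:2379"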

Edit the /etc/kubernetes/apiserver configuration file:
vi /etc/kubernetes/apiserver
[root@docmaster ~]# cat /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
KUBE_ETCD_SERVERS="--etcd-servers=http://10.117.130.178:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.117.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
# Add your own!
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet_port=10250"
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""
[root@docmaster ~]#
Several variables are assigned twice here; the file is sourced as shell environment assignments, so the later values after "# Add your own!" win, which matches the kube-apiserver command line in the status output below (--address=0.0.0.0, --etcd_servers=http://127.0.0.1:2379).
Start the services:
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done

[root@docmaster ~]# for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done

● etcd.service - Etcd Server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2017-04-12 15:39:33 CST; 81ms ago
Main PID: 2520 (etcd)
CGroup: /system.slice/etcd.service
└─2520 /usr/bin/etcd --name=default --data-dir=/var/lib/etcd/default.etcd --listen-client-urls=http://0.0.0.0:2379
Apr 12 15:39:32 docmaster etcd[2520]: starting server... [version: 3.1.0, cluster version: 3.1]
Apr 12 15:39:33 docmaster etcd[2520]: 5d0805db329c2a7e is starting a new election at term 14
Apr 12 15:39:33 docmaster etcd[2520]: 5d0805db329c2a7e became candidate at term 15
Apr 12 15:39:33 docmaster etcd[2520]: 5d0805db329c2a7e received MsgVoteResp from 5d0805db329c2a7e at term 15
Apr 12 15:39:33 docmaster etcd[2520]: 5d0805db329c2a7e became leader at term 15
Apr 12 15:39:33 docmaster etcd[2520]: raft.node: 5d0805db329c2a7e elected leader 5d0805db329c2a7e at term 15
Apr 12 15:39:33 docmaster etcd[2520]: ready to serve client requests
Apr 12 15:39:33 docmaster etcd[2520]: published {Name:default ClientURLs:[http://localhost:2379]} to cluster e44e4c3e04433807
Apr 12 15:39:33 docmaster etcd[2520]: serving insecure client requests on [::]:2379, this is strongly discouraged!
Apr 12 15:39:33 docmaster systemd[1]: Started Etcd Server.
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2017-04-12 15:39:33 CST; 86ms ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 2557 (kube-apiserver)
CGroup: /system.slice/kube-apiserver.service
└─2557 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd_servers=http://127.0.0.1:2379 --address=0.0.0.0 --port...
Apr 12 15:39:33 docmaster kube-apiserver[2557]: E0412 15:39:33.591364 2557 reflector.go:199] k8s.io/kubernetes/plugin/p...efused
Apr 12 15:39:33 docmaster kube-apiserver[2557]: E0412 15:39:33.692251 2557 reflector.go:199] pkg/controller/informers/f...efused
Apr 12 15:39:33 docmaster kube-apiserver[2557]: E0412 15:39:33.692311 2557 reflector.go:199] pkg/controller/informers/f...efused
Apr 12 15:39:33 docmaster kube-apiserver[2557]: [restful] 2017/04/12 15:39:33 log.go:30: [restful/swagger] listing is avai...erapi/
Apr 12 15:39:33 docmaster kube-apiserver[2557]: [restful] 2017/04/12 15:39:33 log.go:30: [restful/swagger] https://10.117....er-ui/
Apr 12 15:39:33 docmaster kube-apiserver[2557]: I0412 15:39:33.846164 2557 serve.go:104] Serving securely on 0.0.0.0:6443
Apr 12 15:39:33 docmaster systemd[1]: Started Kubernetes API Server.
Apr 12 15:39:33 docmaster kube-apiserver[2557]: I0412 15:39:33.846305 2557 serve.go:118] Serving insecurely on 0.0.0.0:8080
Apr 12 15:39:33 docmaster kube-apiserver[2557]: E0412 15:39:33.851207 2557 repair.go:159] the cluster IP 10.117.0.1 for...create
Apr 12 15:39:33 docmaster kube-apiserver[2557]: E0412 15:39:33.862246 2557 repair.go:159] the cluster IP 10.117.0.1 for...create
Hint: Some lines were ellipsized, use -l to show in full.
● kube-controller-manager.service - Kubernetes Controller Manager
Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2017-04-12 15:39:33 CST; 88ms ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 2591 (kube-controller)
CGroup: /system.slice/kube-controller-manager.service
└─2591 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://10.117.130.178:8080
Apr 12 15:39:33 docmaster systemd[1]: Started Kubernetes Controller Manager.
Apr 12 15:39:33 docmaster systemd[1]: Starting Kubernetes Controller Manager...
● kube-scheduler.service - Kubernetes Scheduler Plugin
Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2017-04-12 15:39:34 CST; 95ms ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 2624 (kube-scheduler)
CGroup: /system.slice/kube-scheduler.service
└─2624 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://10.117.130.178:8080
Apr 12 15:39:34 docmaster systemd[1]: Started Kubernetes Scheduler Plugin.
Apr 12 15:39:34 docmaster systemd[1]: Starting Kubernetes Scheduler Plugin...
[root@docmaster ~]#
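
With the four services up, the master can be probed directly (quick checks against the insecure ports configured above):

curl -s http://127.0.0.1:8080/healthz    # the apiserver answers: ok
curl -s http://127.0.0.1:2379/version    # etcd answers with its version JSON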

Configure the overlay network on the master node:
etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'
etcdctl ls /atomic.io/network/ --recursive
[root@docmaster ~]# etcdctl ls /atomic.io/network
/atomic.io/network/config
/atomic.io/network/subnets
[root@docmaster ~]# etcdctl get /atomic.io/network/config
{"Network":"172.17.0.0/16"}
[root@docmaster ~]#
At this point no nodes have registered yet, so the node list is empty:
[root@docmaster ~]# kubectl get nodes
NAME STATUS AGE
[root@docmaster ~]#
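
The JSON under /atomic.io/network/config is the pool flanneld carves per-node /24 leases from, and the key matches FLANNEL_ETCD_PREFIX in the slave configuration below. After the slaves start flanneld, their leases appear under the subnets key, roughly like:

etcdctl ls /atomic.io/network/subnets
/atomic.io/network/subnets/172.17.85.0-24
/atomic.io/network/subnets/172.17.22.0-24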

7. Install and deploy on the DOCSLAVE nodes:
yum -y install flannel kubernetes

Edit the /etc/sysconfig/flanneld configuration file:

[root@docslave1 ~]# cat /etc/sysconfig/flanneld
# Flanneld configuration options  


# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://10.117.130.178:2379"


# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"


# Any additional options that you want to pass
#FLANNEL_OPTIONS=""


FLANNEL_ETCD="http://10.117.130.178:2379"
FLANNEL_ETCD_KEY="/atomic.io/network" 

[root@docslave1 ~]# 


[root@docslave2 ~]# cat /etc/sysconfig/flanneld
# Flanneld configuration options  


# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://10.117.130.178:2379"


# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"


# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
FLANNEL_ETCD="http://10.117.130.178:2379" 
FLANNEL_ETCD_KEY="/atomic.io/network"
[root@docslave2 ~]# 
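
Note that the files above set both the newer variable names (FLANNEL_ETCD_ENDPOINTS, FLANNEL_ETCD_PREFIX) and the older ones (FLANNEL_ETCD, FLANNEL_ETCD_KEY); which pair is read depends on the flannel package's service script, so keeping both is a harmless belt-and-braces choice. Once flanneld starts it writes its lease out for the Docker unit to consume; a quick check (the path is flannel's default, values sketched from the leases seen later):

[root@docslave1 ~]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=172.17.0.0/16
FLANNEL_SUBNET=172.17.85.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false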
Edit the /etc/kubernetes/config configuration file:
[root@docslave1 ~]# cat /etc/kubernetes/config 
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"


# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"


# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"


# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://10.117.130.178:8080"

[root@docslave1 ~]# 


[root@docslave2 ~]# cat /etc/kubernetes/config 
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"


# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"


# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"


# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://10.117.130.178:8080"

[root@docslave2 ~]# 
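
The kubelet's node-specific flags live in /etc/kubernetes/kubelet, which this walkthrough does not show. Reconstructed from the flags visible in the running kubelet below (a sketch; the --pod-infra... flag is truncated in the status output, so that line is the CentOS package default rather than something confirmed here), the docslave1 file would look roughly like:

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=docslave1"
KUBELET_API_SERVER="--api_servers=http://docmaster:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""

On docslave2 only the hostname_override changes.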


Start the services:
for SERVICES in kube-proxy kubelet docker flanneld; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done

[root@docslave1 ~]# for SERVICES in kube-proxy kubelet docker flanneld; do

systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done

● kube-proxy.service - Kubernetes Kube-Proxy Server
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2017-04-12 15:53:52 CST; 264ms ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 3972 (kube-proxy)
CGroup: /system.slice/kube-proxy.service
└─3972 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://10.117.130.178:8080
Apr 12 15:53:52 docslave1 systemd[1]: Stopping Kubernetes Kube-Proxy Server...
Apr 12 15:53:52 docslave1 systemd[1]: Started Kubernetes Kube-Proxy Server.
Apr 12 15:53:52 docslave1 systemd[1]: Starting Kubernetes Kube-Proxy Server...
● kubelet.service - Kubernetes Kubelet Server
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2017-04-12 15:53:53 CST; 243ms ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 4002 (kubelet)
CGroup: /system.slice/kubelet.service
└─4002 /usr/bin/kubelet --logtostderr=true --v=0 --api_servers=http://docmaster:8080 --address=0.0.0.0 --port=10250 --hostname_override=docslave1 --allow-privileged=false --pod-infra...
Apr 12 15:53:53 docslave1 systemd[1]: Started Kubernetes Kubelet Server.
Apr 12 15:53:53 docslave1 systemd[1]: Starting Kubernetes Kubelet Server...
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/docker.service.d
└─flannel.conf
Active: active (running) since Wed 2017-04-12 15:53:56 CST; 271ms ago
Docs: http://docs.docker.com
Main PID: 4097 (dockerd-current)
CGroup: /system.slice/docker.service
├─4097 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-prox...
└─4101 /usr/bin/docker-containerd-current -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim --metrics-interval=0 --start-timeout 2m --state...
Apr 12 15:53:54 docslave1 dockerd-current[4097]: time="2017-04-12T15:53:54.860646404+08:00" level=info msg="libcontainerd: new containerd process, pid: 4101"
Apr 12 15:53:55 docslave1 dockerd-current[4097]: time="2017-04-12T15:53:55.879956750+08:00" level=info msg="[graphdriver] using prior storage driver \"overlay\""
Apr 12 15:53:55 docslave1 dockerd-current[4097]: time="2017-04-12T15:53:55.883284863+08:00" level=info msg="Graph migration to content-addressability took 0.00 seconds"
Apr 12 15:53:55 docslave1 dockerd-current[4097]: time="2017-04-12T15:53:55.884722319+08:00" level=info msg="Loading containers: start."
Apr 12 15:53:55 docslave1 dockerd-current[4097]: time="2017-04-12T15:53:55.896417082+08:00" level=info msg="Firewalld running: false"
Apr 12 15:53:56 docslave1 dockerd-current[4097]: time="2017-04-12T15:53:56.118755688+08:00" level=info msg="Loading containers: done."
Apr 12 15:53:56 docslave1 dockerd-current[4097]: time="2017-04-12T15:53:56.119009316+08:00" level=info msg="Daemon has completed initialization"
Apr 12 15:53:56 docslave1 dockerd-current[4097]: time="2017-04-12T15:53:56.119038582+08:00" level=info msg="Docker daemon" commit="96d83a5/1.12.6" graphdriver=overlay version=1.12.6
Apr 12 15:53:56 docslave1 dockerd-current[4097]: time="2017-04-12T15:53:56.151010296+08:00" level=info msg="API listen on /var/run/docker.sock"
Apr 12 15:53:56 docslave1 systemd[1]: Started Docker Application Container Engine.
● flanneld.service - Flanneld overlay address etcd agent
Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2017-04-12 15:53:57 CST; 267ms ago
Main PID: 4268 (flanneld)
CGroup: /system.slice/flanneld.service
└─4268 /usr/bin/flanneld -etcd-endpoints=http://10.117.130.178:2379 -etcd-prefix=/atomic.io/network
Apr 12 15:53:57 docslave1 systemd[1]: Starting Flanneld overlay address etcd agent...
Apr 12 15:53:57 docslave1 flanneld-start[4268]: I0412 15:53:57.714692 4268 main.go:132] Installing signal handlers
Apr 12 15:53:57 docslave1 flanneld-start[4268]: I0412 15:53:57.714943 4268 manager.go:136] Determining IP address of default interface
Apr 12 15:53:57 docslave1 flanneld-start[4268]: I0412 15:53:57.715706 4268 manager.go:149] Using interface with name ens192 and address 10.117.130.148
Apr 12 15:53:57 docslave1 flanneld-start[4268]: I0412 15:53:57.715733 4268 manager.go:166] Defaulting external address to interface address (10.117.130.148)
Apr 12 15:53:57 docslave1 flanneld-start[4268]: I0412 15:53:57.736938 4268 local_manager.go:134] Found lease (172.17.85.0/24) for current IP (10.117.130.148), reusing
Apr 12 15:53:57 docslave1 flanneld-start[4268]: I0412 15:53:57.751933 4268 manager.go:250] Lease acquired: 172.17.85.0/24
Apr 12 15:53:57 docslave1 flanneld-start[4268]: I0412 15:53:57.753092 4268 network.go:98] Watching for new subnet leases
Apr 12 15:53:57 docslave1 flanneld-start[4268]: I0412 15:53:57.784671 4268 network.go:191] Subnet added: 172.17.22.0/24
Apr 12 15:53:57 docslave1 systemd[1]: Started Flanneld overlay address etcd agent.
[root@docslave1 ~]#

[root@docslave2 ~]# for SERVICES in kube-proxy kubelet docker flanneld; do

systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done

● kube-proxy.service - Kubernetes Kube-Proxy Server
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2017-04-12 15:54:01 CST; 332ms ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 3811 (kube-proxy)
CGroup: /system.slice/kube-proxy.service
└─3811 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://10.117.130.178:8080
Apr 12 15:54:01 docslave2 systemd[1]: Started Kubernetes Kube-Proxy Server.
Apr 12 15:54:01 docslave2 systemd[1]: Starting Kubernetes Kube-Proxy Server...
● kubelet.service - Kubernetes Kubelet Server
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2017-04-12 15:54:02 CST; 180ms ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 3841 (kubelet)
CGroup: /system.slice/kubelet.service
└─3841 /usr/bin/kubelet --logtostderr=true --v=0 --api_servers=http://docmaster:8080 --address=0.0.0.0 --port=10250 --hostname_override=docslave2 --allow-privileged=false --pod-infra...
Apr 12 15:54:02 docslave2 systemd[1]: Started Kubernetes Kubelet Server.
Apr 12 15:54:02 docslave2 systemd[1]: Starting Kubernetes Kubelet Server...
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/docker.service.d
└─flannel.conf
Active: active (running) since Wed 2017-04-12 15:54:04 CST; 274ms ago
Docs: http://docs.docker.com
Main PID: 3956 (dockerd-current)
CGroup: /system.slice/docker.service
├─3956 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-prox...
└─3960 /usr/bin/docker-containerd-current -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim --metrics-interval=0 --start-timeout 2m --state...
Apr 12 15:54:03 docslave2 dockerd-current[3956]: time="2017-04-12T15:54:03.553837363+08:00" level=info msg="libcontainerd: new containerd process, pid: 3960"
Apr 12 15:54:04 docslave2 dockerd-current[3956]: time="2017-04-12T15:54:04.573290877+08:00" level=info msg="[graphdriver] using prior storage driver \"overlay\""
Apr 12 15:54:04 docslave2 dockerd-current[3956]: time="2017-04-12T15:54:04.576618671+08:00" level=info msg="Graph migration to content-addressability took 0.00 seconds"
Apr 12 15:54:04 docslave2 dockerd-current[3956]: time="2017-04-12T15:54:04.578133501+08:00" level=info msg="Loading containers: start."
Apr 12 15:54:04 docslave2 dockerd-current[3956]: time="2017-04-12T15:54:04.590627954+08:00" level=info msg="Firewalld running: false"
Apr 12 15:54:04 docslave2 dockerd-current[3956]: time="2017-04-12T15:54:04.814893478+08:00" level=info msg="Loading containers: done."
Apr 12 15:54:04 docslave2 dockerd-current[3956]: time="2017-04-12T15:54:04.814975859+08:00" level=info msg="Daemon has completed initialization"
Apr 12 15:54:04 docslave2 dockerd-current[3956]: time="2017-04-12T15:54:04.815002649+08:00" level=info msg="Docker daemon" commit="96d83a5/1.12.6" graphdriver=overlay version=1.12.6
Apr 12 15:54:04 docslave2 systemd[1]: Started Docker Application Container Engine.
Apr 12 15:54:04 docslave2 dockerd-current[3956]: time="2017-04-12T15:54:04.848224332+08:00" level=info msg="API listen on /var/run/docker.sock"
● flanneld.service - Flanneld overlay address etcd agent
Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2017-04-12 15:54:06 CST; 206ms ago
Main PID: 4109 (flanneld)
CGroup: /system.slice/flanneld.service
└─4109 /usr/bin/flanneld -etcd-endpoints=http://10.117.130.178:2379 -etcd-prefix=/atomic.io/network
Apr 12 15:54:06 docslave2 systemd[1]: Starting Flanneld overlay address etcd agent...
Apr 12 15:54:06 docslave2 flanneld-start[4109]: I0412 15:54:06.457895 4109 main.go:132] Installing signal handlers
Apr 12 15:54:06 docslave2 flanneld-start[4109]: I0412 15:54:06.458137 4109 manager.go:136] Determining IP address of default interface
Apr 12 15:54:06 docslave2 flanneld-start[4109]: I0412 15:54:06.458873 4109 manager.go:149] Using interface with name ens192 and address 10.117.130.147
Apr 12 15:54:06 docslave2 flanneld-start[4109]: I0412 15:54:06.458902 4109 manager.go:166] Defaulting external address to interface address (10.117.130.147)
Apr 12 15:54:06 docslave2 flanneld-start[4109]: I0412 15:54:06.472424 4109 local_manager.go:134] Found lease (172.17.22.0/24) for current IP (10.117.130.147), reusing
Apr 12 15:54:06 docslave2 flanneld-start[4109]: I0412 15:54:06.487790 4109 manager.go:250] Lease acquired: 172.17.22.0/24
Apr 12 15:54:06 docslave2 flanneld-start[4109]: I0412 15:54:06.493930 4109 network.go:98] Watching for new subnet leases
Apr 12 15:54:06 docslave2 flanneld-start[4109]: I0412 15:54:06.508579 4109 network.go:191] Subnet added: 172.17.85.0/24
Apr 12 15:54:06 docslave2 systemd[1]: Started Flanneld overlay address etcd agent.
[root@docslave2 ~]#

8. List all cluster nodes from the master:
[root@docmaster ~]# kubectl get nodes
NAME STATUS AGE
docslave1 Ready 47m
docslave2 Ready 53m
[root@docmaster ~]#

9. Check the network on each slave node:
[root@docslave1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host 
valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:88:d2:0e brd ff:ff:ff:ff:ff:ff
inet 10.117.130.148/24 brd 10.117.130.255 scope global ens192
valid_lft forever preferred_lft forever
inet6 fe80::bb07:f6a7:c5ce:dae3/64 scope link 
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
link/ether 02:42:07:ae:a0:03 brd ff:ff:ff:ff:ff:ff
inet 172.17.85.1/24 scope global docker0
valid_lft forever preferred_lft forever
6: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
link/none 
inet 172.17.85.0/16 scope global flannel0
valid_lft forever preferred_lft forever
[root@docslave1 ~]#

[root@docslave2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host 
valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:1b:95:45 brd ff:ff:ff:ff:ff:ff
inet 10.117.130.147/24 brd 10.117.130.255 scope global ens192
valid_lft forever preferred_lft forever
inet6 fe80::5132:f61a:634a:d83a/64 scope link 
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
link/ether 02:42:9b:2b:27:b2 brd ff:ff:ff:ff:ff:ff
inet 172.17.22.1/24 scope global docker0
valid_lft forever preferred_lft forever
5: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
link/none 
inet 172.17.22.0/16 scope global flannel0
valid_lft forever preferred_lft forever
[root@docslave2 ~]#
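
Each node's docker0 sits inside that node's flannel lease (172.17.85.1/24 on docslave1, 172.17.22.1/24 on docslave2), while flannel0 owns the whole 172.17.0.0/16, so container traffic bound for the other node is routed into the tunnel. The routing table makes this visible (sketched for docslave1):

ip route
172.17.0.0/16 dev flannel0
172.17.85.0/24 dev docker0 proto kernel scope link src 172.17.85.1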

10. Test that etcd on the master is reachable:
[root@docslave1 ~]# curl -s -L http://10.117.130.178:2379/version
{"etcdserver":"3.1.0","etcdcluster":"3.1.0"}
[root@docslave1 ~]#
[root@docslave2 ~]# curl -s -L http://10.117.130.178:2379/version
{"etcdserver":"3.1.0","etcdcluster":"3.1.0"}
[root@docslave2 ~]#
11. Check the cluster proxy:
[root@docslave1 ~]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Kube-Proxy Server
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2017-04-12 15:53:52 CST; 5min ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 3972 (kube-proxy)
Memory: 19.1M
CGroup: /system.slice/kube-proxy.service
└─3972 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://10.117.130.178:8080
Apr 12 15:53:52 docslave1 systemd[1]: Stopping Kubernetes Kube-Proxy Server...
Apr 12 15:53:52 docslave1 systemd[1]: Started Kubernetes Kube-Proxy Server.
Apr 12 15:53:52 docslave1 systemd[1]: Starting Kubernetes Kube-Proxy Server...
Apr 12 15:53:53 docslave1 kube-proxy[3972]: I0412 15:53:53.616650 3972 server.go:215] Using iptables Proxier.
Apr 12 15:53:53 docslave1 kube-proxy[3972]: W0412 15:53:53.625871 3972 proxier.go:253] clusterCIDR not specified, unable to distinguish between internal and external traffic
Apr 12 15:53:53 docslave1 kube-proxy[3972]: I0412 15:53:53.625893 3972 server.go:227] Tearing down userspace rules.
Apr 12 15:53:53 docslave1 kube-proxy[3972]: I0412 15:53:53.657194 3972 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
Apr 12 15:53:53 docslave1 kube-proxy[3972]: I0412 15:53:53.658132 3972 conntrack.go:66] Setting conntrack hashsize to 32768
Apr 12 15:53:53 docslave1 kube-proxy[3972]: I0412 15:53:53.658409 3972 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
Apr 12 15:53:53 docslave1 kube-proxy[3972]: I0412 15:53:53.658431 3972 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
[root@docslave1 ~]# 
[root@docslave2 ~]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Kube-Proxy Server
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2017-04-12 15:54:01 CST; 5min ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 3811 (kube-proxy)
Memory: 20.7M
CGroup: /system.slice/kube-proxy.service
└─3811 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://10.117.130.178:8080
Apr 12 15:54:01 docslave2 systemd[1]: Started Kubernetes Kube-Proxy Server.
Apr 12 15:54:01 docslave2 systemd[1]: Starting Kubernetes Kube-Proxy Server...
Apr 12 15:54:02 docslave2 kube-proxy[3811]: I0412 15:54:02.109309 3811 server.go:215] Using iptables Proxier.
Apr 12 15:54:02 docslave2 kube-proxy[3811]: W0412 15:54:02.113181 3811 proxier.go:253] clusterCIDR not specified, unable to distinguish between internal and external traffic
Apr 12 15:54:02 docslave2 kube-proxy[3811]: I0412 15:54:02.113205 3811 server.go:227] Tearing down userspace rules.
Apr 12 15:54:02 docslave2 kube-proxy[3811]: I0412 15:54:02.182019 3811 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
Apr 12 15:54:02 docslave2 kube-proxy[3811]: I0412 15:54:02.182415 3811 conntrack.go:66] Setting conntrack hashsize to 32768
Apr 12 15:54:02 docslave2 kube-proxy[3811]: I0412 15:54:02.182789 3811 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
Apr 12 15:54:02 docslave2 kube-proxy[3811]: I0412 15:54:02.182811 3811 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
[root@docslave2 ~]#

Other cluster checks

[root@docslave1 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"c55cf2b7d8bfeb947f77453415d775d7f71c89c2", GitTreeState:"clean", BuildDate:"2017-03-07T00:03:00Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@docslave1 ~]#
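
The refused connection is expected: the server half of kubectl version talks to localhost:8080, and nothing listens there on the slaves. Pointing kubectl at the master with its --server/-s flag works from any node:

kubectl -s http://docmaster:8080 version
kubectl -s http://docmaster:8080 get nodes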

[root@docmaster ~]# kubectl get componentstatuses
NAME STATUS MESSAGE ERROR
scheduler Healthy ok 
controller-manager Healthy ok 
etcd-0 Healthy {"health": "true"} 
[root@docmaster ~]#

[root@docmaster ~]# kubectl run web --image=python3 --replicas=5 "python3 -m http.server 8080"
deployment "web" created
[root@docmaster ~]# 
[root@docmaster ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
web-1125031748-1s1kj 0/1 ContainerCreating 0 30s
web-1125031748-4m2r1 0/1 ContainerCreating 0 30s
web-1125031748-6h0f4 0/1 ContainerCreating 0 30s
web-1125031748-8sr7q 0/1 ContainerCreating 0 30s
web-1125031748-p1dj5 0/1 ContainerCreating 0 30s
[root@docmaster ~]# 
[root@docmaster ~]# kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
web 5 5 5 0 59s
[root@docmaster ~]# 
[root@docmaster ~]# kubectl describe pods web-1125031748-1s1kj
Name:         web-1125031748-1s1kj
Namespace:    default
Node:         docslave1/10.117.130.148
Start Time:   Wed, 12 Apr 2017 16:05:54 +0800
Labels:       pod-template-hash=1125031748
              run=web
Status:       Pending
IP:
Controllers:  ReplicaSet/web-1125031748
Containers:
  web:
    Container ID:
    Image:          python3
    Image ID:
    Port:
    Args:
      python3 -m http.server 8080
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Volume Mounts:  <none>
    Environment Variables:  <none>
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
No volumes.
QoS Class:    BestEffort
Tolerations:  <none>
Events:
  FirstSeen  LastSeen  Count  From                  SubObjectPath  Type    Reason     Message
  ---------  --------  -----  ----                  -------------  ----    ------     -------
  1m         1m        1      {default-scheduler }                 Normal  Scheduled  Successfully assigned web-1125031748-1s1kj to docslave1
[root@docmaster ~]#
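
Once the pods leave ContainerCreating, reaching the web servers needs a service in front of the deployment; a sketch of the usual follow-up (not part of the original run):

kubectl expose deployment web --port=8080
kubectl get svc web
# then curl http://<cluster-ip>:8080/ from any node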

