Setting up k8s v1.5.2 with kubeadm + calico + glusterfs

##Environment

docker1 centos7 192.168.75.200

docker2 centos7 192.168.75.201

docker3 centos7 192.168.75.202

Physical network: 192.168.75.1/24

Docker version 1.10.3, build 3999ccb-unsupported (installation steps omitted)

##1. Install kubernetes

For installation, see the official guide: https://kubernetes.io/docs/getting-started-guides/kubeadm/

####Install kubelet, kubectl and kubeadm

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=http://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64

enabled=1

gpgcheck=0

repo_gpgcheck=0

EOF

# setenforce 0

####Set up a proxy, otherwise Google is unreachable from inside China; 45.76.203.146:3129 is a squid server I run outside the firewall

# ssh -N -L 3129:127.0.0.1:3129 root@45.76.203.146

# export http_proxy=http://127.0.0.1:3129

# yum install -y docker kubelet kubeadm kubectl kubernetes-cni

# unset http_proxy

# systemctl enable docker && systemctl start docker

# systemctl enable kubelet && systemctl start kubelet

####Initialize the master

# kubeadm init

[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[init] Using Kubernetes version: v1.5.2
[tokens] Generated token: "eb2b99.e5d156dd860ef80d"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
It hangs at this point; terminate with Ctrl+C.

####What to do about the firewall

Because gcr.io is blocked, and pulling the other image from quay.io is also very slow, calico never came up properly.

Workaround:

On a server outside the firewall I run an haproxy TCP proxy to gcr.io port 443, forward that port to local port 443 over ssh, and finally edit the hosts file to point gcr.io at 127.0.0.1 (sketched below).
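A minimal sketch of that setup, assuming haproxy is already installed on the overseas server; the section name and file paths are illustrative:

#On the overseas server, /etc/haproxy/haproxy.cfg (fragment; "gcr" is an arbitrary name)
listen gcr
    bind 127.0.0.1:443
    mode tcp
    server gcr gcr.io:443

#On the k8s host, forward local port 443 to the overseas haproxy over ssh
# ssh -N -L 443:127.0.0.1:443 root@45.76.203.146

#Point gcr.io at the local tunnel
# echo "127.0.0.1 gcr.io" >> /etc/hosts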

The other image, quay.io/calico/node, I imported by hand:

Export the image with docker save imageid > imageid.img

Copy it to the k8s host and load it with docker load < imageid.img

Then retag it with docker tag imageid REPOSITORY:TAG
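Put together, a sketch of the export/import sequence; the calico/node tag below is illustrative, check which tag your calico.yaml actually references:

#On a machine that can reach quay.io
# docker pull quay.io/calico/node:v1.0.0
# docker save quay.io/calico/node:v1.0.0 > calico-node.img

#On each k8s host
# docker load < calico-node.img
#Only needed if the image was saved by image id instead of by repository:tag
# docker tag <imageid> quay.io/calico/node:v1.0.0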

####Initialize again

# kubeadm init --pod-network-cidr=10.1.0.0/16

[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[init] Using Kubernetes version: v1.5.2
[tokens] Generated token: "f2ebdb.c0223c3c42185110"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 22.639406 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 1.506328 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 3.005029 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=5a674b.d33b282a7825fd65 192.168.75.201

####Join the node to the master

# kubeadm join --token=5a674b.d33b282a7825fd65 192.168.75.201

[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[tokens] Validating provided token
[discovery] Created cluster info discovery client, requesting info from "http://192.168.75.201:9898/cluster-info/v1/?token-id=5a674b"
[discovery] Cluster info object received, verifying signature using given token
[discovery] Cluster info signature and contents are valid, will use API endpoints [https://192.168.75.201:6443]
[bootstrap] Trying to connect to endpoint https://192.168.75.201:6443
[bootstrap] Detected server version: v1.5.2
[bootstrap] Successfully established connection with endpoint "https://192.168.75.201:6443"
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server:
Issuer: CN=kubernetes | Subject: CN=system:node:docker1 | CA: false
Not before: 2017-01-22 05:00:00 +0000 UTC Not After: 2018-01-22 05:00:00 +0000 UTC
[csr] Generating kubelet configuration
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

####Check the nodes from the master

# kubectl get nodes

NAME      STATUS         AGE
docker1   Ready          46s
docker2   Ready,master   11m

####Install calico and its add-ons on k8s; download the manifest and edit it

# cd /etc/kubernetes && wget http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/kubeadm/calico.yaml

Edit calico.yaml: change the etcd address and calico's CIDR.

Change etcd_endpoints to point to the etcd you set up (see the sketch below).

......
cidr: 10.1.0.0/16
......
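For the etcd address, the edit looks roughly like this (a sketch; in the v2.0 hosted manifest etcd_endpoints lives in the calico-config ConfigMap, and <etcd-host>:<port> is a placeholder for your own etcd):

......
  etcd_endpoints: "http://<etcd-host>:<port>"
......

The cidr must match the --pod-network-cidr passed to kubeadm init, 10.1.0.0/16 here.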

# kubectl apply -f calico.yaml

####Install the dashboard

# cd /etc/kubernetes && wget https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

# kubectl apply -f kubernetes-dashboard.yaml

deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created

####Check the pods in the kube-system namespace

# kubectl get po -o wide --namespace=kube-system

NAME                                       READY     STATUS    RESTARTS   AGE       IP               NODE
calico-etcd-ncsts                          1/1       Running   0          1h        192.168.75.201   docker2
calico-node-qlc5c                          2/2       Running   0          1h        192.168.75.200   docker1
calico-node-xvmvf                          2/2       Running   0          1h        192.168.75.201   docker2
calico-policy-controller-807063459-2jh90   1/1       Running   0          1h        192.168.75.200   docker1
dummy-2088944543-xhc51                     1/1       Running   0          1h        192.168.75.201   docker2
etcd-docker2                               1/1       Running   0          1h        192.168.75.201   docker2
kube-apiserver-docker2                     1/1       Running   0          1h        192.168.75.201   docker2
kube-controller-manager-docker2            1/1       Running   0          1h        192.168.75.201   docker2
kube-discovery-1769846148-f66cz            1/1       Running   0          1h        192.168.75.201   docker2
kube-dns-2924299975-k7bbj                  4/4       Running   65         1h        10.1.72.134      docker1
kube-proxy-9bdlh                           1/1       Running   0          1h        192.168.75.200   docker1
kube-proxy-c02sl                           1/1       Running   0          1h        192.168.75.201   docker2
kube-scheduler-docker2                     1/1       Running   0          1h        192.168.75.201   docker2
kubernetes-dashboard-3203831700-g2v0c      1/1       Running   17         1h        10.1.72.135      docker1

####Access the dashboard

By default, accessing the dashboard through the master requires authentication.

It can be reached through kubectl proxy instead:

# kubectl proxy --accept-hosts='192.168.75.201' --address='192.168.75.201'

Starting to serve on 192.168.75.201:8001

Open http://192.168.75.201:8001/ui/

##2. Install glusterfs

There aren't that many machines in this environment, so the hosts are reused.

Install glusterfs on the 2 machines:

# yum install centos-release-gluster38 -y

# yum install glusterfs{,-server,-fuse,-geo-replication,-libs,-api,-cli,-client-xlators} heketi -y

# systemctl start glusterd && systemctl enable glusterd

On docker1, run gluster and form a cluster with docker2:

# gluster

gluster> peer probe 192.168.75.201
peer probe: success. 
gluster> peer status
Number of Peers: 1

Hostname: 192.168.75.201
Uuid: 5a50cb29-feaa-4935-8b2d-70e39a0557ba
State: Peer in Cluster (Connected)

####Create a volume (for testing; deleted afterwards)

gluster> volume create vol1 replica 2 192.168.75.200:/media/gfs 192.168.75.201:/media/gfs force
volume create: vol1: success: please start the volume to access data

####Start the volume (for testing; deleted afterwards)

gluster> volume start vol1
volume start: vol1: success
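To verify the test volume works before deleting it, it can be mounted somewhere temporary (for example /mnt) and then cleaned up:

# mount -t glusterfs 192.168.75.200:/vol1 /mnt
# df -h /mnt
# umount /mnt
# gluster volume stop vol1
# gluster volume delete vol1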

##3. Configure heketi

# vi /etc/heketi/heketi.json

......
#Change the port to avoid conflicts
  "port": "1234",
......
#Enable authentication
  "use_auth": true,
......
#Change the admin user's key to adminkey
      "key": "adminkey"
......
#Switch the executor to ssh and configure the required ssh key; passwordless ssh login to every machine in the cluster must work (use ssh-copy-id to copy the pub key to each glusterfs server)
    "executor": "ssh",
    "sshexec": {
      "keyfile": "/root/.ssh/id_rsa",
      "user": "root"
    },

    "_db_comment": "Database file name",
    "brick_min_size_gb" : 1,
    "db": "/var/lib/heketi/heketi.db"
  }

####Start heketi

# nohup heketi -config=/etc/heketi/heketi.json &
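A quick sanity check that heketi is listening and that the admin key is accepted (heketi exposes a /hello endpoint):

# curl http://192.168.75.200:1234/hello
# heketi-cli -server=http://192.168.75.200:1234 -user=admin -secret=adminkey cluster list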

####Configure heketi

Comment out requiretty in /etc/sudoers on every glusterfs host, otherwise adding the second node keeps failing. Only after raising the log level did the logs show sudo complaining about requiretty.

#Defaults    requiretty

####Create a cluster

# heketi-cli -user=admin -server=http://192.168.75.200:1234 -secret=adminkey -json=true cluster create

{"id":"d102a74079dd79aceb3c70d6a7e8b7c4","nodes":[],"volumes":[]}

####Add the 3 glusterfs hosts to the cluster as nodes

# heketi-cli -server="http://192.168.75.200:1234" -user="admin" -secret="adminkey" -json=true node add -cluster="d102a74079dd79aceb3c70d6a7e8b7c4" -management-host-name=192.168.75.200 -storage-host-name=192.168.75.200 -zone=1

{"zone":1,"hostnames":{"manage":["192.168.75.200"],"storage":["192.168.75.200"]},"cluster":"d102a74079dd79aceb3c70d6a7e8b7c4","id":"c3638f57b5c5302c6f7cd5136c8fdc5e","devices":[]}

# heketi-cli -server="http://192.168.75.200:1234" -user="admin" -secret="adminkey" -json=true node add -cluster="d102a74079dd79aceb3c70d6a7e8b7c4" -management-host-name=192.168.75.201 -storage-host-name=192.168.75.201 -zone=1

{"zone":1,"hostnames":{"manage":["192.168.75.201"],"storage":["192.168.75.201"]},"cluster":"d102a74079dd79aceb3c70d6a7e8b7c4","id":"0245885cb56c482828413002c7ee994c","devices":[]}

# heketi-cli -server="http://192.168.75.200:1234" -user="admin" -secret="adminkey" -json=true node add -cluster="d102a74079dd79aceb3c70d6a7e8b7c4" -management-host-name=192.168.75.202 -storage-host-name=192.168.75.202 -zone=1

{"zone":1,"hostnames":{"manage":["192.168.75.202"],"storage":["192.168.75.202"]},"cluster":"d102a74079dd79aceb3c70d6a7e8b7c4","id":"4c71d1213937ba01058b6cd7c9d84954","devices":[]}

####Add devices; an extra disk sdb was added to each of the 3 VMs

# heketi-cli -server="http://192.168.75.200:1234" -user="admin" -secret="adminkey" -json=true device add -name="/dev/sdb" -node="c3638f57b5c5302c6f7cd5136c8fdc5e"

Device added successfully

# heketi-cli -server="http://192.168.75.200:1234" -user="admin" -secret="adminkey" -json=true device add -name="/dev/sdb" -node="0245885cb56c482828413002c7ee994c"

Device added successfully

# heketi-cli -server="http://192.168.75.200:1234" -user="admin" -secret="adminkey" -json=true device add -name="/dev/sdb" -node="4c71d1213937ba01058b6cd7c9d84954"

Device added successfully
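To confirm the nodes and devices were registered, dump the topology:

# heketi-cli -server="http://192.168.75.200:1234" -user="admin" -secret="adminkey" topology info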

####Create a volume (this step is not required; the PVC can create one automatically)

If the volume you create is small, it may fail with No Space; the logs show this is caused by the min brick limit. To work around it, add "brick_min_size_gb" : 1 to heketi.json (1 means 1G):

......
    "brick_min_size_gb" : 1,
    "db": "/var/lib/heketi/heketi.db"
......

size must be larger than brick_min_size_gb (setting it to 1 still reports the min brick limit error), and replica must be greater than 1.

# heketi-cli -server="http://192.168.75.200:1234" -user="admin" -secret="adminkey" -json=true volume create -size=3 -replica=2

{"size":3,"name":"vol_dfe295ac4c128e5e8a8cf7c7ce068d97","durability":{"type":"replicate","replicate":{"replica":2},"disperse":{"data":4,"redundancy":2}},"snapshot":{"enable":false,"factor":1},"id":"dfe295ac4c128e5e8a8cf7c7ce068d97","cluster":"d102a74079dd79aceb3c70d6a7e8b7c4","mount":{"glusterfs":{"device":"192.168.75.200:vol_dfe295ac4c128e5e8a8cf7c7ce068d97","options":{"backupvolfile-servers":"192.168.75.201"}}},"bricks":[{"id":"4479d5b212a7ca18caa0277e3f339f90","path":"/var/lib/heketi/mounts/vg_407cbaba20295ac218f0a44adc8b7e6f/brick_4479d5b212a7ca18caa0277e3f339f90/brick","device":"407cbaba20295ac218f0a44adc8b7e6f","node":"c3638f57b5c5302c6f7cd5136c8fdc5e","size":1572864},{"id":"c9489325b3e67bdb84250417c403f361","path":"/var/lib/heketi/mounts/vg_27f69c1a59a19e6d73b5477fdd121303/brick_c9489325b3e67bdb84250417c403f361/brick","device":"27f69c1a59a19e6d73b5477fdd121303","node":"0245885cb56c482828413002c7ee994c","size":1572864},{"id":"e3106832525b143a3913951ea7f24500","path":"/var/lib/heketi/mounts/vg_407cbaba20295ac218f0a44adc8b7e6f/brick_e3106832525b143a3913951ea7f24500/brick","device":"407cbaba20295ac218f0a44adc8b7e6f","node":"c3638f57b5c5302c6f7cd5136c8fdc5e","size":1572864},{"id":"fc3ede0060546f7eab48307efff05985","path":"/var/lib/heketi/mounts/vg_27f69c1a59a19e6d73b5477fdd121303/brick_fc3ede0060546f7eab48307efff05985/brick","device":"27f69c1a59a19e6d73b5477fdd121303","node":"0245885cb56c482828413002c7ee994c","size":1572864}]}

##4. Configure kubernetes to use glusterfs

See https://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims

However, it has no glusterfs PV example; after a long search I only found an example for an older version.

####Create a StorageClass

# vi /etc/kubernetes/storageclass-glusterfs.yaml

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gfs1
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.75.200:1234"
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "adminkey"
#secretNamespace and secretName are not set here; see the official docs for usage (a sketch follows below)

####Create the glusterfs StorageClass named gfs1 from this file

# kubectl apply -f /etc/kubernetes/storageclass-glusterfs.yaml
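Instead of putting the admin key into the StorageClass as restuserkey, the glusterfs provisioner can also read it from a Secret via secretNamespace/secretName. A sketch, assuming the Secret name heketi-admin-secret and the class name gfs1-secret (both illustrative); the Secret type must be kubernetes.io/glusterfs and the key must be stored under key:

apiVersion: v1
kind: Secret
metadata:
  name: heketi-admin-secret
  namespace: default
type: kubernetes.io/glusterfs
data:
  key: YWRtaW5rZXk=    #base64 of "adminkey"
---
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gfs1-secret
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.75.200:1234"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-admin-secret"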

####Create a persistent volume or persistent volume claim

See the official docs:

https://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes-1

https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#creating-a-persistentvolumeclaim

Create a PV. There is no PV sample, and my first attempts failed with an error about no volume type being defined; the yaml pieced together from the old docs surprisingly worked (pure luck). We don't use the PV in the end, though; we stick with a PVC.

# vi /etc/kubernetes/pv-pv0001.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
  labels:
    type: glusterfs
    pv: pv0001
  annotations:
    volume.beta.kubernetes.io/storage-class: "fortest"
spec:
  capacity:
    storage: 2G
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "abc123"

# kubectl create -f /etc/kubernetes/pv-pv0001.yaml

persistentvolume "pv0001" created

#####Create a PVC with access mode ReadWriteMany, using the gfs1 storage-class created above and requesting 2G of storage

# vi /etc/kubernetes/pvc-fortest.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: default-pvc1
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-class: "gfs1"
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2G
#The three lines below are useless here; with them the PVC stays in Pending, removing them fixes it
#  selector:
#    matchLabels:
#      pvc: "default_pvc1"

# kubectl create -f /etc/kubernetes/pvc-fortest.yaml

persistentvolumeclaim "default-pvc1" created

####Check that the PVC status is Bound

# kubectl get pvc

NAME           STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
default-pvc1   Bound     pvc-a06d5668-e201-11e6-a4fd-080027520bff   2Gi        RWX           10m
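The dynamically provisioned PV backing the claim can be inspected as well:

# kubectl get pv
# kubectl describe pv pvc-a06d5668-e201-11e6-a4fd-080027520bff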

####Create a pod that uses the default-pvc1 PVC

# vi /etc/kubernetes/pod-test1.yaml

kind: Pod
apiVersion: v1
metadata:
  name: test1
  namespace: default
  labels:
    pvc: fortest
    env: test
    app: tomcat
spec:
  containers:
  - name: test-tomcat
    image: docker.io/tomcat
    volumeMounts:
    - mountPath: "/tmp"
      name: vl-test-tomcat
  volumes:
    - name: vl-test-tomcat
      persistentVolumeClaim:
        claimName: default-pvc1

# kubectl create -f /etc/kubernetes/pod-test1.yaml

pod "test1" created

####Check the pods; test1 is running

# kubectl get po -o wide

NAME      READY     STATUS    RESTARTS   AGE       IP            NODE
test1     1/1       Running   0          37s       10.1.72.155   docker1

####Exec into the pod and confirm the volume is mounted at /tmp

# kubectl exec -it -p test1 /bin/bash

W0124 15:13:46.183022   31589 cmd.go:325] -p POD_NAME is DEPRECATED and will be removed in a future version. Use exec POD_NAME instead.
root@test1:/usr/local/tomcat# df -h
Filesystem                                                                                        Size  Used Avail Use% Mounted on
/dev/mapper/docker-253:0-293364-ccad18bbad5ecb98b6fd9471dff9db415c2d7886e058fbe77ca3965079d0fdf6   10G  396M  9.7G   4% /
tmpfs                                                                                             371M     0  371M   0% /dev
tmpfs                                                                                             371M     0  371M   0% /sys/fs/cgroup
192.168.75.200:vol_39ac5a28aca8928778c8715216fd247c                                               2.0G   66M  2.0G   4% /tmp
/dev/mapper/centos-root                                                                            13G  6.0G  7.0G  47% /etc/hosts
shm                                                                                                64M     0   64M   0% /dev/shm
tmpfs                                                                                             371M   12K  371M   1% /run/secrets/kubernetes.io/serviceaccount

##5. Expose the application as a service

Expose the tomcat in pod test1 to the outside world. Cluster DNS resolves a service's cluster IP under the name <svc-name>.<namespace>.svc.cluster.local. The DNS server is only reachable from the nodes, the master and containers, not from outside, because DNS itself is also exposed as a cluster IP, here 10.96.0.10.
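The cluster DNS address can be confirmed from the kube-dns service that kubeadm deploys in kube-system:

# kubectl get svc kube-dns --namespace=kube-system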

###Using NodePort

Create the service config file:

# cat svc_test1.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    provider: test1
  name: svc-test1
  namespace: default
spec:
  selector:
    env: test
    app: tomcat
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
    nodePort: 30080
  type: NodePort

Create the service:

# kubectl apply -f svc_test1.yaml

service "svc-test1" created

Access port 30080 on any node; the tomcat welcome page comes up:

http://192.168.75.200:30080/

http://192.168.75.201:30080/

DNS resolution:

# dig svc-test1.default.svc.cluster.local @10.96.0.10

; <<>> DiG 9.9.4-RedHat-9.9.4-38.el7_3.1 <<>> svc-test1.default.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24787
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;svc-test1.default.svc.cluster.local. IN        A

;; ANSWER SECTION:
svc-test1.default.svc.cluster.local. 30 IN A    10.96.59.39

;; Query time: 4 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Tue Feb 07 16:23:44 CST 2017
;; MSG SIZE  rcvd: 69

###Using ClusterIP

# cat svc_ci_test1.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    provider: test1
  name: svc-ci-test1
  namespace: default
spec:
  selector:
    env: test
    app: tomcat
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  type: ClusterIP

Access it through the cluster IP; 10.106.204.126 is the automatically assigned cluster IP:

# curl http://10.106.204.126:8080/

# dig svc-ci-test1.default.svc.cluster.local @10.96.0.10

; <<>> DiG 9.9.4-RedHat-9.9.4-38.el7_3.1 <<>> svc-ci-test1.default.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 53338
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;svc-ci-test1.default.svc.cluster.local.        IN A

;; ANSWER SECTION:
svc-ci-test1.default.svc.cluster.local. 30 IN A 10.106.204.126

;; Query time: 5 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Tue Feb 07 16:24:09 CST 2017
;; MSG SIZE  rcvd: 72

###Using ExternalName

You can point the service at an external domain name; lookups of the service name return a CNAME to it:

# cat svc_en_test1.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    provider: test1
  name: svc-en-test1
  namespace: default
spec:
  selector:
    env: test
    app: tomcat
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  externalName: www.baidu.com
  type: ExternalName

# dig svc-en-test1.default.svc.cluster.local @10.96.0.10

; <<>> DiG 9.9.4-RedHat-9.9.4-38.el7_3.1 <<>> svc-en-test1.default.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 10681
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;svc-en-test1.default.svc.cluster.local.        IN A

;; ANSWER SECTION:
svc-en-test1.default.svc.cluster.local. 30 IN CNAME www.baidu.com.
www.baidu.com.          30      IN      CNAME   www.a.shifen.com.
www.a.shifen.com.       30      IN      A       61.135.169.121
www.a.shifen.com.       30      IN      A       61.135.169.125

;; Query time: 22 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Tue Feb 07 17:10:20 CST 2017
;; MSG SIZE  rcvd: 142

##ReplicationController

Official docs on container probes:

https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes

RC operations docs:

https://kubernetes.io/docs/user-guide/replication-controller/operations/#sample-file

RC resizing docs:

https://kubernetes.io/docs/user-guide/resizing-a-replication-controller/

Rolling update docs:

https://kubernetes.io/docs/user-guide/rolling-updates/

Using a ReplicationController to manage a multi-replica application makes horizontal scale-up and scale-down easy.

RC config file:

# cat rc-test.yaml

kind: ReplicationController
apiVersion: v1
metadata:
  name: rctest1
  labels:
    env: default
    app: rctest1tomcat
  namespace: default
spec:
  replicas: 2
  selector:
    rcname: rctest1
  template:
    metadata:
      labels:
        rcname: rctest1
    spec:
      volumes:
      - name: vl-rc-rctest1-tomcat
        persistentVolumeClaim:
          claimName: default-pvc1
      containers:
      - name: tomcat
        image: tomcat
        volumeMounts:
        - mountPath: "/tmp"
          name: vl-rc-rctest1-tomcat
        ports:
        - containerPort: 8080
          protocol: TCP
        imagePullPolicy: IfNotPresent
        readinessProbe:
          initialDelaySeconds: 5
          httpGet:
            path: /
            port: 8080
            httpHeaders:
              - name: X-my-header
                value: xxxx
          timeoutSeconds: 3
      restartPolicy: Always

####Create the RC

# kubectl apply -f rc-test.yaml

replicationcontroller "rctest1" created

# kubectl get rc

NAME      DESIRED   CURRENT   READY     AGE
rctest1   2         2         2         6m

# kubectl describe rc rctest1

Name:           rctest1
Namespace:      default
Image(s):       tomcat
Selector:       rcname=rctest1
Labels:         app=rctest1tomcat
                env=default
Replicas:       2 current / 2 desired
Pods Status:    2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Volumes:
  vl-rc-rctest1-tomcat:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  default-pvc1
    ReadOnly:   false
Events:
  FirstSeen     LastSeen        Count   From                            SubObjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                            -------------   --------        ------                  -------
  7m            7m              1       {replication-controller }                       Normal          SuccessfulCreate        Created pod: rctest1-cqkbw
  7m            7m              1       {replication-controller }                       Normal          SuccessfulCreate        Created pod: rctest1-x3kd9

####resizing rc

For automatic scale-down and scale-up you can use the Horizontal Pod Autoscaler; see https://kubernetes.io/docs/user-guide/horizontal-pod-autoscaling/walkthrough/ (a sketch follows below).
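A minimal sketch with kubectl autoscale (the thresholds are illustrative, and Heapster must be running in the cluster for CPU metrics):

# kubectl autoscale rc rctest1 --min=2 --max=5 --cpu-percent=80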

# kubectl scale rc rctest1 --replicas=3

replicationcontroller "rctest1" scaled

# kubectl get rc

NAME      DESIRED   CURRENT   READY     AGE
rctest1   3         3         3         15m

####Updating

See the documentation for details.

####Update the image

To keep things simple, just apply a new tag directly on docker1:

# docker tag tomcat tomcat:v1

####rolling update rc

A rolling update can be done either by editing the RC file or by specifying a new image (a file-based sketch follows below).

By default one pod is rotated per minute, so the operation takes a while.
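A sketch of the file-based variant, assuming a hypothetical /etc/kubernetes/rctest1-v2.yaml that defines an RC with a new name, an updated selector and the tomcat:v1 image; --update-period shortens the default one-minute interval:

# kubectl rolling-update rctest1 -f /etc/kubernetes/rctest1-v2.yaml --update-period=30s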

# kubectl rolling-update rctest1 --image=tomcat:v1

Created rctest1-8fea061ac5168e37a3c989adfba63bd1
Scaling up rctest1-8fea061ac5168e37a3c989adfba63bd1 from 0 to 2, scaling down rctest1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
Scaling rctest1-8fea061ac5168e37a3c989adfba63bd1 up to 1
Scaling rctest1 down to 1
Scaling rctest1-8fea061ac5168e37a3c989adfba63bd1 up to 2
Scaling rctest1 down to 0
Update succeeded. Deleting old controller: rctest1
Renaming rctest1 to rctest1-8fea061ac5168e37a3c989adfba63bd1
replicationcontroller "rctest1" rolling updated

####Verify the rolling update succeeded

# kubectl get rc

NAME      DESIRED   CURRENT   READY     AGE
rctest1   2         2         2         2m

# kubectl describe rc rctest1

Name:           rctest1
Namespace:      default
Image(s):       tomcat:v1
Selector:       deployment=8fea061ac5168e37a3c989adfba63bd1,rcname=rctest1
Labels:         app=rctest1tomcat
                env=default
Replicas:       2 current / 2 desired
Pods Status:    2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Volumes:
  vl-rc-rctest1-tomcat:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  default-pvc1
    ReadOnly:   false
No events.

####Testing the effect of the probes is left for a later post.

Reposted from: https://my.oschina.net/u/1791060/blog/830023
