Cloud Computing Day 11 - Kubernetes (K8s)

1. Health Checks

1.1.1 Probe Types

livenessProbe: health-state check; periodically checks whether the service is alive, and restarts the container when the check fails

readinessProbe: availability check; periodically checks whether the service is ready, and removes the pod from the service's endpoints when it is not

1.1.2 Probe Detection Methods

exec: run a command inside the container
httpGet: check the status code returned by an HTTP request
tcpSocket: test whether a port accepts connections

1.1.3 Using the exec liveness probe

[root@k8s-master k8s_yaml]# mkdir healthy
[root@k8s-master k8s_yaml]# cd healthy
[root@k8s-master healthy]# cat  nginx_pod_exec.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: exec
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
      args:
        - /bin/sh
        - -c
        - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
      livenessProbe:
        exec:
          command:
            - cat
            - /tmp/healthy
        initialDelaySeconds: 5   
        periodSeconds: 5

[root@k8s-master healthy]# kubectl create -f nginx_pod_exec.yaml
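The kubelet judges an exec probe solely by the command's exit code: 0 means healthy, any non-zero code counts as a failure and triggers a container restart. A minimal local sketch of what `cat /tmp/healthy` returns in each phase of the pod above:

```shell
# While /tmp/healthy exists, `cat` exits 0 and the probe passes:
touch /tmp/healthy
cat /tmp/healthy >/dev/null && echo "probe exit=0 -> healthy"

# After the file is removed (as the pod's args do after 30s), `cat` fails:
rm -f /tmp/healthy
cat /tmp/healthy 2>/dev/null || echo "probe exit=$? -> liveness failure, container restarted"
```

This is why, about 30 seconds after creation, `kubectl get pod exec` starts showing a growing RESTARTS count.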

1.1.4 Using the httpGet liveness probe

[root@k8s-master healthy]# vim  nginx_pod_httpGet.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: httpget
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
      livenessProbe:
        httpGet:
          path: /index.html
          port: 80
        initialDelaySeconds: 3
        periodSeconds: 3

1.1.5 Using the tcpSocket liveness probe

[root@k8s-master healthy]# vim   nginx_pod_tcpSocket.yaml
apiVersion: v1
kind: Pod
metadata:
  name: tcpsocket
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
      args:
        - /bin/sh
        - -c
        - tailf  /etc/hosts
      livenessProbe:
        tcpSocket:
          port: 80
        initialDelaySeconds: 60
        periodSeconds: 3

# Check the pod: it restarted once after about 1 minute
[root@k8s-master healthy]# kubectl create -f nginx_pod_tcpSocket.yaml
[root@k8s-master healthy]# kubectl get pod
NAME                    READY     STATUS    RESTARTS   AGE
tcpsocket               1/1       Running   1          4m

1.1.6 Using the httpGet readiness probe

Availability check with readinessProbe

[root@k8s-master healthy]# vim  nginx-rc-httpGet.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: readiness
spec:
  replicas: 2
  selector:
    app: readiness
  template:
    metadata:
      labels:
        app: readiness
    spec:
      containers:
      - name: readiness
        image: 10.0.0.11:5000/nginx:1.13
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /lcx.html
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 3

[root@k8s-master healthy]# kubectl create -f nginx-rc-httpGet.yaml

1.2 The dashboard Service

1: Upload and load the image, then tag it

2: Create the dashboard Deployment and Service

3: Visit http://10.0.0.11:8080/ui/


Upload the image on the master.
Official config file download link:
Image download link: (extraction code: qjb7)

docker load -i kubernetes-dashboard-amd64_v1.4.1.tar.gz

# Load the image on k8s-node2
[root@k8s-node2 ~]# docker load -i kubernetes-dashboard-amd64_v1.4.1.tar.gz 
5f70bf18a086: Loading layer 1.024 kB/1.024 kB
2e350fa8cbdf: Loading layer 86.96 MB/86.96 MB
Loaded image: index.tenxcloud.com/google_containers/kubernetes-dashboard-amd64:v1.4.1

dashboard.yaml

[root@k8s-master dashboard]# cat dashboard.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
# Keep the name in sync with image version and
# gce/coreos/kube-manifests/addons/dashboard counterparts
  name: kubernetes-dashboard-latest
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
        version: latest
        kubernetes.io/cluster-service: "true"
    spec:
      nodeName: k8s-node2
      containers:
      - name: kubernetes-dashboard
        image: index.tenxcloud.com/google_containers/kubernetes-dashboard-amd64:v1.4.1
        imagePullPolicy: IfNotPresent
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        args:
         -  --apiserver-host=http://10.0.0.11:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30

dashboard-svc.yaml

[root@k8s-master dashboard]# vim dashboard-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090

Create the resources

[root@k8s-master dashboard]# kubectl create -f .
service "kubernetes-dashboard" created
deployment "kubernetes-dashboard-latest" created

# Check that everything is Running
[root@k8s-master dashboard]# kubectl get all -n kube-system
NAME                                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/kube-dns                      1         1         1            1           17h
deploy/kubernetes-dashboard-latest   1         1         1            1           20s

NAME                       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
svc/kube-dns               10.254.230.254   <none>        53/UDP,53/TCP   17h
svc/kubernetes-dashboard   10.254.216.169   <none>        80/TCP          20s

NAME                                        DESIRED   CURRENT   READY     AGE
rs/kube-dns-2622810276                      1         1         1         17h
rs/kubernetes-dashboard-latest-3233121221   1         1         1         20s

NAME                                              READY     STATUS    RESTARTS   AGE
po/kube-dns-2622810276-wvh5m                      4/4       Running   4          17h
po/kubernetes-dashboard-latest-3233121221-km08b   1/1       Running   0          20s

1.3 Accessing a Service Through the apiserver Reverse Proxy

Option 1: NodePort type

  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30008

Option 2: ClusterIP type

  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
http://10.0.0.11:8080/api/v1/proxy/namespaces/<namespace>/services/<service-name>/
http://10.0.0.11:8080/api/v1/proxy/namespaces/default/services/myweb/
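For reference, a complete NodePort Service manifest built from the fragment above might look like this (a sketch; the `myweb` name and `app: myweb` selector are assumptions matching the proxy URL example):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myweb
spec:
  type: NodePort
  selector:
    app: myweb          # assumption: the target pods carry the label app=myweb
  ports:
  - port: 80            # the ClusterIP port
    targetPort: 80      # the container port
    nodePort: 30008     # exposed on every node: http://<node-ip>:30008/
```

With NodePort, the service is reachable directly on any node's IP at port 30008, in addition to the apiserver proxy URL above.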



2. K8s Autoscaling

K8s autoscaling requires the heapster monitoring add-on.

2.1 Installing heapster Monitoring

1: Upload and load the images, then tag them
On k8s-node2:

[root@k8s-node2 opt]# ll
total 1492076
-rw-r--r-- 1 root root 275096576 Sep 17 11:42 docker_heapster_grafana.tar.gz
-rw-r--r-- 1 root root 260942336 Sep 17 11:43 docker_heapster_influxdb.tar.gz
-rw-r--r-- 1 root root 991839232 Sep 17 11:44 docker_heapster.tar.gz


for n in `ls *.tar.gz`;do docker load -i $n ;done
docker tag docker.io/kubernetes/heapster_grafana:v2.6.0 10.0.0.11:5000/heapster_grafana:v2.6.0
docker tag  docker.io/kubernetes/heapster_influxdb:v0.5 10.0.0.11:5000/heapster_influxdb:v0.5
docker tag docker.io/kubernetes/heapster:canary 10.0.0.11:5000/heapster:canary

2: Upload the config files and run kubectl create -f .

influxdb-grafana-controller.yaml

mkdir heapster
cd heapster/

[root@k8s-master heapster]# cat influxdb-grafana-controller.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: influxGrafana
  name: influxdb-grafana
  namespace: kube-system
spec:
  replicas: 1
  selector:
    name: influxGrafana
  template:
    metadata:
      labels:
        name: influxGrafana
    spec:
      nodeName: k8s-node2
      containers:
      - name: influxdb
        image: 10.0.0.11:5000/heapster_influxdb:v0.5
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      - name: grafana
        image: 10.0.0.11:5000/heapster_grafana:v2.6.0
        env:
          - name: INFLUXDB_SERVICE_URL
            value: http://monitoring-influxdb:8086
            # The following env variables are required to make Grafana accessible via
            # the kubernetes api-server proxy. On production clusters, we recommend
            # removing these env variables, setup auth for grafana, and expose the grafana
            # service using a LoadBalancer or a public IP.
          - name: GF_AUTH_BASIC_ENABLED
            value: "false"
          - name: GF_AUTH_ANONYMOUS_ENABLED
            value: "true"
          - name: GF_AUTH_ANONYMOUS_ORG_ROLE
            value: Admin
          - name: GF_SERVER_ROOT_URL
            value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
        volumeMounts:
        - mountPath: /var
          name: grafana-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
      - name: grafana-storage
        emptyDir: {}

grafana-service.yaml

[root@k8s-master heapster]# cat grafana-service.yaml 
apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP. 
  # type: LoadBalancer
  ports:
  - port: 80
    targetPort: 3000
  selector:
    name: influxGrafana

influxdb-service.yaml

[root@k8s-master heapster]# vim influxdb-service.yaml 
apiVersion: v1
kind: Service
metadata:
  labels: null
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 8083
    targetPort: 8083
  - name: api
    port: 8086
    targetPort: 8086
  selector:
    name: influxGrafana
    

heapster-service.yaml

[root@k8s-master heapster]# cat heapster-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster

heapster-controller.yaml

[root@k8s-master heapster]# cat heapster-controller.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    k8s-app: heapster
    name: heapster
    version: v6
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  selector:
    k8s-app: heapster
    version: v6
  template:
    metadata:
      labels:
        k8s-app: heapster
        version: v6
    spec:
      nodeName: k8s-node2
      containers:
      - name: heapster
        image: 10.0.0.11:5000/heapster:canary
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:http://10.0.0.11:8080?inClusterConfig=false
        - --sink=influxdb:http://monitoring-influxdb:8086
Modify the config files:
#heapster-controller.yaml
    spec:
      nodeName: 10.0.0.13
      containers:
      - name: heapster
        image: 10.0.0.11:5000/heapster:canary
        imagePullPolicy: IfNotPresent
#influxdb-grafana-controller.yaml
    spec:
      nodeName: 10.0.0.13
      containers:
[root@k8s-master heapster]# kubectl create -f .

3: Open the dashboard to verify
http://10.0.0.11:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

2.2 Autoscaling

1: Modify the rc config file

  containers:
  - name: myweb
    image: 10.0.0.11:5000/nginx:1.13
    ports:
    - containerPort: 80
    resources:
      limits:
        cpu: 100m
      requests:
        cpu: 100m

2: Create the autoscaling rule

kubectl  autoscale  deploy  nginx-deployment  --max=8  --min=1 --cpu-percent=10

kubectl get hpa
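The `kubectl autoscale` command above is equivalent to creating a HorizontalPodAutoscaler object by hand. A hedged sketch for this cluster generation, using the `autoscaling/v1` API:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deployment
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 1
  maxReplicas: 8
  targetCPUUtilizationPercentage: 10  # scale out when average CPU exceeds 10% of the request
```

The low 10% threshold is only for demo purposes, so the `ab` load test below triggers scaling quickly; the CPU `requests` set in step 1 are what the percentage is measured against.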

3: Test

yum install httpd-tools -y

 ab -n 1000000 -c 40 http://172.16.28.6/index.html

Scale-up screenshot


Scale-down screenshot

3. Persistent Storage

Data persistence types:

3.1 emptyDir:

For general awareness only.
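emptyDir is a scratch volume created when the pod is scheduled to a node and deleted together with the pod, so it only suits caches and temporary data, not persistence. A minimal sketch (the pod name and mount path are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: empty-demo        # hypothetical name
spec:
  containers:
  - name: nginx
    image: 10.0.0.11:5000/nginx:1.13
    volumeMounts:
    - mountPath: /cache   # hypothetical mount point inside the container
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}          # lives on the node's disk; removed when the pod is deleted
```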

3.2 HostPath:

spec:
  nodeName: 10.0.0.13
  volumes:
  - name: mysql
    hostPath:
      path: /data/wp_mysql
  containers:
    - name: wp-mysql
      image: 10.0.0.11:5000/mysql:5.7
      imagePullPolicy: IfNotPresent
      ports:
      - containerPort: 3306
      volumeMounts:
      - mountPath: /var/lib/mysql
        name: mysql

3.3 nfs: ☆☆☆

# Install nfs on all nodes
yum install nfs-utils -y
===========================================

On the master node:
# Create the directory
mkdir -p /data/tomcat-db

# Edit the nfs config file
[root@k8s-master tomcat-db]# vim /etc/exports
/data 10.0.0.0/24(rw,sync,no_root_squash,no_all_squash)

# Restart the services
[root@k8s-master tomcat-db]# systemctl restart rpcbind
[root@k8s-master tomcat-db]# systemctl restart nfs

# Check
[root@k8s-master tomcat-db]# showmount -e 10.0.0.11
Export list for 10.0.0.11:
/data 10.0.0.0/24

Add the config file mysql-rc-nfs.yaml

# The parts that need changing:
volumes:
- name: mysql
  nfs:
    path: /data/tomcat-db
    server: 10.0.0.11
================================================

[root@k8s-master tomcat_demo]# pwd
/root/k8s_yaml/tomcat_demo
[root@k8s-master tomcat_demo]# cat mysql-rc-nfs.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      volumes: 
      - name: mysql 
        nfs:
          path: /data/tomcat-db
          server: 10.0.0.11
      containers:
        - name: mysql
          volumeMounts:
          - mountPath: /var/lib/mysql
            name: mysql
          image: 10.0.0.11:5000/mysql:5.7
          ports:
          - containerPort: 3306
          env:
          - name: MYSQL_ROOT_PASSWORD
            value: '123456'


kubectl delete -f mysql-rc-nfs.yaml
kubectl create -f mysql-rc-nfs.yaml
kubectl get pod


# Verify the /data directory is shared successfully
[root@k8s-master tomcat_demo]# ls /data/tomcat-db/
auto.cnf  ib_buffer_pool  ib_logfile0  ibtmp1  performance_schema
HPE_APP   ibdata1         ib_logfile1  mysql   sys

Verify the shared directory is mounted

# On node1
[root@k8s-node1 ~]# df -h|grep nfs
10.0.0.11:/data/tomcat-db   48G  6.8G   42G  15% /var/lib/kubelet/pods/8675fe7e-d927-11e9-a65f-000c29b2785a/volumes/kubernetes.io~nfs/mysql

# Restart kubelet
[root@k8s-node1 ~]# systemctl restart kubelet.service 


# Check node status from the master
[root@k8s-master tomcat_demo]# kubectl get nodes
NAME        STATUS    AGE
k8s-node1   Ready     5d
k8s-node2   Ready     6d


# The mysql pod is currently running on node1
[root@k8s-master ~]# kubectl get pods -o wide
NAME                                READY     STATUS    RESTARTS   AGE       IP            NODE
mysql-kld7f                         1/1       Running   0          1m        172.18.19.5   k8s-node1
myweb-38hgv                         1/1       Running   1          23h       172.18.19.4   k8s-node1
nginx-847814248-hq268               1/1       Running   0          4h        172.18.19.2   k8s-node1
   
   
# Delete the mysql pod; the recreated pod lands on node2
[root@k8s-master ~]# kubectl delete pod mysql-kld7f 
pod "mysql-kld7f" deleted
[root@k8s-master ~]# kubectl get pods -o wide
NAME                                READY     STATUS              RESTARTS   AGE       IP            NODE
mysql-14kj0                         0/1       ContainerCreating   0          1s        <none>        k8s-node2
mysql-kld7f                         1/1       Terminating         0          2m        172.18.19.5   k8s-node1
myweb-38hgv                         1/1       Running             1          23h       172.18.19.4   k8s-node1
nginx-847814248-hq268               1/1       Running             0          4h        172.18.19.2   k8s-node1
nginx-deployment-2807576163-c9g0n   1/1       Running             0          4h        172.18.53.4   k8s-node2

# Check the mounted directory on node2
[root@k8s-node2 ~]# df -h|grep nfs
10.0.0.11:/data/tomcat-db   48G  6.8G   42G  15% /var/lib/kubelet/pods/ed09eb26-d929-11e9-a65f-000c29b2785a/volumes/kubernetes.io~nfs/mysql

Refresh the web page: the previously added data is still there, so the NFS persistence setup works.

3.4 pvc:

From online references:

pv: PersistentVolume - a global, cluster-level resource

pvc: PersistentVolumeClaim - a local resource belonging to a single namespace
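A hedged sketch of how the two pair up, reusing the NFS export from section 3.3 (the names and the 5Gi size are illustrative, not from the original notes):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs            # cluster-scoped: note there is no namespace
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /data/tomcat-db # the export created in section 3.3
    server: 10.0.0.11
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs
  namespace: default      # namespaced: the claim lives in one namespace
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi        # gets bound to a PV with at least this capacity
```

A pod then references `claimName: pvc-nfs` in a `persistentVolumeClaim` volume instead of embedding the NFS details directly.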


3.5 Distributed Storage with glusterfs ☆☆☆☆☆

a: What is glusterfs

GlusterFS is an open-source distributed file system with strong scale-out capability. It can support petabytes of storage and thousands of clients, joining machines over the network into a single parallel network file system. It offers scalability, high performance, and high availability.

b: Installing glusterfs

1. Add two disks to each of the three nodes

This is a test environment, so any disk size is fine.

2. Hot-add the disks on all three nodes without rebooting
echo "- - -" > /sys/class/scsi_host/host0/scan
echo "- - -" > /sys/class/scsi_host/host1/scan
echo "- - -" > /sys/class/scsi_host/host2/scan

# All nodes must have the hosts entries
cat /etc/hosts
	10.0.0.11 k8s-master
	10.0.0.12 k8s-node1
	10.0.0.13 k8s-node2
3. On all three nodes, verify the disks are recognized, then format them
fdisk -l
mkfs.xfs /dev/sdb
mkfs.xfs /dev/sdc
4. Create the directories on all nodes
mkdir -p /gfs/test1
mkdir -p /gfs/test2
5. Mount by UUID so device names don't change across reboots

On the master node

# blkid shows each disk's UUID

[root@k8s-master ~]# blkid 
/dev/sda1: UUID="72aabc10-44b8-4c05-86bd-049157d771f8" TYPE="swap" 
/dev/sda2: UUID="35076632-0a8a-4234-bd8a-45dc7df0fdb3" TYPE="xfs" 
/dev/sdb: UUID="577ef260-533b-45f5-94c6-60e73b17d1fe" TYPE="xfs" 
/dev/sdc: UUID="5a907588-80a1-476b-8805-d458e22dd763" TYPE="xfs" 

[root@k8s-master ~]# vim /etc/fstab 
UUID=35076632-0a8a-4234-bd8a-45dc7df0fdb3 /                       xfs     defaults        0 0
UUID=72aabc10-44b8-4c05-86bd-049157d771f8 swap                    swap    defaults        0 0
UUID=577ef260-533b-45f5-94c6-60e73b17d1fe /gfs/test1              xfs     defaults        0 0
UUID=5a907588-80a1-476b-8805-d458e22dd763 /gfs/test2              xfs     defaults        0 0

# Mount and verify
[root@k8s-master ~]# mount -a
[root@k8s-master ~]# df -h
.....
/dev/sdb         10G   33M   10G   1% /gfs/test1
/dev/sdc         10G   33M   10G   1% /gfs/test2

On node1

[root@k8s-node1 ~]# blkid 
/dev/sda1: UUID="72aabc10-44b8-4c05-86bd-049157d771f8" TYPE="swap" 
/dev/sda2: UUID="35076632-0a8a-4234-bd8a-45dc7df0fdb3" TYPE="xfs" 
/dev/sdb: UUID="c9a47468-ce5c-4aac-bffc-05e731e28f5b" TYPE="xfs" 
/dev/sdc: UUID="7340cc1b-2c83-40be-a031-1aad8bdd5474" TYPE="xfs" 

[root@k8s-node1 ~]# vim /etc/fstab
UUID=35076632-0a8a-4234-bd8a-45dc7df0fdb3 /                       xfs     defaults        0 0
UUID=72aabc10-44b8-4c05-86bd-049157d771f8 swap                    swap    defaults        0 0
UUID=c9a47468-ce5c-4aac-bffc-05e731e28f5b /gfs/test1              xfs     defaults        0 0
UUID=7340cc1b-2c83-40be-a031-1aad8bdd5474 /gfs/test2              xfs     defaults        0 0


[root@k8s-node1 ~]# mount -a
[root@k8s-node1 ~]# df -h
/dev/sdb                    10G   33M   10G   1% /gfs/test1
/dev/sdc                    10G   33M   10G   1% /gfs/test2

On node2

[root@k8s-node2 ~]# blkid 
/dev/sda1: UUID="72aabc10-44b8-4c05-86bd-049157d771f8" TYPE="swap" 
/dev/sda2: UUID="35076632-0a8a-4234-bd8a-45dc7df0fdb3" TYPE="xfs" 
/dev/sdb: UUID="6a2f2bbb-9011-41b6-b62b-37f05e167283" TYPE="xfs" 
/dev/sdc: UUID="3a259ad4-7738-4fb8-925c-eb6251e8dd18" TYPE="xfs" 


[root@k8s-node2 ~]# vim /etc/fstab 
UUID=35076632-0a8a-4234-bd8a-45dc7df0fdb3 /                       xfs     defaults        0 0
UUID=72aabc10-44b8-4c05-86bd-049157d771f8 swap                    swap    defaults        0 0
UUID=6a2f2bbb-9011-41b6-b62b-37f05e167283 /gfs/test1              xfs     defaults        0 0
UUID=3a259ad4-7738-4fb8-925c-eb6251e8dd18 /gfs/test2              xfs     defaults        0 0

[root@k8s-node2 ~]# mount -a
[root@k8s-node2 ~]# df -h
/dev/sdb         10G   33M   10G   1% /gfs/test1
/dev/sdc         10G   33M   10G   1% /gfs/test2
6. On the master node, install the packages and start the service
# To save bandwidth, enable the yum cache before downloading
[root@k8s-master volume]# vim /etc/yum.conf 
keepcache=1

yum install centos-release-gluster -y
yum install glusterfs-server -y

systemctl start glusterd.service
systemctl enable glusterd.service

Then install the packages and start the service on the two node machines

yum install centos-release-gluster -y
yum install glusterfs-server -y

systemctl start glusterd.service
systemctl enable glusterd.service
7. Check the gluster peers
# Currently only the local node is visible
[root@k8s-master volume]# bash
[root@k8s-master volume]# gluster pool list 
UUID					Hostname 	State
a335ea83-fcf9-4b7d-ba3d-43968aa8facf	localhost	Connected 


# Add the other two nodes to the pool
[root@k8s-master volume]# gluster peer probe k8s-node1 
peer probe: success. 
[root@k8s-master volume]# gluster peer probe k8s-node2 
peer probe: success. 
[root@k8s-master volume]# gluster pool list 
UUID					Hostname 	State
ebf5838a-4de2-447b-b559-475799551895	k8s-node1	Connected 
78678387-cc5b-4577-b0fe-b11b4ca80a67	k8s-node2	Connected 
a335ea83-fcf9-4b7d-ba3d-43968aa8facf	localhost	Connected 
8. Create a volume from the pool, inspect it, then delete it
# wahaha is the volume name
[root@k8s-master volume]# gluster volume create wahaha k8s-master:/gfs/test1 k8s-master:/gfs/test2 k8s-node1:/gfs/test1 k8s-node1:/gfs/test2 force
volume create: wahaha: success: please start the volume to access data

# Inspect the volume's properties
[root@k8s-master volume]# gluster volume info wahaha

# Delete the volume
[root@k8s-master volume]# gluster volume delete wahaha 
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: wahaha: success
9. Recreate it as a distributed replicated volume ☆☆☆

Distributed replicated volume diagram

# Command for viewing help
[root@k8s-master volume]# gluster volume create --help

# Create the volume: the same command as before, plus a replica count (replica 2)
[root@k8s-master volume]# gluster volume create wahaha replica 2 k8s-master:/gfs/test1 k8s-master:/gfs/test2 k8s-node1:/gfs/test1 k8s-node1:/gfs/test2 force
volume create: wahaha: success: please start the volume to access data

# The volume must be started before its data can be accessed
[root@k8s-master volume]# gluster volume start wahaha 
volume start: wahaha: success

10. Mount the volume
# Mounted on node2, it now shows 20G (4 x 10G bricks with replica 2)
[root@k8s-node2 ~]# mount -t glusterfs 10.0.0.11:/wahaha /mnt
[root@k8s-node2 ~]# df -h
/dev/sdb            10G   33M   10G   1% /gfs/test1
/dev/sdc            10G   33M   10G   1% /gfs/test2
10.0.0.11:/wahaha   20G  270M   20G   2% /mnt
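The mount above does not survive a reboot. An /etc/fstab entry would make it persistent (a sketch; `_netdev` is a standard mount option that defers mounting until the network is up):

```shell
# /etc/fstab
10.0.0.11:/wahaha  /mnt  glusterfs  defaults,_netdev  0 0
```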
11. Test that the volume is shared
# On node2, copy something into /mnt
[root@k8s-node2 ~]# cp -a /etc/hosts /mnt/
[root@k8s-node2 ~]# ll /mnt/
total 1
-rw-r--r-- 1 root root 253 Sep 11 10:19 hosts


# Check on the master node
[root@k8s-master volume]# ll /gfs/test1/
total 4
-rw-r--r-- 2 root root 253 Sep 11 10:19 hosts
[root@k8s-master volume]# ll /gfs/test2/
total 4
-rw-r--r-- 2 root root 253 Sep 11 10:19 hosts
12. Expand the volume
# On the master node
[root@k8s-master volume]# gluster volume add-brick wahaha  k8s-node2:/gfs/test1 k8s-node2:/gfs/test2 force
volume add-brick: success

# On node2, the extra capacity is already visible
[root@k8s-node2 ~]# df -h
10.0.0.11:/wahaha   30G  404M   30G   2% /mnt

13. Extension: how to add nodes and replicas
# After adding a node, rebalance the data; best done during off-peak hours
[root@k8s-master ~]# gluster volume rebalance wahaha start force