Preparation
I. Configure Ceph for the Kubernetes cluster
To be added.
II. Deploy the CephFS provisioner on the Kubernetes cluster
- 1. Push the cephfs-provisioner.tar image to the image registry (a loading sketch follows this list).
- 2. Deploy the cephfs provisioner in Kubernetes.
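For step 1, a minimal sketch of loading the image tarball and pushing it to a private registry; the registry address registry.example.com is an assumption, substitute your own:

# Load the saved image from the tarball
docker load -i cephfs-provisioner.tar
# Re-tag it for the internal registry and push (registry address is an assumption)
docker tag quay.io/external_storage/cephfs-provisioner:latest \
  registry.example.com/external_storage/cephfs-provisioner:latest
docker push registry.example.com/external_storage/cephfs-provisioner:latest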
non-rbac/deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephfs-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
        - name: cephfs-provisioner
          image: "quay.io/external_storage/cephfs-provisioner:latest"
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/cephfs
          command:
            - "/usr/local/bin/cephfs-provisioner"
          args:
            - "-id=cephfs-provisioner-1"
kubectl apply -f 2-cephfs-non-rbac-provisioner-deployment.yml
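To confirm the provisioner started, a quick check (the label comes from the Deployment above):

kubectl get pods -l app=cephfs-provisioner
kubectl logs deploy/cephfs-provisioner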
III. Create the StorageClass
Set the provisioner in rabbitmq-storageclass.yml to ceph.com/cephfs:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 172.24.0.6:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: "kube-system"
  claimRoot: /pvc-volumes
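The adminSecretName above points at a Ceph admin secret that must already exist in kube-system. A minimal sketch of creating it, assuming the admin key is read with ceph auth get-key on a Ceph node and that the provisioner expects the secret data key to be named key:

# On a Ceph node: export the client.admin key
ceph auth get-key client.admin > /tmp/admin.key
# Create the secret referenced by adminSecretName / adminSecretNamespace
kubectl create secret generic ceph-secret-admin \
  --from-file=key=/tmp/admin.key \
  --namespace=kube-system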
Run kubectl apply -f 3-rabbitmq-storageclass.yml
Check the StorageClass: kubectl get storageclass
IV. Deploy the RabbitMQ operator
1. Download the image
Push the cluster-operator:1.5.0.tar image to the image registry.
2. Install
Edit the image field in cluster-operator.yml to point at your registry (a sketch follows), then apply:
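A sketch of locating and changing the image reference before applying; the internal registry path is an assumption:

# Find the current image reference in the manifest
grep -n "image:" 5-cluster-operator.yml
# Change it to the copy in the internal registry, e.g.
#   image: registry.example.com/rabbitmq/cluster-operator:1.5.0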
kubectl apply -f 5-cluster-operator.yml
3. Check
kubectl get all -n rabbitmq-system
You can see that the RabbitMQ operator resources have been created successfully:
NAME READY STATUS RESTARTS AGE
pod/rabbitmq-cluster-operator-79f7cdccdc-tq2dh 1/1 Running 1 29h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/rabbitmq-cluster-operator 1/1 1 1 29h
NAME DESIRED CURRENT READY AGE
replicaset.apps/rabbitmq-cluster-operator-79f7cdccdc 1 1 1 29h
Cluster configuration
I. Cluster installation
Confirm the resource settings in rabbitmq.yaml (a sketch of the manifest follows this list):
- Set storageClassName to the name field of the StorageClass created during preparation: cephfs
- Disk space of no less than 10Gi
- At least 1Gi of memory and 1000m of CPU
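A minimal sketch of what the RabbitmqCluster manifest (7-rabbitmq-cluster.yml) could look like under these constraints; the name production matches the Service referenced later, and the replica count is an assumption:

apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: production
spec:
  replicas: 3                    # assumption; adjust to your HA requirements
  persistence:
    storageClassName: cephfs     # StorageClass created in the preparation steps
    storage: 10Gi                # no less than 10Gi
  resources:
    requests:
      cpu: 1000m
      memory: 1Gi
    limits:
      cpu: 1000m
      memory: 1Gi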
kubectl apply -f 7-rabbitmq-cluster.yml
II. Configure port access
1. Via the internal ingress-nginx-controller
Create an Ingress bound to the RabbitMQ Service and open port 15672.
For Kubernetes versions before 1.17, the Ingress uses extensions/v1beta1:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: rabbitmq-management.com
      http:
        paths:
          - path: /
            backend:
              serviceName: production
              servicePort: 15672
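On clusters where extensions/v1beta1 has been removed, the equivalent networking.k8s.io/v1 Ingress would look roughly like this; the ingressClassName value is an assumption and should match your ingress-nginx installation:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx        # assumption; check `kubectl get ingressclass`
  rules:
    - host: rabbitmq-management.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: production
                port:
                  number: 15672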
Expose the ingress-nginx-controller through a NodePort Service; in this test the exposed port is host port 30005 (a sketch of switching the Service type follows).
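One way to switch the Service type, as a sketch (the node ports are then assigned by the cluster unless set explicitly in the ports list):

kubectl -n ingress-nginx patch svc ingress-nginx-controller \
  -p '{"spec": {"type": "NodePort"}}'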
[root@k8s-01 rabbitmq]# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.103.19.110 <none> 80:30005/TCP,443:30006/TCP 41m
ingress-nginx-controller-admission ClusterIP 10.111.70.146 <none> 443/TCP 41m
Add a hosts entry on the Windows host to force name resolution:
192.168.223.147 rabbitmq-management.com
Access in the browser: http://rabbitmq-management.com:30005/
To find the username and password, first set the RabbitMQ instance name to production, then read both from the generated Secret:
instance=production
kubectl get secret ${instance}-default-user -o jsonpath="{.data.username}" | base64 --decode
kubectl get secret ${instance}-default-user -o jsonpath="{.data.password}" | base64 --decode
2. Expose via an Istio Gateway and VirtualService
- Prerequisite: make sure Istio was installed in the cluster before the RabbitMQ cluster, and that manual sidecar injection is set up.
- Put the Gateway and VirtualService configuration in rabbitmq-gateway.yml:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: rabbitmq-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "production.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rabbitmq-cluster
spec:
  hosts:
    - "production.com"
  gateways:
    - rabbitmq-gateway
  http:
    - match:
        - uri:
            prefix: /
      route:
        - destination:
            host: production
            port:
              number: 15672
kubectl apply -f rabbitmq-gateway.yml
Configure forced name resolution in the hosts file:
192.168.223.147 production.com
Check the exposed ports:
kubectl get svc -n istio-system istio-ingressgateway
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.100.28.8 <pending> 15021:32379/TCP,80:31110/TCP,443:30546/TCP,31400:31849/TCP,15443:31578/TCP 22d
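With the mapping above, plain HTTP to the ingress gateway arrives on node port 31110, so the management UI can be reached through that port (the node port will differ per cluster; the IP matches the hosts entry above):

# Browser: http://production.com:31110/
# Command-line check with an explicit Host header
curl -H "Host: production.com" http://192.168.223.147:31110/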