Just browsing around, I stumbled on something interesting and decided to spin it up on my own cluster for fun.
Ratel, a system written by Du Kuan (dotbalo), looks really handy, so here I set it up myself and document the process.
The original docs are here: https://github.com/dotbalo/ratel-doc
- Prerequisites:
A working k8s cluster.
A network plugin installed; Flannel or Calico both work, and I use Calico.
The Kubernetes dashboard installed.
All configuration files in this walkthrough live under /root/ratel/.
###
Cluster information:
The cluster was deployed with kubeadm: three masters behind haproxy for high availability, with keepalived maintaining a VIP, plus a single worker node.
The node-role.kubernetes.io/master:NoSchedule taint has been removed so pods can be scheduled onto the master nodes.
[root@master01 ratel]# kubectl describe nodes | grep Taint
Taints: <none>
Taints: <none>
Taints: <none>
Taints: <none>
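For reference, removing the master taint from the three control-plane nodes looks roughly like this (the trailing `-` after the taint name deletes it; the node names are the ones from this cluster):

```shell
# allow pods onto the masters by deleting the NoSchedule taint
kubectl taint nodes k8s-master01 master02 master03 node-role.kubernetes.io/master:NoSchedule-
```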
[root@master01 ~]# kubectl get nodes;
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready control-plane,master 40d v1.20.8
master02 Ready control-plane,master 40d v1.20.8
master03 Ready control-plane,master 40d v1.20.8
node01 Ready <none> 40d v1.20.8
vip: 192.168.154.200
master01: 192.168.154.151
master02: 192.168.154.152
master03: 192.168.154.153
node01: 192.168.154.161
[root@master01 ~]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.97.187.85 <none> 8000/TCP 7h8m
kubernetes-dashboard NodePort 10.97.232.146 <none> 443:30001/TCP 7h8m
[root@master01 ratel]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.154.200:16443
KubeDNS is running at https://192.168.154.200:16443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
###
- Configuration file: servers.yaml
vim servers.yaml
- serverName: 'sda1' # cluster name
  serverAddress: 'https://192.168.154.200:16443' # APIServer address of the k8s cluster; for an HA cluster use the VIP, which kubectl cluster-info shows
  #serverAdminUser: 'xxx' # manage the cluster with username/password
  #serverAdminPassword: 'xxx#'
  serverAdminToken: 'null' # manage the cluster with a token
  serverDashboardUrl: "https://192.168.154.200:30001/#" # dashboard URL; append /#! for dashboard 1.x, /# for 2.x
  production: 'false'
  kubeConfigPath: "/mnt/sda1.config" # manage the cluster with a kubeconfig, i.e. /root/.kube/config (also /etc/kubernetes/admin.conf; admin.kubeconfig in some versions). Note this path is not a host directory but the mount path inside the container; the kubeconfig itself sits on the host at /root/ratel/sda1.config
  harborConfig: "HarborUrl, HarborUsername, HarborPassword, HarborEmail" # adjust as needed
# Pick exactly one of the three ways to manage the cluster:
# username/password requires basic auth to be configured, so it is not used here
# token-based management is not supported yet
# so this setup uses a kubeconfig
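Before packaging the kubeconfig for Ratel, it is worth a quick sanity check that the file authenticates against the VIP on its own (the copy step assumes the admin kubeconfig at /root/.kube/config, as noted above):

```shell
# place the admin kubeconfig under the name referenced in servers.yaml
cp /root/.kube/config /root/ratel/sda1.config
# the file should be able to list the cluster's nodes by itself
kubectl --kubeconfig=/root/ratel/sda1.config get nodes
```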
- Create the secret:
kubectl create secret generic ratel-config --from-file=/root/ratel/sda1.config --from-file=/root/ratel/servers.yaml -n kube-system
# /root/ratel/sda1.config is the kubeconfig file,
# i.e. /root/.kube/config (also /etc/kubernetes/admin.conf; admin.kubeconfig in some versions)
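A quick check that both files landed in the secret:

```shell
# Data should list sda1.config and servers.yaml with non-zero sizes
kubectl describe secret ratel-config -n kube-system
```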
- Create the RBAC resources:
# Create a namespace named kube-users to hold the users (ServiceAccounts):
kubectl create ns kube-users
# Create the ClusterRoles:
vim ratel-rbac.yaml
apiVersion: v1
items:
- apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    annotations:
      rbac.authorization.kubernetes.io/autoupdate: "true"
    labels:
      kubernetes.io/bootstrapping: rbac-defaults
      rbac.authorization.k8s.io/aggregate-to-edit: "true"
    name: ratel-namespace-readonly
  rules:
  - apiGroups:
    - ""
    resources:
    - namespaces
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - metrics.k8s.io
    resources:
    - pods
    verbs:
    - get
    - list
    - watch
- apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: ratel-pod-delete
  rules:
  - apiGroups:
    - ""
    resources:
    - pods
    verbs:
    - get
    - list
    - delete
- apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: ratel-pod-exec
  rules:
  - apiGroups:
    - ""
    resources:
    - pods
    - pods/log
    verbs:
    - get
    - list
  - apiGroups:
    - ""
    resources:
    - pods/exec
    verbs:
    - create
- apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    annotations:
      rbac.authorization.kubernetes.io/autoupdate: "true"
    name: ratel-resource-edit
  rules:
  - apiGroups:
    - ""
    resources:
    - configmaps
    - persistentvolumeclaims
    - services
    - services/proxy
    verbs:
    - patch
    - update
  - apiGroups:
    - apps
    resources:
    - daemonsets
    - deployments
    - deployments/rollback
    - deployments/scale
    - statefulsets
    - statefulsets/scale
    verbs:
    - patch
    - update
  - apiGroups:
    - autoscaling
    resources:
    - horizontalpodautoscalers
    verbs:
    - patch
    - update
  - apiGroups:
    - batch
    resources:
    - cronjobs
    - jobs
    verbs:
    - patch
    - update
  - apiGroups:
    - extensions
    resources:
    - daemonsets
    - deployments
    - deployments/rollback
    - deployments/scale
    - ingresses
    - networkpolicies
    verbs:
    - patch
    - update
  - apiGroups:
    - networking.k8s.io
    resources:
    - ingresses
    - networkpolicies
    verbs:
    - patch
    - update
- apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: ratel-resource-readonly
  rules:
  - apiGroups:
    - ""
    resources:
    - configmaps
    - endpoints
    - persistentvolumeclaims
    - pods
    - replicationcontrollers
    - replicationcontrollers/scale
    - serviceaccounts
    - services
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - ""
    resources:
    - bindings
    - events
    - limitranges
    - namespaces/status
    - pods/log
    - pods/status
    - replicationcontrollers/status
    - resourcequotas
    - resourcequotas/status
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - ""
    resources:
    - namespaces
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - apps
    resources:
    - controllerrevisions
    - daemonsets
    - deployments
    - deployments/scale
    - replicasets
    - replicasets/scale
    - statefulsets
    - statefulsets/scale
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - autoscaling
    resources:
    - horizontalpodautoscalers
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - batch
    resources:
    - cronjobs
    - jobs
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - extensions
    resources:
    - daemonsets
    - deployments
    - deployments/scale
    - ingresses
    - networkpolicies
    - replicasets
    - replicasets/scale
    - replicationcontrollers/scale
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - policy
    resources:
    - poddisruptionbudgets
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - networking.k8s.io
    resources:
    - networkpolicies
    - ingresses
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - metrics.k8s.io
    resources:
    - pods
    verbs:
    - get
    - list
    - watch
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
kubectl create -f ratel-rbac.yaml
# Create the ClusterRoleBinding:
vim ratel-rbac-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ratel-namespace-readonly-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ratel-namespace-readonly
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:kube-users # grant the role to every ServiceAccount in the kube-users namespace
kubectl create -f ratel-rbac-binding.yaml
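One way to sanity-check the binding is `kubectl auth can-i` while impersonating a ServiceAccount from kube-users (the `demo` account is created here only for the test):

```shell
kubectl create sa demo -n kube-users
# granted through the group binding, so this should report yes
kubectl auth can-i list namespaces --as=system:serviceaccount:kube-users:demo
# ratel-pod-delete exists but is not bound to anyone yet, so this should report no
kubectl auth can-i delete pods --as=system:serviceaccount:kube-users:demo
```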
- Deploy ratel:
vim ratel-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ratel
  name: ratel
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ratel
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ratel
    spec:
      containers:
      - command:
        - sh
        - -c
        - ./ratel -c /mnt/servers.yaml
        env:
        - name: TZ
          value: Asia/Shanghai
        - name: LANG
          value: C.UTF-8
        - name: ProRunMode # run mode: prod (less logging) or dev (more logging); adjust as needed
          value: prod
        - name: ADMIN_USERNAME # login username; change as needed
          value: admin
        - name: ADMIN_PASSWORD # login password; change as needed
          value: "123456"
        image: registry.cn-beijing.aliyuncs.com/dotbalo/ratel:latest
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 2
          initialDelaySeconds: 10
          periodSeconds: 60
          successThreshold: 1
          tcpSocket:
            port: 8888
          timeoutSeconds: 2
        name: ratel
        ports:
        - containerPort: 8888
          name: web
          protocol: TCP
        readinessProbe:
          failureThreshold: 2
          initialDelaySeconds: 10
          periodSeconds: 60
          successThreshold: 1
          tcpSocket:
            port: 8888
          timeoutSeconds: 2
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
          requests:
            cpu: 500m
            memory: 512Mi
        volumeMounts:
        - mountPath: /mnt
          name: ratel-config
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: myregistrykey
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: ratel-config
        secret:
          defaultMode: 420
          secretName: ratel-config
kubectl apply -f ratel-deploy.yaml
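Then wait for the rollout to finish and check the pod:

```shell
kubectl rollout status deployment/ratel -n kube-system
kubectl get pods -n kube-system -l app=ratel -o wide
```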
- Create the ratel Service:
vim ratel-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ratel
  name: ratel
  namespace: kube-system
spec:
  ports:
  - name: container-1-web-1
    port: 8888
    protocol: TCP
    targetPort: 8888
  selector:
    app: ratel
  type: ClusterIP # can also be NodePort to hit the Service directly; ClusterIP + Ingress is the usual pattern
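Since I access Ratel via NodePort rather than an Ingress, the Service type can be flipped in place after creation (a random node port gets assigned unless you pin nodePort explicitly):

```shell
kubectl patch svc ratel -n kube-system -p '{"spec":{"type":"NodePort"}}'
kubectl get svc ratel -n kube-system
```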
- Create the Ingress (I used a NodePort instead and skipped the Ingress):
vim ratel-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ratel
  namespace: kube-system
spec:
  rules:
  - host: krm.test.com
    http:
      paths:
      - backend:
          serviceName: ratel
          servicePort: 8888
        path: /
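Note that the extensions/v1beta1 Ingress API is deprecated on 1.20 and removed in 1.22. For newer clusters the networking.k8s.io/v1 equivalent would look roughly like this (the nginx ingress class name is an assumption; use whatever IngressClass your controller registers):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ratel
  namespace: kube-system
spec:
  ingressClassName: nginx # assumption: adjust to your controller's IngressClass
  rules:
  - host: krm.test.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ratel
            port:
              number: 8888
```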
- Result:
[root@master01 ratel]# kubectl get svc -n kube-system;
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 40d
metrics-server ClusterIP 10.107.4.211 <none> 443/TCP 40d
ratel NodePort 10.98.164.4 <none> 8888:32065/TCP 11s
Access:
http://192.168.154.200:32065/
Following the docs, everything went quite smoothly.
Multi-cluster configuration is covered in the original docs as well.
More to play with later.
Docs again: https://github.com/dotbalo/ratel-doc