Setting up zkui on Kubernetes to manage a ZooKeeper cluster

Prerequisite: a ZooKeeper (zk) cluster is already up and running.

Download the zkui jar from: https://pan.baidu.com/s/1zpyjQDKUh3cYmfZFmnKLUA
Extraction code: ut24

Build the image on a worker node:

Set zkServer in config.cfg to the addresses of the existing zk cluster:

#Server Port
serverPort=9090
#Comma separated list of all the zookeeper servers
#zkServer=zookeeper1:2181,zookeeper2:2181,zookeeper2:2181
zkServer=zk-0.zk-hs.middle.svc.cluster.local:2181,zk-1.zk-hs.middle.svc.cluster.local:2181,zk-2.zk-hs.middle.svc.cluster.local:2181
#Http path of the repository. Ignore if you don't intend to upload files from the repository.
scmRepo=http://myserver.com/@rev1=
#Path appended to the repo url. Ignore if you don't intend to upload files from the repository.
scmRepoPath=//appconfig.txt
#if set to true then userSet is used for authentication, else ldap authentication is used.
ldapAuth=false
ldapDomain=mycompany,mydomain
#ldap authentication url. Ignore if using file based authentication.
ldapUrl=ldap://<ldap_host>:<ldap_port>/dc=mycom,dc=com
#Specific roles for ldap authenticated users. Ignore if using file based authentication.
ldapRoleSet={"users": [{ "username":"domain\\user1" , "role": "ADMIN" }]}
userSet = {"users": [{ "username":"own" , "password":"own123","role": "ADMIN" },{ "username":"appconfig" , "password":"appconfig","role": "USER" }]}
#Set to prod in production and dev in local. Setting to dev will clear history each time.
env=prod
jdbcClass=org.h2.Driver
jdbcUrl=jdbc:h2:zkui
jdbcUser=admin
jdbcPwd=111111
#If you want to use mysql db to store history then comment the h2 db section.
#jdbcClass=com.mysql.jdbc.Driver
#jdbcUrl=jdbc:mysql://localhost:3306/zkui
#jdbcUser=root
#jdbcPwd=manager
loginMessage=Please login using own/own123 or appconfig/appconfig.
#session timeout 5 mins/300 secs.
sessionTimeout=300
#Default 5 seconds to keep short lived zk sessions. If you have large data then the read will take more than 30 seconds so increase this accordingly. 
#A bigger zkSessionTimeout means the connection will be held longer and resource consumption will be high.
zkSessionTimeout=5
#Block PWD exposure over rest call.
blockPwdOverRest=false
#ignore rest of the props below if https=false.
https=false
#keystoreFile=/home/user/keystore.jks
#keystorePwd=password
#keystoreManagerPwd=password
# The default ACL to use for all creation of nodes. If left blank, then all nodes will be universally accessible
# Permissions are based on single character flags: c (Create), r (read), w (write), d (delete), a (admin), * (all)
# For example defaultAcl={"acls": [{"scheme":"ip", "id":"192.168.1.192", "perms":"*"}, {"scheme":"ip", "id":"192.168.1.0/24", "perms":"r"}]}
defaultAcl=

Dockerfile

# Dockerfile for zkui
FROM openjdk:8-jdk-alpine
# maintainer name and email (MAINTAINER is deprecated; Docker treats a trailing
# "# comment" on an instruction line as part of the value, so use LABEL instead)
LABEL maintainer="xxxxxx <xxxxx@xxx.xx>"
WORKDIR /var/app
COPY zkui-2.0-SNAPSHOT.jar /var/app/zkui.jar
COPY config.cfg /var/app/config.cfg
EXPOSE 9090
ENTRYPOINT ["java","-jar","/var/app/zkui.jar"]

Files in the build directory:

config.cfg
Dockerfile
zkui-2.0-SNAPSHOT.jar

Build the image:

docker build -t zkui:v2.0 .
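Before deploying, the image can be smoke-tested locally. This is a sketch: it assumes Docker is available on the node and that the zk addresses in config.cfg are reachable from it (if they are not, zkui still starts and serves its login page).

```shell
# Start the image in the background, mapping the zkui port
docker run -d --name zkui-test -p 9090:9090 zkui:v2.0

# Give the JVM a moment, then check that the web UI answers on port 9090
sleep 5
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9090/

# Clean up the test container
docker rm -f zkui-test
```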

Deploy from the master node:

zkui.yaml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s.kuboard.cn/layer: ''
    k8s.kuboard.cn/name: zkui
  name: zkui
  namespace: middle  # same namespace as the zk cluster
spec:
  selector:
    matchLabels:
      k8s.kuboard.cn/layer: ''
      k8s.kuboard.cn/name: zkui
  template:
    metadata:
      labels:
        k8s.kuboard.cn/layer: ''
        k8s.kuboard.cn/name: zkui
    spec:
      containers:
        - image: 'zkui:v2.0'
          imagePullPolicy: IfNotPresent   # use the local image first; pull from a registry only if absent
          name: zkui
      dnsPolicy: ClusterFirst
      nodeName: host-node3   # pin the pod to the node where the image was built
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s.kuboard.cn/layer: ''
    k8s.kuboard.cn/name: zkui
  name: zkui
  namespace: middle
spec:
  externalTrafficPolicy: Cluster
  ports:
    - name: zkui-web
      nodePort: 30002   # within the open NodePort range 30000-3xxxxx
      port: 9090
      protocol: TCP
      targetPort: 9090
  selector:
    k8s.kuboard.cn/layer: ''
    k8s.kuboard.cn/name: zkui
  sessionAffinity: None
  type: NodePort

Run on the master node:

kubectl  apply -f zkui.yaml
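After applying, the deployment can be verified roughly as follows (a sketch; replace <node-ip> with the IP of any cluster node, since the Service is a NodePort):

```shell
# The pod should reach Running (pod names are generated from the Deployment name)
kubectl get pods -n middle -l k8s.kuboard.cn/name=zkui

# The Service should list 9090:30002/TCP
kubectl get svc zkui -n middle

# The zkui web UI answers on any node's IP at the NodePort
curl -s -o /dev/null -w "%{http_code}\n" http://<node-ip>:30002/
```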

Pitfalls:

The original plan was to store zkui's config.cfg in a ConfigMap and mount it into the pod as a volume. After starting, the pod sat in CrashLoopBackOff, and the container log showed: Error: Unable to access jarfile zkui.jar. The cause: mounting a volume onto a directory hides the directory's original contents, just like a Linux mount, so zkui.jar disappeared. Mounting with subPath avoids that, but subPath mounts do not pick up ConfigMap updates (no hot reload), which defeats the purpose.
Two workarounds came to mind: fork zkui and change how it loads its configuration, or put the settings into ConfigMap environment variables and generate config.cfg from a startup script. Since the zkui configuration is unlikely to change for a long time, neither was implemented.
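For reference, the subPath variant that was considered and rejected looks roughly like this (a sketch; the ConfigMap name zkui-config is hypothetical). It keeps zkui.jar visible, because only the single file is overlaid, but kubelet does not propagate ConfigMap updates to subPath mounts, so edits to the ConfigMap never reach the running pod:

```yaml
# Hypothetical ConfigMap mount via subPath: only config.cfg is overlaid,
# so /var/app/zkui.jar survives -- but the file is a one-time copy and
# is NOT refreshed when the ConfigMap changes.
containers:
  - name: zkui
    image: 'zkui:v2.0'
    volumeMounts:
      - name: zkui-config
        mountPath: /var/app/config.cfg   # mount a single file, not the directory
        subPath: config.cfg
volumes:
  - name: zkui-config
    configMap:
      name: zkui-config                  # hypothetical ConfigMap holding config.cfg
```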