Deploying a Kafka Cluster on k8s

Preface

The goal of this post is to build a three-node Kafka cluster on k8s. Because the Kafka brokers need storage, we first prepare three Persistent Volumes (PVs).

Creating the Kafka PVs

First, create three shared directories on the NFS server:

mkdir -p /data/share/pv/{kafka01,kafka02,kafka03}
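If these directories really are served over NFS, the exports might look like the following sketch. The client subnet 192.168.0.0/24 is an assumption; adjust it to your network. Note that the PV manifest below mounts the paths as hostPath volumes, so the NFS export only matters if the directories live on a separate storage host.

# /etc/exports (hypothetical subnet)
/data/share/pv/kafka01 192.168.0.0/24(rw,sync,no_root_squash)
/data/share/pv/kafka02 192.168.0.0/24(rw,sync,no_root_squash)
/data/share/pv/kafka03 192.168.0.0/24(rw,sync,no_root_squash)

exportfs -ra    # reload the export table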

Each directory backs one of the three pods in the Kafka cluster. With the directories in place, write the PV manifest kafka-pv.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-pv-kafka01
  labels:
    app: kafka
  annotations:
    volume.beta.kubernetes.io/storage-class: "mykafka"
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/share/pv/kafka01
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-pv-kafka02
  labels:
    app: kafka
  annotations:
    volume.beta.kubernetes.io/storage-class: "mykafka"
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/share/pv/kafka02
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-pv-kafka03
  labels:
    app: kafka
  annotations:
    volume.beta.kubernetes.io/storage-class: "mykafka"
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/share/pv/kafka03
  persistentVolumeReclaimPolicy: Recycle

Create the PVs with the following command:

kubectl create -f kafka-pv.yaml

If kubectl reports all three PersistentVolumes as created, the PVs are in place.


We can then inspect the newly created PVs with:

kubectl get pv -o wide
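The output should look roughly like this (exact columns depend on the kubectl version; all three volumes should show Available):

NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
k8s-pv-kafka01   10Gi       RWO            Recycle          Available           mykafka                 5s
k8s-pv-kafka02   10Gi       RWO            Recycle          Available           mykafka                 5s
k8s-pv-kafka03   10Gi       RWO            Recycle          Available           mykafka                 5s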


Creating the Kafka Cluster

We deploy the three Kafka nodes as a StatefulSet, using the PVs we just created as the storage. The volumeClaimTemplates in the manifest carry the same mykafka storage-class annotation as the PVs, so each pod's PVC binds to one of them.

kafka.yaml

---
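# kafka-hs: a headless Service (clusterIP: None) that gives each StatefulSet pod
# a stable DNS name of the form kafka-<n>.kafka-hs.tools.svc.cluster.local.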

apiVersion: v1
kind: Service
metadata:
  name: kafka-hs
  namespace: tools
  labels:
    app: kafka
spec:
  ports:
  - port: 9092
    name: server
  clusterIP: None
  selector:
    app: kafka
---
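# kafka-cs: a NodePort Service exposing the brokers to clients outside the cluster.
# Note: 19092 lies outside the default NodePort range (30000-32767), so the
# apiserver must run with a widened --service-node-port-range for this to apply.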

apiVersion: v1
kind: Service
metadata:
  name: kafka-cs
  namespace: tools
  labels:
    app: kafka
spec:
  selector:
    app: kafka
  type: NodePort
  ports:
  - name: client
    port: 9092
    nodePort: 19092
---
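# kafka-pdb: keeps at least 2 of the 3 brokers running during voluntary
# disruptions such as node drains.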

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: kafka-pdb
  namespace: tools
spec:
  selector:
    matchLabels:
      app: kafka
  minAvailable: 2
---
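# The StatefulSet runs three brokers with stable identities kafka-0/1/2.
# broker.id is derived from the pod hostname (${HOSTNAME##*-}), required
# anti-affinity spreads brokers across nodes, and the preferred affinity
# co-locates each broker with a ZooKeeper (app=zk) pod.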

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
  namespace: tools
spec:
  serviceName: kafka-hs
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - kafka
            topologyKey: "kubernetes.io/hostname"
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: "app"
                  operator: In
                  values:
                  - zk
              topologyKey: "kubernetes.io/hostname"
      terminationGracePeriodSeconds: 300

      containers:
      - name: kafka
        imagePullPolicy: IfNotPresent
        image: registry.cn-hangzhou.aliyuncs.com/jaxzhai/k8skafka:v1
        resources:
          requests:
            memory: "1Gi"
            cpu: 500m
        ports:
        - containerPort: 9092
          name: server
        command:
        - sh
        - -c
        - "exec kafka-server-start.sh /opt/kafka/config/server.properties --override broker.id=${HOSTNAME##*-} \
          --override listeners=PLAINTEXT://:9092 \
          --override zookeeper.connect=zk-0.zk-hs.tools.svc.cluster.local:2181,zk-1.zk-hs.tools.svc.cluster.local:2181,zk-2.zk-hs.tools.svc.cluster.local:2181 \
          --override log.dir=/var/lib/kafka \
          --override auto.create.topics.enable=true \
          --override auto.leader.rebalance.enable=true \
          --override background.threads=10 \
          --override compression.type=producer \
          --override delete.topic.enable=true \
          --override leader.imbalance.check.interval.seconds=300 \
          --override leader.imbalance.per.broker.percentage=10 \
          --override log.flush.interval.messages=9223372036854775807 \
          --override log.flush.offset.checkpoint.interval.ms=60000 \
          --override log.flush.scheduler.interval.ms=9223372036854775807 \
          --override log.retention.bytes=-1 \
          --override log.retention.hours=168 \
          --override log.roll.hours=168 \
          --override log.roll.jitter.hours=0 \
          --override log.segment.bytes=1073741824 \
          --override log.segment.delete.delay.ms=60000 \
          --override message.max.bytes=1000012 \
          --override min.insync.replicas=1 \
          --override num.io.threads=8 \
          --override num.network.threads=3 \
          --override num.recovery.threads.per.data.dir=1 \
          --override num.replica.fetchers=1 \
          --override offset.metadata.max.bytes=4096 \
          --override offsets.commit.required.acks=-1 \
          --override offsets.commit.timeout.ms=5000 \
          --override offsets.load.buffer.size=5242880 \
          --override offsets.retention.check.interval.ms=600000 \
          --override offsets.retention.minutes=1440 \
          --override offsets.topic.compression.codec=0 \
          --override offsets.topic.num.partitions=50 \
          --override offsets.topic.replication.factor=3 \
          --override offsets.topic.segment.bytes=104857600 \
          --override queued.max.requests=500 \
          --override quota.consumer.default=9223372036854775807 \
          --override quota.producer.default=9223372036854775807 \
          --override replica.fetch.min.bytes=1 \
          --override replica.fetch.wait.max.ms=500 \
          --override replica.high.watermark.checkpoint.interval.ms=5000 \
          --override replica.lag.time.max.ms=10000 \
          --override replica.socket.receive.buffer.bytes=65536 \
          --override replica.socket.timeout.ms=30000 \
          --override request.timeout.ms=30000 \
          --override socket.receive.buffer.bytes=102400 \
          --override socket.request.max.bytes=104857600 \
          --override socket.send.buffer.bytes=102400 \
          --override unclean.leader.election.enable=true \
          --override zookeeper.session.timeout.ms=6000 \
          --override zookeeper.set.acl=false \
          --override broker.id.generation.enable=true \
          --override connections.max.idle.ms=600000 \
          --override controlled.shutdown.enable=true \
          --override controlled.shutdown.max.retries=3 \
          --override controlled.shutdown.retry.backoff.ms=5000 \
          --override controller.socket.timeout.ms=30000 \
          --override default.replication.factor=1 \
          --override fetch.purgatory.purge.interval.requests=1000 \
          --override group.max.session.timeout.ms=300000 \
          --override group.min.session.timeout.ms=6000 \
          --override inter.broker.protocol.version=0.10.2-IV0 \
          --override log.cleaner.backoff.ms=15000 \
          --override log.cleaner.dedupe.buffer.size=134217728 \
          --override log.cleaner.delete.retention.ms=86400000 \
          --override log.cleaner.enable=true \
          --override log.cleaner.io.buffer.load.factor=0.9 \
          --override log.cleaner.io.buffer.size=524288 \
          --override log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 \
          --override log.cleaner.min.cleanable.ratio=0.5 \
          --override log.cleaner.min.compaction.lag.ms=0 \
          --override log.cleaner.threads=1 \
          --override log.cleanup.policy=delete \
          --override log.index.interval.bytes=4096 \
          --override log.index.size.max.bytes=10485760 \
          --override log.message.timestamp.difference.max.ms=9223372036854775807 \
          --override log.message.timestamp.type=CreateTime \
          --override log.preallocate=false \
          --override log.retention.check.interval.ms=300000 \
          --override max.connections.per.ip=2147483647 \
          --override num.partitions=1 \
          --override producer.purgatory.purge.interval.requests=1000 \
          --override replica.fetch.backoff.ms=1000 \
          --override replica.fetch.max.bytes=1048576 \
          --override replica.fetch.response.max.bytes=10485760 \
          --override reserved.broker.max.id=1000 "

        env:
        - name: KAFKA_HEAP_OPTS
          value: "-Xmx1G -Xms1G"
        - name: KAFKA_OPTS
          value: "-Dlogging.level=INFO"
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/kafka
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "/opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server=localhost:9092"
      securityContext:
        runAsUser: 1000
        fsGroup: 1000

  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.beta.kubernetes.io/storage-class: "mykafka"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
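Given the headless Service, clients inside the cluster can bootstrap against the stable per-pod DNS names; the following bootstrap string follows directly from the Service and StatefulSet names above:

kafka-0.kafka-hs.tools.svc.cluster.local:9092,kafka-1.kafka-hs.tools.svc.cluster.local:9092,kafka-2.kafka-hs.tools.svc.cluster.local:9092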

Deploy it with kubectl apply -f kafka.yaml.
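Successful creation prints something along these lines (exact wording varies with the kubectl version):

service/kafka-hs created
service/kafka-cs created
poddisruptionbudget.policy/kafka-pdb created
statefulset.apps/kafka created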


Watch the pods come up with kubectl get pods -n tools.
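Once all brokers are up, the output should resemble the following (ages will differ):

NAME      READY   STATUS    RESTARTS   AGE
kafka-0   1/1     Running   0          3m
kafka-1   1/1     Running   0          2m
kafka-2   1/1     Running   0          1m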


All three pods should now be in the Running state. Next, check the Services with kubectl get svc -n tools.
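Expect something like this (the cluster IP shown is an example and will differ):

NAME       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kafka-cs   NodePort    10.96.123.45   <none>        9092:19092/TCP   3m
kafka-hs   ClusterIP   None           <none>        9092/TCP         3m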


You can see that broker port 9092 is exposed outside the cluster through NodePort 19092.
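An external client can therefore bootstrap against any node's address, for example (node-ip is a placeholder):

kafka-console-producer.sh --broker-list <node-ip>:19092 --topic test

One caveat: with listeners=PLAINTEXT://:9092 and no advertised.listeners override, the brokers advertise their pod hostnames in metadata responses, so external clients may still fail to connect after the initial bootstrap unless those names resolve; this is a common cause of "cannot connect" symptoms.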

Verifying that the Kafka Cluster Started Successfully

We can enter a broker container with kubectl exec -it kafka-1 -n tools -- /bin/bash.
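Inside the container, create and inspect a test topic, for example as follows. The ZooKeeper addresses come from the manifest above, and the Kafka bin directory is assumed to be on PATH, as the StatefulSet command suggests:

kafka-topics.sh --create --topic test \
  --zookeeper zk-0.zk-hs.tools.svc.cluster.local:2181,zk-1.zk-hs.tools.svc.cluster.local:2181,zk-2.zk-hs.tools.svc.cluster.local:2181 \
  --partitions 3 --replication-factor 3

kafka-topics.sh --describe --topic test \
  --zookeeper zk-0.zk-hs.tools.svc.cluster.local:2181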


If the topic is created successfully, our Kafka cluster has been deployed successfully!
