【Kafka Download, Installation, and Configuration】

I. Download, Installation, and Configuration

1. Download from the official website: Apache Kafka

2. Create a kafka folder under /usr/local

        mkdir /usr/local/kafka

3. Extract the archive into the /usr/local/kafka directory

        tar -zxvf kafka_2.12-2.2.0.tgz -C /usr/local/kafka

4. Create a kafka-logs folder under /usr/local/kafka/kafka_2.12-2.2.0

        mkdir /usr/local/kafka/kafka_2.12-2.2.0/kafka-logs

5. Edit the Kafka configuration file

        In the Kafka home directory, find server.properties in the config folder (or config/kraft in newer releases that support KRaft mode) and edit it:

        vim config/server.properties

        Before starting the service, three parameters must be set in server.properties: broker.id, log.dirs, and zookeeper.connect

        broker.id=0

        log.dirs=/usr/local/kafka/kafka_2.12-2.2.0/kafka-logs

        zookeeper.connect=localhost:2181

        delete.topic.enable=true

        advertised.listeners=PLAINTEXT://localhost:9092

delete.topic.enable=true: this setting determines whether topics can later be deleted from Kafka; simply append it at the end of the file.

listeners=PLAINTEXT://:9092: this setting is also important and worth remembering (it is not analyzed further in this article).

advertised.listeners=PLAINTEXT://<your host IP>:9092: in place of localhost here, I used the host's IP address.

6. Configure the Kafka environment variables so that the ZooKeeper and Kafka commands are available globally

vim /etc/profile

Append the following lines at the end:

export KAFKA_HOME=/usr/local/kafka/kafka_2.12-2.2.0
export PATH=$KAFKA_HOME/bin:$PATH

Apply the changes:

 source /etc/profile

7. Verify that Kafka is configured correctly

echo $KAFKA_HOME
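
As an extra check (a small sketch, assuming the PATH export above), confirm that the Kafka scripts now resolve through PATH:

        which kafka-server-start.sh

This should print /usr/local/kafka/kafka_2.12-2.2.0/bin/kafka-server-start.sh.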

II. Using Kafka

1. Start the ZooKeeper service, using this command from the Kafka root directory

Kafka depends on ZooKeeper, so start ZooKeeper first and then Kafka, in that order.

The example below runs a single-instance ZooKeeper service. You can append an & to the command so that it keeps running after you leave the console (as a background process); the Kafka startup command works the same way.

./bin/zookeeper-server-start.sh config/zookeeper.properties &

2. Start Kafka, using this command from the Kafka root directory

./bin/kafka-server-start.sh config/server.properties &

Alternatively:

  1. Start ZooKeeper
    1. ./bin/zookeeper-server-start.sh config/zookeeper.properties &
    2. In the background: nohup ./bin/zookeeper-server-start.sh config/zookeeper.properties >/dev/null 2>&1 &
  2. Start Kafka
    1. ./bin/kafka-server-start.sh config/server.properties &
    2. In the background: nohup ./bin/kafka-server-start.sh config/server.properties >/dev/null 2>&1 &
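
With both services up, a quick smoke test (a sketch, assuming the localhost listener configured above and the python_test topic used by the client code later in this article) is to create a topic and exchange a message through the console clients, from the Kafka root directory:

./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic python_test

./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic python_test

./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic python_test --from-beginning

Lines typed into the producer console should appear in the consumer console.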

3. Stop the services

Stop Kafka first, using this command from the Kafka root directory:

./bin/kafka-server-stop.sh

Then stop ZooKeeper, also from the Kafka root directory (the script ships with Kafka):

./bin/zookeeper-server-stop.sh

Verify that both have stopped:

jps
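
While both services are running, jps output looks roughly like this (PIDs will differ); after a clean shutdown the Kafka and QuorumPeerMain entries disappear:

12345 Kafka
12346 QuorumPeerMain
23456 Jps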

Notes:

1. On a cloud server, open the required ports in both the firewall and the security group, otherwise clients cannot connect.

2. A startup error you may encounter:

ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID FzLBTDL8RY64cuEqwo_brQ doesn't match stored clusterId Some(RZMnIFBHSpSfhFGy_I6x_Q) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.

meta.properties lives in the broker's log directory (the log.dirs path; /opt/kafka/logs in my case) and records both broker.id and cluster.id. I had changed broker.id from 1 to 0 in server.properties (under /opt/kafka/config), but this particular error means the cluster.id stored in meta.properties does not match the cluster the broker is trying to join.

The fix is to edit meta.properties and set cluster.id to the first ID in the error message, i.e. the one before "doesn't match", not the one inside Some(...).

Save the file and restart, and the broker should start normally. An example follows.
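
For example (a sketch; adjust the path to wherever your log.dirs points, e.g. /usr/local/kafka/kafka_2.12-2.2.0/kafka-logs in the installation above):

        cat /opt/kafka/logs/meta.properties

        vim /opt/kafka/logs/meta.properties    # set cluster.id to the first ID from the error

Alternatively, if the broker holds no data you need to keep, deleting meta.properties and restarting also works: the broker recreates the file with the correct cluster ID.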

Java clients for Kafka:

pom.xml:

------------------------------------------

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.example</groupId>
    <artifactId>toolsjava</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>8</maven.compiler.source>
        <maven.compiler.target>8</maven.compiler.target>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>

    <dependencies>
        <dependency>
            <groupId>cn.hutool</groupId>
            <artifactId>hutool-all</artifactId>
            <version>5.8.10</version>
        </dependency>

        <dependency>
            <groupId>commons-codec</groupId>
            <artifactId>commons-codec</artifactId>
            <version>1.10</version>
        </dependency>

        <dependency>
            <groupId>io.netty</groupId>
            <artifactId>netty-all</artifactId>
            <version>4.1.66.Final</version>
        </dependency>

        <dependency>
            <groupId>io.netty</groupId>
            <artifactId>netty-buffer</artifactId>
            <version>4.1.49.Final</version>
        </dependency>

        <!-- Kafka client dependency -->
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>3.4.0</version>
        </dependency>

        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-streams</artifactId>
            <version>2.1.1</version>
        </dependency>

        <!-- slf4j dependency -->
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-nop</artifactId>
            <version>1.7.2</version>
        </dependency>


    </dependencies>

</project>

ConsumerTest001:

------------------------------------------------------------------------------------------------------------

package org.kafkatools;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;


public class ConsumerTest001 {

    private final static String TOPIC_NAME = "python_test";
    private final static String CONSUMER_GROUP_NAME = "testGroup001";

    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "11.11.11.11:9092");
        /**
         * 1. Consumer group name
         */
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, CONSUMER_GROUP_NAME);

        /**
         * 1.2 Configure key/value deserializers
         */
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        /**
         * 2. Create a consumer client
         */
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);

        /**
         * 3. Subscribe the consumer to a list of topics
         */
        consumer.subscribe(Arrays.asList(TOPIC_NAME));

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("nowtime=%s--收到的消息:partition= %d,offset= %d,key= %s,value=%s %n",System.currentTimeMillis(), record.partition(),
                        record.offset(), record.key(), record.value());
            }
        }
    }
}
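
The consumer polls in an infinite loop and prints every record it receives; stop it with Ctrl+C when you are done.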

ProducerTest001:

-------------------------------------------------------------------------------------------------

package org.kafkatools;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class ProducerTest001 {

    public static void main(String[] args) {

        // start time
        long stime = System.currentTimeMillis();


        Properties props = new Properties();
        // required
        props.put("bootstrap.servers", "11.11.11.11:9092");

        // Any message sent to the broker must be serialized to a byte array
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Optional parameters
        // acks=0: the producer does not wait for any acknowledgement of the send;
        // acks=all or -1: the producer waits until all ISR replicas have written the message;
        // acks=1: the producer waits only until the leader has written the message
        props.put("acks", "-1");
        // number of retries when a retriable exception occurs
        props.put("retries", 3);
        // The producer packs messages bound for the same partition into a batch;
        // when a batch fills up, all messages in it are sent, though the producer
        // does not always wait for a batch to be full before sending
        props.put("batch.size", 323840);
        // Send delay in ms; the default is 0, i.e. send immediately without waiting for the batch to fill
        props.put("linger.ms", 10);
        // Size in bytes of the buffer the producer uses to cache messages, 32 MB by default.
        // On startup the producer allocates this buffer for messages awaiting delivery,
        // and a dedicated thread reads messages from it to perform the actual sends
        props.put("buffer.memory", 33554432);
        // maximum message size the producer may send
        props.put("max.request.size", 10485760);
        // message compression codec, "none" by default
        props.put("compression.type", "lz4");
        // maximum time in ms to wait for a response after a send (the Kafka default is 30000)
        props.put("request.timeout.ms", 30000);

        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        for (int i = 0; i < 100; i++) {
            producer.send(new ProducerRecord<>("python_test", "key" + i, "value" + i + "--"+System.currentTimeMillis()));
        }

        producer.close();


        // end time
        long etime = System.currentTimeMillis();
        // compute the elapsed time
        System.out.println();
        System.out.printf("Elapsed: %d ms.", (etime - stime));
    }
}
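
Note that send() is asynchronous: it only enqueues the record in the producer's buffer and returns a Future. close() blocks until everything buffered has been transmitted, which is why all 100 messages reach the broker before the program exits.
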
==================================================================

Python 

Consumer:

----------------------------------------------------------------------------------------

#!/usr/bin/env python
# -*- coding: utf-8 -*-
from kafka import KafkaConsumer

# connect to Kafka server and pass the topic we want to consume

consumer = KafkaConsumer('python_test', group_id='test_group0',
                         bootstrap_servers='11.11.11.11:9092')


try:
    for msg in consumer:
        print(msg)
        print("%s:%d:%d: key=%s value=%s" % (msg.topic, msg.partition, msg.offset, msg.key, msg.value))
except KeyboardInterrupt as e:
    print(e)

Producer:

--------------------------------------------------------------------------

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import datetime
import json
import time
import uuid
from kafka import KafkaProducer
from kafka.errors import KafkaError


# your own Kafka service:
producer = KafkaProducer(bootstrap_servers='11.11.11.11:9092')

topic = 'python_test'


def kafkap01():
    print('begin')
    try:
        n = 0
        while True:
            dic = {}
            dic['id'] = n
            n = n + 1
            dic['myuuid'] = str(uuid.uuid4().hex)
            dic['time'] = datetime.datetime.now().strftime("%Y%m%d %H:%M:%S")
            producer.send(topic, json.dumps(dic).encode('utf-8'))
            print("send:" + json.dumps(dic))
            time.sleep(2)
    except KafkaError as e:
        print(e)
    finally:
        producer.close()
        print('done')


if __name__ == '__main__':
    kafkap01()
    # kafkap02()
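
kafka-python's send() likewise returns a future. If you need to be sure buffered messages have actually left at a given point, call producer.flush(); producer.close() (in the finally block above) also flushes anything still pending.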
