Message Middleware

Kafka

Producer: the producer of messages.
Kafka cluster:
Broker: a Kafka instance. Each server can run one or more Kafka instances; for simplicity, think of each broker as one server. Every broker in a Kafka cluster has a unique id.
Topic: the subject of a message; think of it as a message category.
Partition: a subdivision of a topic. Each topic can have multiple partitions, which spread load and raise Kafka's throughput. Different partitions of the same topic hold different data.
Replication: each partition can have multiple replicas; when the leader partition fails, one replica is elected the new leader. The replication factor defaults to 1 and cannot exceed the number of brokers.
Consumer: the consumer of messages.
Consumer Group: within a consumer group, each partition is consumed by only one consumer.
Zookeeper: stores the cluster's metadata to help guarantee availability.
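To make the topic/partition/replication relationship concrete, here is a sketch that creates a topic programmatically with the kafka-clients AdminClient. The broker address and topic name follow the examples later in this post, and a running broker is assumed:

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class TopicAdminDemo {

    // replication factor must be at least 1 and cannot exceed the broker count
    static boolean validReplication(int replicas, int brokers) {
        return replicas >= 1 && replicas <= brokers;
    }

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.200.130:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // a topic with 3 partitions and replication factor 1
            NewTopic topic = new NewTopic("itcast-heima", 3, (short) 1);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}
```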
Basic usage:

Download Zookeeper
Rename zoo_sample.cfg to zoo.cfg
Start Zookeeper:
./zkServer.sh start | stop | status | restart

Install Kafka
Edit the configuration file (config/server.properties):

broker.id=0 # broker id, must be unique within the cluster
listeners=PLAINTEXT://192.168.200.130:9092 # broker ip and port
log.dirs=/root/kafkas/kafka1/logs # Kafka data directory
zookeeper.connect=192.168.200.130:2181 # Zookeeper address

Start Kafka:
./kafka-server-start.sh ../config/server.properties &
pom.xml

<properties>
    <kafka.client.version>2.0.1</kafka.client.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>${kafka.client.version}</version>
    </dependency>
</dependencies>

Message producer:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerDemo {

    private static final String TOPIC = "itcast-heima";

    public static void main(String[] args) {
        // Kafka configuration
        Properties properties = new Properties();
        // broker address
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.200.130:9092");
        // key/value serializers
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        // retry count
        properties.put(ProducerConfig.RETRIES_CONFIG, 10);

        // create the producer
        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);

        // build the message (topic, key, value)
        ProducerRecord<String, String> record = new ProducerRecord<>(TOPIC, "00001", "hello kafka !");
        // send the message
        try {
            producer.send(record);
        } catch (Exception e) {
            e.printStackTrace();
        }

        // close the producer
        producer.close();
    }
}
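Note that producer.send() is asynchronous: it returns a Future, and the call alone does not confirm delivery. A sketch of the two common acknowledgement patterns, blocking on the Future or passing a callback (same broker and topic assumed):

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class ProducerAckDemo {

    // pure helper: format an acknowledgement line as topic-partition@offset
    static String ackMessage(String topic, int partition, long offset) {
        return topic + "-" + partition + "@" + offset;
    }

    public static void main(String[] args) throws Exception {
        Properties properties = new Properties();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.200.130:9092");
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(properties)) {
            ProducerRecord<String, String> record =
                new ProducerRecord<>("itcast-heima", "00001", "hello kafka !");

            // 1) synchronous: block until the broker acknowledges
            RecordMetadata meta = producer.send(record).get();
            System.out.println(ackMessage(meta.topic(), meta.partition(), meta.offset()));

            // 2) asynchronous: the callback runs when the ack (or error) arrives
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace();
                } else {
                    System.out.println(ackMessage(metadata.topic(), metadata.partition(), metadata.offset()));
                }
            });
        }
    }
}
```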

Message consumer:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerDemo {

    private static final String TOPIC = "itcast-heima";

    public static void main(String[] args) {

        // Kafka configuration
        Properties properties = new Properties();
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.200.130:9092");
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        // consumer group
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, "group2");

        // create the consumer
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);
        // subscribe to the topic
        consumer.subscribe(Collections.singletonList(TOPIC));

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
                System.out.println(record.key());
            }
        }
    }
}
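By default enable.auto.commit is true and the consumer commits offsets periodically in the background, so a crash between poll and processing can silently skip messages. For at-least-once processing you can disable auto-commit and commit after each batch. A sketch, assuming the same broker, topic, and group as above:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitDemo {

    // pure helper: base consumer config, easy to inspect
    static Properties consumerProps(String groupId) {
        Properties p = new Properties();
        p.put("bootstrap.servers", "192.168.200.130:9092");
        p.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        p.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        p.put("group.id", groupId);
        // turn off background auto-commit; we commit after processing
        p.put("enable.auto.commit", "false");
        return p;
    }

    public static void main(String[] args) {
        try (KafkaConsumer<String, String> consumer =
                 new KafkaConsumer<>(consumerProps("group2"))) {
            consumer.subscribe(Collections.singletonList("itcast-heima"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value());
                }
                // commit the polled offsets only after the batch is processed
                if (!records.isEmpty()) {
                    consumer.commitSync();
                }
            }
        }
    }
}
```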

Spring Boot integration with Kafka
pom.xml

<properties>
    <kafka.version>2.2.7.RELEASE</kafka.version>
    <kafka.client.version>2.0.1</kafka.client.version>
    <fastjson.version>1.2.58</fastjson.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <!-- kafka -->
    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
        <version>${kafka.version}</version>
        <exclusions>
            <exclusion>
                <groupId>org.apache.kafka</groupId>
                <artifactId>kafka-clients</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-streams</artifactId>
        <version>${kafka.client.version}</version>
        <exclusions>
            <exclusion>
                <artifactId>connect-json</artifactId>
                <groupId>org.apache.kafka</groupId>
            </exclusion>
            <exclusion>
                <groupId>org.apache.kafka</groupId>
                <artifactId>kafka-clients</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>${kafka.client.version}</version>
    </dependency>
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>fastjson</artifactId>
        <version>${fastjson.version}</version>
    </dependency>
</dependencies>

application.yml

server:
  port: 9991
spring:
  application:
    name: kafka-demo
  kafka:
    bootstrap-servers: 192.168.200.130:9092 # Kafka address
    producer:
      retries: 10 # retry count
      # serializers; use the serializer matching the key/value type
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    consumer:
      group-id: test-hello-group # consumer group name
      # deserializers
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
Send a message:

@RestController
public class HelloController {

    @Autowired
    private KafkaTemplate<String,String> kafkaTemplate;

    @GetMapping("/hello")
    public String hello(){
        // first argument: topic
        // second argument: message payload
        kafkaTemplate.send("kafka-hello","你好世界");
        return "ok";
    }
}
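KafkaTemplate.send() also returns a ListenableFuture, so the controller can react when the broker acknowledges instead of firing and forgetting. A sketch of a callback-based variant (the /hello-async endpoint and class name are illustrative, not part of the original demo):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HelloAsyncController {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    // pure helper: format the acknowledgement log line
    static String ackMessage(String topic, long offset) {
        return "sent to " + topic + " at offset " + offset;
    }

    @GetMapping("/hello-async")
    public String helloAsync() {
        kafkaTemplate.send("kafka-hello", "你好世界").addCallback(
            result -> System.out.println(
                ackMessage(result.getRecordMetadata().topic(),
                           result.getRecordMetadata().offset())),
            ex -> System.err.println("send failed: " + ex.getMessage()));
        return "ok";
    }
}
```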

Create a listener class (the listened topic must match the one the producer sends to, kafka-hello here):

@Component
public class HelloListener {

    @KafkaListener(topics = {"kafka-hello"})
    public void receiverMessage(ConsumerRecord<?,?> record){
        // record is never null inside a @KafkaListener method; read the payload directly
        System.out.println(record.value());
    }
}