Big Data Learning 12, Distributed Event Streaming Platform Kafka: Kafka API Programming

Setting Up the Development Environment with IDEA + Maven

1. Create a Scala project

(Screenshots of the IDEA new-project steps omitted.)

2. Set the Scala version

  <properties>
    <scala.version>2.11.8</scala.version>
    <kafka.version>0.9.0.0</kafka.version>
  </properties>

3. Add the Kafka dependency


In the kafka artifact, the artifactId suffix is the Scala version and the version is the Kafka version; both can be checked against the installation under $KAFKA_HOME.
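
Putting the two together, the dependency can look like this (a sketch assuming the org.apache.kafka:kafka_2.11 artifact, which pairs Scala 2.11 with Kafka 0.9.0.0 as set in the properties above):

  <dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>${kafka.version}</version>
  </dependency>
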
Create a java source directory, mark it as a sources root (shown in blue in IDEA), and create the package com.imooc.spark.kafka under it.

Using the Producer API

This section uses the old Kafka producer API (the kafka.javaapi classes that ship with Kafka 0.9), not the newer org.apache.kafka.clients API.

1. Create a configuration class

Create the class KafkaProperties under the com.imooc.spark.kafka package.

package com.imooc.spark.kafka;

/**
 * Common Kafka configuration constants
 */
public class KafkaProperties {

    public static final String ZK = "192.168.121.131:2181";

    public static final String TOPIC = "hello_topic";

    public static final String BROKER_LIST = "192.168.121.131:9092";

    public static final String GROUP_ID = "test_group1";

}

2. Create the Kafka producer class

Note which packages the imports come from:
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
import java.util.Properties;

package com.imooc.spark.kafka;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

import java.util.Properties;

/**
 * Kafka producer: sends a numbered message every two seconds
 */
public class KafkaProducer extends Thread {

    private String topic;

    private Producer<Integer, String> producer;

    public KafkaProducer(String topic) {
        this.topic = topic;

        Properties properties = new Properties();
        properties.put("metadata.broker.list", KafkaProperties.BROKER_LIST);
        properties.put("serializer.class", "kafka.serializer.StringEncoder");
        properties.put("request.required.acks", "1");  // wait for the leader's ack only

        producer = new Producer<Integer, String>(new ProducerConfig(properties));
    }

    @Override
    public void run() {
        int messageNo = 1;

        while (true) {
            String message = "message_" + messageNo;
            producer.send(new KeyedMessage<Integer, String>(topic, message));
            System.out.println("Sent: " + message);

            messageNo++;

            try {
                Thread.sleep(2000);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}

3. Create a test class

package com.imooc.spark.kafka;

/**
 * Kafka Java API test
 */
public class KafkaClientApp {

    public static void main(String[] args) {

        new KafkaProducer(KafkaProperties.TOPIC).start();
    }
}

4. Prepare the environment

ZooKeeper fails to start; connecting with zkCli.sh reports the error:

2021-05-30 14:19:28,435 [myid:] - INFO  [main-SendThread(hadoop000:2181):ClientCnxn$SendThread@975] - Opening socket connection to server hadoop000/192.168.107.128:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
[zk: localhost:2181(CONNECTING) 0] 2021-05-30 14:19:50,025 [myid:] - WARN  [main-SendThread(hadoop000:2181):ClientCnxn$SendThread@1102] - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2021-05-30 14:19:51,187 [myid:] - INFO  [main-SendThread(hadoop000:2181):ClientCnxn$SendThread@975] - Opening socket connection to server hadoop000/192.168.107.128:2181. Will not attempt to authenticate using SASL (unknown error)
2021-05-30 14:20:12,221 [myid:] - WARN  [main-SendThread(hadoop000:2181):ClientCnxn$SendThread@1102] - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)

From ifconfig, the VM's current IP address is 192.168.121.131:

[hadoop@hadoop000 Desktop]$ ifconfig
eth3      Link encap:Ethernet  HWaddr 00:0C:29:95:DD:9A  
          inet addr:192.168.121.131  Bcast:192.168.121.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe95:dd9a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:14783 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9430 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:8889974 (8.4 MiB)  TX bytes:1294114 (1.2 MiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:34 errors:0 dropped:0 overruns:0 frame:0
          TX packets:34 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2036 (1.9 KiB)  TX bytes:2036 (1.9 KiB)

But the hostname hadoop000 is still mapped to 192.168.107.128 in /etc/hosts:

[hadoop@hadoop000 Desktop]$ cat /etc/hosts
192.168.107.128 hadoop000
192.168.107.128 localhost

Switch to root with sudo su and edit /etc/hosts to map the hostname to 192.168.121.131.
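
The corrected file keeps the original two entries but points them at the new IP (mapping localhost to the VM's IP mirrors the original file; a stock system would keep localhost at 127.0.0.1):

192.168.121.131 hadoop000
192.168.121.131 localhost
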
Then start ZooKeeper and Kafka and verify: jps lists the processes, and jps -m also shows their launch arguments.
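
The exact start commands depend on the installation; with both bin directories on the PATH and $KAFKA_HOME pointing at the Kafka install, something like:

zkServer.sh start
kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties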

[hadoop@hadoop000 Desktop]$ jps
9104 Jps
7915 QuorumPeerMain
8603 Kafka
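
If the topic does not yet exist, it can be created first (a sketch using Kafka 0.9's kafka-topics.sh; a single partition and replica are enough for this test):

kafka-topics.sh --create --zookeeper hadoop000:2181 --replication-factor 1 --partitions 1 --topic hello_topic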

Then start a console consumer:

kafka-console-consumer.sh --zookeeper hadoop000:2181 --topic hello_topic

5. Run results

Running the Kafka program from IDEA on Windows to send messages to the consumer in the VM fails with:
kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
To debug: first, jps in the VM shows the ZooKeeper and Kafka services are running normally. Next, run telnet ip port from Windows (for example, telnet 192.168.121.131 9092) to check whether Windows can reach the VM's ZooKeeper port 2181 and Kafka port 9092. Finally, either edit server.properties under the Kafka config directory so the broker advertises the VM's IP rather than a hostname (see the sketch below), or edit the Windows hosts file C:\Windows\System32\drivers\etc\hosts and add a mapping in the format "VM IP  VM hostname":
192.168.121.131 hadoop000
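
A minimal sketch of the server.properties change for Kafka 0.9 (in the stock file these keys are commented out; advertised.host.name is the address the broker hands back to remote clients):

# server.properties: bind and advertise the VM's IP so external clients can connect
host.name=192.168.121.131
advertised.host.name=192.168.121.131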
(Source: https://blog.csdn.net/qq_35394891/article/details/80573150)
Problem solved.
(Screenshots of the successful run omitted: the producer logs Sent: message_N and the VM's console consumer prints the same messages.)

Using the Consumer API

1. Create the Kafka consumer class

Note which packages the imports come from:
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

package com.imooc.spark.kafka;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

/**
 * Kafka consumer
 */
public class KafkaConsumer extends Thread{

    private String topic;

    public KafkaConsumer(String topic){
        this.topic = topic;
    }

    private ConsumerConnector createConnector(){
        Properties properties = new Properties();
        properties.put("zookeeper.connect", KafkaProperties.ZK);
        properties.put("group.id", KafkaProperties.GROUP_ID);

        return Consumer.createJavaConsumerConnector(new ConsumerConfig(properties));
    }

    @Override
    public void run() {
        ConsumerConnector consumer = createConnector();

        Map<String,Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(topic,1);

        /**
         * Key   : topic
         * Value : List<KafkaStream<byte[], byte[]>>, the data streams for that topic
         */
        Map<String, List<KafkaStream<byte[], byte[]>>> messageStreams = consumer.createMessageStreams(topicCountMap);

        // Take the one stream we registered for this topic and read messages from it
        KafkaStream<byte[], byte[]> stream = messageStreams.get(topic).get(0);
        ConsumerIterator<byte[], byte[]> iterator = stream.iterator();

        while(iterator.hasNext()){
            String message = new String(iterator.next().message());
            System.out.println("rec:" + message);
        }
    }
}

2. Update the test class

package com.imooc.spark.kafka;

/**
 * Kafka Java API test
 */
public class KafkaClientApp {

    public static void main(String[] args) {

        new KafkaProducer(KafkaProperties.TOPIC).start();

        new KafkaConsumer(KafkaProperties.TOPIC).start();
    }
}


3. Run results

(Screenshots omitted: the console interleaves the producer's Sent: message_N lines with the consumer's rec:message_N lines.)
