First receive the data with the Kafka consumer API, then write it into HBase with the HBase API. This requires creating a Maven project.
I. Project layout:
1. constant package: holds common constants, such as the HBase configuration:
public static final Configuration hBaseConfiguration = HBaseConfiguration.create();
2. utils package: utility classes, one for reading the configuration file and one wrapping the HBase operations (details below).
3. hbaseconsumer package: the code that consumes what the producer published; the HBase wrapper also handles creating this project's HBase table, etc.
4. resources: the common Kafka and HBase settings, kept in kafka.properties.
II. Add the following dependencies to the POM:
Mainly the Kafka and HBase client jars.
<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.12</version>
        <scope>test</scope>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients -->
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>0.11.0.0</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.hbase/hbase-client -->
    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-client</artifactId>
        <version>1.3.1</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.hbase/hbase-server -->
    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-server</artifactId>
        <version>1.3.1</version>
    </dependency>
</dependencies>
III. kafka.properties
bootstrap.servers=hadoop102:9092,hadoop103:9092,hadoop104:9092
group.id=g1
enable.auto.commit=true
auto.commit.interval.ms=30000
key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
# Kafka topic to consume
kafka.topics=calllog
hbase.namespace=ns_telecom
hbase.table.name=ns_telecom:calllog
hbase.regions=6
# flag = 0
hbase.cf1=f1
# flag = 1
hbase.cf2=f2
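The key is named kafka.topics (plural), yet the consumer below subscribes to it as one topic. If the value should ever hold several topics, it can be split on commas before subscribing. A minimal sketch of that parsing, assuming comma-separated values (the TopicListDemo class and parseTopics helper are illustrative, not part of the original project):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Arrays;
import java.util.List;
import java.util.Properties;

public class TopicListDemo {
    // Split a comma-separated topic property into a trimmed list.
    static List<String> parseTopics(String raw) {
        return Arrays.asList(raw.trim().split("\\s*,\\s*"));
    }

    public static void main(String[] args) throws IOException {
        // Simulate the relevant line of kafka.properties.
        Properties props = new Properties();
        props.load(new StringReader("kafka.topics=calllog"));
        System.out.println(parseTopics(props.getProperty("kafka.topics"))); // [calllog]
        // A multi-topic value would also parse cleanly:
        System.out.println(parseTopics("calllog, smslog")); // [calllog, smslog]
    }
}
```

The resulting list can be passed to consumer.subscribe(...) directly, since it accepts any Collection of topic names.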
IV. PropertyUtil, a utility class for reading the configuration file
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class PropertyUtil {
    public static Properties properties = null;

    static {
        // Load kafka.properties from the classpath once, at class-load time.
        InputStream inputStream = ClassLoader.getSystemResourceAsStream("kafka.properties");
        try {
            properties = new Properties();
            properties.load(inputStream);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static String getProperty(String key) {
        return properties.getProperty(key);
    }
}
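PropertyUtil is a thin wrapper around java.util.Properties, so its behaviour can be checked without the real classpath resource by loading the same key/value text from a string. A small sketch (the hardcoded property text here stands in for kafka.properties, and PropertyUtilDemo is a hypothetical name):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class PropertyUtilDemo {
    public static void main(String[] args) throws IOException {
        Properties properties = new Properties();
        // Stand-in for ClassLoader.getSystemResourceAsStream("kafka.properties").
        properties.load(new StringReader(
                "kafka.topics=calllog\nhbase.table.name=ns_telecom:calllog\n"));
        System.out.println(properties.getProperty("kafka.topics"));      // calllog
        System.out.println(properties.getProperty("hbase.table.name")); // ns_telecom:calllog
        // Missing keys return null rather than throwing:
        System.out.println(properties.getProperty("no.such.key"));      // null
    }
}
```

Note that getProperty returns null for a missing key, so a typo in a key name (e.g. habse vs. hbase) surfaces only later as a NullPointerException.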
V. Constant, the constants class
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class Constant {
    // Shared HBase configuration, built from hbase-site.xml on the classpath.
    public static final Configuration hBaseConfiguration = HBaseConfiguration.create();
}
VI. HbaseConsumer, the consumer class
import java.io.IOException;
import java.util.Collections;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class HbaseConsumer {
    public static void main(String[] args) throws IOException {
        // The same properties file configures both the Kafka client and our own keys.
        KafkaConsumer<String, String> consumer =
                new KafkaConsumer<String, String>(PropertyUtil.properties);
        consumer.subscribe(Collections.singleton(PropertyUtil.getProperty("kafka.topics")));
        HbaseDao hbaseDao = new HbaseDao();
        while (true) {
            // Poll with a 100 ms timeout; each record value is one calllog line.
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                hbaseDao.puts(record.value());
                System.out.println(record.topic() + "---" + record.value());
            }
        }
    }
}
VII. HbaseUtil and HbaseDao
See the post https://blog.csdn.net/student__software/article/details/81782327
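HbaseDao.puts itself lives in the linked post, but a common pattern for a calllog table pre-split into hbase.regions=6 regions is to prefix the row key with a partition code so writes spread evenly across regions. A hedged sketch of that idea only; the field layout, getPartitionCode and genRowKey names here are assumptions, not the code from the linked post:

```java
public class RowKeyDemo {
    // Hash the caller and the yyyyMM part of the call time into one of
    // `regions` buckets, so one caller's rows for a month share a region.
    static int getPartitionCode(String caller, String buildTime, int regions) {
        String yearMonth = buildTime.substring(0, 6); // e.g. 20180814... -> 201808
        // floorMod keeps the result in [0, regions) even for negative hashes.
        return Math.floorMod(caller.hashCode() ^ yearMonth.hashCode(), regions);
    }

    // Row key layout: partition_caller_time_callee_flag_duration
    static String genRowKey(int partition, String caller, String buildTime,
                            String callee, String flag, String duration) {
        return String.join("_", String.valueOf(partition), caller, buildTime,
                callee, flag, duration);
    }

    public static void main(String[] args) {
        // One calllog record: caller, callee, yyyyMMddHHmmss, duration.
        String caller = "15837312345", callee = "13737312345";
        String time = "20180814120000", duration = "0180";
        int p = getPartitionCode(caller, time, 6);
        System.out.println(genRowKey(p, caller, time, callee, "1", duration));
    }
}
```

With such a key, HbaseDao.puts would build a Put from the generated row key, write the parsed fields into the f1 or f2 column family depending on the flag, and send it through an HBase Table handle.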