Kafka to HDFS

Copyright notice: this is an original article by the blogger and may not be reproduced without permission. https://blog.csdn.net/bigdataf/article/details/78596810

KaBoom - A High Performance Consumer Client for Kafka
https://github.com/blackberry/KaBoom

Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data.
http://flume.apache.org/
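A typical Flume pipeline for this use case pairs a Kafka source with an HDFS sink through a channel. The sketch below is a minimal agent configuration assuming that shape; the broker address, topic name, and HDFS path are placeholders to adapt to your cluster:

```properties
# Agent "a1": Kafka source -> memory channel -> HDFS sink
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Kafka source (placeholder broker and topic)
a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.kafka.bootstrap.servers = localhost:9092
a1.sources.r1.kafka.topics = logs
a1.sources.r1.channels = c1

# In-memory channel buffering events between source and sink
a1.channels.c1.type = memory

# HDFS sink writing plain text, partitioned by day (placeholder namenode)
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://namenode:8020/flume/logs/%Y-%m-%d
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.channel = c1
```

A memory channel is the simplest choice; swapping in a file channel trades throughput for durability if the agent crashes.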

Gobblin is a universal data ingestion framework for extracting, transforming, and loading large volumes of data from a variety of data sources, e.g., databases, REST APIs, FTP/SFTP servers, filers, etc., onto Hadoop.
http://gobblin.readthedocs.io/en/latest/case-studies/Kafka-HDFS-Ingestion/
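For Kafka-to-HDFS ingestion, Gobblin is driven by a job configuration file. The sketch below follows the shape shown in the linked case study; note that class names vary between pre-Apache and Apache Gobblin releases, and the broker and topic here are placeholders:

```properties
# Gobblin job pulling a Kafka topic onto HDFS as text files
job.name=KafkaToHDFS
job.group=Kafka

# Kafka source (package prefix differs in newer org.apache.gobblin releases)
source.class=gobblin.source.extractor.extract.kafka.KafkaSimpleSource
extract.namespace=gobblin.extract.kafka
kafka.brokers=localhost:9092
topic.whitelist=logs
bootstrap.with.offset=earliest

# Writer and publisher: plain text files, one directory per topic
writer.builder.class=gobblin.writer.SimpleDataWriterBuilder
writer.file.path.type=tablename
writer.destination.type=HDFS
writer.output.format=txt
data.publisher.type=gobblin.publisher.BaseDataPublisher
```

Gobblin tracks consumed offsets in its state store between runs, which is what makes repeated executions of this job incremental.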

The HDFS connector allows you to export data from Kafka topics to HDFS files in a variety of formats and integrates with Hive to make data immediately available for querying with HiveQL.
https://docs.confluent.io/current/connect/connect-hdfs/docs/hdfs_connector.html
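The HDFS connector runs inside Kafka Connect and is configured with a small properties file. A minimal sketch, following the connector's documented settings (the topic, namenode, and metastore addresses are placeholders):

```properties
# Kafka Connect HDFS sink connector
name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1
topics=logs

# Destination cluster and commit batch size (flush.size records per file)
hdfs.url=hdfs://namenode:8020
flush.size=1000

# Optional Hive integration: register written files as Hive partitions
hive.integration=true
hive.metastore.uris=thrift://metastore:9083
schema.compatibility=BACKWARD
```

With `hive.integration=true`, each committed file is added to a Hive external table, which is what makes the data "immediately available for querying with HiveQL."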

This is a Hadoop job for incremental loading of Kafka topics into HDFS.
https://github.com/amient/kafka-hadoop-loader
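The core idea behind incremental loading is offset checkpointing: each run consumes only records past the last committed offset, writes them out, and commits the new high-water mark. This is a minimal Python sketch of that logic, with an in-memory list standing in for a Kafka partition and the file writer omitted; it is an illustration of the technique, not kafka-hadoop-loader's actual code:

```python
def load_incrementally(records, committed_offset):
    """Return (batch, new_offset): all records strictly after committed_offset.

    records is a list of (offset, message) pairs, as a stand-in for a
    Kafka partition; new_offset is the high-water mark to commit.
    """
    batch = [(off, msg) for off, msg in records if off > committed_offset]
    new_offset = batch[-1][0] if batch else committed_offset
    return batch, new_offset

# Stand-in partition: (offset, message) pairs.
partition = [(0, "a"), (1, "b"), (2, "c"), (3, "d")]

# First run: nothing committed yet, so everything is loaded.
batch1, off1 = load_incrementally(partition, committed_offset=-1)

# New messages arrive; a second run loads only those.
partition += [(4, "e"), (5, "f")]
batch2, off2 = load_incrementally(partition, off1)

print(len(batch1), off1)  # 4 3
print(batch2, off2)       # [(4, 'e'), (5, 'f')] 5
```

In the real job the committed offset would be persisted (e.g. alongside the output files on HDFS) so that a crashed or rerun job resumes without duplicating data.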

https://www.cnblogs.com/zdfjf/p/5646525.html

