Common Flume operations

Flume website (the official docs cover both the introduction and usage in detail): http://flume.apache.org/index.html

1、netcat–logger

# example.conf: A single-node Flume configuration

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
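The two memory-channel limits configured above interact in a way worth spelling out. A rough, simplified model (an assumption for illustration, not Flume's actual implementation):

```python
from collections import deque

# capacity            - max events the channel may buffer at once
# transactionCapacity - max events one put/take transaction may move
CAPACITY, TXN_CAPACITY = 1000, 100

channel = deque()

def put_batch(events):
    """Source side: commit a batch of events into the channel."""
    if len(events) > TXN_CAPACITY:
        raise ValueError("batch larger than transactionCapacity")
    if len(channel) + len(events) > CAPACITY:
        raise RuntimeError("channel full (capacity reached)")
    channel.extend(events)

def take_batch(n):
    """Sink side: drain up to n events in one transaction."""
    n = min(n, TXN_CAPACITY, len(channel))
    return [channel.popleft() for _ in range(n)]

put_batch([f"event-{i}" for i in range(100)])
taken = take_batch(100)
print(len(taken), len(channel))  # → 100 0
```

The practical takeaway: a source or sink batch larger than transactionCapacity fails outright, and if sinks fall behind, the channel fills up to capacity and then rejects puts.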

Start Flume:

flume-ng agent --conf-file /root/flume/tt --name a1 -Dflume.root.logger=INFO,console

On the node where the source runs, start telnet:

telnet localhost 44444

Type anything into the telnet session and press Enter; the Flume agent prints the event to the console.
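The telnet test can also be scripted. The sketch below is self-contained: the server thread is a simplified stand-in for the netcat source (an assumption, not Flume itself), which reads newline-terminated lines and, by default, acknowledges each one with "OK":

```python
import socket
import threading

def fake_netcat_source(server):
    """Stand-in for the netcat source: one line in, one OK ack out."""
    conn, _ = server.accept()
    with conn:
        conn.makefile("rb").readline()   # one event = one line
        conn.sendall(b"OK\n")

server = socket.socket()
server.bind(("localhost", 0))            # throwaway port, not 44444
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=fake_netcat_source, args=(server,), daemon=True).start()

# Client side: this is all telnet does under the hood.
client = socket.create_connection(("localhost", port))
client.sendall(b"hello flume\n")
reply = client.makefile("rb").readline()
client.close()
server.close()
print(reply.decode().strip())  # → OK
```

Against a real agent, pointing the client at localhost:44444 from the config above sends the line as one event and reads back the ack.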

2、exec–logger

a1.sources=r1
a1.channels=c1
a1.sinks=k1

a1.sources.r1.type=exec
a1.sources.r1.command=tail -F /root/flume/log

a1.sinks.k1.type=logger

a1.channels.c1.type=memory
a1.channels.c1.capacity=1000
a1.channels.c1.transactionCapacity=100

a1.sources.r1.channels=c1
a1.sinks.k1.channel=c1

Start Flume with the corresponding config file:

flume-ng agent --conf-file /root/flume/exec --name a1 -Dflume.root.logger=INFO,console

Flume now tails /root/flume/log; whenever new content is appended to the file, Flume picks it up and prints it.
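The key behaviour here comes from `tail -F`: only lines appended after the command starts become events, and the file keeps being followed even across rotation. The offset mechanics can be sketched as follows (a hypothetical temp file stands in for /root/flume/log):

```python
import os
import tempfile

fd, path = tempfile.mkstemp(suffix=".log")
os.close(fd)

with open(path, "w") as f:
    f.write("old line, already in the file\n")

offset = os.path.getsize(path)       # tail starts reading at the current end

with open(path, "a") as f:           # something appends to the log...
    f.write("new event\n")

with open(path) as f:
    f.seek(offset)
    new_lines = f.read().splitlines()  # ...and only the new line is seen

os.remove(path)
print(new_lines)  # → ['new event']
```

Note that the exec source gives no delivery guarantee: if the agent dies, lines appended while it is down are lost, which is one reason later Flume versions recommend the spooling-directory or taildir sources for reliable file collection.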

3、avro–logger

a1.sources = r1
a1.sinks = k1
a1.channels = c1

a1.sources.r1.type = avro
a1.sources.r1.bind = 192.168.30.101
a1.sources.r1.port = 55555

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Start Flume, then send a file with the avro client:

flume-ng agent --conf-file /root/flume/avro --name a1 -Dflume.root.logger=INFO,console
flume-ng avro-client -H 192.168.30.101 -p 55555 -F /root/flume/log
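Note the client must target the address the avro source is bound to. The avro-client reads the file given with -F and sends one event per line; a rough sketch of that framing (simplified, no Avro RPC, with hypothetical file content):

```python
# One event per line of the input file: the line text becomes the event
# body and the headers start out empty (assumption: simplified framing).
file_content = "first line\nsecond line\n"   # stand-in for /root/flume/log
events = [{"headers": {}, "body": line.encode()}
          for line in file_content.splitlines()]
print(len(events), events[0]["body"])  # → 2 b'first line'
```

Each of these events then shows up as one logger line on the agent side.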

4、netcat–hdfs

a1.channels=c1
a1.sources=r1
a1.sinks=k1

a1.sources.r1.type=netcat
a1.sources.r1.bind=localhost
a1.sources.r1.port=41416

a1.sinks.k1.type=hdfs
a1.sinks.k1.hdfs.path=hdfs://sx/myflume/%y-%m-%d
a1.sinks.k1.hdfs.useLocalTimeStamp=true

a1.channels.c1.type=memory

a1.sources.r1.channels=c1
a1.sinks.k1.channel=c1
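The hdfs sink expands escape sequences like %y-%m-%d in hdfs.path from the event's timestamp header; useLocalTimeStamp=true fills that header from the agent's local clock, which matters here because netcat events carry no timestamp of their own. The tokens line up with strftime (the timestamp below is an assumed example):

```python
from datetime import datetime

# %y = 2-digit year, %m = month, %d = day - same tokens the hdfs sink
# substitutes into hdfs.path when bucketing output by date.
ts = datetime(2024, 3, 5, 12, 0, 0)
bucket = ts.strftime("%y-%m-%d")
print(f"hdfs://sx/myflume/{bucket}")  # → hdfs://sx/myflume/24-03-05
```

So events arriving on different days land in different HDFS directories automatically.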

Start Flume:

flume-ng agent --conf-file /root/flume/netcat_hdfs --name a1 -Dflume.root.logger=INFO,console

Start telnet (against the matching port):

telnet localhost 41416

5、avro–hdfs

# Name the components on agent a1
a1.channels = c1
a1.sources = r1
a1.sinks = k1

a1.sources.r1.type = avro
a1.sources.r1.bind=localhost
a1.sources.r1.port=55555

a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://sxt/myflume

# Define a memory channel called c1 on a1
a1.channels.c1.type = memory
a1.channels.c1.capacity=1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Start Flume, then send a file with the avro client:

flume-ng agent --conf-file /root/flume/avro_hdfs --name a1 -Dflume.root.logger=INFO,console
flume-ng avro-client -H localhost -p 55555 -F /root/flume/log

6、exec–hdfs

a1.sources=r1
a1.channels=c1
a1.sinks=k1

a1.sources.r1.type=exec
a1.sources.r1.command=tail -F /root/flume/log

a1.sinks.k1.type=hdfs
a1.sinks.k1.hdfs.path = hdfs://sxt/myflume/qq

a1.channels.c1.type=memory
a1.channels.c1.capacity=1000
a1.channels.c1.transactionCapacity=100

a1.sources.r1.channels=c1
a1.sinks.k1.channel=c1

Start Flume:

flume-ng agent --conf-file /root/flume/exec_hdfs --name a1 -Dflume.root.logger=INFO,console

Flume now tails /root/flume/log; new content appended to the file is written to HDFS.
