Entirely original content.
1. Flume
Mainly used for log collection.
Core configuration file:
agent002.sources = sources002
agent002.channels = channels002
agent002.sinks = sinks002
## define sources
agent002.sources.sources002.type = exec
agent002.sources.sources002.command = tail -F /log.input
## define channels
agent002.channels.channels002.type = memory
agent002.channels.channels002.capacity = 1000
agent002.channels.channels002.transactionCapacity = 1000
agent002.channels.channels002.byteCapacityBufferPercentage = 20
agent002.channels.channels002.byteCapacity = 8000
## define sinks
agent002.sinks.sinks002.type = org.apache.flume.sink.kafka.KafkaSink
agent002.sinks.sinks002.brokerList = 8.8.8.2:9093
agent002.sinks.sinks002.topic = topicTest
##relationship
agent002.sources.sources002.channels = channels002
agent002.sinks.sinks002.channel = channels002
Startup command:
/home/flume/bin/flume-ng agent -n agent002 -c /home/flume/conf -f /home/flume/conf/flume-kafka001.properties -Dflume.root.logger=DEBUG,console
2. Kafka
1. Start Kafka: bin/kafka-server-start.sh config/server.properties
2. Create the topic: bin/kafka-topics.sh --create --zookeeper 8.8.8.2:2181 --replication-factor 1 --partitions 1 --topic topicTest
3. Receive messages (consumer): bin/kafka-console-consumer.sh --zookeeper 8.8.8.2:2181 --topic topicTest --from-beginning
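Before wiring Flume in, it can help to sanity-check Kafka on its own. A minimal sketch, assuming this Kafka release ships the classic console producer (the broker address and topic are taken from the Flume sink config above):

```shell
# Produce a few test messages directly to the topic; the console
# consumer from step 3 should print them if the broker is healthy.
# 8.8.8.2:9093 and topicTest come from the Flume sink config above.
bin/kafka-console-producer.sh --broker-list 8.8.8.2:9093 --topic topicTest
```

Type a line, press Enter, and confirm it appears in the consumer terminal from step 3.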
Testing Flume and Kafka together:
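Since the exec source tails /log.input, an end-to-end check is just appending a line to that file and watching the console consumer. A minimal sketch (assumes the agent and the consumer from step 3 are both running, and that the shell can write to /log.input):

```shell
# Append a uniquely tagged line to the file the exec source tails;
# it should show up in the Kafka console consumer within a few seconds.
echo "flume-kafka e2e test $(date +%s)" >> /log.input
```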