flume-ng command help:
[root@hadoop01 apache-flume-1.6.0-bin]# ./bin/flume-ng help
Usage: ./flume-ng <command> [options]...
commands:
help display this help text
agent run a Flume agent
avro-client run an avro Flume client
version show Flume version info
global options:
--conf,-c <conf> use configs in <conf> directory
--classpath,-C <cp> append to the classpath
--dryrun,-d do not actually start Flume, just print the command
--plugins-path <dirs> colon-separated list of plugins.d directories. See the
plugins.d section in the user guide for more details.
Default: $FLUME_HOME/plugins.d
-Dproperty=value sets a Java system property value
-Xproperty=value sets a Java -X option
agent options:
--name,-n <name> the name of this agent (required)
--conf-file,-f <file> specify a config file (required if -z missing)
--zkConnString,-z <str> specify the ZooKeeper connection to use (required if -f missing)
--zkBasePath,-p <path> specify the base path in ZooKeeper for agent configs
--no-reload-conf do not reload config file if changed
--help,-h display help text
avro-client options:
--rpcProps,-P <file> RPC client properties file with server connection params
--host,-H <host> hostname to which events will be sent
--port,-p <port> port of the avro source
--dirname <dir> directory to stream to avro source
--filename,-F <file> text file to stream to avro source (default: std input)
--headerFile,-R <file> File containing event headers as key/value pairs on each new line
--help,-h display help text
Either --rpcProps or both --host and --port must be specified.
Note that if <conf> directory is specified, then it is always included first
in the classpath.
Testing Flume operation
Test servers:
IP: 192.168.226.151 hostname: hadoop01
IP: 192.168.226.152 hostname: hadoop02
IP: 192.168.226.153 hostname: hadoop03
Source tests
1. netcat source
Configuration file: example01.conf
# Configure agent a1
a1.sources=r1
a1.channels=c1
a1.sinks=k1
# Configure the source
a1.sources.r1.type=netcat
a1.sources.r1.bind=0.0.0.0
a1.sources.r1.port=8888
# Configure the sink
a1.sinks.k1.type=logger
# Configure the channel
a1.channels.c1.type=memory
a1.channels.c1.capacity=1000
a1.channels.c1.transactionCapacity=100
# Configure bindings (each sink is attached to exactly one channel)
a1.sources.r1.channels=c1
a1.sinks.k1.channel=c1
Start the agent (hadoop01):
.......
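The start command is elided above. A minimal sketch of a typical invocation for this config, assuming the agent name a1 from the config, a conf/ directory inside the Flume install, and example01.conf in the current directory:

```shell
# Start agent a1 with example01.conf.
# -Dflume.root.logger=INFO,console makes the logger sink's output
# visible on the console (paths/names here are assumptions).
./bin/flume-ng agent \
  --conf conf \
  --conf-file example01.conf \
  --name a1 \
  -Dflume.root.logger=INFO,console
```

The --name value must match the agent name used in the config file (a1 here), or Flume starts with no sources or sinks.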
telnet 192.168.226.151 8888
The agent (hadoop01) receives the content sent over the telnet connection.
2. avro source
Configuration file: example02.conf
# Configure agent a1
a1.sources=r1
a1.channels=c1
a1.sinks=k1
# Configure the source
a1.sources.r1.type=avro
a1.sources.r1.bind=0.0.0.0
a1.sources.r1.port=8888
# Configure the sink
a1.sinks.k1.type=logger
# Configure the channel
a1.channels.c1.type=memory
a1.channels.c1.capacity=1000
a1.channels.c1.transactionCapacity=100
# Configure bindings (each sink is attached to exactly one channel)
a1.sources.r1.channels=c1
a1.sinks.k1.channel=c1
Start the agent (hadoop01):
.......
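The command is elided here; a sketch of a typical invocation, assuming the agent name a1, a conf/ directory in the Flume install, and example02.conf in the current directory:

```shell
# Start agent a1 with the avro-source config example02.conf
# (paths and file location are assumptions).
./bin/flume-ng agent \
  --conf conf \
  --conf-file example02.conf \
  --name a1 \
  -Dflume.root.logger=INFO,console
```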
Start the avro client (hadoop01):
........
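The client command is elided; using the avro-client options from the help text above, sending log1.txt to the avro source on hadoop01:8888 would look roughly like this (the location of log1.txt is an assumption):

```shell
# Stream log1.txt to the avro source listening on 192.168.226.151:8888.
# --host/--port/--filename are the avro-client options shown in
# the flume-ng help output above.
./bin/flume-ng avro-client \
  --conf conf \
  --host 192.168.226.151 \
  --port 8888 \
  --filename log1.txt
```

With --filename omitted, avro-client reads events from standard input instead.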
The agent receives the contents of log1.txt.
3. exec source
Configuration file: example03.conf