Chukwa Configuration and a Running Example

Goal:

Three nodes: as1, as2, as3:

as1 will be the Collector node; it will also provide the HDFS storage service.

as2 and as3 will be the Agent nodes; they will collect local files and send them to as1, which finally stores them in HDFS.

Prerequisites:

Linux + Java + SSH + Hadoop

Version Information:

hadoop-0.22.0

chukwa-incubating-0.5.0

Chukwa Configuration:

For Agent (as2 and as3):

Edit the file chukwa-env.sh under $CHUKWA_HOME/etc/chukwa:

export JAVA_HOME=your java home
comment out the following lines:
#export HADOOP_HOME
#export HADOOP_CONF_DIR
#export CHUKWA_PID_DIR
#export CHUKWA_LOG_DIR

Edit the file collectors under $CHUKWA_HOME/etc/chukwa:

http://as1:8080

Edit the file initial_adaptors under $CHUKWA_HOME/etc/chukwa:

add filetailer.FileTailingAdaptor FooData /tmp/chukwa_testing/testing 0

Note: this adaptor repeatedly tails a file (/tmp/chukwa_testing/testing), ignoring the content and using unspecified chunk boundaries.
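Besides listing adaptors in initial_adaptors at startup, a running agent accepts the same `add` command on its control port. A minimal sketch of the command's structure; the actual network call is shown commented out (it assumes an agent reachable at as2:9093 and that nc/netcat is installed), so the snippet is safe to run anywhere:

```shell
#!/bin/sh
# Build the same adaptor registration command used in initial_adaptors:
#   add <AdaptorClass> <DataType> <params> <offsetInBytes>
# Offset 0 means the adaptor starts reading from the beginning of the file.
ADAPTOR_CMD="add filetailer.FileTailingAdaptor FooData /tmp/chukwa_testing/testing 0"

# On a running agent the same command can be sent over the control port, e.g.:
#   printf '%s\n' "$ADAPTOR_CMD" | nc as2 9093

echo "$ADAPTOR_CMD"
```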

For Collector (as1):
For how to handle the IPC version mismatch issue between Hadoop and Chukwa, see:

http://my.oschina.net/xiangchen/blog/100359

Edit the file chukwa-env.sh under $CHUKWA_HOME/etc/chukwa:


export JAVA_HOME=your java home
export HADOOP_HOME=your hadoop home
export HADOOP_CONF_DIR=your hadoop conf home
comment out the following lines:
#export CHUKWA_PID_DIR
#export CHUKWA_LOG_DIR


Edit the file chukwa-collector-conf.xml under $CHUKWA_HOME/etc/chukwa:


<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl"  href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
	
  <property>
    <name>chukwaCollector.writerClass</name>
    <value>org.apache.hadoop.chukwa.datacollection.writer.PipelineStageWriter</value>
  </property>

  <property>
    <name>chukwaCollector.pipeline</name>
    <value>org.apache.hadoop.chukwa.datacollection.writer.SocketTeeWriter,org.apache.hadoop.chukwa.datacollection.writer.SeqFileWriter</value>
  </property>
  
  <property>
    <name>chukwaCollector.localOutputDir</name>
    <value>/tmp/chukwa/dataSink/</value>
    <description>Chukwa local data sink directory, see LocalWriter.java</description>
  </property>

  <property>
    <name>chukwaCollector.writerClass</name>
    <value>org.apache.hadoop.chukwa.datacollection.writer.localfs.LocalWriter</value>
    <description>Local chukwa writer, see LocalWriter.java</description>
  </property>

  <!-- When writing to HBase, uncomment the following parameters. If you're running
  HBase in distributed mode, you'll also need to copy your hbase-site.xml file with
  your hbase.zookeeper.quorum setting to the conf/ dir. -->
  <!-- HBaseWriter parameters -->
  <!--
  <property>
    <name>chukwaCollector.pipeline</name>
    <value>org.apache.hadoop.chukwa.datacollection.writer.SocketTeeWriter,org.apache.hadoop.chukwa.datacollection.writer.hbase.HBaseWriter</value>
  </property>

  <property>
    <name>hbase.demux.package</name>
    <value>org.apache.hadoop.chukwa.extraction.demux.processor</value>
    <description>Demux parser class package, HBaseWriter uses this package name to validate HBase for annotated demux parser classes.</description>
  </property>

  <property>
    <name>hbase.writer.verify.schema</name>
    <value>false</value>
    <description>Verify HBase Table schema with demux parser schema, log
    warning if there are mismatch between hbase schema and demux parsers.
    </description>
  </property>

  <property>
    <name>hbase.writer.halt.on.schema.mismatch</name>
    <value>false</value>
    <description>If this option is set to true, and HBase table schema 
    is mismatched with demux parser, collector will shut down itself.
    </description>
  </property>
	-->
  <!-- End of HBaseWriter parameters -->
  
  <property>
    <name>writer.hdfs.filesystem</name>
    <value>hdfs://as1:9000</value>
    <description>HDFS to dump to</description>
  </property>
  
  <property>
    <name>chukwaCollector.outputDir</name>
    <value>/chukwa/logs/</value>
    <description>Chukwa data sink directory</description>
  </property>

  <property>
    <name>chukwaCollector.rotateInterval</name>
    <value>300000</value>
    <description>Chukwa rotate interval (ms)</description>
  </property>

  <property>
    <name>chukwaCollector.isFixedTimeRotatorScheme</name>
    <value>false</value>
    <description>A flag to indicate that the collector should close at a fixed
    offset after every rotateInterval. The default value is false which uses
    the default scheme where collectors close after regular rotateIntervals.
    If set to true then specify chukwaCollector.fixedTimeIntervalOffset value.
    e.g., if isFixedTimeRotatorScheme is true and fixedTimeIntervalOffset is
    set to 10000 and rotateInterval is set to 300000, then the collector will
    close its files at 10 seconds past the 5 minute mark, if
    isFixedTimeRotatorScheme is false, collectors will rotate approximately
    once every 5 minutes
    </description>
  </property>

  <property>
    <name>chukwaCollector.fixedTimeIntervalOffset</name>
    <value>30000</value>
    <description>Chukwa fixed time interval offset value (ms)</description>
  </property>

  <property>
    <name>chukwaCollector.http.port</name>
    <value>8080</value>
    <description>The HTTP port number the collector will listen on</description>
  </property>

</configuration>
Note: this configuration enables writing to the local file system and to HDFS, and keeps HBase writing disabled.
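With LocalWriter enabled, the local sink directory from `chukwaCollector.localOutputDir` must exist before the collector starts. A small pre-flight sketch; the hadoop command for the HDFS sink is shown commented out, since it needs the running HDFS at hdfs://as1:9000:

```shell
#!/bin/sh
# Create the local data sink directory (chukwaCollector.localOutputDir above).
mkdir -p /tmp/chukwa/dataSink/

# The HDFS sink (chukwaCollector.outputDir) would be created on as1 with:
#   hadoop fs -mkdir /chukwa/logs/

ls -d /tmp/chukwa/dataSink/
```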


Run Chukwa and Example:

Start Collector (as1):

Go to $CHUKWA_HOME on as1:


./bin/chukwa collector
./bin/chukwa hicc


Monitor Collector (as1):

http://as1:8080/chukwa?ping=true

Start Agents (as2 and as3):


./bin/chukwa agent

Monitor Agents (as2 and as3):


telnet as2 9093
telnet as3 9093
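The agent control port speaks a simple line-based protocol; after connecting, the `list` command prints the currently registered adaptors with their IDs, which is a quick way to confirm the FileTailingAdaptor from initial_adaptors is active. A session sketch (the exact reply format may differ between Chukwa versions):

```
$ telnet as2 9093
list
  (one line per registered adaptor, including the FileTailingAdaptor above)
```

Press Ctrl+] and then type "quit" to leave telnet.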

Logs for Collector and Agents:

Under /$CHUKWA_HOME$/logs

Run Example:

For as2:


echo " Hello World From AS 2">>/tmp/chukwa_testing/testing
echo " Hello World From AS 2 :)">>/tmp/chukwa_testing/testing

For as3:



echo " Hello World From AS 3">>/tmp/chukwa_testing/testing
echo " Hello World From AS 3 :)">>/tmp/chukwa_testing/testing
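The appends above can be wrapped in a small script that also verifies the lines actually reached the tailed file. NODE is a hypothetical parameter added here for illustration ("AS 2" on as2, "AS 3" on as3); the path comes from the adaptor configuration:

```shell
#!/bin/sh
# Append test lines to the file watched by FileTailingAdaptor.
TESTFILE=/tmp/chukwa_testing/testing
NODE="AS 2"   # adjust per node, e.g. "AS 3" on as3

mkdir -p "$(dirname "$TESTFILE")"
echo " Hello World From $NODE" >> "$TESTFILE"
echo " Hello World From $NODE :)" >> "$TESTFILE"

# Confirm the lines are present in the tailed file:
grep -c "Hello World From $NODE" "$TESTFILE"
```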

Logs of the agents:



INFO HTTP post thread ChukwaHttpSender - collected 1 chunks for post_1
INFO HTTP post thread ChukwaHttpSender - >>>>>> HTTP post_1 to http://as1:8080/ length = 118
INFO HTTP post thread ChukwaHttpSender - >>>>>> HTTP Got success back from http://as1:8080/chukwa; response length 33
INFO HTTP post thread ChukwaHttpSender - post_1 sent 1 chunks, got back 1 acks


Files transferred to HDFS:
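The sink files can be inspected from as1. Files still being written end in .chukwa and are renamed to .done after each rotation (the 300000 ms rotateInterval above). A sketch that degrades gracefully when hadoop is not on the PATH:

```shell
#!/bin/sh
# Inspect the collector's HDFS sink (chukwaCollector.outputDir); run on as1
# against the running HDFS at hdfs://as1:9000.
SINK_DIR=/chukwa/logs/

if command -v hadoop >/dev/null 2>&1; then
  hadoop fs -ls "$SINK_DIR"
else
  echo "hadoop not on PATH; on as1 run: hadoop fs -ls $SINK_DIR"
fi
```

Rotated .done files are Hadoop SequenceFiles, so view them with `hadoop fs -text` rather than `-cat`.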




Reposted from: https://my.oschina.net/xiangchen/blog/100424
