Overview
This document describes how Chukwa data is stored in HDFS and the processes that act on it.
HDFS File System Structure
The general layout of the Chukwa filesystem is as follows.
    /chukwa/
        archivesProcessing/
        dataSinkArchives/
        demuxProcessing/
        finalArchives/
        logs/
        postProcess/
        repos/
        rolling/
        temp/
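For reference, the same layout can be inspected directly through the Hadoop FileSystem API. The short listing program below is only a convenience sketch; it assumes the default /chukwa/ root shown above, and the ListChukwaLayout class name is made up for illustration.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Lists the top-level Chukwa directories in HDFS.
    // Assumes the default /chukwa/ root; adjust the path if your
    // deployment configures a different Chukwa root directory.
    public class ListChukwaLayout {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration(); // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);
        for (FileStatus status : fs.listStatus(new Path("/chukwa/"))) {
          System.out.println(status.getPath().getName() + (status.isDir() ? "/" : ""));
        }
        fs.close();
      }
    }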
Raw Log Collection and Aggregation Workflow
What data is stored where is best described by stepping through the Chukwa workflow.
- Collectors write chunks to logs/*.chukwa files until a 64 MB chunk size is reached or a given time interval has passed.
  - to: logs/*.chukwa
- Collectors close chunks and rename them to *.done
  - from: logs/*.chukwa
  - to: logs/*.done
- DemuxManager checks for *.done files every 20 seconds (see the poll-and-rename sketch after this list).
  - If *.done files exist, it moves the files into place for demux processing:
    - from: logs/*.done
    - to: demuxProcessing/mrInput
  - If demux is successful within 3 attempts, it archives the completed files:
    - from: demuxProcessing/mrOutput
    - to: dataSinkArchives/[yyyyMMdd]/*/*.done
    The corresponding moves in the code:
    - 2.1 moveDemuxOutputDirToPostProcessDirectory(demuxOutputDir, postProcessDir)
      - demuxOutputDir: demuxProcessing/mrOutput
      - postProcessDir: postProcess/demuxOutputDir_*/[clusterName]/[dataType]/[dataType]_[yyyyMMdd]_[HH].R.evt
    - 2.2 moveDataSinkFilesToArchiveDirectory(demuxInputDir, archiveDir)
      - demuxInputDir: demuxProcessing/mrInput
      - archiveDir: dataSinkArchives/
  - Otherwise it moves the completed files to an error folder:
    - from: demuxProcessing/mrOutput
    - to: dataSinkArchives/InError/[yyyyMMdd]/*/*.done
- PostProcessManager wakes up every few minutes and aggregates, orders and de-dups record files.
  - from: postProcess/demuxOutputDir_*/[clusterName]/[dataType]/[dataType]_[yyyyMMdd]_[HH].R.evt
  - to: repos/[clusterName]/[dataType]/[yyyyMMdd]/[HH]/[mm]/[dataType]_[yyyyMMdd]_[HH]_[N].[N].evt
- HourlyChukwaRecordRolling runs M/R jobs at 16 past the hour to group 5-minute logs into hourly files.
  - from: repos/[clusterName]/[dataType]/[yyyyMMdd]/[HH]/[mm]/[dataType]_[yyyyMMdd]_[mm].[N].evt
  - to: temp/hourlyRolling/[clusterName]/[dataType]/[yyyyMMdd]
  - to: repos/[clusterName]/[dataType]/[yyyyMMdd]/[HH]/[dataType]_HourlyDone_[yyyyMMdd]_[HH].[N].evt
  - leaves: repos/[clusterName]/[dataType]/[yyyyMMdd]/[HH]/rotateDone/
  - HourlyChukwaRecordRolling runs with the number of reduce tasks set to 1, so all files for a given hour are merged into a single output (see the single-reducer sketch after this list).
- DailyChukwaRecordRolling runs M/R jobs at 1:30 AM to group hourly logs into daily files.
  - from: repos/[clusterName]/[dataType]/[yyyyMMdd]/[HH]/[dataType]_[yyyyMMdd]_[HH].[N].evt
  - to: temp/dailyRolling/[clusterName]/[dataType]/[yyyyMMdd]
  - to: repos/[clusterName]/[dataType]/[yyyyMMdd]/[dataType]_DailyDone_[yyyyMMdd].[N].evt
  - leaves: repos/[clusterName]/[dataType]/[yyyyMMdd]/rotateDone/
- ChukwaArchiveManager aggregates and removes dataSinkArchives data roughly every half hour using M/R.
  - from: dataSinkArchives/[yyyyMMdd]/*/*.done
  - to: archivesProcessing/mrInput
  - to: archivesProcessing/mrOutput
  - to: finalArchives/[yyyyMMdd]/*/chukwaArchive-part-*
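The handoff from the collectors' sink files into demux described in the steps above is essentially a poll-and-rename loop over HDFS. The following is a minimal sketch of that pattern, not Chukwa's actual DemuxManager code; the DoneFilePoller class, the hard-coded /chukwa paths, and the 20-second interval simply restate the description above.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Illustrative poll-and-rename loop, modeled on the DemuxManager step above:
    // watch logs/ for *.done sink files and move them under demuxProcessing/mrInput
    // so a demux MapReduce job can pick them up. This class and its hard-coded
    // paths are a sketch, not Chukwa source code.
    public class DoneFilePoller {
      private static final Path SINK_DIR = new Path("/chukwa/logs");
      private static final Path MR_INPUT = new Path("/chukwa/demuxProcessing/mrInput");
      private static final long POLL_INTERVAL_MS = 20 * 1000L; // check every 20 seconds

      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        fs.mkdirs(MR_INPUT);
        while (true) {
          for (FileStatus status : fs.listStatus(SINK_DIR)) {
            String name = status.getPath().getName();
            if (name.endsWith(".done")) {
              // An HDFS rename is atomic, so demux never sees a half-moved file.
              fs.rename(status.getPath(), new Path(MR_INPUT, name));
            }
          }
          Thread.sleep(POLL_INTERVAL_MS);
        }
      }
    }

In Chukwa itself, the demux MapReduce job then runs over demuxProcessing/mrInput, and the files end up under dataSinkArchives (or dataSinkArchives/InError after failed attempts) as listed above.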
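The single-reducer note on HourlyChukwaRecordRolling is the standard MapReduce way to merge many small files: with exactly one reduce task, every record for the hour is routed to the same task and lands in one output file. The fragment below shows only that job-configuration idea using the old org.apache.hadoop.mapred API; the class name, example path values, and plain-text input are illustrative assumptions, not the actual HourlyChukwaRecordRolling job, which operates on the .evt record files above.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    // Shows only the "one reducer => one merged file" configuration idea.
    public class HourlyMergeJobSketch {
      public static void main(String[] args) throws Exception {
        String cluster = "myCluster"; // [clusterName] placeholder
        String dataType = "SysLog";   // [dataType] placeholder
        String day = "20240115";      // [yyyyMMdd] placeholder
        String hour = "10";           // [HH] placeholder

        JobConf job = new JobConf(HourlyMergeJobSketch.class);
        job.setJobName("hourly-merge-sketch");

        // All of the hour's 5-minute files go in as input ...
        FileInputFormat.setInputPaths(job, new Path(
            "/chukwa/repos/" + cluster + "/" + dataType + "/" + day + "/" + hour + "/*/*.evt"));
        // ... and the merged result is written under temp/hourlyRolling.
        FileOutputFormat.setOutputPath(job, new Path(
            "/chukwa/temp/hourlyRolling/" + cluster + "/" + dataType + "/" + day));

        // Defaults (identity mapper/reducer over text input) are enough for the sketch.
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);

        // The key point from the text: a single reduce task, therefore a single
        // merged output file for the whole hour.
        job.setNumReduceTasks(1);

        JobClient.runJob(job);
      }
    }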
Log Directories Requiring Cleanup
The following directories will grow over time and will need to be periodically pruned:
- finalArchives/[yyyyMMdd]/*
- repos/[clusterName]/[dataType]/[yyyyMMdd]/*.evt
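Chukwa does not prune these directories itself, so some external cleanup is needed. Below is a minimal sketch of one way to do it for finalArchives, assuming a 30-day retention window and the /chukwa/finalArchives/[yyyyMMdd] layout described above; the PruneFinalArchives class and the retention constant are hypothetical, not a tool shipped with Chukwa.

    import java.text.SimpleDateFormat;
    import java.util.Date;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Hypothetical cleanup tool: deletes finalArchives/[yyyyMMdd] directories
    // older than a retention window. The 30-day window and /chukwa root are
    // assumptions; adjust for your deployment.
    public class PruneFinalArchives {
      private static final int RETENTION_DAYS = 30;

      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        SimpleDateFormat dayFormat = new SimpleDateFormat("yyyyMMdd");
        long cutoff = System.currentTimeMillis() - RETENTION_DAYS * 24L * 60L * 60L * 1000L;

        for (FileStatus status : fs.listStatus(new Path("/chukwa/finalArchives/"))) {
          String name = status.getPath().getName();
          if (!name.matches("\\d{8}")) {
            continue; // skip anything that is not a [yyyyMMdd] directory
          }
          Date dirDate = dayFormat.parse(name);
          if (dirDate.getTime() < cutoff) {
            System.out.println("Deleting " + status.getPath());
            fs.delete(status.getPath(), true); // recursive delete
          }
        }
        fs.close();
      }
    }

The same kind of loop, applied a couple of levels deeper, works for repos/[clusterName]/[dataType]/[yyyyMMdd].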
Refer: http://wiki.apache.org/hadoop/Chukwa_Processes_and_Data_Flow
Refer: http://wiki.apache.org/hadoop/Chukwa?highlight=%28%28Chukwa_Processes_and_Data_Flow%29%29
Note: a background job named Chukwa-HourlyArchiveBuilder-Stream was also observed; still to be looked into.