Hadoop PHP MapReduce (word count via Hadoop Streaming)

Environment: Hadoop 2.6.0


1. Create the mapper script /home/hadoop/mapper.php:

#!/usr/bin/php
<?php
 
$word2count = array();
 
// input comes from STDIN (standard input)
while (($line = fgets(STDIN)) !== false) {
   // remove leading and trailing whitespace and lowercase
   $line = strtolower(trim($line));
   // split the line into words while removing any empty string
   $words = preg_split('/\W/', $line, 0, PREG_SPLIT_NO_EMPTY);
   // increase counters
   foreach ($words as $word) {
       // initialize on first sight to avoid an undefined-index notice
       if (!isset($word2count[$word])) $word2count[$word] = 0;
       $word2count[$word] += 1;
   }
}
 
// write the results to STDOUT (standard output)
// what we output here will be the input for the
// Reduce step, i.e. the input for reducer.php
foreach ($word2count as $word => $count) {
   // tab-delimited
   echo $word, chr(9), $count, PHP_EOL;
}
 
?>
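Before wiring the script into Hadoop, a quick smoke test from the shell is worthwhile (this assumes the PHP CLI is installed; the sample input is just an illustration):

echo "foo foo quux labs foo bar quux" | php /home/hadoop/mapper.php

which should print tab-separated counts in first-seen order:

foo	3
quux	2
labs	1
bar	1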

2. Create the reducer script /home/hadoop/reducer.php:

#!/usr/bin/php
<?php
 
$word2count = array();
 
// input comes from STDIN
while (($line = fgets(STDIN)) !== false) {
    // remove leading and trailing whitespace
    $line = trim($line);
    // parse the input we got from mapper.php; skip malformed lines
    $parts = explode(chr(9), $line);
    if (count($parts) != 2) continue;
    list($word, $count) = $parts;
    // convert count (currently a string) to int
    $count = intval($count);
    // sum counts (initialize on first sight to avoid an undefined-index notice)
    if ($count > 0) {
        if (!isset($word2count[$word])) $word2count[$word] = 0;
        $word2count[$word] += $count;
    }
}
 
// sort the words lexicographically
//
// this step is NOT required; we just do it so that our
// final output will look more like the official Hadoop
// word count examples
ksort($word2count);
 
// write the results to STDOUT (standard output)
foreach ($word2count as $word => $count) {
    echo $word, chr(9), $count, PHP_EOL;
}
 
?>
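One caveat about this reducer: it holds every distinct word in $word2count, so memory grows with the vocabulary. That is harmless for a small word count, but since Hadoop delivers reducer input sorted by key, a constant-memory variant is possible. A minimal sketch of that alternative (not what this tutorial runs):

#!/usr/bin/php
<?php

// sum consecutive runs of the same word; sorted input guarantees
// that all lines for a given word arrive together
$current = null;
$sum = 0;

while (($line = fgets(STDIN)) !== false) {
    $line = trim($line);
    if ($line === '') continue;
    list($word, $count) = explode(chr(9), $line);
    if ($word === $current) {
        $sum += intval($count);
    } else {
        // a new run starts; flush the previous one
        if ($current !== null) echo $current, chr(9), $sum, PHP_EOL;
        $current = $word;
        $sum = intval($count);
    }
}

// flush the final run
if ($current !== null) echo $current, chr(9), $sum, PHP_EOL;

?>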

3. Make both scripts executable:

chmod +x /home/hadoop/mapper.php /home/hadoop/reducer.php
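With the scripts executable, you can simulate the whole job locally; sort stands in for Hadoop's shuffle-and-sort phase (the sample input is again just an illustration):

echo "foo foo quux labs foo bar quux" | /home/hadoop/mapper.php | sort -k1,1 | /home/hadoop/reducer.php

Expected output (ksort in the reducer yields alphabetical order):

bar	1
foo	3
labs	1
quux	2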

4. Create the directory /home/hadoop/test.

Create the file /home/hadoop/test/words with the following content:

i am Running the PHP code on Hadoop:
Download example input data
Like Michael, we will use three ebooks from Project Gutenberg for this example:

We want to count how many times each word appears in this text.

5. Upload the test directory to HDFS as gutenberg, so that gutenberg contains the words file:

bin/hadoop dfs -put /home/hadoop/test gutenberg

In HDFS, gutenberg resolves to the absolute path /user/hadoop/gutenberg.
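To confirm the upload, list the directory:

bin/hadoop dfs -ls gutenberg

(On 2.6.0, bin/hadoop dfs still works but prints a deprecation notice; bin/hdfs dfs is the current form.)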

6. Run the streaming job:

bin/hadoop jar share/hadoop/tools/lib/hadoop-streaming-2.6.0.jar -mapper /home/hadoop/mapper.php -reducer /home/hadoop/reducer.php -input gutenberg/* -output gutenberg-output
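This passes local filesystem paths for the mapper and reducer, which works here because the job runs on a single node (the log below shows the LocalJobRunner). On a multi-node cluster you would normally ship the scripts with the job via the streaming -file option, along these lines:

bin/hadoop jar share/hadoop/tools/lib/hadoop-streaming-2.6.0.jar \
    -file /home/hadoop/mapper.php  -mapper mapper.php \
    -file /home/hadoop/reducer.php -reducer reducer.php \
    -input gutenberg/* -output gutenberg-output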


7. View the results:

bin/hadoop dfs -cat gutenberg-output/part-00000
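For the three-line words file above, the result should be the 24 alphabetically sorted counts below (consistent with the Reduce output records=24 and Bytes Written=182 counters in the log that follows):

am	1
code	1
data	1
download	1
ebooks	1
example	2
for	1
from	1
gutenberg	1
hadoop	1
i	1
input	1
like	1
michael	1
on	1
php	1
project	1
running	1
the	1
this	1
three	1
use	1
we	1
will	1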



Execution log:

15/11/23 16:11:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/11/23 16:11:02 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
15/11/23 16:11:02 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
15/11/23 16:11:02 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
15/11/23 16:11:02 ERROR streaming.StreamJob: Error Launching job : Output directory hdfs://grande:9000/user/hadoop/gutenberg-output already exists
Streaming Command Failed!
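(The first attempt failed because Hadoop Streaming refuses to overwrite an existing output directory. Either remove it first, e.g. bin/hadoop dfs -rmr gutenberg-output, or pick a fresh name, as done next with gutenberg-output2.)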
[hadoop@grande hadoop]$ bin/hadoop  jar share/hadoop/tools/lib/hadoop-streaming-2.6.0.jar  -mapper /home/hadoop/mapper.php -reducer /home/hadoop/reducer.php  -input gutenberg/* -output gutenberg-output2
15/11/23 16:11:13 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/11/23 16:11:13 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
15/11/23 16:11:13 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
15/11/23 16:11:13 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
15/11/23 16:11:14 INFO mapred.FileInputFormat: Total input paths to process : 1
15/11/23 16:11:14 INFO mapreduce.JobSubmitter: number of splits:1
15/11/23 16:11:14 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local636012455_0001
15/11/23 16:11:14 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
15/11/23 16:11:14 INFO mapreduce.Job: Running job: job_local636012455_0001
15/11/23 16:11:14 INFO mapred.LocalJobRunner: OutputCommitter set in config null
15/11/23 16:11:14 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapred.FileOutputCommitter
15/11/23 16:11:14 INFO mapred.LocalJobRunner: Waiting for map tasks
15/11/23 16:11:14 INFO mapred.LocalJobRunner: Starting task: attempt_local636012455_0001_m_000000_0
15/11/23 16:11:14 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
15/11/23 16:11:14 INFO mapred.MapTask: Processing split: hdfs://grande:9000/user/hadoop/gutenberg/words:0+145
15/11/23 16:11:14 INFO mapred.MapTask: numReduceTasks: 1
15/11/23 16:11:15 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
15/11/23 16:11:15 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
15/11/23 16:11:15 INFO mapred.MapTask: soft limit at 83886080
15/11/23 16:11:15 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
15/11/23 16:11:15 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
15/11/23 16:11:15 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
15/11/23 16:11:15 INFO streaming.PipeMapRed: PipeMapRed exec [/home/hadoop/mapper.php]
15/11/23 16:11:15 INFO Configuration.deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
15/11/23 16:11:15 INFO Configuration.deprecation: mapred.local.dir is deprecated. Instead, use mapreduce.cluster.local.dir
15/11/23 16:11:15 INFO Configuration.deprecation: map.input.file is deprecated. Instead, use mapreduce.map.input.file
15/11/23 16:11:15 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
15/11/23 16:11:15 INFO Configuration.deprecation: map.input.length is deprecated. Instead, use mapreduce.map.input.length
15/11/23 16:11:15 INFO Configuration.deprecation: mapred.work.output.dir is deprecated. Instead, use mapreduce.task.output.dir
15/11/23 16:11:15 INFO Configuration.deprecation: map.input.start is deprecated. Instead, use mapreduce.map.input.start
15/11/23 16:11:15 INFO Configuration.deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
15/11/23 16:11:15 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
15/11/23 16:11:15 INFO Configuration.deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
15/11/23 16:11:15 INFO Configuration.deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
15/11/23 16:11:15 INFO Configuration.deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
15/11/23 16:11:15 INFO streaming.PipeMapRed: R/W/S=1/0/0 in:NA [rec/s] out:NA [rec/s]
15/11/23 16:11:15 INFO streaming.PipeMapRed: MRErrorThread done
15/11/23 16:11:15 INFO streaming.PipeMapRed: Records R/W=3/1
15/11/23 16:11:15 INFO streaming.PipeMapRed: mapRedFinished
15/11/23 16:11:15 INFO mapred.LocalJobRunner: 
15/11/23 16:11:15 INFO mapred.MapTask: Starting flush of map output
15/11/23 16:11:15 INFO mapred.MapTask: Spilling map output
15/11/23 16:11:15 INFO mapred.MapTask: bufstart = 0; bufend = 182; bufvoid = 104857600
15/11/23 16:11:15 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214304(104857216); length = 93/6553600
15/11/23 16:11:15 INFO mapred.MapTask: Finished spill 0
15/11/23 16:11:15 INFO mapred.Task: Task:attempt_local636012455_0001_m_000000_0 is done. And is in the process of committing
15/11/23 16:11:15 INFO mapred.LocalJobRunner: Records R/W=3/1
15/11/23 16:11:15 INFO mapred.Task: Task 'attempt_local636012455_0001_m_000000_0' done.
15/11/23 16:11:15 INFO mapred.LocalJobRunner: Finishing task: attempt_local636012455_0001_m_000000_0
15/11/23 16:11:15 INFO mapred.LocalJobRunner: map task executor complete.
15/11/23 16:11:15 INFO mapred.LocalJobRunner: Waiting for reduce tasks
15/11/23 16:11:15 INFO mapred.LocalJobRunner: Starting task: attempt_local636012455_0001_r_000000_0
15/11/23 16:11:15 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
15/11/23 16:11:15 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@929976
15/11/23 16:11:15 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=334338464, maxSingleShuffleLimit=83584616, mergeThreshold=220663392, ioSortFactor=10, memToMemMergeOutputsThreshold=10
15/11/23 16:11:15 INFO reduce.EventFetcher: attempt_local636012455_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
15/11/23 16:11:15 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local636012455_0001_m_000000_0 decomp: 232 len: 236 to MEMORY
15/11/23 16:11:15 INFO reduce.InMemoryMapOutput: Read 232 bytes from map-output for attempt_local636012455_0001_m_000000_0
15/11/23 16:11:15 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 232, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->232
15/11/23 16:11:15 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
15/11/23 16:11:15 INFO mapred.LocalJobRunner: 1 / 1 copied.
15/11/23 16:11:15 INFO reduce.MergeManagerImpl: finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
15/11/23 16:11:15 INFO mapred.Merger: Merging 1 sorted segments
15/11/23 16:11:15 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 227 bytes
15/11/23 16:11:15 INFO reduce.MergeManagerImpl: Merged 1 segments, 232 bytes to disk to satisfy reduce memory limit
15/11/23 16:11:15 INFO reduce.MergeManagerImpl: Merging 1 files, 236 bytes from disk
15/11/23 16:11:15 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
15/11/23 16:11:15 INFO mapred.Merger: Merging 1 sorted segments
15/11/23 16:11:15 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 227 bytes
15/11/23 16:11:15 INFO mapred.LocalJobRunner: 1 / 1 copied.
15/11/23 16:11:15 INFO streaming.PipeMapRed: PipeMapRed exec [/home/hadoop/reducer.php]
15/11/23 16:11:15 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
15/11/23 16:11:15 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
15/11/23 16:11:15 INFO streaming.PipeMapRed: R/W/S=1/0/0 in:NA [rec/s] out:NA [rec/s]
15/11/23 16:11:15 INFO streaming.PipeMapRed: R/W/S=10/0/0 in:NA [rec/s] out:NA [rec/s]
15/11/23 16:11:15 INFO streaming.PipeMapRed: Records R/W=24/1
15/11/23 16:11:15 INFO streaming.PipeMapRed: MRErrorThread done
15/11/23 16:11:15 INFO streaming.PipeMapRed: mapRedFinished
15/11/23 16:11:15 INFO mapred.Task: Task:attempt_local636012455_0001_r_000000_0 is done. And is in the process of committing
15/11/23 16:11:15 INFO mapred.LocalJobRunner: 1 / 1 copied.
15/11/23 16:11:15 INFO mapred.Task: Task attempt_local636012455_0001_r_000000_0 is allowed to commit now
15/11/23 16:11:15 INFO output.FileOutputCommitter: Saved output of task 'attempt_local636012455_0001_r_000000_0' to hdfs://grande:9000/user/hadoop/gutenberg-output2/_temporary/0/task_local636012455_0001_r_000000
15/11/23 16:11:15 INFO mapred.LocalJobRunner: Records R/W=24/1 > reduce
15/11/23 16:11:15 INFO mapred.Task: Task 'attempt_local636012455_0001_r_000000_0' done.
15/11/23 16:11:15 INFO mapred.LocalJobRunner: Finishing task: attempt_local636012455_0001_r_000000_0
15/11/23 16:11:15 INFO mapred.LocalJobRunner: reduce task executor complete.
15/11/23 16:11:15 INFO mapreduce.Job: Job job_local636012455_0001 running in uber mode : false
15/11/23 16:11:15 INFO mapreduce.Job:  map 100% reduce 100%
15/11/23 16:11:15 INFO mapreduce.Job: Job job_local636012455_0001 completed successfully
15/11/23 16:11:15 INFO mapreduce.Job: Counters: 38
	File System Counters
		FILE: Number of bytes read=210758
		FILE: Number of bytes written=732456
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=290
		HDFS: Number of bytes written=182
		HDFS: Number of read operations=15
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=4
	Map-Reduce Framework
		Map input records=3
		Map output records=24
		Map output bytes=182
		Map output materialized bytes=236
		Input split bytes=98
		Combine input records=0
		Combine output records=0
		Reduce input groups=24
		Reduce shuffle bytes=236
		Reduce input records=24
		Reduce output records=24
		Spilled Records=48
		Shuffled Maps =1
		Failed Shuffles=0
		Merged Map outputs=1
		GC time elapsed (ms)=0
		CPU time spent (ms)=0
		Physical memory (bytes) snapshot=0
		Virtual memory (bytes) snapshot=0
		Total committed heap usage (bytes)=588251136
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters 
		Bytes Read=145
	File Output Format Counters 
		Bytes Written=182
15/11/23 16:11:15 INFO streaming.StreamJob: Output directory: gutenberg-output2










