Chapter 5 YARN: A Resource Scheduling Platform

5.4 Running the YARN Cluster

HDFS is already up and running; verify with jps on each node:


     
     
    [root@node1 ~]# jps
    2247 NameNode
    2584 Jps
    2348 DataNode

     
     
    [root@node2 ~]# jps
    2279 Jps
    2137 DataNode
    2201 SecondaryNameNode

     
     
    [root@node3 ~]# jps
    5179 DataNode
    7295 Jps
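If any of these daemons are missing, bring HDFS up first before continuing. A minimal sketch, assuming the Hadoop sbin scripts are on the PATH as in the earlier chapters:

    # Start NameNode, DataNodes and SecondaryNameNode across the cluster
    [root@node1 ~]# start-dfs.sh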

5.4.1 Distributing the Configuration Files

Copy yarn-site.xml and mapred-site.xml from node1 to the same directory on node2 and node3:
     
     
    [root@node1 hadoop]# scp yarn-site.xml node2:/opt/hadoop-2.7.3/etc/hadoop/
    yarn-site.xml                 100%  938   0.9KB/s   00:00
    [root@node1 hadoop]# scp mapred-site.xml node2:/opt/hadoop-2.7.3/etc/hadoop/
    mapred-site.xml               100%  856   0.8KB/s   00:00
    [root@node1 hadoop]# scp yarn-site.xml node3:/opt/hadoop-2.7.3/etc/hadoop/
    yarn-site.xml                 100%  938   0.9KB/s   00:00
    [root@node1 hadoop]# scp mapred-site.xml node3:/opt/hadoop-2.7.3/etc/hadoop/
    mapred-site.xml               100%  856   0.8KB/s   00:00
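For reference, the two files distributed here were edited in the previous section. The sketch below shows a typical minimal configuration for this kind of three-node setup; the concrete property values are an assumption for illustration, not copied from this post:

    [root@node1 hadoop]# cat yarn-site.xml
    <configuration>
      <!-- assumed: the ResourceManager runs on node1 -->
      <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>node1</value>
      </property>
      <!-- assumed: NodeManagers provide the MapReduce shuffle service -->
      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
      </property>
    </configuration>
    [root@node1 hadoop]# cat mapred-site.xml
    <configuration>
      <!-- assumed: run MapReduce jobs on YARN instead of the local runner -->
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
    </configuration>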

5.4.2 Starting YARN

Run start-yarn.sh on node1; it starts the ResourceManager on node1 and a NodeManager on every node:
     
     
    [root@node1 ~]# start-yarn.sh
    starting yarn daemons
    starting resourcemanager, logging to /opt/hadoop-2.7.3/logs/yarn-root-resourcemanager-node1.out
    node3: starting nodemanager, logging to /opt/hadoop-2.7.3/logs/yarn-root-nodemanager-node3.out
    node2: starting nodemanager, logging to /opt/hadoop-2.7.3/logs/yarn-root-nodemanager-node2.out
    node1: starting nodemanager, logging to /opt/hadoop-2.7.3/logs/yarn-root-nodemanager-node1.out

     
     
    [root@node1 ~]# jps
    2753 NodeManager
    3041 Jps
    2247 NameNode
    2649 ResourceManager
    2348 DataNode

     
     
    [root@node2 ~]# jps
    2341 NodeManager
    2137 DataNode
    2201 SecondaryNameNode
    2443 Jps

     
     
    [root@node3 ~]# jps
    7350 NodeManager
    5179 DataNode
    7451 Jps
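Besides checking the processes with jps, the ResourceManager itself can be asked which NodeManagers have registered. A quick check (the exact output layout is omitted here):

    # List the NodeManagers registered with the ResourceManager
    [root@node1 ~]# yarn node -list
    # Expect three RUNNING nodes: node1, node2 and node3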

5.4.3 The Web UI

Once YARN is running, open the ResourceManager web UI at http://192.168.80.131:8088.

[Screenshots: YARN ResourceManager web UI]
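The same information is also exposed through the ResourceManager REST API, which is handy when no browser is available. A sketch with curl, assuming the ResourceManager listens on node1:8088 as above:

    # Cluster-level information (Hadoop version, ResourceManager state, ...)
    [root@node1 ~]# curl http://node1:8088/ws/v1/cluster/info
    # Registered nodes and submitted applications
    [root@node1 ~]# curl http://node1:8088/ws/v1/cluster/nodes
    [root@node1 ~]# curl http://node1:8088/ws/v1/cluster/apps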

5.4.4 Hadoop's Bundled Example Programs

Hadoop ships with a set of example MapReduce programs; the jar can be found under share/hadoop/mapreduce:

     
     
    [root@node1 ~]# cd /opt/hadoop-2.7.3/share/hadoop/mapreduce/
    [root@node1 mapreduce]# ll
    total 4972
    -rw-r--r-- 1 root root  537521 Aug 17  2016 hadoop-mapreduce-client-app-2.7.3.jar
    -rw-r--r-- 1 root root  773501 Aug 17  2016 hadoop-mapreduce-client-common-2.7.3.jar
    -rw-r--r-- 1 root root 1554595 Aug 17  2016 hadoop-mapreduce-client-core-2.7.3.jar
    -rw-r--r-- 1 root root  189714 Aug 17  2016 hadoop-mapreduce-client-hs-2.7.3.jar
    -rw-r--r-- 1 root root   27598 Aug 17  2016 hadoop-mapreduce-client-hs-plugins-2.7.3.jar
    -rw-r--r-- 1 root root   61745 Aug 17  2016 hadoop-mapreduce-client-jobclient-2.7.3.jar
    -rw-r--r-- 1 root root 1551594 Aug 17  2016 hadoop-mapreduce-client-jobclient-2.7.3-tests.jar
    -rw-r--r-- 1 root root   71310 Aug 17  2016 hadoop-mapreduce-client-shuffle-2.7.3.jar
    -rw-r--r-- 1 root root  295812 Aug 17  2016 hadoop-mapreduce-examples-2.7.3.jar
    drwxr-xr-x 2 root root    4096 Aug 17  2016 lib
    drwxr-xr-x 2 root root      30 Aug 17  2016 lib-examples
    drwxr-xr-x 2 root root    4096 Aug 17  2016 sources
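Running the examples jar without any arguments prints the list of bundled example programs (pi, wordcount, grep and others), which is an easy way to see what is available:

    # Print the names and short descriptions of the bundled examples
    [root@node1 mapreduce]# hadoop jar hadoop-mapreduce-examples-2.7.3.jar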

Estimating Pi

Run the pi example with 3 map tasks and 3 samples per map:

     
     
    [root@node1 mapreduce]# hadoop jar hadoop-mapreduce-examples-2.7.3.jar pi 3 3
    Number of Maps  = 3
    Samples per Map = 3
    Wrote input for Map #0
    Wrote input for Map #1
    Wrote input for Map #2
    Starting Job
    17/05/23 10:57:55 INFO client.RMProxy: Connecting to ResourceManager at node1/192.168.80.131:8032
    17/05/23 10:57:56 INFO input.FileInputFormat: Total input paths to process : 3
    17/05/23 10:57:56 INFO mapreduce.JobSubmitter: number of splits:3
    17/05/23 10:57:57 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1495550966527_0001
    17/05/23 10:57:58 INFO impl.YarnClientImpl: Submitted application application_1495550966527_0001
    17/05/23 10:57:58 INFO mapreduce.Job: The url to track the job: http://node1:8088/proxy/application_1495550966527_0001/
    17/05/23 10:57:58 INFO mapreduce.Job: Running job: job_1495550966527_0001
    17/05/23 10:58:17 INFO mapreduce.Job: Job job_1495550966527_0001 running in uber mode : false
    17/05/23 10:58:17 INFO mapreduce.Job:  map 0% reduce 0%
    17/05/23 10:59:02 INFO mapreduce.Job:  map 100% reduce 0%
    17/05/23 10:59:15 INFO mapreduce.Job:  map 100% reduce 100%
    17/05/23 10:59:16 INFO mapreduce.Job: Job job_1495550966527_0001 completed successfully
    17/05/23 10:59:16 INFO mapreduce.Job: Counters: 49
        File System Counters
            FILE: Number of bytes read=72
            FILE: Number of bytes written=475761
            FILE: Number of read operations=0
            FILE: Number of large read operations=0
            FILE: Number of write operations=0
            HDFS: Number of bytes read=777
            HDFS: Number of bytes written=215
            HDFS: Number of read operations=15
            HDFS: Number of large read operations=0
            HDFS: Number of write operations=3
        Job Counters
            Launched map tasks=3
            Launched reduce tasks=1
            Data-local map tasks=3
            Total time spent by all maps in occupied slots (ms)=127167
            Total time spent by all reduces in occupied slots (ms)=9302
            Total time spent by all map tasks (ms)=127167
            Total time spent by all reduce tasks (ms)=9302
            Total vcore-milliseconds taken by all map tasks=127167
            Total vcore-milliseconds taken by all reduce tasks=9302
            Total megabyte-milliseconds taken by all map tasks=130219008
            Total megabyte-milliseconds taken by all reduce tasks=9525248
        Map-Reduce Framework
            Map input records=3
            Map output records=6
            Map output bytes=54
            Map output materialized bytes=84
            Input split bytes=423
            Combine input records=0
            Combine output records=0
            Reduce input groups=2
            Reduce shuffle bytes=84
            Reduce input records=6
            Reduce output records=0
            Spilled Records=12
            Shuffled Maps =3
            Failed Shuffles=0
            Merged Map outputs=3
            GC time elapsed (ms)=1847
            CPU time spent (ms)=12410
            Physical memory (bytes) snapshot=711430144
            Virtual memory (bytes) snapshot=8312004608
            Total committed heap usage (bytes)=436482048
        Shuffle Errors
            BAD_ID=0
            CONNECTION=0
            IO_ERROR=0
            WRONG_LENGTH=0
            WRONG_MAP=0
            WRONG_REDUCE=0
        File Input Format Counters
            Bytes Read=354
        File Output Format Counters
            Bytes Written=97
    Job Finished in 81.368 seconds
    Estimated value of Pi is 3.55555555555555555556
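With only 3 maps and 3 samples per map the sampling-based estimate is very coarse, which is why the job reports 3.5555... rather than something close to 3.14. Re-running with more samples gives a much better value at the cost of a longer job; the parameters below are just an illustration:

    # 10 map tasks with 1000 samples each: slower, but a far closer estimate of Pi
    [root@node1 mapreduce]# hadoop jar hadoop-mapreduce-examples-2.7.3.jar pi 10 1000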
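The next example is wordcount, which counts word occurrences in the files under /user/root/input (two input files in this run). If that directory does not exist yet, it can be prepared along these lines; the local file names are hypothetical:

    # Create the HDFS input directory and upload some local text files (names are illustrative)
    [root@node1 mapreduce]# hdfs dfs -mkdir -p /user/root/input
    [root@node1 mapreduce]# hdfs dfs -put /root/words1.txt /root/words2.txt /user/root/input
    # The output directory must not exist when the job starts
    [root@node1 mapreduce]# hdfs dfs -rm -r -f /user/root/output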

     
     
    [root@node1 mapreduce]# hadoop jar hadoop-mapreduce-examples-2.7.3.jar wordcount /user/root/input /user/root/output
    17/05/23 11:01:34 INFO client.RMProxy: Connecting to ResourceManager at node1/192.168.80.131:8032
    17/05/23 11:01:36 INFO input.FileInputFormat: Total input paths to process : 2
    17/05/23 11:01:36 INFO mapreduce.JobSubmitter: number of splits:2
    17/05/23 11:01:37 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1495550966527_0002
    17/05/23 11:01:37 INFO impl.YarnClientImpl: Submitted application application_1495550966527_0002
    17/05/23 11:01:37 INFO mapreduce.Job: The url to track the job: http://node1:8088/proxy/application_1495550966527_0002/
    17/05/23 11:01:37 INFO mapreduce.Job: Running job: job_1495550966527_0002
    17/05/23 11:01:58 INFO mapreduce.Job: Job job_1495550966527_0002 running in uber mode : false
    17/05/23 11:01:58 INFO mapreduce.Job:  map 0% reduce 0%
    17/05/23 11:02:15 INFO mapreduce.Job:  map 100% reduce 0%
    17/05/23 11:02:25 INFO mapreduce.Job:  map 100% reduce 100%
    17/05/23 11:02:26 INFO mapreduce.Job: Job job_1495550966527_0002 completed successfully
    17/05/23 11:02:26 INFO mapreduce.Job: Counters: 49
        File System Counters
            FILE: Number of bytes read=89
            FILE: Number of bytes written=355953
            FILE: Number of read operations=0
            FILE: Number of large read operations=0
            FILE: Number of write operations=0
            HDFS: Number of bytes read=301
            HDFS: Number of bytes written=46
            HDFS: Number of read operations=9
            HDFS: Number of large read operations=0
            HDFS: Number of write operations=2
        Job Counters
            Launched map tasks=2
            Launched reduce tasks=1
            Data-local map tasks=2
            Total time spent by all maps in occupied slots (ms)=29625
            Total time spent by all reduces in occupied slots (ms)=7154
            Total time spent by all map tasks (ms)=29625
            Total time spent by all reduce tasks (ms)=7154
            Total vcore-milliseconds taken by all map tasks=29625
            Total vcore-milliseconds taken by all reduce tasks=7154
            Total megabyte-milliseconds taken by all map tasks=30336000
            Total megabyte-milliseconds taken by all reduce tasks=7325696
        Map-Reduce Framework
            Map input records=6
            Map output records=14
            Map output bytes=140
            Map output materialized bytes=95
            Input split bytes=216
            Combine input records=14
            Combine output records=7
            Reduce input groups=6
            Reduce shuffle bytes=95
            Reduce input records=7
            Reduce output records=6
            Spilled Records=14
            Shuffled Maps =2
            Failed Shuffles=0
            Merged Map outputs=2
            GC time elapsed (ms)=574
            CPU time spent (ms)=4590
            Physical memory (bytes) snapshot=514162688
            Virtual memory (bytes) snapshot=6236823552
            Total committed heap usage (bytes)=301146112
        Shuffle Errors
            BAD_ID=0
            CONNECTION=0
            IO_ERROR=0
            WRONG_LENGTH=0
            WRONG_MAP=0
            WRONG_REDUCE=0
        File Input Format Counters
            Bytes Read=85
        File Output Format Counters
            Bytes Written=46

While the wordcount job is running, we can watch its progress on the ResourceManager page at http://192.168.80.131:8088.

[Screenshots: YARN web UI showing the running wordcount application]
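After the job completes, the word counts can be read back from HDFS; the single reducer writes its result to a part-r-00000 file in the output directory (the actual counts depend on the input files):

    [root@node1 mapreduce]# hdfs dfs -ls /user/root/output
    [root@node1 mapreduce]# hdfs dfs -cat /user/root/output/part-r-00000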

Many thanks to the original author for their hard work. This post is reposted to guard against the original blog being deleted; it is for learning purposes only and not for any commercial use. https://blog.csdn.net/chengyuqiang/article/details/72666942