yarn rmadmin -getServiceState rm2    get the HA state (active/standby) of rm1 or rm2
yarn rmadmin -transitionToActive rm2 --forcemanual    manually force rm2 to become the active RM
yarn node -list
hdfs dfsadmin -refreshNodes
yarn rmadmin -refreshNodes
yarn rmadmin -refreshQueues    refresh queue configuration
yarn application -list
yarn application -status application_1499826928702_868175
yarn application -kill application_1499826928702_868175
yarn logs -applicationId appid    view application logs
ws/v1/cluster/apps?states=RUNNING
http://namenode00.host-mtime.com:8088/ws/v1/cluster/apps?states=RUNNING
curl --compressed -H "Accept: application/json" -X GET "http://lyhadoop4.com:8088/ws/v1/cluster/apps/application_1465461051654_0001"
http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html    REST API reference
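When jq is not installed, a rough grep pass is enough to pull application IDs out of the JSON that the ws/v1/cluster/apps endpoint returns. A minimal sketch against a hand-made sample payload (the sample is trimmed to two fields; the full field list is in the ResourceManager REST docs linked above):

```shell
# Hand-made sample of a ws/v1/cluster/apps?states=RUNNING response (assumed, trimmed shape).
json='{"apps":{"app":[{"id":"application_1465461051654_0001","state":"RUNNING"},{"id":"application_1465461051654_0002","state":"RUNNING"}]}}'
# Each "id" value sits in the 4th double-quote-delimited field of its match.
printf '%s\n' "$json" | grep -o '"id":"application[^"]*"' | cut -d'"' -f4
```

In practice, pipe the output of the curl call above into the same grep/cut pair.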
Job CPU usage
http://namenode01.host-mtime.com:19888/jobhistory/jobcounters/job_1494493840980_428367
Look up the total value of "CPU time spent (ms)".
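The same number is also exposed by the History Server REST API (ws/v1/history/mapreduce/jobs/&lt;job-id&gt;/counters), where the counter is named CPU_MILLISECONDS. A grep sketch against a hand-made sample payload (the payload shape here is a trimmed assumption based on that API):

```shell
# Hand-made sample of a jobhistory counters response, trimmed to one counter (assumed shape).
json='{"jobCounters":{"counterGroup":[{"counter":[{"name":"CPU_MILLISECONDS","totalCounterValue":8192}]}]}}'
# Pull totalCounterValue for the CPU_MILLISECONDS counter.
cpu_ms=$(printf '%s' "$json" \
  | grep -o '"name":"CPU_MILLISECONDS","totalCounterValue":[0-9]*' \
  | grep -o '[0-9]*$')
echo "CPU time spent (ms): $cpu_ms"
```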
Job memory
http://namenode01.host-mtime.com:19888/jobhistory/conf/job_1494493840980_428367
Compute (mapreduce.map.memory.mb * number of maps) + (mapreduce.reduce.memory.mb * number of reduces).
The map and reduce counts can be found at http://namenode01.host-mtime.com:19888/jobhistory/job/job_1494493840980_428367
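Once the four numbers are read off those pages, the total is plain shell arithmetic; the values below are made-up placeholders for a hypothetical job:

```shell
# Placeholder values -- read the real ones from the jobhistory conf and job pages.
map_memory_mb=2048      # mapreduce.map.memory.mb
reduce_memory_mb=4096   # mapreduce.reduce.memory.mb
num_maps=100
num_reduces=10
total_mb=$(( map_memory_mb * num_maps + reduce_memory_mb * num_reduces ))
echo "total container memory: ${total_mb} MB"   # 2048*100 + 4096*10 = 245760
```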
Job I/O
http://namenode01.host-mtime.com:19888/jobhistory/jobcounters/job_1494493840980_428367
Compute FILE: Number of bytes read + FILE: Number of bytes written + HDFS: Number of bytes read + (HDFS: Number of bytes written * 3).
HDFS: Number of bytes written is multiplied by 3 because each file is written in three replicas (the default HDFS replication factor).
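With placeholder counter values, the same formula as shell arithmetic:

```shell
# Placeholder counter values from a hypothetical job's counters page.
file_bytes_read=1000
file_bytes_written=2000
hdfs_bytes_read=3000
hdfs_bytes_written=4000
# HDFS writes count 3x because of the default replication factor of 3.
total_io=$(( file_bytes_read + file_bytes_written + hdfs_bytes_read + hdfs_bytes_written * 3 ))
echo "total I/O bytes: $total_io"   # 1000 + 2000 + 3000 + 4000*3 = 18000
```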
Job SQL
http://namenode01.host-mtime.com:19888/jobhistory/conf/job_1494493840980_428367
Look up hive.query.string; the value is URL-encoded and can be decoded with any online URL decoder.
To view the SQL of a running job: http://namenode01.host-mtime.com:8088/proxy/application_1499826928702_1021497/mapreduce/job/job_1499826928702_1021497
Click running -> application ID -> Tracking URL: ApplicationMaster -> job ID -> Configuration, then search the page for hive.query.string.
Decoding URL-encoded text in the shell:
echo "hive.query.string URL-encoded content" > urlfile.txt
while read -r url; do
    printf '%b\n' "$(printf '%s' "$url" | sed 's/\\/\\\\/g;s/%\([0-9a-fA-F][0-9a-fA-F]\)/\\x\1/g')"
done < urlfile.txt
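The loop above can be wrapped into a reusable function; a sketch that decodes %XX escapes only (it does not turn '+' into a space, and it relies on bash's printf expanding \xHH sequences in %b arguments):

```shell
# urldecode: expand %XX escapes by rewriting them to \xHH and letting printf %b interpret them.
urldecode() {
  printf '%b\n' "$(printf '%s' "$1" | sed 's/\\/\\\\/g; s/%\([0-9a-fA-F][0-9a-fA-F]\)/\\x\1/g')"
}
urldecode 'select%20count(1)%20from%20t'   # select count(1) from t
```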
YARN test
1) hadoop jar /home/hadoop/apache-hadoop/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar pi 100 10000
2) hdfs dfs -put words.txt hdfs://cloud01:9000/
hadoop jar /home/hadoop/apache-hadoop/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /words.txt /output
Command for interacting with MapReduce jobs (hadoop job).
Usage: hadoop job [GENERIC_OPTIONS] [-submit <job-file>] | [-status <job-id>] | [-counter <job-id> <group-name> <counter-name>] | [-kill <job-id>] | [-events <job-id> <from-event-#> <#-of-events>] | [-history [all] <jobOutputDir>] | [-list [all]] | [-kill-task <task-id>] | [-fail-task <task-id>]
Command option    Description
-submit <job-file>    Submits the job.
-status <job-id>    Prints the map and reduce completion percentage and all job counters.
-counter <job-id> <group-name> <counter-name>    Prints the counter value.
-kill <job-id>    Kills the specified job.
-events <job-id> <from-event-#> <#-of-events>    Prints the details of events received by the jobtracker for the given range.
-history [all] <jobOutputDir>    Prints job details, and details of failed and killed tasks. More details about the job, such as successful tasks and the task attempts made for each task, can be viewed by specifying the [all] option.
-list [all]    -list all displays all jobs; -list displays only jobs that are yet to complete.
-kill-task <task-id>    Kills the task. Killed tasks are NOT counted against failed attempts.
-fail-task <task-id>    Fails the task. Failed tasks are counted against failed attempts.