Goal: use Oozie to schedule a shell script stored on HDFS.
Steps:
1. Extract the official Oozie example templates
[feng@hadoop129 oozie-4.0.0-cdh5.3.6]$ tar -zxf oozie-examples.tar.gz
2. Create a working directory
[feng@hadoop129 oozie-4.0.0-cdh5.3.6]$ mkdir feng-apps/
3. Copy the shell example template
[feng@hadoop129 oozie-4.0.0-cdh5.3.6]$ cp -a examples/apps/shell/ feng-apps/
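It can help to confirm the copy landed intact before editing anything. A minimal sketch using throwaway /tmp paths (not the real template directories): `cp -a` preserves modes and timestamps, and `diff -r` confirms the trees match.

```shell
# Hypothetical /tmp paths, standing in for examples/apps/shell and feng-apps/
mkdir -p /tmp/oozie-demo/examples/apps/shell
echo "dummy" > /tmp/oozie-demo/examples/apps/shell/job.properties
mkdir -p /tmp/oozie-demo/feng-apps
cp -a /tmp/oozie-demo/examples/apps/shell/ /tmp/oozie-demo/feng-apps/
# diff -r exits 0 only if both trees are identical
diff -r /tmp/oozie-demo/examples/apps/shell /tmp/oozie-demo/feng-apps/shell \
  && echo "copy verified"
```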
4. Create the shell script
[feng@hadoop129 shell]$ touch feng.sh
[feng@hadoop129 shell]$ vim feng.sh
[feng@hadoop129 shell]$ more feng.sh
#!/bin/bash
ssh feng@hadoop129 'echo "Oozie invoked:Hello, world" >> /home/feng/feng_oozie.log'
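Since the script only appends one line over ssh, it is easy to smoke-test locally before handing it to Oozie. A sketch that drops the ssh wrapper and targets a throwaway /tmp log instead of the real /home/feng/feng_oozie.log:

```shell
# Run the inner command against a temporary log file
LOG=/tmp/feng_oozie_test.log
rm -f "$LOG"
echo "Oozie invoked:Hello, world" >> "$LOG"
# Count matching lines; a single run should print 1
grep -c "Oozie invoked" "$LOG"
```

Also make sure the real script is executable (`chmod +x feng.sh`) before uploading it.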
5. Edit job.properties
Note: job.properties is a Java properties file, so comments must use `#`; the XML-style `<!-- -->` comments from the original would be read as (broken) property lines.
# NameNode address
nameNode=hdfs://hadoop130:8020
# YARN ResourceManager address (jobTracker is the legacy property name)
jobTracker=hadoop:8021
# Queue name
queueName=default
# Job root directory
examplesRoot=feng-apps
# HDFS path of the Oozie shell workflow
oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/shell
# Shell script to execute
EXEC=feng.sh
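Oozie resolves the EL variables in job.properties at submit time. The final workflow path can be previewed with plain shell substitution (values copied from the file above; `${user.name}` is assumed here to resolve to feng, the submitting user):

```shell
# Mirror Oozie's substitution of ${nameNode}/user/${user.name}/${examplesRoot}/shell
nameNode="hdfs://hadoop130:8020"
examplesRoot="feng-apps"
user_name="feng"   # ${user.name} resolves to the submitting user
echo "${nameNode}/user/${user_name}/${examplesRoot}/shell"
# prints: hdfs://hadoop130:8020/user/feng/feng-apps/shell
```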
6. Edit workflow.xml
<workflow-app xmlns="uri:oozie:workflow:0.4" name="shell-wf">
    <start to="shell-feng-node"/>
    <action name="shell-feng-node">
        <shell xmlns="uri:oozie:shell-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <exec>${EXEC}</exec>
            <!--
            <argument>my_output=Hello Oozie</argument>
            -->
            <!-- HDFS path of the script to execute; #${EXEC} is the symlink name in the working dir -->
            <file>/user/feng/feng-apps/shell/${EXEC}#${EXEC}</file>
            <capture-output/>
        </shell>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <!-- Left over from the template: unreachable here, since the action's ok transition goes straight to "end" -->
    <decision name="check-output">
        <switch>
            <case to="end">
                ${wf:actionData('shell-feng-node')['my_output'] eq 'Hello Oozie'}
            </case>
            <default to="fail-output"/>
        </switch>
    </decision>
    <kill name="fail">
        <message>Shell action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <kill name="fail-output">
        <message>Incorrect output, expected [Hello Oozie] but was [${wf:actionData('shell-feng-node')['my_output']}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
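A malformed workflow.xml only fails at submit time, so a quick local well-formedness check is worth doing first. A sketch using python3's stdlib XML parser on a minimal workflow written to /tmp (`xmllint --noout workflow.xml` works too, if installed); run the same parse command against the real file before uploading:

```shell
# Write a minimal workflow and verify it parses as XML
cat > /tmp/workflow-check.xml <<'EOF'
<workflow-app xmlns="uri:oozie:workflow:0.4" name="shell-wf">
  <start to="end"/>
  <end name="end"/>
</workflow-app>
EOF
python3 -c "import xml.etree.ElementTree as ET; ET.parse('/tmp/workflow-check.xml'); print('well-formed')"
```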
7. Upload the whole job directory to HDFS
/opt/module/cdh/hadoop-2.5.0-cdh5.3.6/bin/hadoop fs -rm -r feng-apps
/opt/module/cdh/hadoop-2.5.0-cdh5.3.6/bin/hadoop fs -put /opt/module/oozie-4.0.0-cdh5.3.6/feng-apps/
Delete the old feng-apps directory on HDFS first, then upload the new one.
8. Run the job
bin/oozie job -oozie http://hadoop129:11000/oozie -config feng-apps/shell/job.properties -run
Result (screenshots omitted):
Open the Oozie web UI and find the job you just submitted.
The run succeeded; check that the shell script wrote its log line to /home/feng/feng_oozie.log.
Done.