Remote debugging MapReduce

  1. The JVM supports remote debugging natively, and Eclipse speaks JDWP as well. You only need to start each module's JVM with the following options:

-Xdebug -Xrunjdwp:transport=dt_socket,address=8000,server=y,suspend=y

 

What the options mean:

-Xdebug

Enables debugging support.

-Xrunjdwp

Loads the JDWP implementation; it takes several sub-options:

transport=dt_socket

The transport used between the JPDA front end and back end; dt_socket means socket transport.

address=8000

The JVM listens for debugger connections on port 8000; any port that does not conflict will do.

server=y

y means the launched JVM is the debuggee; n would mean it is the debugger.

suspend=y

y means the JVM pauses at startup and waits until a debugger attaches before continuing; with suspend=n it does not wait.
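Since JDK 5, the same options can also be written with the single -agentlib:jdwp option, which is the form used in the rest of this article. A minimal sketch, assuming a hypothetical main class Foo:

# Legacy flags (pre-JDK 5 style, still accepted):
java -Xdebug -Xrunjdwp:transport=dt_socket,address=8000,server=y,suspend=y Foo
# Equivalent modern form:
java -agentlib:jdwp=transport=dt_socket,address=8000,server=y,suspend=y Foo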
								

Note: for the child task JVMs it is enough to make this change in your project's configuration (mapred-site.xml):

 

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>192.168.1.219:9001</value>
    </property>

    <!-- Every child task JVM starts a JDWP agent on port 8883 and waits
         for a debugger to attach before running the task. -->
    <property>
        <name>mapred.child.java.opts</name>
        <value>-agentlib:jdwp=transport=dt_socket,address=8883,server=y,suspend=y</value>
    </property>

    <!-- Limit each TaskTracker to one map and one reduce slot so that only
         one child JVM tries to bind port 8883 at a time. -->
    <property>
        <name>mapred.tasktracker.map.tasks.maximum</name>
        <value>1</value>
    </property>
    <property>
        <name>mapred.tasktracker.reduce.tasks.maximum</name>
        <value>1</value>
    </property>

    <!-- Reuse the child JVM for an unlimited number of tasks (-1), so the
         debugger only needs to attach once. -->
    <property>
        <name>mapred.job.reuse.jvm.num.tasks</name>
        <value>-1</value>
    </property>
</configuration>
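Once a child task JVM is suspended on port 8883, you can attach from Eclipse (see below) or from the command line with jdb. A minimal sketch, assuming the task attempt runs on the host 192.168.1.219:

# Attach jdb to the suspended child JVM; host and port come from the config above.
jdb -connect com.sun.jdi.SocketAttach:hostname=192.168.1.219,port=8883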

How to use it from Eclipse:

Open Eclipse, go to Debug Configurations..., and add a new Remote Java Application. On the Source tab you can attach the Hive source code; then click the Debug button to enter remote-debug mode.
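In the connection settings, Host should be the machine running the JVM you want to debug (192.168.1.219 in the configuration above), and Port must match the address in the JDWP options (8883 for child tasks).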

 

To debug the Hadoop daemons themselves, set the corresponding *_OPTS variables in conf/hadoop-env.sh, uncommenting the ones you need:

HADOOP_NAMENODE_OPTS="-agentlib:jdwp=transport=dt_socket,address=8888,server=y,suspend=y"
#HADOOP_SECONDARYNAMENODE_OPTS="-agentlib:jdwp=transport=dt_socket,address=8789,server=y,suspend=y"
#HADOOP_DATANODE_OPTS="-agentlib:jdwp=transport=dt_socket,address=8790,server=y,suspend=y"
#HADOOP_BALANCER_OPTS="-agentlib:jdwp=transport=dt_socket,address=8791,server=y,suspend=y"
#HADOOP_JOBTRACKER_OPTS="-agentlib:jdwp=transport=dt_socket,address=8792,server=y,suspend=y"
#HADOOP_TASKTRACKER_OPTS="-agentlib:jdwp=transport=dt_socket,address=8793,server=y,suspend=y"
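Keep in mind that with suspend=y the NameNode will block on startup until a debugger attaches. A sketch of the workflow, assuming a standard Hadoop 1.x layout:

# Restart the daemons so the new options take effect.
bin/stop-all.sh
bin/start-all.sh
# The NameNode JVM now waits on port 8888; attach with Eclipse
# (Remote Java Application) or with jdb as shown earlier.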

 

 

 

To remote-debug the NameNode, SecondaryNameNode, DataNode, JobTracker, and TaskTracker, you need to modify the bin/hadoop script as follows:

 

if [ "$COMMAND" = "namenode" ] ; then

 

CLASS='org.apache.hadoop.hdfs.server.namenode.NameNode'

 

HADOOP_OPTS="$HADOOP_OPTS $HADOOP_NAMENODE_OPTS -agentlib:jdwp=transport=dt_socket,address=8888,server=y,suspend=n"

 

elif [ "$COMMAND" = "secondarynamenode" ] ; then

 

CLASS='org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode'

 

HADOOP_OPTS="$HADOOP_OPTS $HADOOP_SECONDARYNAMENODE_OPTS -agentlib:jdwp=transport=dt_socket,address=8887,server=y,suspend=n"

 

elif [ "$COMMAND" = "datanode" ] ; then

 

CLASS='org.apache.hadoop.hdfs.server.datanode.DataNode'

 

HADOOP_OPTS="$HADOOP_OPTS $HADOOP_DATANODE_OPTS -agentlib:jdwp=transport=dt_socket,address=8886,server=y,suspend=n"

 

……

 

elif [ "$COMMAND" = "jobtracker" ] ; then

 

CLASS=org.apache.hadoop.mapred.JobTracker

 

HADOOP_OPTS="$HADOOP_OPTS $HADOOP_JOBTRACKER_OPTS -agentlib:jdwp=transport=dt_socket,address=8885,server=y,suspend=n"

 

elif [ "$COMMAND" = "tasktracker" ] ; then

 

CLASS=org.apache.hadoop.mapred.TaskTracker

 

HADOOP_OPTS="$HADOOP_OPTS $HADOOP_TASKTRACKER_OPTS -agentlib:jdwp=transport=dt_socket,address=8884,server=y,suspend=n"

 

 

NameNode trigger:

public DirectoryListing getListing(String src, byte[] startAfter)
    throws IOException {
  DirectoryListing files = namesystem.getListing(src, startAfter);
  myMetrics.incrNumGetListingOps();
  if (files != null) {
    myMetrics.incrNumFilesInGetListingOps(files.getPartialListing().length);
  }
  return files;
}
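getListing serves directory-listing RPCs, so a breakpoint here fires as soon as any client lists a path, for example:

bin/hadoop fs -ls /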

 

SecondaryNameNode trigger:

private void initialize(final Configuration conf) throws IOException {
  final InetSocketAddress infoSocAddr = getHttpAddress(conf);
  infoBindAddress = infoSocAddr.getHostName();
  if (UserGroupInformation.isSecurityEnabled()) {
    SecurityUtil.login(conf,
        DFSConfigKeys.DFS_SECONDARY_NAMENODE_KEYTAB_FILE_KEY,
        DFSConfigKeys.DFS_SECONDARY_NAMENODE_USER_NAME_KEY,
        infoBindAddress);
  }
  // ... rest of initialize elided
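initialize runs once while the SecondaryNameNode is starting up, so the breakpoint fires simply by launching the daemon, e.g.:

bin/hadoop secondarynamenode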

 

 

DataNode trigger:

DataNode(final Configuration conf,
         final AbstractList<File> dataDirs, SecureResources resources)
    throws IOException {
  super(conf);
  SecurityUtil.login(conf, DFSConfigKeys.DFS_DATANODE_KEYTAB_FILE_KEY,
      DFSConfigKeys.DFS_DATANODE_USER_NAME_KEY);

  datanodeObject = this;
  supportAppends = conf.getBoolean("dfs.support.append", false);
  this.userWithLocalPathAccess = conf
      .get(DFSConfigKeys.DFS_BLOCK_LOCAL_PATH_ACCESS_USER_KEY);
  try {
    startDataNode(conf, dataDirs, resources);
  } catch (IOException ie) {
    shutdown();
    throw ie;
  }
}
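Likewise, this constructor runs when the DataNode process starts:

bin/hadoop datanode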

 

 

JobTracker trigger:

 

public static JobTracker startTracker(JobConf conf, String identifier)
    throws IOException, InterruptedException {
  // ... body elided
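startTracker is the JobTracker entry point, so a breakpoint here fires when the daemon is launched:

bin/hadoop jobtracker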

 
