Configuring jconsole on Hadoop

In short, configuring jconsole on Hadoop comes down to two key steps:

1. Create a jmxremote.password file whose contents are a username and password, then restrict its permissions with chmod 600 xx/xxx/xxx/jmxremote.password, where xx/xxx/xxx/ is the absolute path to the file. This step is mandatory; otherwise the daemon fails with an error saying that access to jmxremote.password must be restricted. A short sketch follows after step 2.

2. Edit the Hadoop environment settings in hadoop-env.sh. The changes mainly concern the NameNode and DataNode option variables: (1) -Dcom.sun.management.jmxremote.ssl=false; (2) -Dcom.sun.management.jmxremote.password.file=/separatExpHadoop/hadoop-0.20.205.0/jmxremote.password, where the path must match the location of the jmxremote.password file created in step 1; (3) -Dcom.sun.management.jmxremote.port=8004, the port that jconsole will connect to.
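
Here is a minimal sketch of step 1. The install path is the one used in the example below; the username monitorRole and the password are placeholders (monitorRole happens to be one of the roles granted read-only access by the JRE's default jmxremote.access file), and the hadoop user/group is an assumption about which account runs the daemons:

    # Create the password file: one "username password" pair per line
    echo "monitorRole changeMe" > /separatExpHadoop/hadoop-0.20.205.0/jmxremote.password

    # Make it readable only by the daemon user, otherwise the JVM aborts with the
    # "access must be restricted" error mentioned above
    chown hadoop:hadoop /separatExpHadoop/hadoop-0.20.205.0/jmxremote.password
    chmod 600 /separatExpHadoop/hadoop-0.20.205.0/jmxremote.password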


Below is a working hadoop-env.sh configuration from a Hadoop installation:

# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use. Required.
export JAVA_HOME=/usr/lib64/jvm/java-1.6.0-sun-1.6.0.u7/jre    # set JAVA_HOME

# Extra Java CLASSPATH elements. Optional.
# export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/separatExpHadoop/hadoop-0.20.205.0    # set the Hadoop classpath

# The maximum amount of heap to use, in MB. Default is 1000.
export HADOOP_HEAPSIZE=3500    # set the heap size

# Extra Java runtime options. Empty by default.
export HADOOP_OPTS=-server

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.password.file=/separatExpHadoop/hadoop-0.20.205.0/jmxremote.password -Dcom.sun.management.jmxremote.port=8004 -Xmx3500m $HADOOP_NAMENODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote -Xmx300m $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.password.file=/separatExpHadoop/hadoop-0.20.205.0/jmxremote.password -Dcom.sun.management.jmxremote.port=8004 $HADOOP_DATANODE_OPTS"
export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"
export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote -Xmx1500m $HADOOP_JOBTRACKER_OPTS"
# export HADOOP_TASKTRACKER_OPTS=
# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
# export HADOOP_CLIENT_OPTS

# Extra ssh options.  Empty by default.
export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR -o StrictHostKeyChecking=no"

# Where log files are stored. $HADOOP_HOME/logs by default.
# export HADOOP_LOG_DIR=${HADOOP_HOME}/logs

# File naming remote slave hosts. $HADOOP_HOME/conf/slaves by default.
# export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves

# host:path where hadoop code should be rsync'd from.  Unset by default.
export HADOOP_MASTER=hadoop:/separatExpHadoop/hadoop-0.20.205.0

# Seconds to sleep between slave commands.  Unset by default.  This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
export HADOOP_SLAVE_SLEEP=0.1

# The directory where pid files are stored. /tmp by default.
# export HADOOP_PID_DIR=/hadoop/pids

# A string representing this instance of hadoop. $USER by default.
# export HADOOP_IDENT_STRING=$USER

# The scheduling priority for daemon processes.  See 'man nice'.
# export HADOOP_NICENESS=10
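
Once the daemons have been restarted with these options, jconsole can connect remotely. A quick check, assuming the NameNode runs on a host named namenode (the host name is a placeholder; 8004 is the port configured above):

    jconsole namenode:8004

    # In the connection dialog, enter the username and password stored in
    # jmxremote.password. Because jmxremote.ssl=false, accept the insecure
    # (non-SSL) connection when jconsole asks.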

[Summary] jconsole monitors the JVM memory usage of a given Java application, such as Tomcat or Hadoop. In all of these applications the JVM heap size is fixed by the startup options set in the environment variables (i.e. the Java launch parameters), so the configuration approach is the same and carries over directly to other applications.
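
As a sketch of that carry-over, assume a Tomcat installation where extra JVM options are supplied through CATALINA_OPTS in bin/setenv.sh; the port and the password-file path below are illustrative, not taken from this article:

    # bin/setenv.sh: expose the same password-protected, non-SSL JMX endpoint for Tomcat
    export CATALINA_OPTS="$CATALINA_OPTS \
      -Dcom.sun.management.jmxremote \
      -Dcom.sun.management.jmxremote.ssl=false \
      -Dcom.sun.management.jmxremote.password.file=/path/to/jmxremote.password \
      -Dcom.sun.management.jmxremote.port=9004"

The password file is prepared exactly as in step 1 (owned by the user running Tomcat, chmod 600), and jconsole then connects to host:9004 in the same way.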
