Hadoop 2.2.0 Configuration

Go to the configuration directory: ~/hadoop-2.2.0/etc/hadoop/

hadoop-env.sh:

export JAVA_HOME=/home/s011/jdk1.7.0_25
export HADOOP_LOG_DIR=/home/s011/hadoop-2.2.0/logs
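
A quick sanity check that the JDK path above actually exists (adjust to your own JDK and Hadoop locations):

/home/s011/jdk1.7.0_25/bin/java -version    # should report java version "1.7.0_25"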

---------------------------------------------------------------------------------------

core-site.xml:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://s011:9000</value>
  </property>

  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/s011/hadoop-2.2.0/mytmp</value>
  </property>
</configuration>
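
Note: fs.default.name is the deprecated pre-2.x name for fs.defaultFS; Hadoop 2.2.0 accepts both. The dfs.replication entry here is redundant, since hdfs-site.xml below sets it as well. Creating hadoop.tmp.dir up front avoids permission surprises on first start:

mkdir -p /home/s011/hadoop-2.2.0/mytmp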

-----------------------------------------------------------------------

hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/s011/hadoop-2.2.0/hdfs/name</value>
  </property>

  <!-- the DataNode storage key is dfs.datanode.data.dir; dfs.namenode.data.dir is not a real property -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/s011/hadoop-2.2.0/hdfs/data</value>
  </property>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/s011/hadoop-2.2.0/mytmp</value>
  </property>

  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
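
Create the NameNode and DataNode directories to match (Hadoop will also create them on format/first start, but doing it up front catches permission problems early):

mkdir -p /home/s011/hadoop-2.2.0/hdfs/name
mkdir -p /home/s011/hadoop-2.2.0/hdfs/data

dfs.replication=1 and dfs.permissions=false are only appropriate for a single-node test setup.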

--------------------------------------------------------

mapred-site.xml:

<configuration>
  <property>
    <name>mapreduce.cluster.temp.dir</name>
    <value></value>
    <description>No description</description>
    <final>true</final>
  </property>


  <property>
    <name>mapreduce.cluster.local.dir</name>
    <value></value>
    <description>No description</description>
    <final>true</final>
  </property>
  
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
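
Hadoop 2.2.0 ships only a template for this file; if mapred-site.xml does not exist yet, create it from the template first:

cp mapred-site.xml.template mapred-site.xml

mapreduce.framework.name=yarn is the essential setting here: it tells clients to submit jobs to YARN instead of running them locally.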

----------------------------------------------------------------

yarn-site.xml:

<configuration>
<!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>s011:8990</value>
    <description>host is the hostname of the resource manager and 
    port is the port on which the NodeManagers contact the Resource Manager.
    </description>
  </property>

  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>s011:8991</value>
    <description>host is the hostname of the resourcemanager and port is the port
    on which the Applications in the cluster talk to the Resource Manager.
    </description>
  </property>

  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
    <description>In case you do not want to use the default scheduler</description>
  </property>

  <property>
    <name>yarn.resourcemanager.address</name>
    <value>s011:8993</value>
    <description>the host is the hostname of the ResourceManager and the port is the port on
    which the clients can talk to the Resource Manager. </description>
  </property>

  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/home/s011/hadoop-2.2.0/mytmp/local</value>
    <description>the local directories used by the nodemanager</description>
  </property>

  <property>
    <name>yarn.nodemanager.address</name>
    <value>s011:8994</value>
    <description>the NodeManagers bind to this port</description>
  </property>

  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>10240</value>
    <description>the amount of memory available on the NodeManager, in MB</description>
  </property>
 
  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/home/s011/hadoop-2.2.0/mytmp/nodemanager/remote</value>
    <description>directory on hdfs where the application logs are moved to </description>
  </property>

   <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/home/s011/hadoop-2.2.0/mytmp/nodemanager/logs</value>
    <description>the directories used by Nodemanagers as log directories</description>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    <description>shuffle service that needs to be set for Map Reduce to run </description>
  </property>
</configuration>
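
One more file matters for the start scripts: etc/hadoop/slaves lists the hosts on which start-dfs.sh and start-yarn.sh launch the DataNode and NodeManager. The startup output below shows them starting on s011, so for this single-node setup the file should contain just that hostname:

echo s011 > /home/s011/hadoop-2.2.0/etc/hadoop/slaves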

--------------------------------------------------------------------------------

Format the NameNode: hdfs namenode -format

13/11/15 10:33:27 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = s001/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.2.0
STARTUP_MSG:   classpath = /home/s011/hadoop-2.2.0/etc/hadoop:/home/s011/hadoop-2.2.0/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:...
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
STARTUP_MSG:   java = 1.7.0_25
************************************************************/
13/11/15 10:33:27 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
13/11/15 10:33:28 WARN common.Util: Path /home/s011/hadoop-2.2.0/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
13/11/15 10:33:28 WARN common.Util: Path /home/s011/hadoop-2.2.0/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-42b1bdd4-c027-4d03-8aa7-1aa525d435be
13/11/15 10:33:29 INFO namenode.HostFileManager: read includes:
HostSet(
)
13/11/15 10:33:29 INFO namenode.HostFileManager: read excludes:
HostSet(
)
13/11/15 10:33:29 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
13/11/15 10:33:29 INFO util.GSet: Computing capacity for map BlocksMap
13/11/15 10:33:29 INFO util.GSet: VM type       = 64-bit
13/11/15 10:33:29 INFO util.GSet: 2.0% max memory = 888.9 MB
13/11/15 10:33:29 INFO util.GSet: capacity      = 2^21 = 2097152 entries
13/11/15 10:33:29 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
13/11/15 10:33:29 INFO blockmanagement.BlockManager: defaultReplication         = 1
13/11/15 10:33:29 INFO blockmanagement.BlockManager: maxReplication             = 512
13/11/15 10:33:29 INFO blockmanagement.BlockManager: minReplication             = 1
13/11/15 10:33:29 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
13/11/15 10:33:29 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
13/11/15 10:33:29 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
13/11/15 10:33:29 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
13/11/15 10:33:29 INFO namenode.FSNamesystem: fsOwner             = s011 (auth:SIMPLE)
13/11/15 10:33:29 INFO namenode.FSNamesystem: supergroup          = supergroup
13/11/15 10:33:29 INFO namenode.FSNamesystem: isPermissionEnabled = false
13/11/15 10:33:29 INFO namenode.FSNamesystem: HA Enabled: false
13/11/15 10:33:29 INFO namenode.FSNamesystem: Append Enabled: true
13/11/15 10:33:29 INFO util.GSet: Computing capacity for map INodeMap
13/11/15 10:33:29 INFO util.GSet: VM type       = 64-bit
13/11/15 10:33:29 INFO util.GSet: 1.0% max memory = 888.9 MB
13/11/15 10:33:29 INFO util.GSet: capacity      = 2^20 = 1048576 entries
13/11/15 10:33:29 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/11/15 10:33:29 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
13/11/15 10:33:29 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
13/11/15 10:33:29 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
13/11/15 10:33:29 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
13/11/15 10:33:29 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
13/11/15 10:33:29 INFO util.GSet: Computing capacity for map Namenode Retry Cache
13/11/15 10:33:29 INFO util.GSet: VM type       = 64-bit
13/11/15 10:33:29 INFO util.GSet: 0.029999999329447746% max memory = 888.9 MB
13/11/15 10:33:29 INFO util.GSet: capacity      = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /home/s011/hadoop-2.2.0/hdfs/name ? (Y or N) Y
13/11/15 10:33:52 INFO common.Storage: Storage directory /home/s011/hadoop-2.2.0/hdfs/name has been successfully formatted.
13/11/15 10:33:52 INFO namenode.FSImage: Saving image file /home/s011/hadoop-2.2.0/hdfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
13/11/15 10:33:53 INFO namenode.FSImage: Image file /home/s011/hadoop-2.2.0/hdfs/name/current/fsimage.ckpt_0000000000000000000 of size 196 bytes saved in 0 seconds.
13/11/15 10:33:53 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
13/11/15 10:33:53 INFO util.ExitUtil: Exiting with status 0
13/11/15 10:33:53 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at s001/127.0.0.1
************************************************************/
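
The two common.Util warnings above come from the bare paths in hdfs-site.xml. They are harmless, but can be silenced by writing the directories as file:// URIs, e.g.:

<value>file:///home/s011/hadoop-2.2.0/hdfs/name</value>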

Start the daemons:

1. start-dfs.sh

Starting namenodes on [s011]
s011: starting namenode, logging to /home/s011/hadoop-2.2.0/logs/hadoop-s011-namenode-s001.out
s011: starting datanode, logging to /home/s011/hadoop-2.2.0/logs/hadoop-s011-datanode-s001.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/s011/hadoop-2.2.0/logs/hadoop-s011-secondarynamenode-s001.out

Check the started processes with jps:

13844 SecondaryNameNode
13430 NameNode
13607 DataNode
14011 Jps
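
With HDFS up, a quick smoke test (create a directory in HDFS and list the root):

hdfs dfs -mkdir /test
hdfs dfs -ls /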

2. start-yarn.sh

starting resourcemanager, logging to /home/s011/hadoop-2.2.0/logs/yarn-s011-resourcemanager-s001.out
s011: starting nodemanager, logging to /home/s011/hadoop-2.2.0/logs/yarn-s011-nodemanager-s001.out

Check the processes again with jps:

14615 Jps
14134 ResourceManager
14315 NodeManager
13844 SecondaryNameNode
13430 NameNode
13607 DataNode
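
Finally, run the bundled example job to confirm that MapReduce on YARN works end to end (run from the Hadoop install directory; the jar name matches the 2.2.0 release):

hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 2 10

If it finishes with an estimate of pi, the cluster is working. The NameNode web UI is at http://s011:50070 and the ResourceManager web UI at http://s011:8088 (both defaults).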
