Hadoop Single-Node Installation

Set up the Hadoop environment variables:

vi ~/.profile

export HADOOP_HOME="/opt/gnweb/Hadoop/hadoop-2.2.0"

export PATH=$PATH:$HADOOP_HOME/bin

source ~/.profile
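As a quick sanity check (using the install path from this guide; adjust if yours differs), you can confirm the variables took effect in the current shell:

```shell
# Re-export the variables from above (path as used in this guide)
export HADOOP_HOME="/opt/gnweb/Hadoop/hadoop-2.2.0"
export PATH="$PATH:$HADOOP_HOME/bin"

# If the install is in place, the hadoop command should now resolve from PATH
echo "$HADOOP_HOME"
```

If `echo $HADOOP_HOME` prints the install path and `$HADOOP_HOME/bin` appears in `$PATH`, the environment is ready.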

 

1. In hadoop-env.sh, set JAVA_HOME.

2. Edit the core-site.xml configuration file:


<?xml version="1.0" encoding="UTF-8"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

    <property>

        <name>hadoop.tmp.dir</name>

        <value>/data/hadoop/tmp</value>

    </property>

      

    <property>  

      <name>fs.defaultFS</name>  

      <value>hdfs://localhost:9000</value>  

      <final>true</final>  

    </property>  

      

</configuration>

3. Edit the hdfs-site.xml configuration file:


<?xml version="1.0" encoding="UTF-8"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

    <property>

      <name>dfs.namenode.name.dir</name>

      <value>file:///data/hadoop/dfs/name</value>

      <final>true</final>

    </property>

  

    <property>

      <name>dfs.datanode.data.dir</name>

      <value>file:///data/hadoop/dfs/data</value>

      <final>true</final>

    </property>

  

    <property>

      <name>dfs.replication</name>

      <value>1</value>

    </property>

  

    <property>

      <name>dfs.permissions.enabled</name>

      <value>false</value>

    </property>

  

</configuration>
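The directories referenced in core-site.xml and hdfs-site.xml must exist and be writable before the daemons start. A minimal sketch (created under $HOME here for illustration; the configs above use /data/hadoop, which may require sudo):

```shell
# Pre-create hadoop.tmp.dir, dfs.namenode.name.dir and dfs.datanode.data.dir
# (illustrative base under $HOME; the configs above use /data/hadoop)
base="$HOME/data/hadoop"
mkdir -p "$base/tmp" "$base/dfs/name" "$base/dfs/data"
ls "$base/dfs"
```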

4. Copy mapred-site.xml.template to mapred-site.xml, then edit mapred-site.xml:


cp mapred-site.xml.template mapred-site.xml

vi mapred-site.xml


<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

    <property>

      <name>mapreduce.framework.name</name>

      <value>yarn</value>

    </property>

    <!--

    <property>

      <name>mapreduce.cluster.temp.dir</name>

      <value></value>

      <final>true</final>

    </property>

  

   <property>

     <name>mapreduce.cluster.local.dir</name>

     <value></value>

     <final>true</final>

   </property>

   -->

</configuration>
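A quick way to sanity-check the edit is to confirm the framework value with grep (the heredoc below just recreates the minimal file in the current directory for illustration; in a real install the file lives under $HADOOP_HOME/etc/hadoop):

```shell
# Write the minimal mapred-site.xml from above into the current directory
# (illustrative; the real file belongs in $HADOOP_HOME/etc/hadoop)
cat > mapred-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
    <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
    </property>
</configuration>
EOF

# Confirm MapReduce is configured to run on YARN
grep -c '<value>yarn</value>' mapred-site.xml
```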

5. Edit the yarn-site.xml configuration file:


<?xml version="1.0"?>

<configuration>

    <property>

      <name>yarn.resourcemanager.hostname</name>

      <value>localhost</value>

      <description>hostname of RM</description>

    </property>

  

  

    <property>

    <name>yarn.resourcemanager.resource-tracker.address</name>

    <value>localhost:5274</value>

    <description>host is the hostname of the resource manager and 

    port is the port on which the NodeManagers contact the Resource Manager.

    </description>

  </property>

  

  <property>

    <name>yarn.resourcemanager.scheduler.address</name>

    <value>localhost:5273</value>

    <description>host is the hostname of the resourcemanager and port is the port

    on which the Applications in the cluster talk to the Resource Manager.

    </description>

  </property>

  

  <property>

    <name>yarn.resourcemanager.scheduler.class</name>

    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>

    <description>In case you do not want to use the default scheduler</description>

  </property>

  

  <property>

    <name>yarn.resourcemanager.address</name>

    <value>localhost:5271</value>

    <description>the host is the hostname of the ResourceManager and the port is the port on

    which the clients can talk to the Resource Manager. </description>

  </property>

  

  <property>

    <name>yarn.nodemanager.local-dirs</name>

    <value></value>

    <description>the local directories used by the nodemanager</description>

  </property>

  

  <property>

    <name>yarn.nodemanager.address</name>

    <value>localhost:5272</value>

    <description>the nodemanagers bind to this port</description>

  </property>  

  

  <property>

    <name>yarn.nodemanager.resource.memory-mb</name>

    <value>10240</value>

    <description>the amount of memory on the NodeManager in MB</description>

  </property>

   

  <property>

    <name>yarn.nodemanager.remote-app-log-dir</name>

    <value>/app-logs</value>

    <description>directory on hdfs where the application logs are moved to </description>

  </property>

  

   <property>

    <name>yarn.nodemanager.log-dirs</name>

    <value></value>

    <description>the directories used by Nodemanagers as log directories</description>

  </property>

  

  <property>

    <name>yarn.nodemanager.aux-services</name>

    <value>mapreduce_shuffle</value>

    <description>shuffle service that needs to be set for Map Reduce to run </description>

  </property>

      

</configuration>
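Note the units in the config above: yarn.nodemanager.resource.memory-mb is specified in megabytes, so the value 10240 gives the NodeManager 10 GB:

```shell
# 10240 MB expressed in GB
echo $((10240 / 1024))   # prints 10
```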


That completes the single-node Hadoop configuration.

1) Next, format the NameNode and then start it:


hadoop namenode -format

Note: if you paste the format command, the dash may come through as a full-width (Chinese) dash, which causes an error; retype it as a plain ASCII hyphen.


hadoop-daemon.sh start namenode

Check the logs linked from http://localhost:50070/dfshealth.jsp (the files with namenode*.log in their names) to confirm startup; if there are no errors, the NameNode started successfully.

2) Next, start the HDFS DataNode:


hadoop-daemon.sh start datanode

You can also find the corresponding log file from the same start page (the one with datanode*.log in its name); if there are no errors and the DataNode communicates with the NameNode, it started successfully.

You can also run jps at the command line and check that the processes appear in its output.

3) Then start YARN:


yarn-daemon.sh start resourcemanager

yarn-daemon.sh start nodemanager

Verify startup the same way as above.

Finally, change into the hadoop-2.2.0/share/hadoop/mapreduce directory and run a test job:

hadoop jar hadoop-mapreduce-examples-2.2.0.jar randomwriter out

Check whether the job runs successfully.

