Set Up Hadoop on Ubuntu 14.04 Linux
—Multi-Node Cluster
Create user hadoop
$ sudo useradd -m hadoop -s /bin/bash
$ sudo passwd hadoop
$ sudo adduser hadoop sudo
Then log in as user hadoop.
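As a quick check that the account was created and added to the sudo group, you can run:
$ id hadoop # the output should include the sudo group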
Before installing software with apt, update the package index first (if the update fails, switch to a different software source/mirror):
$ sudo apt-get update
Install SSH server
$ sudo apt-get install openssh-server
$ ssh localhost
$ exit
$ cd ~/.ssh
$ ssh-keygen -t rsa # press Enter at every prompt to accept the defaults
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys # you can then test with '$ ssh Master'
$ scp ~/.ssh/id_rsa.pub hadoop@Slave1:/home/hadoop/ # copy to Slave1; do the same for Slave2
Then on nodes Slave1 and Slave2, run:
$ mkdir -p ~/.ssh # in case the directory does not exist yet
$ cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
Master can now SSH to Slave1 and Slave2 without a passphrase.
Note: the SSH server must be installed on every node.
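To confirm that passphraseless login works, run the following on Master; each command should open a shell on the slave without prompting for a password:
$ ssh Slave1
$ exit
$ ssh Slave2
$ exit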
Install and configure Java
There are two ways to install Java:
1) OpenJDK 7 (convenient)
$ sudo apt-get install openjdk-7-jre openjdk-7-jdk
The default location of OpenJDK is /usr/lib/jvm/java-7-openjdk-amd64; use '$ java -version' to test.
Configure the JAVA_HOME variable:
$ vim ~/.bashrc
Add the following line at the top of the file and save:
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
$ source ~/.bashrc # make the variable take effect
$ echo $JAVA_HOME # test
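You can also verify that JAVA_HOME points at a working JDK:
$ $JAVA_HOME/bin/java -version # should report the same version as 'java -version'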
2) Oracle JDK7
See http://www.cnblogs.com/kingatnuaa/p/4151824.html
Note: whichever way you choose, every node should run the same Java version.
Install Hadoop
$ cd ~/Downloads
$ sudo tar -zxvf ./hadoop-2.6.0.tar.gz -C /usr/local # extract to /usr/local
$ cd /usr/local/
$ sudo mv ./hadoop-2.6.0/ ./hadoop # rename the directory to hadoop
$ sudo chown -R hadoop:hadoop ./hadoop # change the owner
$ cd ./hadoop
$ ./bin/hadoop # test hadoop
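Printing the version is a simple smoke test; it should report Hadoop 2.6.0:
$ ./bin/hadoop version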
Network configuration
Edit /etc/hostname on each node so it contains that node's own name (Master, Slave1, or Slave2).
Edit /etc/hosts on every node (use '$ ifconfig' to check each node's IP):
192.168.216.128 Master
192.168.216.129 Slave1
192.168.216.130 Slave2
Then ping the other nodes from each node (e.g. '$ ping Slave1') to verify that name resolution works.
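A one-line sketch to check all three names at once (assuming the hostnames above):
$ for h in Master Slave1 Slave2; do ping -c 1 $h > /dev/null && echo "$h OK"; done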
Configuration setup
Under /usr/local/hadoop/etc/hadoop/, edit the following files (for the XML files, the <property> elements go inside the existing <configuration> element).
File: slaves # the names of your slave nodes, one per line
Slave1
Slave2
File: core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://Master:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/usr/local/hadoop/tmp</value>
<description>Abase for other temporary directories.</description>
</property>
File: hdfs-site.xml # set dfs.replication to the number of slave (DataNode) nodes
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>Master:50090</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop/tmp/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop/tmp/dfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
File: mapred-site.xml # this file does not exist by default, so copy it from the template first:
$ cp mapred-site.xml.template mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
File: yarn-site.xml
<property>
<name>yarn.resourcemanager.hostname</name>
<value>Master</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
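If starting the daemons later fails because Java cannot be found, a common fix is to also set JAVA_HOME explicitly in etc/hadoop/hadoop-env.sh (assuming the OpenJDK path from above):
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64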
When the configuration is finished, copy the Hadoop directory from Master to the other nodes (Slave1 & Slave2). Run on Master:
$ cd /usr/local
$ sudo tar -zcf ~/hadoop.tar.gz hadoop
$ scp ~/hadoop.tar.gz Slave1:/home/hadoop
Run on Slave1 (and likewise on Slave2):
$ sudo tar -zxf ~/hadoop.tar.gz -C /usr/local
$ sudo chown -R hadoop:hadoop /usr/local/hadoop
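As a quick check on each slave, the extracted tree should now be owned by user hadoop:
$ ls -ld /usr/local/hadoop # owner and group should both be hadoop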
First run
$ bin/hdfs namenode -format # run this on Master (from /usr/local/hadoop), and only once
Start cluster
$ sbin/start-dfs.sh
$ sbin/start-yarn.sh
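To verify that the cluster is up, run jps on every node. Master should show NameNode, SecondaryNameNode, and ResourceManager; each slave should show DataNode and NodeManager:
$ jps
You can also ask HDFS for a cluster report, which should list two live datanodes:
$ bin/hdfs dfsadmin -report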
Hadoop web interfaces
http://Master:50070/ - web UI for the HDFS NameNode
Run a MapReduce job: WordCount or Grep
Create a directory in HDFS, put some files into an input directory, and list them:
$ bin/hdfs dfs -mkdir -p /user/hadoop
$ bin/hdfs dfs -put etc/hadoop input
$ bin/hdfs dfs -ls input
Run WordCount:
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount input output
Or run Grep:
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar grep input output 'dfs[a-z.]+'
Track the job's progress at http://Master:8088/cluster.
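When the job finishes, inspect the result files in HDFS:
$ bin/hdfs dfs -cat output/*
Note: delete the output directory before re-running a job ('$ bin/hdfs dfs -rm -r output'), because Hadoop refuses to overwrite an existing output directory.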
Stop cluster
$ sbin/stop-dfs.sh
$ sbin/stop-yarn.sh
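Afterwards, running jps on each node should show no Hadoop daemons, only Jps itself:
$ jps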