Configuring Hadoop 2.7.6 on Ubuntu
Java environment setup
We use JDK 1.8.
Download the JDK
I've provided a JDK 1.8 archive for you; click the link to download it.
1. Extract the archive to /usr (root is needed to write there):
sudo tar -zxvf jdk-1-8.tar.gz -C /usr
cd /usr
sudo mv jdk-1-8 jdk1.8
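As a quick sanity check (my addition, not part of the original steps), you can invoke the freshly extracted JDK directly before touching any environment variables:
/usr/jdk1.8/bin/java -version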
2. Set the environment variables and make them take effect. This edits your own file, so no sudo is needed:
vim ~/.bashrc
export JAVA_HOME=/usr/jdk1.8
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=.:${JAVA_HOME}/bin:$PATH
Finally, run the following command so the changes take effect:
source ~/.bashrc
Verify that the Java environment works:
java -version
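If java -version doesn't report 1.8, these ordinary shell checks (my addition, assuming the .bashrc entries above) help pin down what went wrong:
echo $JAVA_HOME   # should print /usr/jdk1.8
which java        # should resolve inside /usr/jdk1.8/bin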
Install ssh-server and set up passwordless login
Do check whether you actually have an SSH server installed; please don't just skip this step.
1. Install openssh-server:
sudo apt-get install openssh-server
2. Start the SSH service:
sudo /etc/init.d/ssh start
3. Set up passwordless login
Generate a key pair, pressing Enter at every prompt:
ssh-keygen -t rsa
Append the public key to authorized_keys:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
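If you'd rather skip the prompts entirely, ssh-keygen can also be run non-interactively; this is an equivalent alternative to the interactive step above:
# -P '' sets an empty passphrase, -f picks the default key location
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys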
Test that you can now log in without a password:
ssh localhost
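If ssh localhost still prompts for a password, overly loose permissions on ~/.ssh are a common culprit, since sshd ignores authorized_keys unless the permissions are tight:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys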
Hadoop setup
1. Extract Hadoop to /usr:
sudo tar -zxvf hadoop-2.7.6.tar.gz -C /usr
Switch to /usr, rename hadoop-2.7.6 to hadoop, and give the hadoop and hdfs executables 775 permissions:
cd /usr
sudo mv hadoop-2.7.6 hadoop
sudo chmod 775 /usr/hadoop/bin/hadoop
sudo chmod 775 /usr/hadoop/bin/hdfs
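An alternative (my suggestion, not in the original steps, assuming you will run Hadoop as your login user) is to hand the whole tree to that user instead of loosening individual binaries; this also avoids permission trouble later:
# give the entire Hadoop installation to the current login user
sudo chown -R $USER:$USER /usr/hadoop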
2. Configure the Hadoop environment variables (again in your own .bashrc, no sudo needed):
vim ~/.bashrc
######################
export HADOOP_HOME=/usr/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
###########################
Then reload the file:
source ~/.bashrc
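A quick way to confirm the new variables are visible in this shell (plain checks, assuming the .bashrc entries above):
echo $HADOOP_HOME   # should print /usr/hadoop
which hadoop        # should resolve to /usr/hadoop/bin/hadoop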
3. Edit Hadoop's own configuration files
Configure hadoop-env.sh:
sudo vim /usr/hadoop/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/jdk1.8
export HADOOP_HOME=/usr/hadoop
export PATH=$PATH:/usr/hadoop/bin
Configure yarn-env.sh:
sudo vim /usr/hadoop/etc/hadoop/yarn-env.sh
export JAVA_HOME=/usr/jdk1.8
Configure core-site.xml. First create a /home/sube/hadoop_tmp directory under the home directory (the username in this post is sube), then edit the file:
sudo mkdir /home/sube/hadoop_tmp
sudo vim /usr/hadoop/etc/hadoop/core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://127.0.0.1:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/sube/hadoop_tmp</value>
    </property>
</configuration>
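Because hadoop_tmp was created with sudo, it is owned by root. If HDFS later fails to write into it, handing it to your login user (assumed to be sube throughout this post) usually fixes that:
sudo chown -R sube:sube /home/sube/hadoop_tmp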
Configure hdfs-site.xml:
sudo vim /usr/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>127.0.0.1:50070</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/sube/hadoop_tmp</value>
    </property>
</configuration>
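To confirm Hadoop actually picks a value up from these files, hdfs getconf can read it back once the PATH changes above are in effect, for example:
hdfs getconf -confKey dfs.replication   # should print 1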
Configure yarn-site.xml:
sudo vim /usr/hadoop/etc/hadoop/yarn-site.xml
<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>127.0.0.1:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>127.0.0.1:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>127.0.0.1:8031</value>
    </property>
</configuration>
Test whether Hadoop is set up correctly
1. Check the version:
hadoop version
2. Format the NameNode. Note that sudo normally resets PATH, so call hdfs by its full path; the output should include a message that the storage directory has been successfully formatted:
sudo /usr/hadoop/bin/hdfs namenode -format
3. Start the daemons:
cd /usr/hadoop/sbin
sudo ./start-all.sh
Check that they started:
jps
On success, jps should list at least five Hadoop processes besides Jps itself: NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager. Since the daemons were started with sudo, they run as root, so you may need to run jps as root too (e.g. sudo /usr/jdk1.8/bin/jps) to see them.
4. Web UI
Open http://127.0.0.1:50070 in a browser; if startup succeeded you will see the NameNode status page.
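YARN's ResourceManager serves its own web UI as well, on port 8088 by default (we did not override yarn.resourcemanager.webapp.address above):
http://127.0.0.1:8088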
5. Stop HDFS
Still in sbin:
sudo ./stop-all.sh
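start-all.sh and stop-all.sh are deprecated in Hadoop 2.x; the same sbin directory contains the scripts they delegate to, which let you stop HDFS and YARN separately:
sudo ./stop-dfs.sh
sudo ./stop-yarn.sh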
That's it. At this point you should have a working single-node Hadoop. Good luck!