Prepare the installation packages
- Java package:
https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
(Linux x64, 182.93 MB, jdk-xxxxxx-linux-x64.tar.gz)
- Hadoop, from a Chinese mirror:
http://mirror.bit.edu.cn/apache/hadoop/common/
(hadoop-xxx.tar.gz)
Preparation
Create a hadoop user:
sudo useradd -m hadoop -s /bin/bash
Set a password for the hadoop user:
sudo passwd hadoop
Add the hadoop user to the sudo group:
sudo adduser hadoop sudo
For convenience later, install the vim editor:
sudo apt-get install vim
Install SSH and configure passwordless SSH login
Switch to the hadoop user:
su hadoop
Enter the password, then install the SSH server:
sudo apt-get install openssh-server
After the installation succeeds, log in to the local machine:
ssh localhost
Next, configure passwordless SSH login. First leave the SSH session you just opened with exit, then:
cd ~/.ssh/
ssh-keygen -t rsa    # just press Enter at every prompt
cat ./id_rsa.pub >> ./authorized_keys    # authorize the key
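The two key commands above can also be run fully non-interactively. A minimal sketch, using a throwaway temporary directory so it does not touch your real ~/.ssh:

```shell
# Sketch only: a temporary directory stands in for ~/.ssh
tmp=$(mktemp -d)
ssh-keygen -t rsa -N '' -f "$tmp/id_rsa" -q     # -N '' = empty passphrase, no prompts
cat "$tmp/id_rsa.pub" >> "$tmp/authorized_keys" # authorize the key
chmod 600 "$tmp/authorized_keys"                # sshd rejects overly permissive files
ls "$tmp"
```

For the real setup, use ~/.ssh instead of the temporary directory; afterwards `ssh localhost` should log in without asking for a password.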
Install Java
cd ~/Downloads
sudo mv ~/Downloads/jdk-xxxxxx-linux-x64.tar.gz /usr/    # move the tar.gz to /usr/
sudo tar -zxvf /usr/jdk-xxxxxx-linux-x64.tar.gz -C /usr/    # extract into /usr/
sudo mkdir -p /usr/java
sudo mv /usr/jdkxxxx /usr/java/    # the extracted JDK now lives at /usr/java/jdkxxxx
vim ~/.bashrc    # as the hadoop user; if not, run su hadoop first
Append at the end of the file:
export JAVA_HOME=/usr/java/jdkxxxx
export PATH=$PATH:/usr/java/jdkxxxx/bin
Press ESC, then type :wq to save and quit.
Reload the environment variables:
source ~/.bashrc
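If you prefer not to edit ~/.bashrc by hand in vim, the two lines can be appended with a heredoc. A sketch, writing to a temporary file so it does not modify your real ~/.bashrc (the jdkxxxx path is the same placeholder as above):

```shell
# Demo target file; for the real setup use ~/.bashrc instead
rc=$(mktemp)
cat >> "$rc" <<'EOF'
export JAVA_HOME=/usr/java/jdkxxxx
export PATH=$PATH:/usr/java/jdkxxxx/bin
EOF
grep -c '^export' "$rc"   # → 2
```

The quoted 'EOF' delimiter keeps `$PATH` from being expanded at write time, so the file receives the literal export lines.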
Install Hadoop
cd ~/Downloads
sudo mv ~/Downloads/hadoop-x.x.x.tar.gz /usr/local/
sudo tar -xvf /usr/local/hadoop-x.x.x.tar.gz -C /usr/local/    # extract into /usr/local/
sudo mv /usr/local/hadoop-x.x.x /usr/local/hadoop    # rename the extracted directory
vim ~/.bashrc
Append at the end of the file:
export HADOOP_HOME=/usr/local/hadoop
export CLASSPATH=$($HADOOP_HOME/bin/hadoop classpath):$CLASSPATH
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Save and quit, then reload the environment variables:
source ~/.bashrc
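After sourcing, it is worth confirming that the Hadoop directories actually ended up on PATH. A small sketch of the check (it re-applies the PATH line so the snippet is self-contained):

```shell
# Re-apply the PATH line from ~/.bashrc so the check stands alone
HADOOP_HOME=/usr/local/hadoop
PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# Wrap PATH in colons so the first/last entries match too
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "hadoop bin on PATH" ;;
  *)                      echo "hadoop bin missing" ;;
esac
```

Once Hadoop itself is unpacked at /usr/local/hadoop, `hadoop version` is the simplest end-to-end check.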
Hadoop pseudo-distributed configuration
Configure core-site.xml:
sudo vim /usr/local/hadoop/etc/hadoop/core-site.xml
Write:
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
Configure hadoop-env.sh
sudo vim /usr/local/hadoop/etc/hadoop/hadoop-env.sh
Change the JAVA_HOME line to export JAVA_HOME=/usr/java/jdkxxxx (the same path set in ~/.bashrc).
Configure hdfs-site.xml:
sudo vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/data</value>
    </property>
</configuration>
Format the NameNode:
/usr/local/hadoop/bin/hdfs namenode -format
If it succeeds, continue to the next step.
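A successful format prints a "successfully formatted" line near the end of its output. A sketch of checking for it, using a sample log line for illustration (the real line comes from the format command above and its exact wording varies by version):

```shell
# Sample line for illustration; in practice pipe the real output, e.g.
#   /usr/local/hadoop/bin/hdfs namenode -format 2>&1 | grep 'successfully formatted'
line="INFO common.Storage: Storage directory /usr/local/hadoop/tmp/dfs/name has been successfully formatted."
echo "$line" | grep -q 'successfully formatted' && echo "format OK"
```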
Start the NameNode and DataNode:
/usr/local/hadoop/sbin/start-dfs.sh
You may see this warning:
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
It is harmless, and can usually be silenced with one more environment variable in ~/.bashrc, e.g. export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native".
Check the running Java processes:
jps
If NameNode, DataNode, and SecondaryNameNode are all listed, the setup succeeded.
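In pseudo-distributed mode, jps should list the three HDFS daemons plus Jps itself. A sketch checking a sample of that output; the PIDs below are made up and will differ on your machine:

```shell
# Hypothetical jps output for illustration; real PIDs will differ
jps_out='2301 NameNode
2433 DataNode
2610 SecondaryNameNode
2755 Jps'
# Verify each expected daemon appears in the listing
for d in NameNode DataNode SecondaryNameNode; do
  echo "$jps_out" | grep -q "$d" && echo "$d: up"
done
```

On a real machine, replace the sample string with `jps_out=$(jps)`.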