1. Open a terminal with Ctrl+Alt+T.
2. Create a hadoop user group: sudo addgroup hadoop
3. Create a hadoop user in that group: sudo adduser --ingroup hadoop hadoop
When prompted with "Enter new UNIX password", type a password and retype it to confirm; press Enter through the remaining prompts to accept the defaults.
4. Give the hadoop user sudo rights. Open the /etc/sudoers file: sudo gedit /etc/sudoers
Below the line root ALL=(ALL:ALL) ALL, add: hadoop ALL=(ALL:ALL) ALL
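A broken /etc/sudoers can lock you out of sudo entirely, so on a real system `sudo visudo` is the safer editor: it syntax-checks the file before saving. As a minimal sketch, the edit can be rehearsed on a throwaway file first (the temp file here stands in for /etc/sudoers):

```shell
# Rehearse the edit on a temporary file, not the real /etc/sudoers.
tmp=$(mktemp)
printf '%s\n' 'root    ALL=(ALL:ALL) ALL' 'hadoop  ALL=(ALL:ALL) ALL' > "$tmp"
grep -c 'ALL=(ALL:ALL) ALL' "$tmp"   # prints 2: both rules are present
rm -f "$tmp"
```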
5. Install a JDK: sudo apt-get install openjdk-7-jdk
6. Install SSH: sudo apt-get install ssh openssh-server
7. Set up passwordless SSH login to this machine.
First switch to the hadoop user: su - hadoop
Create an SSH key: ssh-keygen -t rsa -P "" (note the capital -P, which sets an empty passphrase). If no key is generated, run plain ssh-keygen -t rsa instead and press Enter at each prompt.
8. Go into ~/.ssh/ and append id_rsa.pub to the authorized_keys file (which does not exist at first):
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
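The passwordless-SSH setup above can be rehearsed side-effect free in a scratch directory (assuming ssh-keygen is installed; on the real machine the directory is ~/.ssh). The permission bits shown also matter: sshd ignores an authorized_keys file that is group- or world-writable.

```shell
set -e
DIR=$(mktemp -d)                               # stand-in for ~/.ssh
chmod 700 "$DIR"
ssh-keygen -t rsa -P "" -f "$DIR/id_rsa" -q    # -P "" = empty passphrase
cat "$DIR/id_rsa.pub" >> "$DIR/authorized_keys"
chmod 600 "$DIR/authorized_keys"
echo "key authorized"
```

Afterwards, ssh localhost should log you in without asking for a password.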
9. Put the downloaded Hadoop tarball under /usr/local.
Open the file manager with root privileges (sudo nautilus) and move the archive from your HOME/Desktop directory into /usr/local.
Extract hadoop-0.23.10.tar.gz under /usr/local, then rename the directory to hadoop:
cd /usr/local
sudo tar -zxf hadoop-0.23.10.tar.gz
sudo mv hadoop-0.23.10 hadoop
10. Make the hadoop user the owner of the hadoop directory: sudo chown -R hadoop:hadoop hadoop
11. Open the hadoop/conf/hadoop-env.sh file: sudo gedit hadoop/conf/hadoop-env.sh
In conf/hadoop-env.sh, find the line #export JAVA_HOME=......, remove the leading #, and point it at the JDK installed in step 5, e.g.:
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
(Adjust the path to your system: it is java-7-openjdk-i386 on 32-bit installs. It must match the JDK you actually installed.)
12. Open the conf/core-site.xml file:
sudo gedit hadoop/conf/core-site.xml
Edit it to read:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
13. Open the conf/mapred-site.xml file:
sudo gedit hadoop/conf/mapred-site.xml
Edit it to read:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
14. Open the conf/hdfs-site.xml file:
sudo gedit hadoop/conf/hdfs-site.xml
Edit it to read:
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/usr/local/hadoop/datalog1,/usr/local/hadoop/datalog2</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/usr/local/hadoop/data1,/usr/local/hadoop/data2</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
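The four directories named in hdfs-site.xml can be created ahead of time so they end up owned by the hadoop user (otherwise the daemons, running as hadoop, may fail to create them under the root-owned /usr/local). A sketch against a scratch base directory; on the real system the base would be /usr/local/hadoop:

```shell
BASE=$(mktemp -d)    # stand-in for /usr/local/hadoop
mkdir -p "$BASE/datalog1" "$BASE/datalog2" "$BASE/data1" "$BASE/data2"
ls "$BASE"           # lists data1 data2 datalog1 datalog2
```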
15. Go into the hadoop directory and format the HDFS filesystem. This step is required before running Hadoop for the first time:
cd /usr/local/hadoop/
bin/hadoop namenode -format
16. If the output ends with a message saying the namenode has been successfully formatted, your HDFS filesystem is ready.
17. Start all the Hadoop daemons: bin/start-all.sh
18. Check whether Hadoop started successfully: run jps.
If the output lists the Hadoop daemons (NameNode, DataNode, SecondaryNameNode, JobTracker, and TaskTracker) in addition to Jps itself, your single-node Hadoop environment is configured correctly.
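What a successful jps check looks like can be sketched with simulated output (the daemon names assume the classic JobTracker/TaskTracker configuration used above; the PIDs here are invented):

```shell
# Simulated jps output for illustration -- the PIDs are made up.
jps_output='2081 NameNode
2218 DataNode
2375 SecondaryNameNode
2461 JobTracker
2601 TaskTracker
2711 Jps'
missing=""
for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
  echo "$jps_output" | grep -qw "$d" || missing="$missing $d"
done
if [ -z "$missing" ]; then echo "all daemons running"; else echo "missing:$missing"; fi
```

On the real machine, pipe the actual jps output through the same loop; any daemon reported missing points at the config file for that component.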
19. Open localhost:50030 in a browser to reach the Hadoop (JobTracker) web management interface.