1. Install the JDK
2. Install Cygwin
3. Configure the Windows environment variables
4. Install and configure the sshd service
5. Configure the Hadoop package
1. Install Cygwin
Download the installer
Run the Cygwin setup
Select the required packages:
Base -> sed
Editors -> vim
Libs -> libintl3, libintl8
Net -> OpenSSH
Configure the Windows environment variables
JAVA_HOME = the JDK installation directory
Path = the JDK's bin directory; Cygwin's bin directory; Cygwin's usr\bin directory
CYGWIN=ntsec mintty
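The variables above can be mirrored in a Cygwin session with a shell snippet like the following sketch; the drive letters and version numbers are hypothetical placeholders, so substitute your actual install locations (on Windows itself these are normally set through System Properties):

```shell
# Hypothetical install locations -- adjust to your machine.
export JAVA_HOME="/cygdrive/c/Program Files/Java/jdk1.6.0_11"
# Prepend the JDK and Cygwin bin directories to PATH.
export PATH="$JAVA_HOME/bin:/cygdrive/c/cygwin/bin:/cygdrive/c/cygwin/usr/bin:$PATH"
# ntsec enables NT security handling; mintty selects the terminal.
export CYGWIN="ntsec mintty"
# Show the first PATH entry to confirm the JDK bin dir is in front.
echo "$PATH" | cut -d: -f1
```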
Install and configure the sshd service
ssh-host-config
Should privilege separation be used? (yes/no) no
// note this prompt
Do you want to use a different name? (yes/no) yes
// a dedicated service user must be created
Enter the new user name: cygusr
// generate a key pair
$ ssh-keygen
// copy the public key into authorized_keys
$ cd .ssh/
$ ls
id_rsa  id_rsa.pub
$ cp id_rsa.pub authorized_keys
$ ls
authorized_keys  id_rsa  id_rsa.pub
// verify that passwordless login works
$ ssh localhost
$ who
Install Hadoop
1. Download the Hadoop package
2. Edit the configuration files:
a) hadoop-env.sh
b) core-site.xml
c) hdfs-site.xml
d) mapred-site.xml
3. Start Hadoop
Configuration files:
(1) Edit hadoop-env.sh:
Only JAVA_HOME needs to be changed, to the JDK installation directory:
export JAVA_HOME=/cygdrive/d/Program\ Files/java/jdk1.6.0_11
(Note: the path must not be a Windows-style directory such as d:\Program Files\java\jdk1.6.0_11;
it must use the Linux style, e.g. /cygdrive/d/java/jdk1.6.0_25.)
(If the path contains spaces, escape them with "\".)
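The Windows-to-Cygwin path translation described above (the drive letter becomes /cygdrive/<letter>, backslashes become slashes, spaces get a "\" escape) can be sketched in Python; `to_cygwin_path` is a hypothetical helper for illustration, not part of any Hadoop or Cygwin tool:

```python
import re

def to_cygwin_path(win_path: str) -> str:
    """Convert a Windows path such as 'D:\\Program Files\\java'
    into Cygwin style '/cygdrive/d/Program\\ Files/java'."""
    m = re.match(r"^([A-Za-z]):[\\/](.*)$", win_path)
    if not m:
        raise ValueError(f"not an absolute Windows path: {win_path!r}")
    drive = m.group(1).lower()
    rest = m.group(2).replace("\\", "/")
    # Escape spaces so the path survives in shell scripts like hadoop-env.sh.
    escaped = rest.replace(" ", r"\ ")
    return f"/cygdrive/{drive}/{escaped}"

print(to_cygwin_path(r"D:\Program Files\java\jdk1.6.0_11"))
# -> /cygdrive/d/Program\ Files/java/jdk1.6.0_11
```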
(2) Edit core-site.xml (set the NameNode address):
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
(3) Edit hdfs-site.xml (set the replication factor to 1):
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
(4) Edit mapred-site.xml (set the JobTracker address):
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
</configuration>
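A quick way to catch typos in the three XML files above is to parse them and check the expected name/value pairs. The sketch below uses only the Python standard library and is a generic check, not a Hadoop utility:

```python
import xml.etree.ElementTree as ET

def read_props(xml_text: str) -> dict:
    """Parse a Hadoop *-site.xml fragment into a {name: value} dict."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value")
            for p in root.iter("property")}

# The core-site.xml content from above.
core_site = """
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
"""

props = read_props(core_site)
assert props["fs.default.name"] == "hdfs://localhost:9000"
print(props)
```

A malformed file (unclosed tag, misspelled <name>) raises a ParseError or fails the lookup here instead of failing silently at daemon startup.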
Note: if the path contains spaces, you can also work around it with a symlink:
ln -s /cygdrive/d/Program\ Files/Java/jdk1.6.0_11 /usr/local/jdk1.6.0_11
Start Hadoop
Change into Hadoop's bin directory.
Format the HDFS filesystem:
$ ./hadoop namenode -format
Start the Hadoop daemons:
$ ./start-all.sh
Verify that Hadoop is running:
$ ./hadoop dfsadmin -report
Stop Hadoop:
$ ./stop-all.sh
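If `dfsadmin -report` hangs, a lower-level sanity check is whether the NameNode (port 9000) and JobTracker (port 9001) sockets from the configs above are accepting connections. The helper below is a generic TCP probe sketch, not a Hadoop tool:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Ports taken from core-site.xml and mapred-site.xml above.
    for name, port in [("NameNode", 9000), ("JobTracker", 9001)]:
        state = "open" if port_open("localhost", port) else "closed"
        print(f"{name} port {port}: {state}")
```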