Deploying Hadoop:
1. Passwordless SSH login
2. JDK installation
3. Hadoop installation
1) JDK installation (advanced: switching between JDKs)
If no JDK is found, install one: sudo yum install **.jdk
cd to /usr/lib/jvm to locate the JDK install directory, then edit ~/.bash_profile (vim ~/.bash_profile) and add:
export JAVA_HOME=/etc/alternatives/java_sdk
Run source ~/.bash_profile afterwards so the change takes effect in the current shell.
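The JAVA_HOME step can be sketched as an idempotent append, so re-running the setup never duplicates the line (the JDK path matches the export above; adding $JAVA_HOME/bin to PATH is an extra convenience, not in the original notes):

```shell
# Append JAVA_HOME to ~/.bash_profile only if it is not already there.
PROFILE="$HOME/.bash_profile"
LINE='export JAVA_HOME=/etc/alternatives/java_sdk'
touch "$PROFILE"
if ! grep -qxF "$LINE" "$PROFILE"; then
  echo "$LINE" >> "$PROFILE"
  echo 'export PATH=$JAVA_HOME/bin:$PATH' >> "$PROFILE"  # put the JDK tools on PATH too
fi
```

The grep -qxF guard matches the whole line literally, so the script is safe to run repeatedly.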
2) Check whether SSH is installed, search: sudo yum search ssh
Pick and install the server package: sudo yum install openssh-server.x86_64
Start sshd: service sshd start
Check that the process is running: ps aux | grep sshd
Generate a key pair: ssh-keygen
In ~/.ssh, authorize the key: cp id_rsa.pub authorized_keys (or cat id_rsa.pub >> authorized_keys to append without overwriting existing keys)
chmod 644 authorized_keys
Verify passwordless login:
ssh localhost
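The key steps above can be sketched as one idempotent script (assumes RSA keys under ~/.ssh and ssh-keygen on PATH, as in the notes):

```shell
# Passwordless-SSH setup: create a key if missing, authorize it, fix permissions.
set -e
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
KEY="$HOME/.ssh/id_rsa"
AUTH="$HOME/.ssh/authorized_keys"
[ -f "$KEY" ] || ssh-keygen -t rsa -N "" -f "$KEY" -q   # empty passphrase
touch "$AUTH"
grep -qF "$(cat "$KEY.pub")" "$AUTH" || cat "$KEY.pub" >> "$AUTH"
chmod 644 "$AUTH"   # sshd rejects group/world-writable authorized_keys
```

Appending with a grep guard instead of cp keeps any keys that were already authorized.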
3) Start Hadoop
# cd /home/hadoop-2.6.1/
# sudo service sshd restart
Edit the configuration files:
# etc/hadoop/core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
#etc/hadoop/hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
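Optionally — not in the original notes — hadoop.tmp.dir can also be set in core-site.xml, since the default lives under /tmp and is cleared on reboot; the path below is an assumption, a sketch:

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hadoop/tmp</value>  <!-- assumed path; pick a persistent directory -->
</property>
```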
Format HDFS before the first start: # ./bin/hdfs namenode -format
# ./sbin/start-all.sh   (deprecated in 2.x; ./sbin/start-dfs.sh plus ./sbin/start-yarn.sh is preferred)
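The first start can be sketched as below, guarded so it only acts when Hadoop is actually present; the HADOOP_HOME default and the -nonInteractive flag are assumptions to check against the installed version:

```shell
# First-run start sequence for a pseudo-distributed Hadoop 2.6.1.
HADOOP_HOME="${HADOOP_HOME:-$HOME/hadoop-2.6.1}"   # assumed install location
if [ -x "$HADOOP_HOME/bin/hdfs" ]; then
  "$HADOOP_HOME/bin/hdfs" namenode -format -nonInteractive  # first run only
  "$HADOOP_HOME/sbin/start-dfs.sh"   # start-all.sh also works but is deprecated
  jps                                # expect NameNode, DataNode, SecondaryNameNode
else
  echo "Hadoop not found under $HADOOP_HOME"
fi
```

jps is the quick sanity check: if a daemon is missing, look in $HADOOP_HOME/logs for its log file.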