Prerequisites:
Three virtual machines with Java installed and the network between them configured.
See part 1 of this series: https://blog.csdn.net/tanxiang21/article/details/104206881
1. Configuration
In etc/hadoop/core-site.xml (inside the <configuration> root element):
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://bigdata-pro01.kfk.com:9000</value>
</property>
In etc/hadoop/hdfs-site.xml (inside the <configuration> root element):
<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>
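This walkthrough starts each daemon by hand, but if you later use sbin/start-dfs.sh instead, Hadoop reads etc/hadoop/slaves to find the DataNode hosts. A minimal sketch, assuming all three machines run DataNodes as they do below:

etc/hadoop/slaves:
bigdata-pro01.kfk.com
bigdata-pro02.kfk.com
bigdata-pro03.kfk.com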
2. Format and start (on bigdata-pro01)
bin/hdfs namenode -format    # first run only; reformatting later wipes HDFS metadata
sbin/hadoop-daemon.sh start namenode
sbin/hadoop-daemon.sh start datanode
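Before opening the web UI, a quick jps on node 1 should list both daemons:

jps    # expect NameNode and DataNode processes (plus Jps itself)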
Check the web UI:
http://bigdata-pro01.kfk.com:50070/dfshealth.html#tab-overview
http://bigdata-pro01.kfk.com:50070/dfshealth.html#tab-datanode
3. Copy the installation to the other two nodes
scp -r hadoop-2.5.0/ kfk@bigdata-pro02.kfk.com:/opt/modules/    # run from /opt/modules on node 1
scp -r hadoop-2.5.0/ kfk@bigdata-pro03.kfk.com:/opt/modules/
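Optionally, a quick listing over SSH confirms the copy landed (assuming the same kfk account and path on each node):

ssh kfk@bigdata-pro02.kfk.com ls /opt/modules/hadoop-2.5.0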
4. Start the DataNode on the other two machines
sbin/hadoop-daemon.sh start datanode    # run on bigdata-pro02 and bigdata-pro03
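With the DataNodes on nodes 2 and 3 started, the NameNode's view of the cluster can be checked from node 1:

bin/hdfs dfsadmin -report    # should report three live datanodes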
5. Write a test file
bin/hdfs dfs -mkdir -p /user/kfk/data/
Browse the new directory in the web UI:
http://bigdata-pro01.kfk.com:50070/explorer.html#/
bin/hdfs dfs -put /opt/modules/hadoop-2.5.0/etc/hadoop/core-site.xml /user/kfk/data
bin/hdfs dfs -text /user/kfk/data/core-site.xml    # view the file contents
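Since dfs.replication is 2, each block of the uploaded file should carry two replicas. fsck can confirm this (path taken from the put above):

bin/hdfs fsck /user/kfk/data/core-site.xml -files -blocks    # per-block output includes the replica count (repl=2)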