Run `hostname` to look up the machine's host name (used below in the HDFS address).
Use the following:
etc/hadoop/core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hostname:8020</value>
  </property>
  <property>
    <!-- Hadoop's temporary working directory -->
    <name>hadoop.tmp.dir</name>
    <value>path to the tmp directory</value>
  </property>
</configuration>
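A quick way to sanity-check that the XML says what you intended is to pull the value back out with grep and sed. This is just a sketch: it writes a throwaway copy of the file, and `hdfs://myhost:8020` is a placeholder value.

```shell
# Write a throwaway copy of the core-site.xml shown above
# (myhost:8020 is a placeholder, not a real cluster address).
cat > /tmp/core-site-demo.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://myhost:8020</value>
  </property>
</configuration>
EOF
# Extract the fs.defaultFS value: grep the <name> line plus the one
# after it, then strip the <value> tags with sed.
grep -A1 '<name>fs.defaultFS</name>' /tmp/core-site-demo.xml \
  | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
```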
Explanation of the configuration above:
fs.defaultFS names the machine HDFS runs on, at port 8020; hadoop.tmp.dir points at the temporary directory created next.
Creating the temporary directory:
cd hadoop-2.5.0
mkdir data
cd data
mkdir tmp
cd tmp
pwd    # prints the directory path to paste into hadoop.tmp.dir
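The cd/mkdir sequence above can be collapsed into a single `mkdir -p`. A sketch, run here in a throwaway directory (in practice you would run it inside hadoop-2.5.0):

```shell
# Equivalent one-liner for the steps above, in a throwaway directory.
cd "$(mktemp -d)"
mkdir -p data/tmp
# Absolute path to paste into hadoop.tmp.dir (same as cd data/tmp && pwd):
readlink -f data/tmp
```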
etc/hadoop/hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
Go to the install directory: cd hadoop-2.5.0
First, format the filesystem:
bin/hdfs namenode -format
The startup scripts all live under the sbin/ directory (see them with ls sbin/):
sbin/hadoop-daemon.sh start namenode    # master node
sbin/hadoop-daemon.sh start datanode    # worker node
The web UI listens on port 50070.
jps should now show the two daemon processes (NameNode and DataNode).
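Checking for both daemons can be scripted by grepping the jps output. A sketch: a canned sample stands in for the real `jps` output here so the snippet runs anywhere.

```shell
# Canned stand-in for real `jps` output (PIDs are made up).
jps_out="3201 NameNode
3312 DataNode
3420 Jps"
# Report each expected daemon that appears in the output.
for proc in NameNode DataNode; do
  if echo "$jps_out" | grep -q "$proc"; then
    echo "$proc is running"
  fi
done
```

In practice you would replace the canned variable with `jps_out=$(jps)`.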
ls         # a logs/ directory has appeared
ll logs    # inspect the daemon log files
bin/hdfs dfs -mkdir -p /user/cj                                 # create folder cj under the user directory
bin/hdfs dfs -ls -R /                                           # list directories recursively
bin/hdfs dfs -mkdir -p /user/cj/mapreduce/wordcount/input       # create the upload directory
bin/hdfs dfs -put wcinput/wc.input /user/cj/mapreduce/wordcount/input/    # upload wc.input to the directory just created
bin/hdfs dfs -cat /user/cj/mapreduce/wordcount/input/wc.input   # view the uploaded content
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar wordcount /user/cj/mapreduce/wordcount/input/ /user/cj/mapreduce/wordcount/output    # run the wordcount example
bin/hdfs dfs -cat /user/cj/mapreduce/wordcount/output/part*     # view the result
The result is the same as cat wcoutput/part-r-00000 from the earlier local run.
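What the wordcount job computes can be sketched in pure shell on a small sample (the wc.input contents below are made up, not the actual file):

```shell
# Made-up sample standing in for wcinput/wc.input.
printf 'hadoop mapreduce\nhadoop hdfs\n' > /tmp/wc.input
# Split into one word per line, count occurrences, and emit
# "word<TAB>count" sorted by word -- the shape of part-r-00000.
tr -s ' ' '\n' < /tmp/wc.input | sort | uniq -c | awk '{print $2 "\t" $1}'
```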