Directory layout after downloading and extracting Hadoop:
bin    executables
sbin   scripts that start and stop the Hadoop daemons
etc    configuration files
Standalone mode: the grep example counts occurrences of strings matching a regex.
jun@iZuf6472qo7wdkygk8aqreZ:~/hadoop-2.10.0$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.10.0.jar grep input/wc.c output/ '[a-z.]+'
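The grep example scans the input for every substring matching the given regex and counts each distinct match. Its core behavior can be sketched with plain Unix tools (the one-line C source below is made up for illustration):

```shell
# Sketch of what the Hadoop grep example computes, using plain Unix tools:
# extract every substring matching '[a-z.]+' and count each distinct match.
printf 'int main(void) { return 0; }\n' > /tmp/wc_demo.c
grep -oE '[a-z.]+' /tmp/wc_demo.c | sort | uniq -c | sort -rn
```

In standalone mode the real job writes its counts to the local output/ directory, not to HDFS.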
Pseudo-distributed mode: configuration and startup
Official documentation: https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html
Go into etc/hadoop and edit the relevant configuration files:
jun@iZuf6472qo7wdkygk8aqreZ:~/hadoop-2.10.0/etc/hadoop$ ls
capacity-scheduler.xml hadoop-metrics2.properties httpfs-signature.secret log4j.properties ssl-client.xml.example
configuration.xsl hadoop-metrics.properties httpfs-site.xml mapred-env.cmd ssl-server.xml.example
container-executor.cfg hadoop-policy.xml kms-acls.xml mapred-env.sh yarn-env.cmd
core-site.xml hdfs-site.xml kms-env.sh mapred-queues.xml.template yarn-env.sh
hadoop-env.cmd httpfs-env.sh kms-log4j.properties mapred-site.xml.template yarn-site.xml
hadoop-env.sh httpfs-log4j.properties kms-site.xml slaves
jun@iZuf6472qo7wdkygk8aqreZ:~/hadoop-2.10.0/etc/hadoop$ pwd
/home/jun/hadoop-2.10.0/etc/hadoop
jun@iZuf6472qo7wdkygk8aqreZ:~/hadoop-2.10.0/etc/hadoop$
Set JAVA_HOME in hadoop-env.sh:
# set to the root of your Java installation
export JAVA_HOME=/usr/java/latest
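/usr/java/latest is just the path the docs suggest; the actual JDK root varies per system. On a typical Linux box you can resolve the real java binary with `readlink -f "$(command -v java)"` and strip the trailing /bin/java, as sketched below (the example path is illustrative, not from this machine):

```shell
# Strip the trailing /bin/java from a resolved java path to get the JDK root
# suitable for JAVA_HOME. On a real system obtain the path with:
#   readlink -f "$(command -v java)"
java_bin=/usr/lib/jvm/java-8-openjdk-amd64/bin/java
echo "${java_bin%/bin/java}"
```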
jun@iZuf6472qo7wdkygk8aqreZ:~/hadoop-2.10.0/etc/hadoop$ vim core-site.xml
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/jun/hadoop-tmp-dir</value>
    </property>
</configuration>
jun@iZuf6472qo7wdkygk8aqreZ:~/hadoop-2.10.0/etc/hadoop$ vim hdfs-site.xml
etc/hadoop/hdfs-site.xml:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
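One step worth calling out before starting anything: per the Single Cluster guide linked above, HDFS must be formatted once before the daemons are first started.

```shell
# Run once from the Hadoop root before the first start; this initializes
# the NameNode's metadata under hadoop.tmp.dir (/home/jun/hadoop-tmp-dir here).
bin/hdfs namenode -format
```

Reformatting later wipes the filesystem metadata, so this is a first-time-only step.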
Start the NameNode and DataNode:
~/hadoop-2.10.0$ sbin/hadoop-daemon.sh start namenode
~/hadoop-2.10.0$ sbin/hadoop-daemon.sh start datanode
jun@iZuf6472qo7wdkygk8aqreZ:~/hadoop-2.10.0$ jps
306 NameNode
551 Jps
408 DataNode
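With both daemons up and fs.defaultFS pointing at hdfs://localhost:9000, the same grep example can run against HDFS. The steps below follow the Single Cluster guide; the /user/jun directory name is an assumption matching the login user here.

```shell
# Create this user's home directory in HDFS and upload the input file.
bin/hdfs dfs -mkdir -p /user/jun/input
bin/hdfs dfs -put input/wc.c /user/jun/input/
# Relative paths now resolve under /user/jun in HDFS, not on the local disk.
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.10.0.jar \
    grep input output '[a-z.]+'
bin/hdfs dfs -cat 'output/*'
```

In Hadoop 2.x the NameNode web UI at http://localhost:50070 can also be used to confirm the daemons are healthy.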
That completes a basic pseudo-distributed setup.
Will continue tomorrow.
My ZooKeeper notes are gone, w(゚Д゚)w. However much you read, it's no match for writing things down; an iron-hard lesson.