1. Install a JDK 1.7 environment, then download Hadoop (the hadoop-2.5.2.tar.gz release).
2. Go to the Hadoop website, click Documentation in the lower-left corner, find the matching version, open it, locate the Single Node Setup guide, and follow it to configure as below.
Edit etc/hadoop/hadoop-env.sh:
# set to the root of your Java installation
export JAVA_HOME=/usr/java/latest   (the location of your Java installation)
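If you are unsure where your JDK lives, a small sketch like the following can resolve it from the `java` binary on the PATH (the helper name `java_home_of` is just for illustration, not part of Hadoop):

```shell
# Resolve JAVA_HOME two directories above the real java binary,
# e.g. /usr/lib/jvm/java-7/bin/java -> /usr/lib/jvm/java-7
java_home_of() {
  dirname "$(dirname "$1")"
}

# readlink -f follows the symlink chain (/usr/bin/java is usually a link)
JAVA_BIN="$(readlink -f "$(command -v java)" 2>/dev/null || true)"
if [ -n "$JAVA_BIN" ]; then
  export JAVA_HOME="$(java_home_of "$JAVA_BIN")"
fi
echo "JAVA_HOME=${JAVA_HOME:-<not found>}"
```

Paste the printed path into the JAVA_HOME line of hadoop-env.sh.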
Use the following:
etc/hadoop/core-site.xml:
  <configuration>
    <property>
      <name>fs.defaultFS</name>
      <!-- the NameNode address: hostname (or IP) and RPC port -->
      <value>hdfs://localhost:8020</value>
    </property>
  </configuration>
etc/hadoop/hdfs-site.xml:
  <configuration>
    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>
  </configuration>
Configure YARN on a Single Node
Configure parameters as follows:
etc/hadoop/mapred-site.xml:
  <configuration>
    <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
    </property>
  </configuration>
etc/hadoop/yarn-site.xml:
  <configuration>
    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
    </property>
  </configuration>
Configure hadoop.tmp.dir
Create a data directory under the Hadoop install directory (mkdir -p /bigdata/softwares/hadoop-2.5.2/data/tmp), then add to etc/hadoop/core-site.xml:
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/bigdata/softwares/hadoop-2.5.2/data/tmp</value>
  </property>
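Putting the two core-site.xml properties above together, the whole file can be generated with a heredoc. This sketch writes to a scratch directory for illustration; in practice target etc/hadoop/core-site.xml, and the host/port and tmp path are the values assumed earlier in these notes:

```shell
# Write a complete core-site.xml to a scratch dir
# (in practice, write to etc/hadoop/core-site.xml instead)
CONF_DIR="$(mktemp -d)"
cat > "$CONF_DIR/core-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/bigdata/softwares/hadoop-2.5.2/data/tmp</value>
  </property>
</configuration>
EOF
echo "wrote $CONF_DIR/core-site.xml"
```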
Run the setup:
Format the filesystem (initializes the NameNode metadata):
  $ bin/hdfs namenode -format
Start the NameNode and DataNode daemons, then the YARN daemons:
  $ sbin/start-dfs.sh
  $ sbin/start-yarn.sh
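After start-dfs.sh and start-yarn.sh succeed, `jps` (from the JDK) should list five Hadoop JVMs. A quick check, sketched with a hypothetical `has_daemon` helper:

```shell
# The five JVMs expected after a successful single-node start
expected="NameNode DataNode SecondaryNameNode ResourceManager NodeManager"

# has_daemon: does a jps-style listing ("<pid> <Name>") contain the daemon?
has_daemon() {   # $1 = jps output, $2 = daemon class name
  echo "$1" | grep -q "^[0-9][0-9]* $2\$"
}

listing="$(jps 2>/dev/null || true)"
for d in $expected; do
  if has_daemon "$listing" "$d"; then echo "$d: up"; else echo "$d: DOWN"; fi
done
```

If any daemon is DOWN, check its log under $HADOOP_LOG_DIR.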
The hadoop daemon log output is written to the $HADOOP_LOG_DIR directory (defaults to $HADOOP_HOME/logs).
Browse the web interface for the NameNode; by default it is available at:
NameNode - http://localhost:50070/
HBase
Example hbase-site.xml for pseudo-distributed HBase on HDFS:
  <configuration>
    <property>
      <name>hbase.rootdir</name>
      <!-- must match the NameNode host and port in fs.defaultFS -->
      <value>hdfs://ip:8020/data</value>
    </property>
    <property>
      <name>hbase.zookeeper.property.dataDir</name>
      <value>/bigdata/softwares/hbase-1.2.3/data/zkData</value>
    </property>
    <property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
    </property>
  </configuration>
Edit conf/regionservers and change the IP addresses to your RegionServer hosts.
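conf/regionservers lists one RegionServer host per line. A sketch writing a sample file to a scratch directory (the address is the example host used later in these notes; substitute your own):

```shell
# Write a sample regionservers file (one host or IP per line) to a scratch dir;
# in practice, edit conf/regionservers under the HBase install directory.
CONF_DIR="$(mktemp -d)"
cat > "$CONF_DIR/regionservers" <<'EOF'
192.168.1.23
EOF
echo "regionserver count: $(wc -l < "$CONF_DIR/regionservers")"
```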
$ bin/hbase-daemon.sh start zookeeper
$ bin/hbase-daemon.sh start master
$ bin/hbase-daemon.sh start regionserver
RegionServer web UI: http://192.168.1.23:16030/rs-status (the port has changed: HBase 1.x uses 16030 instead of the old 60030)