Software versions:
hadoop-2.6.4; hbase-0.98.20-hadoop2; zookeeper-3.4.6
APT sources used:
deb http://mirrors.ustc.edu.cn/raspbian/raspbian/ jessie main contrib non-free rpi
deb-src http://mirrors.ustc.edu.cn/raspbian/raspbian/ jessie main contrib non-free rpi
Cluster layout:
Hostname  IP             Installed software              Running processes
nna  192.168.11.81  jdk, hadoop                     NameNode, DFSZKFailoverController (zkfc)
nns  192.168.11.82  jdk, hadoop                     NameNode, DFSZKFailoverController (zkfc)
rma  192.168.11.83  jdk, hadoop                     ResourceManager
rms  192.168.11.84  jdk, hadoop                     ResourceManager
hba  192.168.11.85  jdk, hadoop, hbase              HMaster
hbs  192.168.11.86  jdk, hadoop, hbase              HMaster
dn1  192.168.11.91  jdk, hadoop, zookeeper, hbase   DataNode, NodeManager, JournalNode, QuorumPeerMain, HRegionServer
dn2  192.168.11.92  jdk, hadoop, zookeeper, hbase   DataNode, NodeManager, JournalNode, QuorumPeerMain, HRegionServer
dn3  192.168.11.93  jdk, hadoop, zookeeper, hbase   DataNode, NodeManager, JournalNode, QuorumPeerMain, HRegionServer
1. Create the hadoop user (run as root)
adduser hadoop
chmod +w /etc/sudoers
Append the following line to /etc/sudoers:
hadoop ALL=(root) NOPASSWD:ALL
chmod -w /etc/sudoers
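To confirm the passwordless sudo entry took effect, a quick check (a sketch; run as the hadoop user on a configured node):

```shell
# -n makes sudo fail instead of prompting for a password,
# so this only succeeds if the NOPASSWD rule is in effect.
sudo -n true && echo "passwordless sudo OK"
```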
2. Synchronize time
Set the system time zone first (this copies the Shanghai zoneinfo; it does not sync the clock itself):
sudo cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
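To actually keep the clocks in step across nodes (HBase in particular is sensitive to clock skew), an NTP client can be added (a sketch, assuming the Raspbian `ntp`/`ntpdate` packages):

```shell
# Keep the node's clock continuously synchronized:
sudo apt-get install -y ntp
# Or do a one-off manual sync against a public pool:
sudo apt-get install -y ntpdate
sudo ntpdate pool.ntp.org
```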
3. Auto-mount the USB drive at boot
The drive is formatted FAT32 (mounted as type vfat).
uid is the user ID and gid the group ID; look them up with the id command.
Append the following to /etc/fstab:
/dev/sda1 /hadoop vfat suid,exec,dev,noatime,user,utf8,rw,auto,async,uid=1001,gid=1001 0 0
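To find the uid/gid values and to test the new fstab entry without rebooting (a sketch; assumes the hadoop user and the /hadoop mount point from above):

```shell
# uid/gid of the hadoop user (this guide assumes 1001/1001):
id hadoop
# Create the mount point and apply all fstab entries now:
sudo mkdir -p /hadoop
sudo mount -a
# Verify the USB drive is mounted where expected:
mount | grep /hadoop
```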
4. Configure hosts
Edit /etc/hosts and add all nine nodes:
192.168.11.81 nna
192.168.11.82 nns
192.168.11.83 rma
192.168.11.84 rms
192.168.11.85 hba
192.168.11.86 hbs
192.168.11.91 dn1
192.168.11.92 dn2
192.168.11.93 dn3
Edit /etc/hostname on each node to its own name, e.g. on nna:
nna
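Once /etc/hosts is in place on every machine, a quick loop confirms that each hostname resolves and answers (a sketch using the hostnames above):

```shell
# Ping each node once by hostname; a failure points to a bad
# /etc/hosts entry or an unreachable machine.
for h in nna nns rma rms hba hbs dn1 dn2 dn3; do
    ping -c 1 -W 2 "$h" > /dev/null && echo "$h OK" || echo "$h FAILED"
done
```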
5. Install a JDK
Install either OpenJDK or the Oracle JDK:
sudo apt-cache search jdk
sudo apt-get install openjdk-8-jdk
sudo apt-get install oracle-java8-jdk
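Whichever package you choose, verify the installation before continuing:

```shell
# Both should print the installed JDK version, e.g. 1.8.0_xx:
java -version
javac -version
```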
6. Configure environment variables
Edit /etc/profile and append:
# set java environment
export JAVA_HOME=/usr/lib/jvm/jdk-8-oracle-arm32-vfp-hflt/
export JRE_HOME=/usr/lib/jvm/jdk-8-oracle-arm32-vfp-hflt/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
# set hadoop environment
export HADOOP_HOME=/home/hadoop/hadoop-2.6.4
export PATH=$PATH:$HADOOP_HOME/bin
# set zookeeper environment
export ZK_HOME=/home/hadoop/zookeeper-3.4.6
export PATH=$PATH:$ZK_HOME/bin
# set hbase environment
export HBASE_HOME=/home/hadoop/hbase-0.98.20-hadoop2
export PATH=$PATH:$HBASE_HOME/bin
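After saving /etc/profile, load it into the current shell and sanity-check the variables (a sketch; the paths are this guide's install locations):

```shell
# Reload the profile in the current shell:
. /etc/profile
# Each variable should print the path configured above:
echo "JAVA_HOME=$JAVA_HOME"
echo "HADOOP_HOME=$HADOOP_HOME"
echo "ZK_HOME=$ZK_HOME"
echo "HBASE_HOME=$HBASE_HOME"
# And the tools should now be on PATH:
which hadoop zkServer.sh hbase
```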
7. Create directories
mkdir -p /hadoop/tmp
mkdir -p /hadoop/data/tmp/journal
mkdir -p /hadoop/data/dfs/name
mkdir -p /hadoop/data/dfs/data
mkdir -p /hadoop/data/yarn/local
mkdir -p /hadoop/data/zookeeper
mkdir -p /hadoop/log/yarn
8. Install ZooKeeper
Edit ~/zookeeper-3.4.6/conf/zoo.cfg:
# The number of milliseconds of each tick
# basic time unit in milliseconds used by ZooKeeper for heartbeats
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
# maximum number of ticks a follower may take to connect
# and sync with the leader during the initial phase
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
# maximum number of ticks allowed between the leader's request
# and a follower's acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# path where ZooKeeper stores its snapshot data (and, by default, logs)
dataDir=/hadoop/data/zookeeper
# the port at which the clients will connect
# port on which clients connect to ZooKeeper
clientPort=2181
server.1=dn1:2888:3888
server.2=dn2:2888:3888
server.3=dn3:2888:3888
# server.A=B:C:D
# A is a number identifying the server; B is the server's IP address (or hostname);
# C is the port the server uses to exchange data with the cluster leader;
# D is the port used for leader election after the leader fails.
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
Next, on each dn node create a myid file in the dataDir directory containing a number between 1 and 255 that matches the node's server.N entry in zoo.cfg: 1 on dn1, 2 on dn2, 3 on dn3.
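For example, on dn1 (the dataDir path matches zoo.cfg above; use 2 on dn2 and 3 on dn3):

```shell
# Write this node's server number (the N in server.N) into myid:
echo 1 > /hadoop/data/zookeeper/myid
# Then start ZooKeeper on each dn node and check its role:
zkServer.sh start
zkServer.sh status   # one node reports "leader", the others "follower"
```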