1.1 Operating System
Operating system: linux-3.10.0-327.el7.x86_64-x86_64-with-centos-7.2.1511-core
or: linux-2.6.32-504.el6.x86_64-x86_64-with-centos-6.6-final
2 Software Environment
2.1 Firewall
Disable the firewall on every node:
#service iptables stop
#chkconfig iptables off
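Section 1.1 lists both a CentOS 6 and a CentOS 7 kernel, but the two commands above only apply to CentOS 6. A minimal sketch that picks the matching commands, assuming the presence of systemctl distinguishes CentOS 7 (firewalld) from CentOS 6 (iptables service):

```shell
# Pick the firewall-disable commands by init system (sketch; assumes
# systemctl present => CentOS 7 / firewalld, absent => CentOS 6 / iptables).
if command -v systemctl >/dev/null 2>&1; then
    STOP_CMD="systemctl stop firewalld"
    DISABLE_CMD="systemctl disable firewalld"
else
    STOP_CMD="service iptables stop"
    DISABLE_CMD="chkconfig iptables off"
fi
echo "stop:    $STOP_CMD"
echo "disable: $DISABLE_CMD"
# On a real node, run the two commands instead of echoing them.
```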
3 Installation and Deployment
3.1 SaltStack Installation
3.1.1 Role Assignment
| IP | OS Version | Role |
| 10.10.2.34 | CentOS 6.5 x86_64 | Master, Minion |
| 10.10.2.35 | CentOS 6.5 x86_64 | Minion |
| 10.10.2.36 | CentOS 6.5 x86_64 | Minion |
3.1.2 Package Installation
Install on the Master (-M also installs salt-master; -N skips the salt-minion install — since 10.10.2.34 is also listed as a Minion above, either omit -N or install salt-minion on it separately):
Master#:curl -L https://bootstrap.saltstack.com -o install_salt.sh
Master#:sh install_salt.sh -M -N
Install on each Minion:
Minion#:curl -L https://bootstrap.saltstack.com -o install_salt.sh
Minion#:sh install_salt.sh
3.1.3 Configuration File Changes
/etc/salt/master:
##### File Server settings #####
##########################################
# Salt runs a lightweight file server written in zeromq to deliver files to
# minions. This file server is built into the master daemon and does not
# require a dedicated port.
# The file server works on environments passed to the master, each environment
# can have multiple root directories, the subdirectories in the multiple file
# roots cannot match, otherwise the downloaded files will not be able to be
# reliably ensured. A base environment is required to house the top file.
# Example:
file_roots:
  base:
    - /srv/salt/
# dev:
# - /srv/salt/dev/services
# - /srv/salt/dev/states
# prod:
# - /srv/salt/prod/services
# - /srv/salt/prod/states
#
#file_roots:
# base:
# - /srv/salt
#
##### Pillar settings #####
##########################################
# Salt Pillars allow for the building of global data that can be made selectively
# available to different minions based on minion grain filtering. The Salt
# Pillar is laid out in the same fashion as the file server, with environments,
# a top file and sls files. However, pillar data does not need to be in the
# highstate format, and is generally just key/value pairs.
pillar_roots:
  base:
    - /srv/pillar
#
#ext_pillar:
# - hiera: /etc/hiera.yaml
# - cmd_yaml: cat /etc/salt/yaml
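The comment above notes that the base environment must house the top file. A minimal /srv/salt/top.sls matching the file_roots setting might look like the following (the useradd state name is taken from section 3.2.2; adjust to your own states):

```yaml
# /srv/salt/top.sls -- minimal top file for the base environment (sketch)
base:
  '*':            # target every minion
    - useradd     # i.e. /srv/salt/useradd.sls (referenced in section 3.2.2)
```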
/etc/salt/minion:
# Set the location of the salt master server. If the master server cannot be
# resolved, then the minion will fail to start.
master: 10.10.2.34
# Set http proxy information for the minion when doing requests
#proxy_host:
#proxy_port:
#proxy_username:
#proxy_password:
# will be the hostname as returned by the python call: socket.getfqdn()
# Since salt uses detached ids it is possible to run multiple minions on the
# same machine but with different ids, this can be useful for salt compute
# clusters.
id: hadoop001    # set to hadoop001 / hadoop002 / hadoop003 on the three minions respectively
# Cache the minion id to a file when the minion's id is not statically defined
# in the minion config. Defaults to "True". This setting prevents potential
# problems when automatic minion id resolution changes, which can cause the
# minion to lose connection with the master. To turn off minion id caching,
# set this config to ``False``.
#minion_id_caching: True
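Putting the two settings together, the minion configuration on each node reduces to the following (shown for hadoop001; only the id line differs on the other two nodes):

```yaml
# /etc/salt/minion on hadoop001 (sketch; change id per node)
master: 10.10.2.34
id: hadoop001    # hadoop002 / hadoop003 on the other minions
```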
3.1.4 Key Authentication
Master#:service salt-master start
Minion#(all three):service salt-minion start
Master#:salt-key -L
Accepted Keys:
Denied Keys:
Unaccepted Keys:
hadoop001
hadoop002
hadoop003
Rejected Keys:
Master#:salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
hadoop001
hadoop002
hadoop003
Proceed? [n/Y] Y
Key for minion hadoop001 accepted.
Key for minion hadoop002 accepted.
Key for minion hadoop003 accepted.
Master#:salt '*' test.ping
hadoop003:
True
hadoop001:
True
hadoop002:
True
3.2 Deploying a Three-Node Hadoop Cluster with SaltStack
3.2.1 Packages
Copy the scripts into /srv/salt, and copy beh.tar.gz into the same directory.
3.2.2 User Creation
# salt '*' state.sls useradd
3.2.3 Granting Privileges
Run on every minion:
#echo "hadoop ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/hadoop
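A broken sudoers drop-in can disable sudo entirely, so it is worth validating the rule before installing it. A sketch using a staging file (the staging path is illustrative; `visudo -cf` checks the syntax of a single file without installing it):

```shell
# Stage the sudoers rule, validate it, then install it (sketch).
RULE='hadoop ALL = (root) NOPASSWD:ALL'
STAGE=/tmp/hadoop.sudoers                  # staging path (assumption)
echo "$RULE" > "$STAGE"
# sudo visudo -cf "$STAGE"                             # syntax check (run on a real node)
# sudo install -m 0440 "$STAGE" /etc/sudoers.d/hadoop  # sudoers.d files must be mode 0440
grep -c 'NOPASSWD' "$STAGE"   # prints 1
```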
3.2.4 Grains Initialization
The myid check relies on a custom value; if it has not been set, SaltStack reports an error.
$cd /srv/salt/python_work/
$python write_zookeeper.py
3.2.5 File Distribution
$cd /srv/salt/python_work/
$python main.py copy
This command distributes beh.tar.gz to /opt/beh on all three machines, extracts it, and fixes the permissions.
3.2.6 System Initialization
/srv/salt/hosts_add.sls (change the IPs to those of your own cluster):
/etc/hosts:
  file.append:
    - text:
      - "10.10.2.43 hadoop001"
      - "10.10.2.44 hadoop002"
      - "10.10.2.45 hadoop003"
/srv/pillar/ip4.sls (parameters required by the NTP install; set ip4 to the NTP master's IP, ip to the network segment address):
ip4: 10.10.2.43
ip: 10.10.2.0
$cd /srv/salt/python_work/
$python main.py init
This command makes the three machines append the IP list to /etc/hosts, write the myid file matching each node's own IP, and disable the firewall; it runs mkfs.ext4 on the disk given by the disk value in /srv/pillar/disk.sls, mounts it, and records it in /etc/fstab; it installs NTP, updates its configuration, and starts the service; and it raises the maximum process and open-file limits, updates the MySQL configuration, grants privileges to the hadoop user, and starts MySQL.
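The mount step above appends an entry to /etc/fstab; the line it generates has roughly this shape (the device and mount point below are illustrative assumptions — the real device comes from the disk value in /srv/pillar/disk.sls):

```shell
# Build the fstab line the init step would append (sketch, assumed values).
DISK=/dev/sdb1        # from the disk value in disk.sls (assumption)
MOUNTPOINT=/opt/data  # illustrative mount point (assumption)
FSTAB_LINE=$(printf '%s\t%s\text4\tdefaults\t0 0' "$DISK" "$MOUNTPOINT")
echo "$FSTAB_LINE"
# On a real node: mkfs.ext4 "$DISK"; mkdir -p "$MOUNTPOINT";
# echo "$FSTAB_LINE" >> /etc/fstab; mount -a
```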
3.2.7 Configuration Initialization
$cd /srv/salt/python_work/
$python main.py create
This command copies the configuration files of every component under /opt/beh into /srv/salt/files/dir.
3.2.8 Configuration Update
$cd /srv/salt/python_work/
$python main.py refresh
After editing a component's configuration files under /srv/salt/files/dir, run this command to push the changes to every node.
3.2.9 Cluster Hatch
$cd /srv/salt/python_work/
$python main.py hatch
The cluster starts automatically, in order: start the ZooKeeper service; format the ZooKeeper storage directory for HDFS; start the JournalNode cluster; format and start the first NameNode; format and start the second NameNode; start all DataNodes; start the ZooKeeperFailoverControllers; start YARN and the JobHistory Server; start the Timeline Server on hadoop001; and start the metastore, HMaster, and HRegionServers.
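After hatch finishes, a quick way to confirm the daemons came up is to run jps on each node. A sketch (the process names follow the startup list above; which daemons appear on which node depends on your role layout):

```shell
# Verify the expected Hadoop daemons after hatch (sketch).
NODES="hadoop001 hadoop002 hadoop003"
for node in $NODES; do
    echo "== $node =="
    # ssh "$node" jps        # uncomment on a live cluster; expect names such as
    # NameNode / DataNode / JournalNode / QuorumPeerMain / DFSZKFailoverController
    # ResourceManager / NodeManager / JobHistoryServer / HMaster / HRegionServer
done
```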