General workflow for setting up Zookeeper, HBase, Hive, Phoenix, Kylin, and CM
Zookeeper installation
- Unpack the tarball
- Create the zkdata directory
- Create the myid file
echo 1 > myid
cat myid
- In conf/, rename zoo_sample.cfg to zoo.cfg and configure it
dataDir=/opt/module/zookeeper-3.4.10/zkdata
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
- Distribute zookeeper with the xsync.sh script
- Edit myid on the other machines
echo 2 > myid
echo 3 > myid
- Start the servers and check their status
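The myid steps above can be sketched as one loop. Here the three "hosts" are stand-in local directories (on a real cluster you run the echo on each node, or via ssh); note the spaces around `>` — `echo 1>myid` redirects fd 1 of an argument-less echo and writes nothing useful.

```shell
# Write each server's id into its own zkdata/myid.
# node1..node3 are local stand-in directories, an assumption for illustration.
for i in 1 2 3; do
  mkdir -p "node$i/zkdata"
  echo "$i" > "node$i/zkdata/myid"
done
cat node2/zkdata/myid
```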
HBase installation
- Unpack the tarball
- Configure hbase-env.sh
export HBASE_MANAGES_ZK=false
export HBASE_CLASSPATH=/opt/module/hbase-1.2.5/conf
export HBASE_LOG_DIR=/opt/module/hbase-1.2.5/logs
- Configure hbase-site.xml (the HBase root path on HDFS; its port must match Hadoop's fs.defaultFS port)
- Configure regionservers
hadoop102
hadoop103
hadoop104
- Configure log4j.properties
hbase.log.dir=/opt/module/hbase-1.2.5/logs
- Distribute hbase with the xsync.sh script
- Visit http://hadoop102:16010 in a browser
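The hbase-site.xml step above usually boils down to entries like the following. This is a sketch, not the exact file: the hdfs://hadoop102:9000 URI is an assumption — use whatever host:port fs.defaultFS gives in core-site.xml.

```xml
<configuration>
  <!-- HBase data directory on HDFS; host:port must match fs.defaultFS (9000 assumed here) -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop102:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <!-- external Zookeeper quorum, matching HBASE_MANAGES_ZK=false in hbase-env.sh -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop102,hadoop103,hadoop104</value>
  </property>
</configuration>
```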
Hive installation
- Unpack the tarball
- Add a hive symlink
- Rename hive-env.sh.template to hive-env.sh and configure it
export HADOOP_HOME=/opt/module/hadoop-2.7.2
export HIVE_CONF_DIR=/opt/module/hive/conf
- Rename hive-log4j.properties.template to hive-log4j.properties and configure it
hive.log.dir=/opt/module/apache-hive-1.2.1-bin/logs
- Replace the zookeeper jar
[feng@node1 ~]$ cd /opt/module/hive/lib/
[feng@node1 lib]$ rm -f zookeeper-3.4.6.jar
[feng@node1 lib]$ cp /opt/module/zookeeper-3.4.10/zookeeper-3.4.10.jar ./
HDFS and YARN must be running first
- Create the Hive warehouse directories
Create /tmp and /user/hive/warehouse on HDFS and make them group-writable
[feng@node1 hadoop-2.8.2]$ bin/hadoop fs -mkdir /tmp
[feng@node1 hadoop-2.8.2]$ bin/hadoop fs -mkdir -p /user/hive/warehouse
[feng@node1 hadoop-2.8.2]$ bin/hadoop fs -chmod g+w /tmp
[feng@node1 hadoop-2.8.2]$ bin/hadoop fs -chmod g+w /user/hive/warehouse
[feng@node1 hadoop-2.8.2]$ bin/hadoop fs -ls -R /user
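`hadoop fs -chmod` follows POSIX mode semantics, so the g+w effect above can be sanity-checked with a plain local directory (a stand-in for the HDFS paths, not HDFS itself):

```shell
# Local analog of the HDFS g+w step: the group permission bits gain write.
mkdir -p warehouse_demo
chmod 755 warehouse_demo      # start from rwxr-xr-x (755)
chmod g+w warehouse_demo      # group gains w -> rwxrwxr-x (775)
stat -c '%a' warehouse_demo   # print the octal mode
```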
Phoenix installation
- Upgrade HBase to 1.3.1
- Unpack the tarball
- Configure hbase-env.sh
export JAVA_HOME=/opt/module/jdk1.8.0_144
export HBASE_CLASSPATH=/opt/module/hbase-1.3.1/conf
export HBASE_MANAGES_ZK=false
export HBASE_LOG_DIR=/opt/module/hbase-1.3.1/logs
- Configure hbase-site.xml
- Configure regionservers
hadoop102
hadoop103
hadoop104
- Configure log4j.properties
hbase.log.dir=/opt/module/hbase-1.3.1/logs
- Symlink the Hadoop config files HBase depends on into conf/ (and update the dependency jars under lib/ as needed)
[feng@node1 ~]$ cd /opt/module/hbase-1.3.1/conf/
[feng@node1 conf]$ ln -s /opt/module/hadoop-2.7.2/etc/hadoop/core-site.xml core-site.xml
[feng@node1 conf]$ ln -s /opt/module/hadoop-2.7.2/etc/hadoop/hdfs-site.xml hdfs-site.xml
- Distribute hbase
- Install phoenix
- Unpack the tarball
- Create a symlink
- Copy the jars
[feng@hadoop102 phoenix]$ cp phoenix-core-4.14.3-HBase-1.3.jar ../hbase-1.3.1/lib/
[feng@hadoop102 phoenix]$ cp phoenix-4.14.3-HBase-1.3-server.jar ../hbase-1.3.1/lib/
[feng@hadoop102 phoenix]$ cp phoenix-4.14.3-HBase-1.3-client.jar ../hbase-1.3.1/lib/
- Distribute the three copied jars
- Copy the config files
[feng@hadoop102 module]$ cp hadoop-2.7.2/etc/hadoop/core-site.xml phoenix/bin/
[feng@hadoop102 module]$ cp hadoop-2.7.2/etc/hadoop/hdfs-site.xml phoenix/bin/
[feng@hadoop102 module]$ cp hbase-1.3.1/conf/hbase-site.xml phoenix/bin/
- Configure environment variables
export PHOENIX_HOME=/opt/module/phoenix
export PHOENIX_CLASSPATH=$PHOENIX_HOME
export PATH=$PATH:$PHOENIX_HOME/bin
- Start all services
- Phoenix login options
Option 1: bin/sqlline.py hadoop102,hadoop103,hadoop104:2181
Option 2: bin/queryserver.py &
bin/sqlline-thin.py hadoop102:8765
Kylin installation
- Change the owner and group
- Unpack the tarball
- Create a symlink
- Edit the kylin.sh script
- Configure environment variables
- Configure mapred-site.xml
- Distribute mapred-site.xml
- Start all services
- Visit http://hadoop102:7070/kylin in a browser
Default credentials
username:*****
password:*****
- Shutdown
1) Stop Kylin
[feng@hadoop102 kylin]$ bin/kylin.sh stop
2) Stop HBase
[feng@hadoop102 hbase-1.3.1]$ bin/stop-hbase.sh
3) Stop the historyserver
[feng@hadoop102 hadoop-2.7.2]$ sbin/mr-jobhistory-daemon.sh stop historyserver
stopping historyserver
4) Stop YARN
[feng@hadoop103 hadoop-2.7.2]$ sbin/stop-yarn.sh
5) Stop HDFS
[feng@hadoop102 hadoop-2.7.2]$ sbin/stop-dfs.sh
6) Stop Zookeeper
[feng@hadoop102 zookeeper-3.4.10]$ zk-stop.sh
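The six shutdown steps above can be wrapped in one script that stops everything in reverse start order (Kylin first, Zookeeper last). A sketch: the absolute paths follow this doc's layout and are assumptions, and DRY_RUN=1 (the default here) only prints the commands instead of executing them.

```shell
# Stop the whole stack in reverse dependency order.
# DRY_RUN=1 -> echo only; DRY_RUN=0 -> actually execute (needs the real cluster).
DRY_RUN=${DRY_RUN:-1}
STOPPED=""
run() {
  STOPPED="$STOPPED $1"                    # record order for inspection
  if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi
}
run /opt/module/kylin/bin/kylin.sh stop
run /opt/module/hbase-1.3.1/bin/stop-hbase.sh
run /opt/module/hadoop-2.7.2/sbin/mr-jobhistory-daemon.sh stop historyserver
run /opt/module/hadoop-2.7.2/sbin/stop-yarn.sh
run /opt/module/hadoop-2.7.2/sbin/stop-dfs.sh
run zk-stop.sh
```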
- cloudera manager configuration
- Create MariaDB.repo
[root@hadoop102 ~]# vim /etc/yum.conf
keepcache=1
[root@hadoop102 ~]# cd /etc/yum.repos.d/
[root@hadoop102 yum.repos.d]# touch MariaDB.repo
[root@hadoop102 yum.repos.d]# vim MariaDB.repo
[MariaDB]
name=MariaDB
baseurl=http://yum.mariadb.org/10.1/centos7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
enabled=1
- Install MariaDB with yum
yum install -y MariaDB-server MariaDB-client
- Start the service
[root@hadoop102 ~]# service mysql start
[root@hadoop102 ~]# systemctl start mariadb
- Enable at boot
[root@hadoop102 ~]# systemctl enable mariadb
[root@hadoop102 ~]# chkconfig mysql --list
- Set or change the MariaDB root password
Method 1
[root@hadoop102 ~]# mysql
MariaDB [(none)]> set password=password('000000');
Note: the syntax is set password=password('newpassword');
Method 2
[root@hadoop102 ~]# mysqladmin -u root password 000000
If root already has a password:
[root@hadoop102 ~]# mysqladmin -uroot -p000000 password 123456
- Allow remote access
MariaDB [(none)]> use mysql;
MariaDB [mysql]> grant all privileges on *.* to 'root'@'%' identified by '000000' with grant option;
MariaDB [mysql]> select host,user,password from user;
MariaDB [mysql]> update user set host='%' where user='root' and host='localhost';
MariaDB [mysql]> delete from user where host!='%';
MariaDB [mysql]> flush privileges;
- Change the MariaDB data directory
(1) Verify MariaDB is running
[root@hadoop102 ~]# service mysql status
(2) Once confirmed, stop the service
[root@hadoop102 ~]# service mysql stop
(3) Create the new data directory
[root@hadoop102 /]# mkdir -p /data/mysql_data
[root@hadoop102 /]# chown -R mysql:mysql /data/mysql_data
(4) Copy the default data directory to /data/mysql_data
[root@hadoop102 /]# cp -a /var/lib/mysql /data/mysql_data
(5) Edit server.cnf
[root@hadoop102 /]# vim /etc/my.cnf.d/server.cnf
datadir=/data/mysql_data/mysql
socket=/var/lib/mysql/mysql.sock
#default-character-set=utf8
character_set_server=utf8
slow_query_log=on
slow_query_log_file=/data/mysql_data/slow_query_log.log
long_query_time=2
- Add the MySQL JDBC driver
[root@hadoop102 /]# cp mysql-connector-java-5.1.27-bin.jar /usr/share/java/mysql-connector-java.jar
- Distribute it
[root@hadoop102 java]# xsync mysql-connector-java.jar
- Create the databases
MariaDB [mysql]> create database hive default charset utf8 collate utf8_general_ci;
MariaDB [mysql]> create database amon default charset utf8 collate utf8_general_ci;
MariaDB [mysql]> create database hue default charset utf8 collate utf8_general_ci;
MariaDB [mysql]> create database monitor default charset utf8 collate utf8_general_ci;
MariaDB [mysql]> create database oozie default charset utf8 collate utf8_general_ci;
MariaDB [mysql]> flush privileges;
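The five CREATE DATABASE statements above differ only in the database name, so they can be generated and piped into `mysql -uroot -p` instead of typed one by one. A sketch (the db list comes from the steps above; `if not exists` is an addition for idempotence):

```shell
# Generate the CREATE DATABASE statements for the CM service databases.
# Pure text generation, so it runs anywhere; pipe $SQL into mysql on the real host.
SQL=$(for db in hive amon hue monitor oozie; do
  echo "create database if not exists $db default charset utf8 collate utf8_general_ci;"
done)
echo "$SQL"
```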
Configure passwordless SSH
Configure NTP time synchronization
CM installation
- Install third-party dependencies
[root@hadoop102 ~]# yum -y install chkconfig python bind-utils psmisc libxslt zlib sqlite cyrus-sasl-plain cyrus-sasl-gssapi fuse fuse-libs redhat-lsb mod_ssl httpd
[root@hadoop103 ~]# yum -y install chkconfig python bind-utils psmisc libxslt zlib sqlite cyrus-sasl-plain cyrus-sasl-gssapi fuse fuse-libs redhat-lsb mod_ssl httpd
[root@hadoop104 ~]# yum -y install chkconfig python bind-utils psmisc libxslt zlib sqlite cyrus-sasl-plain cyrus-sasl-gssapi fuse fuse-libs redhat-lsb mod_ssl httpd
- Upload and unpack the installer
- Create the cloudera-scm user
[root@hadoop102 module]# useradd --system --home=/opt/cm-5.14.1/run/cloudera-scm-server --no-create-home --shell=/bin/false --comment "Cloudera SCM User" cloudera-scm
[root@hadoop103 module]# useradd --system --home=/opt/cm-5.14.1/run/cloudera-scm-server --no-create-home --shell=/bin/false --comment "Cloudera SCM User" cloudera-scm
[root@hadoop104 module]# useradd --system --home=/opt/cm-5.14.1/run/cloudera-scm-server --no-create-home --shell=/bin/false --comment "Cloudera SCM User" cloudera-scm
Options: --system creates a system account; --home sets the login home directory (instead of the default /home/<user>); --no-create-home skips creating the home directory; --shell sets the login shell; --comment sets the user description
- Verify the user
[root@hadoop102 cloudera-scm-server]# id cloudera-scm
- Inspect the unpacked contents
[root@hadoop102 opt]# ls -l
drwxr-xr-x 4 1106 4001 36 Feb 8 2018 cloudera
drwxr-xr-x 9 1106 4001 88 Feb 8 2018 cm-5.14.1
drwxr-xr-x. 9 alex alex 212 May 5 09:51 module
drwxr-xr-x. 4 alex alex 4096 May 5 10:05 software
- Add the driver jar
1) Location 1
[root@hadoop102 software]# cp mysql-connector-java-5.1.27-bin.jar /opt/cm-5.14.1/share/cmf/lib/
2) Location 2
[root@hadoop102 /]# cp mysql-connector-java-5.1.27-bin.jar /usr/share/java/mysql-connector-java.jar
Note: the jar must be renamed to mysql-connector-java.jar
- Distribute it
[root@hadoop102 java]# xsync mysql-connector-java.jar
- Configure the agent
Point each node's cloudera-manager-agent at the server node
[root@hadoop102 opt]# cd cm-5.14.1/
[root@hadoop102 cm-5.14.1]# vim etc/cloudera-scm-agent/config.ini
[General]
# Hostname of the CM server.
server_host=hadoop102
# Port that the CM server is listening on.
server_port=7182
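Editing config.ini on every agent node by hand is error-prone; the server_host line can also be patched with sed and distributed. A sketch using a scratch copy of the file (the real path is etc/cloudera-scm-agent/config.ini under cm-5.14.1; the initial contents here are an assumption for illustration):

```shell
# Point an agent config at the CM server by rewriting server_host in place.
cfg=config.ini.demo
printf '[General]\nserver_host=localhost\nserver_port=7182\n' > "$cfg"   # stand-in file
sed -i 's/^server_host=.*/server_host=hadoop102/' "$cfg"
grep '^server_host=' "$cfg"
```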
- Create the cm database in mysql
[root@hadoop102 opt]# cd cm-5.14.1/share/cmf/schema/
[root@hadoop102 schema]# ./scm_prepare_database.sh mysql cm -hhadoop102 -uroot -p000000 --scm-host hadoop102 scm scm scm
Options: -h database host; -u database username; -p database password; --scm-host SCM server's hostname
- Add the install files to the CM parcel-repo directory
[root@hadoop102 software]# cp -r clouderamanager/CDH-5.14* /opt/cloudera/parcel-repo/
[root@hadoop102 software]# cp clouderamanager/manifest.json /opt/cloudera/parcel-repo/
[root@hadoop102 software]# cd /opt/cloudera/parcel-repo/
[root@hadoop102 parcel-repo]# ls
CDH-5.14.0-1.cdh5.14.0.p0.24-el7.parcel
CDH-5.14.0-1.cdh5.14.0.p0.24-el7.parcel.sha
manifest.json
- Create the parcels directory
[root@hadoop102 opt]# mkdir -p cloudera/parcels
Note: the parcels directory is the install path
- Change the owner and group
[root@hadoop102 opt]# chown -R cloudera-scm:cloudera-scm cloudera/
[root@hadoop102 opt]# chown -R cloudera-scm:cloudera-scm cm-5.14.1/
- Distribute cm
[root@hadoop102 opt]# xsync.sh cm-5.14.1/
- Distribute cloudera
[root@hadoop102 opt]# xsync.sh cloudera/
- Fix ownership on hadoop103 and hadoop104
[root@hadoop103 opt]# chown -R cloudera-scm:cloudera-scm cloudera/
[root@hadoop103 opt]# chown -R cloudera-scm:cloudera-scm cm-5.14.1/
[root@hadoop104 opt]# chown -R cloudera-scm:cloudera-scm cloudera/
[root@hadoop104 opt]# chown -R cloudera-scm:cloudera-scm cm-5.14.1/
- Start the cluster
1. Start cloudera-scm-server
[root@hadoop102 opt]# cd cm-5.14.1/etc/init.d/
[root@hadoop102 init.d]# ./cloudera-scm-server start
Starting cloudera-scm-server: [ OK ]
2. Start cloudera-scm-agent
[root@hadoop102 opt]# cd cm-5.14.1/etc/init.d/
[root@hadoop102 init.d]# ./cloudera-scm-agent start
Starting cloudera-scm-agent: [ OK ]
[root@hadoop103 opt]# cd cm-5.14.1/etc/init.d/
[root@hadoop103 init.d]# ./cloudera-scm-agent start
Starting cloudera-scm-agent: [ OK ]
[root@hadoop104 opt]# cd cm-5.14.1/etc/init.d/
[root@hadoop104 init.d]# ./cloudera-scm-agent start
Starting cloudera-scm-agent: [ OK ]
- Access in a browser
http://hadoop102:7180/cmf/login
- Check the port
[root@hadoop102 ~]# netstat -anp | grep 7180