## Installation Plan
| Component | hadoop1 | hadoop2 | hadoop3 | mysql | cmrepo |
|---|---|---|---|---|---|
| cloudera-manager-server | √ | | | | |
| cloudera-manager-agent | √ | √ | √ | | |
| NameNode | √ | | | | |
| DataNode | √ | √ | √ | | |
| SecondaryNameNode | √ | | | | |
| ResourceManager | √ | | | | |
| NodeManager | √ | √ | √ | | |
| MetaStore | √ | | | | |
| HiveServer2 | √ | | | | |
| ImpalaCatalogServer | √ | | | | |
| ImpalaDaemon | √ | √ | √ | | |
| ImpalaStateStore | √ | | | | |
| HueServer | √ | | | | |
| Oozie Server | √ | | | | |
| Zookeeper | √ | √ | √ | | |
| MySQL | | | | √ | |
| Repo source | | | | | √ |
## Environment Preparation

### Docker environment

```shell
docker pull centos:7
docker pull mysql:5.7
```

### Installation packages
## Environment Setup
- [Host machine] Create a dedicated subnet for the Docker cluster

```shell
docker network create --subnet 172.20.0.0/16 bigdata
```
- [Host machine] Create three CentOS containers, mapping port 22 and setting the hostnames

```shell
docker run -itd --privileged -p 2221:22 --name hadoop1 --hostname hadoop1 --network bigdata centos:7 /usr/sbin/init
docker run -itd --privileged -p 2222:22 --name hadoop2 --hostname hadoop2 --network bigdata centos:7 /usr/sbin/init
docker run -itd --privileged -p 2223:22 --name hadoop3 --hostname hadoop3 --network bigdata centos:7 /usr/sbin/init
```
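The three `docker run` commands differ only in their index, so they can be generated from one template; this is just a sketch using the same names, ports, and image as above:

```shell
# Print one container-creation command per node index.
# Port 222N on the host maps to sshd (22) in container hadoopN.
make_run_cmd() {
  i=$1
  echo "docker run -itd --privileged -p 222${i}:22 --name hadoop${i} --hostname hadoop${i} --network bigdata centos:7 /usr/sbin/init"
}

for i in 1 2 3; do make_run_cmd "$i"; done
```

Piping the output to `sh` would run them; printing first makes the commands easy to review.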
- Enter the three containers (one terminal each)

```shell
docker exec -it hadoop1 /bin/bash
docker exec -it hadoop2 /bin/bash
docker exec -it hadoop3 /bin/bash
```
- [All three containers] Install SSH and set up passwordless login

```shell
passwd root
yum -y install openssh-server openssh-clients
systemctl start sshd
ssh-keygen -t rsa   # press Enter three times
ssh-copy-id hadoop1
ssh-copy-id hadoop2
ssh-copy-id hadoop3
```
- [All three containers] Disable the firewall

```shell
yum -y install firewalld
systemctl status firewalld
systemctl stop firewalld
systemctl disable firewalld
```
- Synchronize the server clocks, and set up the static IPs and host mappings
- [hadoop1] Acts as the time (NTP) server

```shell
yum -y install ntp
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
cp /etc/ntp.conf /etc/ntp.conf.bak
cp /etc/sysconfig/ntpd /etc/sysconfig/ntpd.bak
echo "restrict hadoop1 mask 255.255.0.0 nomodify notrap" >> /etc/ntp.conf
echo "SYNC_HWCLOCK=yes" >> /etc/sysconfig/ntpd
systemctl restart ntpd
```
- [hadoop2, hadoop3] Sync against the hadoop1 time server

```shell
yum -y install ntpdate crontabs
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
ntpdate hadoop1
echo "*/30 * * * * /usr/sbin/ntpdate hadoop1.bigdata" >> /var/spool/cron/root
```
- [All three] Configure the hosts/IP mappings

```shell
vi /etc/hosts
```

Add the following entries:

```
172.20.0.2 hadoop1.bigdata
172.20.0.3 hadoop2.bigdata
172.20.0.4 hadoop3.bigdata
```
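The hosts entries follow one pattern (172.20.0.(N+1) for hadoopN), so they can also be generated and appended rather than typed by hand. A sketch; the short `hadoopN` alias at the end is my addition for convenience, since Docker's embedded DNS already resolves the bare container names on the `bigdata` network:

```shell
# Print one /etc/hosts line per node; append the output to /etc/hosts.
hosts_entry() {
  i=$1
  printf '172.20.0.%d hadoop%d.bigdata hadoop%d\n' "$((i + 1))" "$i" "$i"
}

for i in 1 2 3; do hosts_entry "$i"; done
```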
- [Host machine] Create the MySQL container

```shell
docker run -itd -p 3306:3306 --name mysql --hostname mysql --network bigdata -e MYSQL_ROOT_PASSWORD=root mysql:5.7
docker exec -it mysql /bin/bash
```
- Enter the MySQL client (e.g. `mysql -uroot -proot`) and create the databases and users the services need

```sql
CREATE DATABASE metastore DEFAULT CHARACTER SET utf8;
CREATE USER 'hive'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'%';
FLUSH PRIVILEGES;

CREATE DATABASE cm DEFAULT CHARACTER SET utf8;
CREATE USER 'cm'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON cm.* TO 'cm'@'%';
FLUSH PRIVILEGES;

CREATE DATABASE am DEFAULT CHARACTER SET utf8;
CREATE USER 'am'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON am.* TO 'am'@'%';
FLUSH PRIVILEGES;

CREATE DATABASE rm DEFAULT CHARACTER SET utf8;
CREATE USER 'rm'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON rm.* TO 'rm'@'%';
FLUSH PRIVILEGES;

CREATE DATABASE hue DEFAULT CHARACTER SET utf8;
CREATE USER 'hue'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON hue.* TO 'hue'@'%';
FLUSH PRIVILEGES;

CREATE DATABASE oozie DEFAULT CHARACTER SET utf8;
CREATE USER 'oozie'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON oozie.* TO 'oozie'@'%';
FLUSH PRIVILEGES;

CREATE DATABASE sentry DEFAULT CHARACTER SET utf8;
CREATE USER 'sentry'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON sentry.* TO 'sentry'@'%';
FLUSH PRIVILEGES;

CREATE DATABASE nav_ms DEFAULT CHARACTER SET utf8;
CREATE USER 'nav_ms'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON nav_ms.* TO 'nav_ms'@'%';
FLUSH PRIVILEGES;

CREATE DATABASE nav_as DEFAULT CHARACTER SET utf8;
CREATE USER 'nav_as'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON nav_as.* TO 'nav_as'@'%';
FLUSH PRIVILEGES;
```
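The per-service statements are identical apart from the database and user names, so they can also be generated and piped into `mysql -uroot -proot` instead of typed one by one. A sketch; the second argument exists only for `metastore`, whose user is `hive` rather than the database name, and `password` is this guide's placeholder:

```shell
# Emit the CREATE DATABASE / CREATE USER / GRANT / FLUSH block for one service.
gen_db_sql() {
  db=$1
  user=${2:-$1}   # user defaults to the database name
  cat <<EOF
CREATE DATABASE ${db} DEFAULT CHARACTER SET utf8;
CREATE USER '${user}'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON ${db}.* TO '${user}'@'%';
FLUSH PRIVILEGES;
EOF
}

# Pipe this whole group into: mysql -uroot -proot
{
  gen_db_sql metastore hive
  for db in cm am rm hue oozie sentry nav_ms nav_as; do gen_db_sql "$db"; done
}
```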
## CM Installation
- [Host machine] Create the container that serves the local repo

```shell
docker run -itd --privileged -p 80:80 --name cmrepo --hostname cmrepo -v <local path containing cm6.2 and cdh6.2>:/opt/software/cdh6.2 --network bigdata centos:7 /usr/sbin/init

# Enter the container and install the HTTP service
docker exec -it cmrepo /bin/bash
yum -y install httpd createrepo
cd /opt/software/cdh6.2/cm6.2
createrepo .
systemctl start httpd
systemctl enable httpd
ln -s /opt/software/cdh6.2/cm6.2 /var/www/html/cm6.2
ln -s /opt/software/cdh6.2/cdh6.2 /var/www/html/cdh6.2
```
- [hadoop1] Write the yum repo file and copy it to the other nodes

```shell
vi /etc/yum.repos.d/cm.repo
```

```
# -------------------------
[cmrepo]
name = cm_repo
baseurl = http://cmrepo/cm6.2
enabled = true
gpgcheck = false
# -------------------------
```

```shell
yum repolist all
cd /etc/yum.repos.d/
scp -r cm.repo hadoop2:$PWD
scp -r cm.repo hadoop3:$PWD
```
- [All three] Install the Oracle JDK package shipped with CM

```shell
yum -y install oracle-j2sdk1.8-1.8.0+update181-1.x86_64
```
- Install cloudera-manager-server and cloudera-manager-agent

[hadoop1]

```shell
yum -y install cloudera-manager-server cloudera-manager-agent cloudera-manager-daemons
```

[hadoop2, hadoop3]

```shell
yum -y install cloudera-manager-agent cloudera-manager-daemons
```
- [All three] Copy the JDBC driver

```shell
mkdir -p /usr/share/java
# Note: the version number must be stripped from the jar's name
mv mysql-connector-java-5.1.31.jar /usr/share/java/mysql-connector-java.jar
```
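Why the rename matters: Cloudera Manager looks for the driver at exactly `/usr/share/java/mysql-connector-java.jar`, with no version in the name. As a small sketch, shell parameter expansion can derive the target name from whatever versioned jar was downloaded:

```shell
# Strip the "-5.1.31"-style version suffix from the jar's filename.
src=mysql-connector-java-5.1.31.jar
dest="${src%-[0-9]*.jar}.jar"
echo "$dest"   # → mysql-connector-java.jar
```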
- [hadoop1] Initialize the CM database and start the server

```shell
/opt/cloudera/cm/schema/scm_prepare_database.sh -h mysql mysql cm cm password
systemctl start cloudera-scm-server
```
## Cluster Installation
- Configure a port proxy/mapping so the host browser can reach port 7180 on hadoop1 (only port 22 was published when the containers were created)
- Visit port 7180 on hadoop1; the username and the password are both admin
- Keep clicking Continue and choose the trial edition
- Enter a name for the new cluster
- Search for the hosts here; if the agents were already installed, the cluster can instead be built from the existing managed hosts
- Repository settings: choose "More Options" and configure the CM and CDH package sources (the local repos set up above)
- Install the JDK; if it is already installed, this step can be skipped
- Wait for the installation to finish
- Host inspection: checking the hosts entries is enough; the first inspection takes quite a while
- Select the services to install; Data Warehouse is chosen here
- Assign the cluster roles
- Click "Continue" to move on and test the database connections
- Once the tests succeed, click "Continue" for the directory settings; the defaults are kept here, so adjust the directories to your situation
- Click "Continue" and the services start one by one
- After a successful installation you land on the Home management page
## Restarting the Cluster

After a full restart you need to:
- [hadoop1, hadoop2, hadoop3]

```shell
systemctl start cloudera-scm-agent
```

- [hadoop1]

```shell
systemctl start cloudera-scm-server
```
- Open port 7180 on hadoop1 and start all services from the CM UI
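The restart order can be captured in one helper: agents on every node first, then the server on hadoop1. A sketch that only prints the commands; with the passwordless SSH configured earlier, the output could be piped to `sh` to actually run them:

```shell
# Print the post-restart commands in order: all agents, then the server.
restart_cmds() {
  for h in hadoop1 hadoop2 hadoop3; do
    echo "ssh $h systemctl start cloudera-scm-agent"
  done
  echo "ssh hadoop1 systemctl start cloudera-scm-server"
}

restart_cmds
```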
## Common Issues

- Watch the IP/port mappings
- Watch the file paths