Kylin 3.1.0 and Related Components Installation Guide

I. Component Versions

JDK1.8
Hadoop-2.8.5
ZooKeeper-3.4.6
HBase-1.2.7
Hive-1.2.1
Kylin-3.1.0

II. Installation

1. Prepare the environment
This guide uses three nodes, named dev-1, dev-2, and dev-3. Configure hostnames and passwordless SSH between the three machines, and install JDK 1.8 (the JDK setup itself is omitted here).
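For reference, a minimal sketch of the hostname and passwordless-SSH setup, run on dev-1 (the IP addresses below are placeholders; repeat the ssh-keygen/ssh-copy-id steps on dev-2 and dev-3 as well):

# map hostnames to IPs on every node (replace the IPs with your own)
cat >> /etc/hosts <<EOF
192.168.1.101 dev-1
192.168.1.102 dev-2
192.168.1.103 dev-3
EOF

# generate a key pair (accept the defaults) and push it to every node, including this one
ssh-keygen -t rsa
ssh-copy-id dev-1
ssh-copy-id dev-2
ssh-copy-id dev-3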

2. Install ZooKeeper
Get the ZooKeeper-3.4.6 package and extract it:

tar -zxvf zookeeper-3.4.6.tar.gz -C /usr/apps/

Configure ZooKeeper
Go into the conf directory of the extracted ZooKeeper:

cd $ZOOKEEPER_HOME/conf

Copy zoo_sample.cfg to zoo.cfg, then edit zoo.cfg:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
# This directory holds ZooKeeper's data; each node's id (the myid file) also goes in here
dataDir=/usr/apps/appdata/zkdata
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=dev-1:2888:3888
server.2=dev-2:2888:3888
server.3=dev-3:2888:3888

In the data directory on each node, create a myid file whose content is that node's id:

[root@dev-1 ~]# echo 1 > /usr/apps/appdata/zkdata/myid
[root@dev-2 ~]# echo 2 > /usr/apps/appdata/zkdata/myid
[root@dev-3 ~]# echo 3 > /usr/apps/appdata/zkdata/myid

Distribute the configured ZooKeeper to the other nodes:

[root@dev-1 apps]# scp -r zookeeper-3.4.6/ dev-2:/usr/apps
[root@dev-1 apps]# scp -r zookeeper-3.4.6/ dev-3:/usr/apps

Start and verify
ZooKeeper does not ship with a one-click cluster start script, so it has to be started on each node separately (a simple cluster start script is sketched after the jps note below).

$ZOOKEEPER_HOME/bin/zkServer.sh start

If jps on each node shows a QuorumPeerMain process, ZooKeeper is up.
Note: jps is a JDK command, not a Linux command; it is not available unless the JDK is installed.
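For reference, a minimal one-click start script (run from dev-1), assuming passwordless SSH and the same ZooKeeper and JDK install paths on every node; JAVA_HOME is set explicitly because /etc/profile is not always sourced in non-interactive SSH sessions:

#!/bin/bash
# start-zk-all.sh: start ZooKeeper on every node over SSH
for host in dev-1 dev-2 dev-3; do
  echo "Starting ZooKeeper on $host"
  ssh $host "export JAVA_HOME=/usr/apps/jdk1.8.0_141; /usr/apps/zookeeper-3.4.6/bin/zkServer.sh start"
done

The same loop with stop or status works for shutting down or checking the cluster.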

3. Install Hadoop
Likewise, get the package and extract it:

tar -zxvf hadoop-2.8.5.tar.gz -C /usr/apps/
cd $HADOOP_HOME/etc/hadoop/

Configure hadoop-env.sh:

# The java implementation to use.
# point JAVA_HOME at the local JDK
export JAVA_HOME=/usr/apps/jdk1.8.0_141

Configure core-site.xml:

<configuration>
<!-- Address of the NameNode in HDFS -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://dev-1:9000</value>
</property>

<property>
<name>hadoop.http.staticuser.user</name>
<value>root</value>
</property>

<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
</property>

<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
</property>

<property>
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
</property>

<!-- Directory for files generated by Hadoop at runtime -->
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/apps/hadoop-2.8.5/data/tmp</value>
</property>
</configuration>

Configure hdfs-site.xml:

<configuration>

<property>
<name>dfs.namenode.rpc-address</name>
<value>dev-1:9000</value>
</property>

<property>
<name>dfs.namenode.name.dir</name>
<value>/usr/apps/appdata/hdpdata/name/</value>
</property>

<property>
<name>dfs.datanode.data.dir</name>
<value>/usr/apps/appdata/hdpdata/data/</value>
</property>

<property>
<name>dfs.namenode.secondary.http-address</name>
<value>dev-2:50090</value>
</property>

</configuration>

Configure mapred-site.xml (copy it from mapred-site.xml.template first):

<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>

Configure yarn-site.xml:

<configuration>

<!-- Host that runs the ResourceManager -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>dev-1</value>
</property>
<!-- Shuffle service for MapReduce jobs -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- Total memory (MB) available to one NodeManager -->
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>2048</value>
</property>
<!-- Total (logical) CPU cores available to one NodeManager -->
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>2</value>
</property>
<!-- Whether to check containers for exceeding virtual memory limits -->
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>

<!-- Virtual memory limit for containers, as a ratio to physical memory -->
<property>
<name>yarn.nodemanager.vmem-pmem-ratio</name>
<value>2.1</value>
</property>

<property>
<name>yarn.nodemanager.pmem-check-enabled</name>
<value>false</value>
</property>

</configuration>

Configure slaves so the whole cluster can be started with one command:

dev-1
dev-2
dev-3

Copy the configured Hadoop to the other two nodes:

[root@dev-1 apps]# scp -r  hadoop-2.8.5/ dev-2:/usr/apps
[root@dev-1 apps]# scp -r  hadoop-2.8.5/ dev-3:/usr/apps

Start the cluster
Format the NameNode (only before the very first start; do not format again afterwards):

$HADOOP_HOME/bin/hdfs namenode -format

The following command starts the NameNode, the three DataNodes, and the YARN processes:

$HADOOP_HOME/sbin/start-all.sh

Check the processes with jps; if everything is in place, the web UI is available at dev-1:50070.
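As an optional sanity check (a minimal sketch; the paths below are just examples), write a file into HDFS and read it back:

echo "hello hdfs" > /tmp/hello.txt
hdfs dfs -mkdir -p /test
hdfs dfs -put /tmp/hello.txt /test/
hdfs dfs -cat /test/hello.txt
# summary of live DataNodes and cluster capacity
hdfs dfsadmin -report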

4. Install HBase
As before, extract the package first:

tar -zxvf hbase-1.2.7-bin.tar.gz -C /usr/apps/

After extracting, rename the directory to hbase-1.2.7 if needed, then go into $HBASE_HOME/conf.
Configure hbase-env.sh:

# The java implementation to use.  Java 1.7+ required.
export JAVA_HOME=/usr/apps/jdk1.8.0_141

# Extra Java CLASSPATH elements.  Optional.
export HBASE_CLASSPATH=/usr/apps/hbase-1.2.7/conf

# Tell HBase whether it should manage it's own instance of Zookeeper or not.
# Disable the ZooKeeper instance bundled with HBase and use the external ZooKeeper cluster built above
export HBASE_MANAGES_ZK=false

Configure hbase-site.xml:

<configuration>

<!-- Run in distributed mode -->
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<!-- ZooKeeper quorum; no ports needed here -->
<property>
<name>hbase.zookeeper.quorum</name>
<value>dev-1,dev-2,dev-3</value>
</property>
<!-- ZooKeeper data directory -->
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/usr/apps/appdata/zkdata</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://dev-1:9000/hbase</value>
</property>

</configuration>

Configure regionservers:

dev-1
dev-2
dev-3

Because HBase stores its data on HDFS, copy the Hadoop cluster's hdfs-site.xml into HBase's conf directory.
Then copy the configured HBase to the other two nodes:

[root@dev-1 apps]# scp -r  hbase-1.2.7/ dev-2:/usr/apps
[root@dev-1 apps]# scp -r  hbase-1.2.7/ dev-3:/usr/apps

Start HBase
Before starting HBase, make sure ZooKeeper and Hadoop are both up and healthy:

$HBASE_HOME/bin/start-hbase.sh

After startup, jps should show the HBase processes HMaster and HRegionServer.
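Optionally, a quick smoke test from the HBase shell (the table and column family names below are just examples):

hbase shell
# at the hbase(main)> prompt:
create 'smoke_test', 'cf'
put 'smoke_test', 'row1', 'cf:msg', 'hello hbase'
scan 'smoke_test'
disable 'smoke_test'
drop 'smoke_test'
exit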

5. Install Hive
Before installing Hive, install MySQL on one of the nodes; the MySQL installation itself is not covered here.

Extract Hive:

tar -zxvf apache-hive-1.2.1-bin.tar.gz -C /usr/apps/

After extracting, rename the directory:

mv apache-hive-1.2.1-bin hive-1.2.1

Go into $HIVE_HOME/conf.
Copy hive-env.sh.template to hive-env.sh:

cp hive-env.sh.template hive-env.sh

Configure hive-env.sh:

# Set HADOOP_HOME to point to a specific hadoop install directory
HADOOP_HOME=/usr/apps/hadoop-2.8.5

Configure hive-site.xml (create the file if it does not exist):

<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://dev-1:3306/hive121?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>

<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
<description>username to use against metastore database</description>
</property>

<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>your MySQL password here</value>
<description>password to use against metastore database</description>
</property>
</configuration>

One more thing after configuration: because Hive connects to MySQL, the MySQL JDBC driver jar must be placed in Hive's lib directory. Use whichever driver jar you have; here it is mysql-connector-java-5.1.38.jar:

mv mysql-connector-java-5.1.38.jar /usr/apps/hive-1.2.1/lib/

Start Hive and verify that you can create databases and tables and insert data (a small test is shown after the beeline command below). Hive's startup options are not covered in detail here; normally it is simply started in the background:

nohup $HIVE_HOME/bin/hiveserver2 1>/var/log/hiveserver.log 2>/var/log/hiveserver.err &

After startup, jps shows the process as RunJar. Connect to Hive with:

beeline -u jdbc:hive2://dev-1:10000 -n root
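A minimal check from the beeline prompt that creating databases and tables and inserting data all work (the names below are just examples; the insert runs as a MapReduce job, so it takes a moment):

create database if not exists test_db;
use test_db;
create table t1 (id int, name string);
insert into t1 values (1, 'kylin');
select * from t1;
drop table t1;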

6. Install Kylin
Download the package and extract it:

tar -zxvf apache-kylin-3.1.0-bin-hbase1x.tar.gz -C /usr/apps/

Rename it:

mv apache-kylin-3.1.0-bin-hbase1x kylin-3.1.0

Now set up the environment variables in /etc/profile. This is my own test environment, so I edit /etc/profile directly; use the settings below as a reference, adjust the paths to your own, and remember to source the file afterwards (shown after the exports).

export  JAVA_HOME=/usr/apps/jdk1.8.0_141
export  JAVA_LIBRARY_PATH=/usr/apps/hadoop-2.8.5/lib/native
export  ZOOKEEPER_HOME=/usr/apps/zookeeper-3.4.6
export  HADOOP_HOME=/usr/apps/hadoop-2.8.5
export  HADOOP_INSTALL=$HADOOP_HOME
export  HADOOP_MAPRED_HOME=$HADOOP_HOME
export  HADOOP_COMMON_HOME=$HADOOP_HOME
export  HADOOP_HDFS_HOME=$HADOOP_HOME
export  YARN_HOME=$HADOOP_HOME
export  HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export  HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_COMMON_LIB_NATIVE_DIR"
export  HBASE_HOME=/usr/apps/hbase-1.2.7
export  HIVE_HOME=/usr/apps/hive-1.2.1
export  HIVE_CONF_DIR=$HIVE_HOME/conf
export  KYLIN_HOME=/usr/apps/kylin-3.1.0
export  HCAT_HOME=$HIVE_HOME/hcatalog
export  KYLIN_CONF_HOME=$KYLIN_HOME/conf
export  tomcat_root=$KYLIN_HOME/tomcat
export  hive_dependency=$HIVE_HOME/conf:$HIVE_HOME/lib/*:$HCAT_HOME/share/hcatalog/hive-hcatalog-core-1.2.1.jar
export  PATH=$PATH:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HBASE_HOME/bin:$HIVE_HOME/bin:$KYLIN_HOME/bin
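After editing, apply the changes to the current shell:

source /etc/profile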

Configure $KYLIN_HOME/bin/kylin.sh

In kylin.sh, just above the line "# set verbose=true to print more logs during start up", add:

export HBASE_CLASSPATH_PREFIX=${tomcat_root}/bin/bootstrap.jar:${tomcat_root}/bin/tomcat-juli.jar:${tomcat_root}/lib/*:$hive_dependency:$HBASE_CLASSPATH_PREFIX

My kylin.properties configuration:

#### METADATA | ENV ###
#
## The metadata store in hbase
kylin.metadata.url=kylin_metadata@hbase
#
## metadata cache sync retry times
kylin.metadata.sync-retries=3
#
## Working folder in HDFS, better be qualified absolute path, make sure user has the right permission to this directory
kylin.env.hdfs-working-dir=/kylin
#
## DEV|QA|PROD. DEV will turn on some dev features, QA and PROD has no difference in terms of functions.
#kylin.env=QA
#
## kylin zk base path
kylin.env.zookeeper-base-path=/kylin
#
#### SERVER | WEB | RESTCLIENT ###
#
## Kylin server mode, valid value [all, query, job]
# note: set this to all on the master node and to query on the other nodes
kylin.server.mode=all
#
## List of web servers in use, this enables one web server instance to sync up with other servers.
kylin.server.cluster-servers=dev-1:7070,dev-2:7070,dev-3:7070
#
## Display timezone on UI,format like[GMT+N or GMT-N]
kylin.web.timezone=GMT+8
#
## Timeout value for the queries submitted through the Web UI, in milliseconds
kylin.web.query-timeout=300000

## Hive database name for putting the intermediate flat tables
# recommended: create this database in Hive first (see the one-liner after this config), otherwise building a cube later may fail
kylin.source.hive.database-for-flat-table=kylin_flat_db

#### STORAGE ###
#
## The storage for final cube file in hbase
kylin.storage.url=hbase
#
## The prefix of hbase table
kylin.storage.hbase.table-name-prefix=KYLIN_
#
## The namespace for hbase storage
kylin.storage.hbase.namespace=default
#
## Compression codec for htable, valid value [none, snappy, lzo, gzip, lz4]
kylin.storage.hbase.compression-codec=none
#
## HBase Cluster FileSystem, which serving hbase, format as hdfs://hbase-cluster:8020
## Leave empty if hbase running on same cluster with hive and mapreduce
##kylin.storage.hbase.cluster-fs=
#
## The cut size for hbase region, in GB.
kylin.storage.hbase.region-cut-gb=5
#
## The hfile size of GB, smaller hfile leading to the converting hfile MR has more reducers and be faster.
## Set 0 to disable this optimization.
kylin.storage.hbase.hfile-size-gb=2
#### JOB ###
#
## Max job retry on error, default 0: no retry
kylin.job.retry=0
#
## Max count of concurrent jobs running
kylin.job.max-concurrent-jobs=10
#
## The percentage of the sampling, default 100%
#kylin.job.sampling-percentage=100
#
## If true, will send email notification on job complete
##kylin.job.notification-enabled=true
##kylin.job.notification-mail-enable-starttls=true
##kylin.job.notification-mail-host=smtp.office365.com
##kylin.job.notification-mail-port=587
##kylin.job.notification-mail-username=kylin@example.com
##kylin.job.notification-mail-password=mypassword
##kylin.job.notification-mail-sender=kylin@example.com
#kylin.job.scheduler.provider.100=org.apache.kylin.job.impl.curator.CuratorScheduler
kylin.job.scheduler.default=2
#
#### ENGINE ###
#
## Time interval to check hadoop job status
kylin.engine.mr.yarn-check-interval-seconds=10

# added at the end
kylin.job.jar=/usr/apps/kylin-3.1.0/lib/kylin-job-3.1.0.jar
kylin.coprocessor.local.jar=/usr/apps/kylin-3.1.0/lib/kylin-coprocessor-3.1.0.jar
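As recommended in the comment above, the flat-table database can be created in Hive up front, for example (assuming hive is on the PATH):

hive -e "create database if not exists kylin_flat_db;"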

Before starting Kylin, start Hadoop's JobHistory Server:

$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver

After configuring, copy Kylin to the other nodes. Each node then needs to be started individually (starting only the master node also works); in the configuration file the master node is set to all and the other nodes to query.

Check Kylin's runtime environment:

$KYLIN_HOME/bin/check-env.sh

If the environment check passes, start Kylin:

$KYLIN_HOME/bin/kylin.sh start

After startup, open dev-1:7070/kylin and log in with username ADMIN and password KYLIN.
