Druid Deployment and Usage, Part 1: Druid Cluster Deployment

I. Preparation

Four servers:

192.168.1.1  druid01  coordinator, overlord
192.168.1.2  druid02  historical, middleManager
192.168.1.3  druid03  historical, middleManager
192.168.1.4  druid04  broker

Packages: JDK 1.8, Hadoop 2.7.3, Druid 0.12.2, MySQL 5.7

 

Metadata storage: MySQL
Streaming ingestion: Kafka
UV computation: the DataSketches extension (approximate distinct counts)
Deep storage: HDFS (Hadoop 2.7)

 

II. Cluster Installation

1. Download the required extensions:

cd $DRUID_HOME;

java -classpath "/usr/local/druid/lib/*" io.druid.cli.Main tools pull-deps \
  -r "https://mvnrepository.com" --defaultVersion 0.12.2 --clean \
  -c io.druid.extensions:druid-kafka-extraction-namespace \
  -c io.druid.extensions:druid-kafka-eight \
  -c io.druid.extensions:druid-histogram \
  -c io.druid.extensions:mysql-metadata-storage \
  -c io.druid.extensions:druid-hdfs-storage \
  -c io.druid.extensions:druid-datasketches \
  -h org.apache.hadoop:hadoop-client:2.7.3

-r specifies the remote repository to pull dependencies from, --defaultVersion sets the default Druid version, --clean wipes any previously downloaded dependencies, and each -c names an extension coordinate (and version) to download.

For the full set of options, see: http://druid.io/docs/latest/operations/pull-deps.html
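After pull-deps finishes, each extension should have its own directory under the Druid extensions directory (by default extensions/ under the Druid root; the /usr/local/druid path below is an assumption based on this setup). A quick sanity-check sketch:

```python
import os

# Extensions this deployment expects (mirrors druid.extensions.loadList below)
EXPECTED = [
    "druid-kafka-eight",
    "druid-histogram",
    "druid-datasketches",
    "mysql-metadata-storage",
    "druid-hdfs-storage",
    "druid-kafka-extraction-namespace",
]

def missing_extensions(extensions_dir, expected=EXPECTED):
    """Return the expected extensions that have no directory under extensions_dir."""
    present = set(os.listdir(extensions_dir)) if os.path.isdir(extensions_dir) else set()
    return sorted(e for e in expected if e not in present)

# Usage (path is an assumption for this cluster):
# print(missing_extensions("/usr/local/druid/extensions"))
```

Anything this reports as missing will make the node fail at startup once it appears in druid.extensions.loadList.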

2. Edit the common configuration file

Edit $DRUID_HOME/conf/druid/_common/common.runtime.properties:

#
# Licensed to Metamarkets Group Inc. (Metamarkets) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. Metamarkets licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#

#
# Extensions
#

# This is not the full list of Druid extensions, but common ones that people often use. You may need to change this list
# based on your particular setup.
# List of extensions to load (must match what pull-deps downloaded)
druid.extensions.loadList=["druid-kafka-eight", "druid-histogram", "druid-datasketches", "mysql-metadata-storage","druid-hdfs-storage","druid-kafka-extraction-namespace"]

# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory
# and uncomment the line below to point to your directory.
# Directory holding the Hadoop client dependencies
druid.extensions.hadoopDependenciesDir=/usr/local/druid/hadoop-dependencies

#
# Logging
#

# Log all runtime properties on startup. Disable to avoid logging properties on startup:
druid.startup.logging.logProperties=true

#
# Zookeeper
#
# ZooKeeper ensemble addresses
druid.zk.service.host=192.168.1.1:2181,192.168.1.2:2181,192.168.1.3:2181
druid.zk.paths.base=/druid
druid.zk.service.sessionTimeoutMs=30000


druid.request.logging.type=emitter
druid.request.logging.feed=druid_requests

druid.emitter=http
druid.emitter.logging.logLevel=info
druid.emitter.http.recipientBaseUrl=EMITTER_URL:PORT
#
# Metadata storage
#

# For Derby server on your Druid Coordinator (only viable in a cluster with a single Coordinator, no fail-over):
#druid.metadata.storage.type=derby
#druid.metadata.storage.connector.connectURI=jdbc:derby://metadata.store.ip:1527/var/druid/metadata.db;create=true
#druid.metadata.storage.connector.host=metadata.store.ip
#druid.metadata.storage.connector.port=1527

# For MySQL:
# The metadata database must be created with UTF-8 encoding, or Druid will fail
# with errors (e.g. CREATE DATABASE druid DEFAULT CHARACTER SET utf8;)
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://192.168.1.1:3306/druid?characterEncoding=UTF-8
druid.metadata.storage.connector.user=root
druid.metadata.storage.connector.password=123456

# For PostgreSQL (make sure to additionally include the Postgres extension):
#druid.metadata.storage.type=postgresql
#druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
#druid.metadata.storage.connector.user=...
#druid.metadata.storage.connector.password=...

#
# Deep storage
#

# For HDFS (make sure to include the HDFS extension and that your Hadoop config files are on the classpath):
# HDFS is used as deep storage here
druid.storage.type=hdfs
druid.storage.storageDirectory=hdfs://192.168.1.1:9000/druid/segments

# For local disk (only viable in a cluster if this is a network mount):
#druid.storage.type=local
#druid.storage.storageDirectory=var/druid/segments

# For S3:
#druid.storage.type=s3
#druid.storage.bucket=your-bucket
#druid.storage.baseKey=druid/segments
#druid.s3.accessKey=...
#druid.s3.secretKey=...

#
# Indexing service logs
#

# For local disk (only viable in a cluster if this is a network mount):
#druid.indexer.logs.type=file
#druid.indexer.logs.directory=var/druid/indexing-logs

# For HDFS (make sure to include the HDFS extension and that your Hadoop config files in the cp):
druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=hdfs://192.168.1.1:9000/druid/indexing-logs

# For S3:
#druid.indexer.logs.type=s3
#druid.indexer.logs.s3Bucket=your-bucket
#druid.indexer.logs.s3Prefix=druid/indexing-logs

#
# Service discovery
#

druid.selectors.indexing.serviceName=druid/overlord
druid.selectors.coordinator.serviceName=druid/coordinator

#
# Monitoring
#

druid.monitoring.monitors=["io.druid.java.util.metrics.JvmMonitor"]
# druid.emitter is already set to "http" above; the template default below is
# commented out so it does not silently override that setting.
#druid.emitter=logging
#druid.emitter.logging.logLevel=info

# Storage type of double columns
# Omitting this causes double columns to be indexed as float at the storage layer

druid.indexing.doubleStorage=double

3. Edit the Historical node configuration

Edit $DRUID_HOME/conf/druid/historical/jvm.config:

-server
-Xms4g
-Xmx4g
-XX:MaxDirectMemorySize=8g
-Duser.timezone=UTC+0800
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager


## Note: set -XX:MaxDirectMemorySize >= (druid.processing.numThreads + 3) * druid.processing.buffer.sizeBytes, otherwise the Historical node will fail to start.

Historical runtime configuration, runtime.properties:

druid.service=druid/historical
druid.port=9094
druid.host=192.168.1.2

# HTTP server threads
druid.server.http.numThreads=25

# Processing threads and buffers
druid.processing.buffer.sizeBytes=536870912
druid.processing.numThreads=7

# Segment storage
# Note: druid.server.maxSize should normally not exceed the total segmentCache maxSize
druid.segmentCache.locations=[{"path":"/usr/local/druid/segment-cache","maxSize":50000000000}]
druid.server.maxSize=100000000000
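The direct-memory note above can be checked with a little arithmetic. With the values in this config (numThreads=7, buffer.sizeBytes=536870912), the minimum is (7 + 3) * 512 MiB = 5 GiB, so the configured 8 GiB MaxDirectMemorySize is sufficient. A sketch:

```python
# Rough direct-memory check for a Druid 0.12 Historical, per the rule
# MaxDirectMemorySize >= (druid.processing.numThreads + 3) * druid.processing.buffer.sizeBytes
def min_direct_memory_bytes(num_threads, buffer_size_bytes):
    return (num_threads + 3) * buffer_size_bytes

required = min_direct_memory_bytes(7, 536870912)  # values from this config
configured = 8 * 1024**3                          # -XX:MaxDirectMemorySize=8g
print(required, configured >= required)           # 5368709120 True
```

Re-run this arithmetic whenever you change numThreads or the buffer size, since an undersized setting fails only at startup.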

4. Edit the Coordinator node configuration files $DRUID_HOME/conf/druid/coordinator/jvm.config and runtime.properties:

-server
-Xms3g
-Xmx3g
-Duser.timezone=UTC+0800
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
-Dderby.stream.error.file=/var/druid/derby.log

druid.service=druid/coordinator
druid.port=9093
druid.host=192.168.1.1
druid.coordinator.startDelay=PT30S
druid.coordinator.period=PT30S

5. Edit the Overlord node configuration files $DRUID_HOME/conf/druid/overlord/jvm.config and runtime.properties:

-server
-Xms3g
-Xmx3g
-Duser.timezone=UTC+0800
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

druid.service=druid/overlord
druid.port=9091
druid.host=192.168.1.1
druid.indexer.queue.startDelay=PT30S

druid.indexer.runner.type=remote
druid.indexer.runner.minWorkerVersion=0
druid.indexer.storage.type=metadata
druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=/druid/indexer/logs/overlord

6. Edit the middleManager node configuration files $DRUID_HOME/conf/druid/middleManager/jvm.config and runtime.properties:

-server
-Xms64m
-Xmx64m
-Duser.timezone=UTC+0800
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

druid.service=druid/middleManager
druid.port=9092
druid.host=192.168.1.2
# Number of tasks per middleManager
druid.worker.capacity=3
druid.worker.ip=192.168.1.2
druid.worker.version=0
# Task launch parameters
druid.indexer.runner.javaOpts=-server -Xmx2g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.baseTaskDir=/usr/local/druid/running_dir/peons/task/

# HTTP server threads
druid.server.http.numThreads=25

# Processing threads and buffers on Peons
druid.indexer.fork.property.druid.processing.buffer.sizeBytes=536870912
druid.indexer.fork.property.druid.processing.numThreads=2

# Hadoop indexing
druid.indexer.task.hadoopWorkingPath=/druid/tmp/druid-indexing
druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.7.3"]
druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=/druid/indexer/logs/middle_manager

druid.indexer.fork.property.druid.monitoring.monitors=["com.metamx.metrics.JvmMonitor"]
druid.indexer.fork.property.druid.segmentCache.locations=[{"path":"/usr/local/druid/running_dir/peons/zk_druid","maxSize":50000000000}]
druid.indexer.fork.property.druid.server.http.numThreads=10
druid.indexer.fork.property.druid.storage.type=hdfs
druid.indexer.fork.property.druid.storage.storageDirectory=hdfs://192.168.1.1:9000/druid/peons/storage
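With druid.worker.capacity=3, a middleManager can fork up to three peons, each with its own heap (-Xmx2g) plus direct memory for its processing buffers. A rough sizing sketch, applying the same (numThreads + 3) rule of thumb used for the Historical node (this is an estimate only; it ignores JVM overhead and OS page cache):

```python
# Rough memory-footprint estimate for one middleManager's peons.
def peon_footprint_bytes(capacity, peon_heap_bytes, num_threads, buffer_size_bytes):
    direct = (num_threads + 3) * buffer_size_bytes  # per-peon direct memory
    return capacity * (peon_heap_bytes + direct)

# Values from this config: capacity=3, -Xmx2g, numThreads=2, 512 MiB buffers
total = peon_footprint_bytes(3, 2 * 1024**3, 2, 536870912)
print(total / 1024**3)  # 13.5 (GiB)
```

The middleManager process itself is tiny (-Xmx64m); it is the peons it forks that dominate the host's memory budget.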

7. Edit the Broker node configuration files $DRUID_HOME/conf/druid/broker/jvm.config and runtime.properties:

-server
-Xms8g
-Xmx8g
-XX:MaxDirectMemorySize=8g
-Duser.timezone=UTC+0800
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=/tmp/broker
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

druid.service=druid/broker
druid.port=8082
druid.host=192.168.1.4
# HTTP server threads
druid.broker.http.numConnections=5
druid.server.http.numThreads=25

# Processing threads and buffers
druid.processing.buffer.sizeBytes=536870912
druid.processing.numThreads=5

# Query cache
druid.broker.cache.useCache=true
druid.broker.cache.populateCache=true
druid.cache.type=local
druid.cache.sizeInBytes=2000000000

8. Start the cluster

Start the historical node:

nohup java `cat conf/druid/historical/jvm.config | xargs` -cp conf/druid/_common:conf/druid/historical:lib/* io.druid.cli.Main server historical &

Start the broker node:

nohup java `cat conf/druid/broker/jvm.config | xargs` -cp conf/druid/_common:conf/druid/broker:lib/* io.druid.cli.Main server broker &

Start the coordinator node:

nohup java `cat conf/druid/coordinator/jvm.config | xargs` -cp conf/druid/_common:conf/druid/coordinator:lib/* io.druid.cli.Main server coordinator &


Start the overlord node:

nohup java `cat conf/druid/overlord/jvm.config | xargs` -cp conf/druid/_common:conf/druid/overlord:lib/* io.druid.cli.Main server overlord &

Start the middleManager node:

nohup java `cat conf/druid/middleManager/jvm.config | xargs` -cp conf/druid/_common:conf/druid/middleManager:lib/* io.druid.cli.Main server middleManager &


9. Verify the cluster

Coordinator web UI: http://192.168.1.1:9093

Overlord web UI: http://192.168.1.1:9091
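Besides the web UIs, each Druid process serves a /status endpoint that returns JSON once the node is up, which is handy for scripted checks. A minimal polling sketch (the host:port pairs are the ones assumed for this cluster):

```python
import json
from urllib.request import urlopen

# host:port pairs from this deployment
NODES = {
    "coordinator": "192.168.1.1:9093",
    "overlord": "192.168.1.1:9091",
    "historical": "192.168.1.2:9094",
    "broker": "192.168.1.4:8082",
}

def status_url(hostport):
    return "http://%s/status" % hostport

def check_node(hostport, timeout=5):
    """Return the parsed /status JSON, or None if the node is unreachable."""
    try:
        with urlopen(status_url(hostport), timeout=timeout) as resp:
            return json.load(resp)
    except OSError:
        return None

# Usage:
# for name, hp in NODES.items():
#     print(name, "up" if check_node(hp) else "DOWN")
```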

