Druid Cluster Configuration

This document walks through configuring a Druid cluster with five nodes: the master node runs the MySQL server, the coordinator, and the overlord, while the remaining nodes host the historical, middleManager, broker, and realtime services. Deep storage is configured on HDFS, and each node gets a service-specific runtime.properties file. After starting the services on every node, the cluster can be verified through the web consoles and a test query.


1. Default Ports
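
The default ports used by the services in this cluster, as set in the runtime.properties files below (8084 is Druid's default for the realtime node, which is not configured here):

coordinator:   8081
broker:        8082
historical:    8083
realtime:      8084
overlord:      8090
middleManager: 8091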



2. Cluster Layout

(1) This cluster has five nodes (master, slave1, slave2, slave3, slave4), assigned as follows:

master node:  MySQL server, coordinator node, overlord node

slave1 node:  historical node, middleManager node

slave2 node:  historical node, middleManager node

slave3 node:  broker node

slave4 node:  realtime node (not configured; left empty)


(2) Deep storage (which holds the cluster's cold segment data) is configured on HDFS.


3. Cluster Configuration

(1) Configure the _common/common.runtime.properties file on every node (master, slave1, slave2, slave3, slave4):

#
# Licensed to Metamarkets Group Inc. (Metamarkets) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. Metamarkets licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#

#
# Extensions
#

# This is not the full list of Druid extensions, but common ones that people often use. You may need to change this list
# based on your particular setup.
druid.extensions.loadList=["mysql-metadata-storage","druid-hdfs-storage"]


# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory
# and uncomment the line below to point to your directory.
#druid.extensions.hadoopDependenciesDir=/my/dir/hadoop-dependencies

#
# Logging
#

# Log all runtime properties on startup. Disable to avoid logging properties on startup:
druid.startup.logging.logProperties=true

#
# Zookeeper
#

druid.zk.service.host=master:2181,slave1:2181,slave2:2181,slave3:2181,slave4:2181
#druid.zk.service.host=master:2181
druid.zk.paths.base=/druid

#
# Metadata storage
#

# For Derby server on your Druid Coordinator (only viable in a cluster with a single Coordinator, no fail-over):
#druid.metadata.storage.type=derby
#druid.metadata.storage.connector.connectURI=jdbc:derby://metadata.store.ip:1527/var/druid/metadata.db;create=true
#druid.metadata.storage.connector.host=metadata.store.ip
#druid.metadata.storage.connector.port=1527

# For MySQL (the MySQL server runs on the master node, so this file works on all nodes):
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://master:3306/druid?characterEncoding=UTF-8
druid.metadata.storage.connector.user=***
druid.metadata.storage.connector.password=***

# For PostgreSQL (make sure to additionally include the Postgres extension):
#druid.metadata.storage.type=postgresql
#druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
#druid.metadata.storage.connector.user=...
#druid.metadata.storage.connector.password=...

#
# Deep storage
#

# For local disk (only viable in a cluster if this is a network mount):
#druid.storage.type=local
#druid.storage.storageDirectory=var/druid/segments

# For HDFS (make sure to include the HDFS extension and that your Hadoop config files in the cp):
druid.storage.type=hdfs
druid.storage.storageDirectory=hdfs://master:9000/druid/segments

# For S3:
#druid.storage.type=s3
#druid.storage.bucket=your-bucket
#druid.storage.baseKey=druid/segments
#druid.s3.accessKey=...
#druid.s3.secretKey=...

#
# Indexing service logs
#

# For local disk (only viable in a cluster if this is a network mount):
#druid.indexer.logs.type=file
#druid.indexer.logs.directory=var/druid/indexing-logs

# For HDFS (make sure to include the HDFS extension and that your Hadoop config files in the cp):
druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=hdfs://master:9000/druid/indexing-logs

# For S3:
#druid.indexer.logs.type=s3
#druid.indexer.logs.s3Bucket=your-bucket
#druid.indexer.logs.s3Prefix=druid/indexing-logs

#
# Service discovery
#

druid.selectors.indexing.serviceName=druid/overlord
druid.selectors.coordinator.serviceName=druid/coordinator

#
# Monitoring
#

druid.monitoring.monitors=["com.metamx.metrics.JvmMonitor"]
druid.emitter=logging
druid.emitter.logging.logLevel=info
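
Before starting any services, the MySQL metadata database and the HDFS paths referenced above must exist. A minimal sketch, where the druid user name and password are placeholders for the *** values above:

# On master: create the metadata database that the connectURI points at
mysql -u root -p -e "CREATE DATABASE druid DEFAULT CHARACTER SET utf8;"
mysql -u root -p -e "GRANT ALL ON druid.* TO 'druid'@'%' IDENTIFIED BY 'your_password';"

# On any node with HDFS access: create the deep storage and indexing-log directories
hdfs dfs -mkdir -p /druid/segments /druid/indexing-logs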

(2) Configure the coordinator on the master node (in coordinator/runtime.properties):

druid.service=druid/coordinator
druid.port=8081

druid.coordinator.startDelay=PT30S
druid.coordinator.period=PT30S

# Appended below by fenghui for this cluster:
druid.host=master



Configure the overlord on the master node (in overlord/runtime.properties):

druid.service=druid/overlord
druid.port=8090

#druid.indexer.queue.startDelay=PT30S

druid.indexer.runner.type=remote
druid.indexer.storage.type=metadata

# Appended below by fenghui for this cluster:
druid.host=master

#druid.indexer.logs.type=hdfs
#druid.indexer.logs.directory=/druid/indexing-logs


(3) Configure the historical node on slave1 (in historical/runtime.properties):

druid.service=druid/historical
druid.port=8083

# HTTP server threads
druid.server.http.numThreads=25

# Processing threads and buffers
druid.processing.buffer.sizeBytes=536870912
druid.processing.numThreads=7

# Segment storage
druid.segmentCache.locations=[{"path":"var/druid/segment-cache","maxSize"\:130000000000}]
druid.server.maxSize=130000000000



druid.host=slave1


druid.historical.cache.useCache=false
druid.historical.cache.populateCache=false
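
Note: per the Druid 0.9 configuration docs, each processing thread gets its own off-heap buffer, so the historical JVM should be started with -XX:MaxDirectMemorySize of at least druid.processing.buffer.sizeBytes * (druid.processing.numThreads + 1); for the values above that is 512 MB * 8 = 4 GB.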


Configure the middleManager on slave1 (in middleManager/runtime.properties):

druid.service=druid/middleManager
druid.port=8091

# Number of tasks per middleManager
druid.worker.capacity=3

# Task launch parameters
druid.indexer.runner.javaOpts=-server -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.baseTaskDir=var/druid/task

# HTTP server threads
druid.server.http.numThreads=25

# Processing threads and buffers
druid.processing.buffer.sizeBytes=65536
druid.processing.numThreads=2

# Hadoop indexing
druid.indexer.task.hadoopWorkingPath=var/druid/hadoop-tmp
druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.3.0"]



# Appended below by fenghui for this cluster:
druid.host=slave1

#druid.indexer.logs.type=hdfs
#druid.indexer.logs.directory=/druid/indexing-logs

(4) Configure slave2 the same way as slave1, changing druid.host to slave2.


(5) Configure the broker on slave3 (in broker/runtime.properties):

druid.service=druid/broker
druid.port=8082

# HTTP server threads
druid.broker.http.numConnections=5
druid.server.http.numThreads=25

# Processing threads and buffers
druid.processing.buffer.sizeBytes=32768
druid.processing.numThreads=2

# Query cache
druid.broker.cache.useCache=true
druid.broker.cache.populateCache=true
druid.cache.type=local
druid.cache.sizeInBytes=2000000000

druid.host=slave3
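
Note: druid.cache.type=local keeps the query cache on the broker's JVM heap, so the 2 GB druid.cache.sizeInBytes above needs an -Xmx comfortably larger than 2 GB; the 256 MB heap in the example startup command below is only enough for smoke testing.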


4. Starting the Cluster

Go to each node (master, slave1, slave2, slave3, slave4) and start its services. For example, the command to start the broker is:

java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath conf/druid/_common:conf/druid/broker:lib/* io.druid.cli.Main server broker &
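
Following the same pattern, a sketch of the full set of startup commands (run from the druid-0.9.1.1 directory on each node; the 256 MB heap is only for testing):

# on master
java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath conf/druid/_common:conf/druid/coordinator:lib/* io.druid.cli.Main server coordinator &
java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath conf/druid/_common:conf/druid/overlord:lib/* io.druid.cli.Main server overlord &

# on slave1 and slave2
java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath conf/druid/_common:conf/druid/historical:lib/* io.druid.cli.Main server historical &
java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath conf/druid/_common:conf/druid/middleManager:lib/* io.druid.cli.Main server middleManager &

# on slave3
java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath conf/druid/_common:conf/druid/broker:lib/* io.druid.cli.Main server broker &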


5. Verifying the Cluster

(1) Open the web consoles:

Overlord console (indexing task status): http://master:8090/console.html



Coordinator console (cluster and segment status): http://master:8081/#/
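
Each Druid service also exposes a /status HTTP endpoint, so a quick liveness check from the shell looks like this (a sketch):

curl http://master:8081/status
curl http://master:8090/status
curl http://slave3:8082/status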




(2) Download the test files and extract them into the druid-0.9.1.1 directory on every node (master, slave1, slave2, slave3, slave4). Then run bash submit_task.sh from the file directory; task status can be followed at http://master:8090/console.html.

Then run bash query_task.sh from the file directory to query the ingested data.
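
If the helper scripts are not available, a query can also be sent straight to the broker's HTTP endpoint. A minimal sketch; the datasource name is hypothetical and should match the one created by the ingestion task:

curl -X POST -H 'Content-Type: application/json' http://slave3:8082/druid/v2/?pretty -d '{"queryType":"timeBoundary","dataSource":"your_datasource"}'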





References:

Druid clustering tutorial (local / extended cluster setup): http://druid.io/docs/latest/tutorials/cluster.html

