Druid -- Imply-based Cluster Deployment

Cluster Deployment

  Cluster plan:

  10.19.xxx        10.19.xxx         10.19.xxx
  coordinator      historical        broker
  overlord         middlemanager     pivot
                                     router (optional)

1. Download the tar package, upload it to the server, and extract it

tar -xzf imply-3.4.tar.gz
cd imply-3.4

2. Edit the configuration file common.runtime.properties

  The main changes are: extensions.loadList, the ZooKeeper address, switching the metadata store to MySQL, deep storage to HDFS, and indexing-service logs to HDFS. After editing, copy the main Hadoop configuration files (core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml) into the conf/druid/_common directory.

cd conf/druid
vi _common/common.runtime.properties
#
# Extensions
#

druid.extensions.directory=dist/druid/extensions
druid.extensions.hadoopDependenciesDir=dist/druid/hadoop-dependencies
druid.extensions.loadList=["druid-kafka-indexing-service","druid-hdfs-storage","druid-histogram","druid-datasketches", "druid-lookups-cached-global","mysql-metadata-storage"]

#
# Logging
#
# Log all runtime properties on startup. Disable to avoid logging properties on startup:
druid.startup.logging.logProperties=true

#
# Zookeeper
#
druid.zk.service.host=xxx:2181,xxx:2181,xxx:2181
druid.zk.paths.base=/druid

#
# Metadata storage
#
# For Derby server on your Druid Coordinator (only viable in a cluster with a single Coordinator, no fail-over):
#druid.metadata.storage.type=derby
#druid.metadata.storage.connector.connectURI=jdbc:derby://master.example.com:1527/var/druid/metadata.db;create=true
#druid.metadata.storage.connector.host=master.example.com
#druid.metadata.storage.connector.port=1527

# For MySQL:
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://xxx:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=druid

# For PostgreSQL:
#druid.metadata.storage.type=postgresql
#druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
#druid.metadata.storage.connector.user=...
#druid.metadata.storage.connector.password=...

#
# Deep storage
#
# For local disk (only viable in a cluster if this is a network mount):
#druid.storage.type=local
#druid.storage.storageDirectory=var/druid/segments

# For HDFS:
druid.storage.type=hdfs
druid.storage.storageDirectory=/druid/segments

# For S3:
#druid.storage.type=s3
#druid.storage.bucket=your-bucket
#druid.storage.baseKey=druid/segments
#druid.s3.accessKey=...
#druid.s3.secretKey=...

#
# Indexing service logs
#
# For local disk (only viable in a cluster if this is a network mount):
#druid.indexer.logs.type=file
#druid.indexer.logs.directory=var/druid/indexing-logs

# For HDFS:
druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=/druid/indexing-logs

# For S3:
#druid.indexer.logs.type=s3
#druid.indexer.logs.s3Bucket=your-bucket
#druid.indexer.logs.s3Prefix=druid/indexing-logs

#
# Service discovery
#
druid.selectors.indexing.serviceName=druid/overlord
druid.selectors.coordinator.serviceName=druid/coordinator

#
# Monitoring
#
druid.monitoring.monitors=["org.apache.druid.java.util.metrics.JvmMonitor"]
druid.emitter=logging
druid.emitter.logging.logLevel=debug

cp /services/hadoop-2.7.7/etc/hadoop/core-site.xml _common/
cp /services/hadoop-2.7.7/etc/hadoop/hdfs-site.xml _common/
cp /services/hadoop-2.7.7/etc/hadoop/mapred-site.xml _common/
cp /services/hadoop-2.7.7/etc/hadoop/yarn-site.xml _common/
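
Since the metadata store and deep storage configured above point at MySQL and HDFS, the druid database and the HDFS paths should exist before Druid starts. A minimal sketch, assuming a MySQL root account and the druid/druid credentials from the config (adjust hosts, passwords, and charset to your environment); depending on the distribution you may also need to place a MySQL JDBC driver jar into the mysql-metadata-storage extension directory:

# Create the metadata database and account referenced by common.runtime.properties
mysql -h xxx -uroot -p -e "CREATE DATABASE druid DEFAULT CHARACTER SET utf8mb4; \
  CREATE USER 'druid'@'%' IDENTIFIED BY 'druid'; \
  GRANT ALL PRIVILEGES ON druid.* TO 'druid'@'%'; \
  FLUSH PRIVILEGES;"

# Pre-create the HDFS paths used for deep storage and indexing logs
hdfs dfs -mkdir -p /druid/segments /druid/indexing-logs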

3. Edit the coordinator configuration: vi coordinator/runtime.properties

druid.service=druid/coordinator
druid.host=xxx
druid.port=8081

druid.coordinator.startDelay=PT30S
druid.coordinator.period=PT30S

4. Edit the overlord configuration: vi overlord/runtime.properties

druid.service=druid/overlord
druid.host=xxx
druid.port=8090

druid.indexer.queue.startDelay=PT30S

druid.indexer.runner.type=remote
druid.indexer.storage.type=metadata

5. Edit the middleManager configuration file: vi middleManager/runtime.properties

druid.service=druid/middlemanager
druid.host=xxx
druid.port=8091

# Number of tasks per middleManager
druid.worker.capacity=3

# Task launch parameters
druid.indexer.runner.javaOpts=-server -Xmx2g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+ExitOnOutOfMemoryError -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.baseTaskDir=var/druid/task
druid.indexer.task.restoreTasksOnRestart=true

# HTTP server threads
druid.server.http.numThreads=40

# Processing threads and buffers
druid.processing.buffer.sizeBytes=100000000
druid.processing.numMergeBuffers=2
druid.processing.numThreads=2
druid.processing.tmpDir=var/druid/processing

# Hadoop indexing
druid.indexer.task.hadoopWorkingPath=var/druid/hadoop-tmp
druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.8.5", "org.apache.hadoop:hadoop-aws:2.8.5"]
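
A rough capacity estimate under the settings above: druid.worker.capacity=3 means this middleManager runs at most 3 ingestion tasks (peons) concurrently, and each peon JVM is launched with the javaOpts shown (-Xmx2g), so plan on roughly 3 x 2 GB = 6 GB of task heap on this node in addition to the middleManager process itself.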

6. Edit the historical configuration file: vi historical/runtime.properties

druid.service=druid/historical
druid.host=xxx
druid.port=8083

# HTTP server threads
druid.server.http.numThreads=40

# Processing threads and buffers
druid.processing.buffer.sizeBytes=536870912
druid.processing.numMergeBuffers=2
druid.processing.numThreads=7
druid.processing.tmpDir=var/druid/processing

# Segment storage
druid.segmentCache.locations=[{"path":"var/druid/segment-cache","maxSize":130000000000}]
druid.server.maxSize=130000000000

# Query cache
druid.historical.cache.useCache=true
druid.historical.cache.populateCache=true
druid.cache.type=caffeine
druid.cache.sizeInBytes=2000000000
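
As a sanity check on the processing settings above: Druid requires direct memory of at least sizeBytes * (numThreads + numMergeBuffers + 1). With the values here that is 536870912 * (7 + 2 + 1) ≈ 5 GiB, so -XX:MaxDirectMemorySize in the historical node's jvm.config must be at least that (e.g. 6g) or the process will refuse to start.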

7. Edit the broker configuration file: vi broker/runtime.properties

druid.service=druid/broker
druid.host=xxx
druid.port=8082

# HTTP server settings
druid.server.http.numThreads=60

# HTTP client settings
druid.broker.http.numConnections=10
druid.broker.http.maxQueuedBytes=50000000

# Processing threads and buffers
druid.processing.buffer.sizeBytes=536870912
druid.processing.numMergeBuffers=2
druid.processing.numThreads=1
druid.processing.tmpDir=var/druid/processing

# Query cache disabled -- push down caching and merging instead
druid.broker.cache.useCache=false
druid.broker.cache.populateCache=false

# SQL
druid.sql.enable=true

8. Edit the router configuration file (optional): vi router/runtime.properties

druid.service=druid/router
druid.host=xxx
druid.port=8888

druid.processing.numThreads=1
druid.processing.buffer.sizeBytes=1000000

druid.router.defaultBrokerServiceName=druid/broker
druid.router.coordinatorServiceName=druid/coordinator
druid.router.http.numConnections=50
druid.router.http.readTimeout=PT5M
druid.router.http.numMaxThreads=100

druid.server.http.numThreads=100

druid.router.managementProxy.enabled=true

9. Edit the Pivot configuration file: vi …/pivot/config.yaml

# The port on which the Pivot server will listen on.
port: 9095

# runtime directory
varDir: var/pivot

servingMode: clustered

# User management mode
# By default Imply will not show a login screen and anyone accessing it will automatically be treated as an 'admin'
# Uncomment the line below to enable user authentication, for more info see: https://docs.imply.io/on-prem/configure/config-api
#userMode: native-users


# The initial settings that will be loaded in, in this case a connection will be created for a Druid cluster that is running locally.
initialSettings:
  connections:
    - name: druid
      type: druid
      title: My Druid
      # router or broker address
      host: xxx:8888
      coordinatorHosts: ["xxx:8081"]
      overlordHosts: ["xxx:8090"]

#
# Pivot must have a state store in order to function
# The state (data cubes, dashboards, etc) can be stored in two ways.
# Choose just one option and comment out the other.
#
#  1) Stored in a sqlite file, editable at runtime with Settings View. Not suitable for running in a cluster.
#  2) Stored in a database, editable at runtime with Settings View. Works well with a cluster of Imply servers.
#

#
# 1) File-backed (sqlite) state (not suitable for running in a cluster)
#

#stateStore:
#  type: sqlite
#  connection: var/pivot/pivot-settings.sqlite

#
# 2) Database-backed state 'mysql' (MySQL) or 'pg' (Postgres)
#

stateStore:
# location: mysql
  type: mysql
  connection: 'mysql://druid:druid@xxx:3306/pivot'
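
Pivot's state store points at a separate MySQL database named pivot, which also needs to exist before Pivot starts. A minimal sketch, reusing the druid account from the connection string above (privileges shown are an assumption; adjust to your setup):

mysql -h xxx -uroot -p -e "CREATE DATABASE pivot DEFAULT CHARACTER SET utf8mb4; \
  GRANT ALL PRIVILEGES ON pivot.* TO 'druid'@'%'; \
  FLUSH PRIVILEGES;"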

10. Start the dependent services: HDFS, MySQL, and ZooKeeper

11. Start coordinator and overlord

  Per the cluster plan, start these two services on the 154 machine

nohup bin/supervise -c conf/supervise/master-no-zk.conf > logs/master-no-zk.log 2>&1 &
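
The supervise script writes each service's log under var/sv/xx/current (as noted in the verification section below), so a quick way to confirm the processes came up is to tail the log; the directory name matches the service label in the supervise conf, for example:

tail -f var/sv/coordinator/current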

12. Start historical and middleManager

  Per the cluster plan, start these two services on the 172 machine

nohup bin/supervise -c conf/supervise/data.conf > logs/data.log 2>&1 &

13. Start broker, pivot, and router

  Per the cluster plan, start these three services on the 222 machine

nohup bin/supervise -c conf/supervise/query.conf > logs/query.log 2>&1 &
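
Before moving on to the UI, each Druid process can be checked over HTTP: every service exposes a /status endpoint that returns a small JSON document including its version. A quick sketch using the hosts and ports configured above:

curl http://xxx:8081/status   # coordinator
curl http://xxx:8090/status   # overlord
curl http://xxx:8083/status   # historical
curl http://xxx:8091/status   # middleManager
curl http://xxx:8082/status   # broker
curl http://xxx:8888/status   # router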

Verification

  1. Once the services are started, logs are written to var/sv/xx/current. If you need to redeploy from scratch, you can delete everything under var.
  2. The Pivot UI listens on port 9095 by default; on first visit you need to create a connection.
  3. Load data.
  4. Import from HDFS.
  5. Continue (bottom right): Parse data.
  6. Continue: Parse time; specify the time column here.
  7. Continue: Transform => Filter => Configure schema; specify the dimension and metric columns here.
  8. Continue: Partition; set the segment merge period (segment granularity).
  9. Continue: Tune => Publish => Edit spec, then submit the task.
  10. Go back to the Data tab; after a short wait the data has been loaded.
  11. Query the data with the built-in SQL tool (see the example request after this list).
  12. Create a cube.
  13. Browse the cube's various metrics.
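
For step 11, queries can also be issued outside the UI against the broker's SQL endpoint (/druid/v2/sql), since druid.sql.enable=true is set on the broker. A minimal sketch; the datasource name your_datasource is a placeholder for whatever you named the ingested data:

curl -X POST http://xxx:8082/druid/v2/sql \
  -H 'Content-Type: application/json' \
  -d '{"query": "SELECT COUNT(*) AS cnt FROM \"your_datasource\""}'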