Guide
- Cluster deployment
- 1. Download the tarball, upload it to the server, and extract
- 2. Edit the common config file common.runtime.properties
- 3. Edit the coordinator config: vi coordinator/runtime.properties
- 4. Edit the overlord config: vi overlord/runtime.properties
- 5. Edit the middleManager config: vi middleManager/runtime.properties
- 6. Edit the historical config: vi historical/runtime.properties
- 7. Edit the broker config: vi broker/runtime.properties
- 8. Edit the router config (optional): vi router/runtime.properties
- 9. Edit the Pivot config: vi ../pivot/config.yaml
- 10. Start the dependent services: HDFS, MySQL, ZooKeeper
- 11. Start the coordinator and overlord
- 12. Start the historical and middleManager
- 13. Start the broker, Pivot, and router
- Verification
Cluster deployment
Cluster plan:

| 10.19.xxx | 10.19.xxx | 10.19.xxx |
| --- | --- | --- |
| coordinator | historical | broker |
| overlord | middlemanager | pivot |
| router (optional) | | |
1. Download the tarball, upload it to the server, and extract
tar -xzf imply-3.4.tar.gz
cd imply-3.4
2. Edit the common config file common.runtime.properties
The main changes: extensions.loadList, the ZooKeeper addresses, metadata storage switched to MySQL, deep storage switched to HDFS, and indexer logs switched to HDFS. When done, copy the core Hadoop config files (core-site, hdfs-site, mapred-site, yarn-site) into the conf/druid/_common directory
cd conf/druid
vi _common/common.runtime.properties
#
# Extensions
#
druid.extensions.directory=dist/druid/extensions
druid.extensions.hadoopDependenciesDir=dist/druid/hadoop-dependencies
druid.extensions.loadList=["druid-kafka-indexing-service","druid-hdfs-storage","druid-histogram","druid-datasketches", "druid-lookups-cached-global","mysql-metadata-storage"]
#
# Logging
#
# Log all runtime properties on startup. Disable to avoid logging properties on startup:
druid.startup.logging.logProperties=true
#
# Zookeeper
#
druid.zk.service.host=xxx:2181,xxx:2181,xxx:2181
druid.zk.paths.base=/druid
#
# Metadata storage
#
# For Derby server on your Druid Coordinator (only viable in a cluster with a single Coordinator, no fail-over):
#druid.metadata.storage.type=derby
#druid.metadata.storage.connector.connectURI=jdbc:derby://master.example.com:1527/var/druid/metadata.db;create=true
#druid.metadata.storage.connector.host=master.example.com
#druid.metadata.storage.connector.port=1527
# For MySQL:
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://xxx:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=druid
# For PostgreSQL:
#druid.metadata.storage.type=postgresql
#druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
#druid.metadata.storage.connector.user=...
#druid.metadata.storage.connector.password=...
#
# Deep storage
#
# For local disk (only viable in a cluster if this is a network mount):
#druid.storage.type=local
#druid.storage.storageDirectory=var/druid/segments
# For HDFS:
druid.storage.type=hdfs
druid.storage.storageDirectory=/druid/segments
# For S3:
#druid.storage.type=s3
#druid.storage.bucket=your-bucket
#druid.storage.baseKey=druid/segments
#druid.s3.accessKey=...
#druid.s3.secretKey=...
#
# Indexing service logs
#
# For local disk (only viable in a cluster if this is a network mount):
#druid.indexer.logs.type=file
#druid.indexer.logs.directory=var/druid/indexing-logs
# For HDFS:
druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=/druid/indexing-logs
# For S3:
#druid.indexer.logs.type=s3
#druid.indexer.logs.s3Bucket=your-bucket
#druid.indexer.logs.s3Prefix=druid/indexing-logs
#
# Service discovery
#
druid.selectors.indexing.serviceName=druid/overlord
druid.selectors.coordinator.serviceName=druid/coordinator
#
# Monitoring
#
druid.monitoring.monitors=["org.apache.druid.java.util.metrics.JvmMonitor"]
druid.emitter=logging
druid.emitter.logging.logLevel=debug
cp /services/hadoop-2.7.7/etc/hadoop/core-site.xml _common/
cp /services/hadoop-2.7.7/etc/hadoop/hdfs-site.xml _common/
cp /services/hadoop-2.7.7/etc/hadoop/mapred-site.xml _common/
cp /services/hadoop-2.7.7/etc/hadoop/yarn-site.xml _common/
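Since deep storage and indexing logs now point at HDFS, the two paths referenced above must exist before the first ingestion. A minimal sketch (the `druid:druid` owner is an assumption; use whatever user runs the Druid services):

```shell
# Create the paths from druid.storage.storageDirectory and
# druid.indexer.logs.directory, and make them writable by the Druid user.
hdfs dfs -mkdir -p /druid/segments /druid/indexing-logs
hdfs dfs -chown -R druid:druid /druid
```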
3. Edit the coordinator config: vi coordinator/runtime.properties
druid.service=druid/coordinator
druid.host=xxx
druid.port=8081
druid.coordinator.startDelay=PT30S
druid.coordinator.period=PT30S
4. Edit the overlord config: vi overlord/runtime.properties
druid.service=druid/overlord
druid.host=xxx
druid.port=8090
druid.indexer.queue.startDelay=PT30S
druid.indexer.runner.type=remote
druid.indexer.storage.type=metadata
5. Edit the middleManager config: vi middleManager/runtime.properties
druid.service=druid/middlemanager
druid.host=xxx
druid.port=8091
# Number of tasks per middleManager
druid.worker.capacity=3
# Task launch parameters
druid.indexer.runner.javaOpts=-server -Xmx2g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+ExitOnOutOfMemoryError -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.baseTaskDir=var/druid/task
druid.indexer.task.restoreTasksOnRestart=true
# HTTP server threads
druid.server.http.numThreads=40
# Processing threads and buffers
druid.processing.buffer.sizeBytes=100000000
druid.processing.numMergeBuffers=2
druid.processing.numThreads=2
druid.processing.tmpDir=var/druid/processing
# Hadoop indexing
druid.indexer.task.hadoopWorkingPath=var/druid/hadoop-tmp
druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.8.5", "org.apache.hadoop:hadoop-aws:2.8.5"]
6. Edit the historical config: vi historical/runtime.properties
druid.service=druid/historical
druid.host=xxx
druid.port=8083
# HTTP server threads
druid.server.http.numThreads=40
# Processing threads and buffers
druid.processing.buffer.sizeBytes=536870912
druid.processing.numMergeBuffers=2
druid.processing.numThreads=7
druid.processing.tmpDir=var/druid/processing
# Segment storage
druid.segmentCache.locations=[{"path":"var/druid/segment-cache","maxSize":130000000000}]
druid.server.maxSize=130000000000
# Query cache
druid.historical.cache.useCache=true
druid.historical.cache.populateCache=true
druid.cache.type=caffeine
druid.cache.sizeInBytes=2000000000
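The processing settings above imply a direct-memory requirement: Druid's documented sizing rule is `druid.processing.buffer.sizeBytes * (druid.processing.numThreads + druid.processing.numMergeBuffers + 1)`. The helper below is purely illustrative (not part of the deployment) and checks the historical and middleManager values from this guide:

```python
# Druid sizes direct (off-heap) memory for processing buffers as:
#   sizeBytes * (numThreads + numMergeBuffers + 1)
def direct_memory_bytes(size_bytes: int, num_threads: int, num_merge_buffers: int) -> int:
    """Minimum -XX:MaxDirectMemorySize a Druid process needs for its buffers."""
    return size_bytes * (num_threads + num_merge_buffers + 1)

# Historical above: 536870912 * (7 + 2 + 1) = 5368709120 bytes (5 GiB).
print(direct_memory_bytes(536870912, 7, 2))
# Peons launched by the middleManager (step 5): 100000000 * (2 + 2 + 1) = 500 MB.
print(direct_memory_bytes(100000000, 2, 2))
```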
7. Edit the broker config: vi broker/runtime.properties
druid.service=druid/broker
druid.host=xxx
druid.port=8082
# HTTP server settings
druid.server.http.numThreads=60
# HTTP client settings
druid.broker.http.numConnections=10
druid.broker.http.maxQueuedBytes=50000000
# Processing threads and buffers
druid.processing.buffer.sizeBytes=536870912
druid.processing.numMergeBuffers=2
druid.processing.numThreads=1
druid.processing.tmpDir=var/druid/processing
# Query cache disabled -- push down caching and merging instead
druid.broker.cache.useCache=false
druid.broker.cache.populateCache=false
# SQL
druid.sql.enable=true
8. Edit the router config (optional): vi router/runtime.properties
druid.service=druid/router
druid.host=xxx
druid.port=8888
druid.processing.numThreads=1
druid.processing.buffer.sizeBytes=1000000
druid.router.defaultBrokerServiceName=druid/broker
druid.router.coordinatorServiceName=druid/coordinator
druid.router.http.numConnections=50
druid.router.http.readTimeout=PT5M
druid.router.http.numMaxThreads=100
druid.server.http.numThreads=100
druid.router.managementProxy.enabled=true
9. Edit the Pivot config: vi ../pivot/config.yaml
# The port on which the Pivot server will listen on.
port: 9095
# runtime directory
varDir: var/pivot
servingMode: clustered
# User management mode
# By default Imply will not show a login screen and anyone accessing it will automatically be treated as an 'admin'
# Uncomment the line below to enable user authentication, for more info see: https://docs.imply.io/on-prem/configure/config-api
#userMode: native-users
# The initial settings that will be loaded in, in this case a connection will be created for a Druid cluster that is running locally.
initialSettings:
connections:
- name: druid
type: druid
title: My Druid
# router or broker address
host: xxx:8888
coordinatorHosts: ["xxx:8081"]
overlordHosts: ["xxx:8090"]
#
# Pivot must have a state store in order to function
# The state (data cubes, dashboards, etc) can be stored in two ways.
# Choose just one option and comment out the other.
#
# 1) Stored in a sqlite file, editable at runtime with Settings View. Not suitable for running in a cluster.
# 2) Stored in a database, editable at runtime with Settings View. Works well with a cluster of Imply servers.
#
#
# 1) File-backed (sqlite) state (not suitable for running in a cluster)
#
#stateStore:
# type: sqlite
# connection: var/pivot/pivot-settings.sqlite
#
# 2) Database-backed state 'mysql' (MySQL) or 'pg' (Postgres)
#
stateStore:
# location: mysql
type: mysql
connection: 'mysql://druid:druid@xxx:3306/pivot'
10. Start the dependent services: HDFS, MySQL, ZooKeeper
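Before the first start, the MySQL databases referenced by the connectURI in step 2 and the Pivot stateStore in step 9 have to exist. A sketch (the druid/druid credentials are the ones used in those configs; utf8mb4 follows Druid's MySQL metadata storage recommendation):

```sql
-- Metadata store for Druid and state store for Pivot,
-- matching the connection strings used in the configs above.
CREATE DATABASE druid DEFAULT CHARACTER SET utf8mb4;
CREATE DATABASE pivot DEFAULT CHARACTER SET utf8mb4;
CREATE USER 'druid'@'%' IDENTIFIED BY 'druid';
GRANT ALL PRIVILEGES ON druid.* TO 'druid'@'%';
GRANT ALL PRIVILEGES ON pivot.* TO 'druid'@'%';
```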
11. Start the coordinator and overlord
Per the cluster plan, start these two services on machine 154
nohup bin/supervise -c conf/supervise/master-no-zk.conf > logs/master-no-zk.log 2>&1 &
12. Start the historical and middleManager
Per the cluster plan, start these two services on machine 172
nohup bin/supervise -c conf/supervise/data.conf > logs/data.log 2>&1 &
13. Start the broker, Pivot, and router
Per the cluster plan, start these services on machine 222
nohup bin/supervise -c conf/supervise/query.conf > logs/query.log 2>&1 &
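Once all three machines are up, a quick way to confirm each process is listening is its `/status` endpoint. A sketch (replace xxx with the hosts from the cluster plan):

```shell
# Every Druid process answers GET /status on its own port.
curl http://xxx:8081/status   # coordinator
curl http://xxx:8090/status   # overlord
curl http://xxx:8083/status   # historical
curl http://xxx:8082/status   # broker
```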
Verification
- Once a service starts, it writes logs under var/sv/xx/current; to redeploy from scratch, delete everything under var
- The Pivot UI listens on port 9095 by default; on first visit you need to create a connection
- Load data
- Import from HDFS
- Continue (bottom right): parse data
- Continue: parse time, specifying the time column here
- Continue: transform => filter => configure schema, specifying the dimension and metric columns here
- Continue: partition, setting the segment granularity
- Continue: tune => publish => edit spec, then publish the task
- Back on the Data tab, after a short wait the data is loaded
- Query it with the web SQL tool
- Create a cube
- Explore the cube's various metrics
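Besides the Pivot SQL tool, queries can be sent straight to the broker's SQL endpoint, which is handy for scripted checks. A sketch ('wikipedia' is a placeholder datasource name, xxx the broker host):

```shell
# POST a Druid SQL query to the broker (druid.sql.enable=true in step 7).
curl -X POST http://xxx:8082/druid/v2/sql \
  -H 'Content-Type: application/json' \
  -d '{"query":"SELECT COUNT(*) AS cnt FROM wikipedia"}'
```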