【Druid】(4) Apache Druid Deployment and Configuration (Standalone / Docker / Kubernetes)

1. Apache Druid Deployment

1.1 Standalone

1.1.1 Package Download

Download the latest package from https://imply.io/get-started

1.1.2 Installing Druid

Note: Imply bundles Druid and provides a complete solution covering deployment, configuration, and various visualization tools; it is roughly analogous to Cloudera Manager.

1. Extract

tar -zxvf imply-2.7.10.tar.gz -C /opt/module

The directory layout is as follows:

  • bin/ - run scripts for included software.
  • conf/ - template configurations for a clustered setup.
  • conf-quickstart/* - configurations for the single-machine quickstart.
  • dist/ - all included software.
  • quickstart/ - files related to the single-machine quickstart.

2. Modify the configuration files

1) Point Druid at the external ZooKeeper quorum

[chris@hadoop102 _common]$ pwd
/opt/module/imply/conf/druid/_common
[chris@hadoop102 _common]$ vi common.runtime.properties
druid.zk.service.host=hadoop102:2181,hadoop103:2181,hadoop104:2181

2) Modify the supervise config so that it skips the default checks and does not start the bundled ZooKeeper

[chris@hadoop102 supervise]$ pwd
/opt/module/imply/conf/supervise
[chris@hadoop102 supervise]$ vi quickstart.conf
:verify bin/verify-java
#:verify bin/verify-default-ports
#:verify bin/verify-version-check
:kill-timeout 10
#!p10 zk bin/run-zk conf-quickstart

3. Start

1) Start ZooKeeper

2) Start Imply

[chris@hadoop102 imply]$ bin/supervise -c conf/supervise/quickstart.conf

Note: a log line is printed for each service as it starts. The startup logs for each service can be found under /opt/module/imply-2.7.10/var/sv/.
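For example, to follow the log of one of the managed services (the broker is used here as an example; the exact file layout under var/sv/ may differ between Imply versions, so check with ls first):

$ ls /opt/module/imply-2.7.10/var/sv/
$ tail -f /opt/module/imply-2.7.10/var/sv/broker/current   # path is an assumption; adjust to what ls shows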

3) Check that port 9095 is listening

[chris@hadoop102 ~]$ netstat -anp | grep 9095
tcp   0   0 :::9095                     :::*                       LISTEN        3930/imply-ui-linux
tcp   0   0 ::ffff:192.168.1.102:9095   ::ffff:192.168.1.1:52567   ESTABLISHED   3930/imply-ui-linux
tcp   0   0 ::ffff:192.168.1.102:9095   ::ffff:192.168.1.1:52568   ESTABLISHED   3930/imply-ui-linux

4. Open hadoop102:9095 in a browser to view the Imply UI


5. Stop the services

Press Ctrl + C to interrupt the supervise process. If you want a clean start after stopping the services, delete the /opt/module/imply-2.7.10/var/ directory.
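For example, assuming you are in the Imply installation directory, a clean restart after stopping looks like this:

$ rm -rf /opt/module/imply-2.7.10/var/
$ bin/supervise -c conf/supervise/quickstart.conf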

1.2 Docker

1.2.1 Download
# Search Docker Hub
$ docker search druid

# Pull the image (0.19.0 here)
$ docker pull apache/druid:0.19.0

# Verify that the image was pulled successfully
$ docker image list
1.2.2 Configure Docker File Sharing

Open the Docker settings panel, go to the File Sharing page, add the ${the path of your source code}/distribution/docker/storage path, then click Apply & Restart to apply the change and restart Docker.

1.2.3 Start
$ git clone https://github.com/apache/druid.git
$ cd druid
$ docker-compose -f distribution/docker/docker-compose.yml up

# Similarly, the start/stop commands can be used to start and stop the containers
$ docker-compose -f distribution/docker/docker-compose.yml stop
$ docker-compose -f distribution/docker/docker-compose.yml start

# Or use the down command to remove the containers
$ docker-compose -f distribution/docker/docker-compose.yml down
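Before moving on to verification, it can help to confirm that the containers are actually up; this is just a convenience check, not part of the original steps:

$ docker-compose -f distribution/docker/docker-compose.yml ps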
1.2.4 Verification
Historical container
$ docker exec -it historical sh
$ ls /opt/data/
  indexing-logs  segments
$ ls /opt/data/segments/
  intermediate_pushes  wikipedia
$ ls /opt/data/segments/wikipedia/
  2016-06-27T00:00:00.000Z_2016-06-28T00:00:00.000Z
$ ls /opt/data/segments/wikipedia/2016-06-27T00\:00\:00.000Z_2016-06-28T00\:00\:00.000Z/
  2020-06-04T07:11:42.714Z
$ ls /opt/data/segments/wikipedia/2016-06-27T00\:00\:00.000Z_2016-06-28T00\:00\:00.000Z/2020-06-04T07\:11\:42.714Z/0/
  index.zip
$ ls -lh
total 8M
-rw-r--r--    1 druid    druid       5.9M Jun  4 07:49 00000.smoosh
-rw-r--r--    1 druid    druid         29 Jun  4 07:49 factory.json
-rw-r--r--    1 druid    druid       1.7M Jun  4 07:14 index.zip
-rw-r--r--    1 druid    druid        707 Jun  4 07:49 meta.smoosh
-rw-r--r--    1 druid    druid          4 Jun  4 07:49 version.bin
$ cat factory.json
{"type":"mMapSegmentFactory"}
$ xxd version.bin
00000000: 0000 0009                                ....
$ cat meta.smoosh
v1,2147483647,1
__time,0,0,1106
channel,0,145739,153122
cityName,0,153122,195592
comment,0,195592,1598156
count,0,1106,2063
countryIsoCode,0,1598156,1614170
countryName,0,1614170,1630859
diffUrl,0,1630859,4224103
flags,0,4224103,4252873
index.drd,0,6162513,6163275
isAnonymous,0,4252873,4262876
isMinor,0,4262876,4282592
isNew,0,4282592,4290896
isRobot,0,4290896,4298796
isUnpatrolled,0,4298796,4307345
metadata.drd,0,6163275,6163925
namespace,0,4307345,4342089
page,0,4342089,5710071
regionIsoCode,0,5710071,5730339
regionName,0,5730339,5759351
sum_added,0,2063,37356
sum_commentLength,0,37356,66244
sum_deleted,0,66244,81170
sum_delta,0,81170,126275
sum_deltaBucket,0,126275,145739
user,0,5759351,6162513

Here, index.drd contains the time interval covered by the Segment, the bitmap type used (concise / roaring), and the columns and dimensions it contains; metadata.drd records whether rollup is enabled, which aggregators are used, the query granularity, timestamp column information, and a Map structure that can hold arbitrary key-value data (for example, the Kafka Firehose uses it to store offsets). For more detail, see org.apache.druid.segment.IndexIO.V9IndexLoader#load

PostgreSQL container
$ docker exec -it postgres bash
$ psql -U druid -d druid
$ \l
                             List of databases
   Name    | Owner | Encoding |  Collate   |   Ctype    | Access privileges
-----------+-------+----------+------------+------------+-------------------
 druid     | druid | UTF8     | en_US.utf8 | en_US.utf8 |
 postgres  | druid | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | druid | UTF8     | en_US.utf8 | en_US.utf8 | =c/druid         +
           |       |          |            |            | druid=CTc/druid
 template1 | druid | UTF8     | en_US.utf8 | en_US.utf8 | =c/druid         +
           |       |          |            |            | druid=CTc/druid
(4 rows)
$ \c druid
You are now connected to database "druid" as user "druid".
$ \dt
               List of relations
 Schema |         Name          | Type  | Owner
--------+-----------------------+-------+-------
 public | druid_audit           | table | druid
 public | druid_config          | table | druid
 public | druid_datasource      | table | druid
 public | druid_pendingsegments | table | druid
 public | druid_rules           | table | druid
 public | druid_segments        | table | druid
 public | druid_supervisors     | table | druid
 public | druid_tasklocks       | table | druid
 public | druid_tasklogs        | table | druid
 public | druid_tasks           | table | druid
(10 rows)
> select id, datasource, created_date, start, "end", partitioned, version, used from public.druid_segments;
wikipedia_2016-06-27T00:00:00.000Z_2016-06-28T00:00:00.000Z_2020-06-04T07:11:42.714Z | wikipedia  | 2020-06-04T07:14:50.619Z | 2016-06-27T00:00:00.000Z | 2016-06-28T00:00:00.000Z | t           | 2020-06-04T07:11:42.714Z | t

1.3 Kubernetes

1.3.1 Installation
$ helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com
$ helm install incubator/druid --version 0.2.6 --generate-name
NAME: druid-1592218780
LAST DEPLOYED: Mon Jun 15 23:59:42 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the router URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=druid,release=druid-1592218780" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:8888
$ export POD_NAME=$(kubectl get pods --namespace default -l "app=druid,release=`helm list | grep druid- | awk '{print $1}'`" | grep router | awk '{print $1}')
$ nohup kubectl port-forward $POD_NAME 8888:8888 --address 0.0.0.0 2>&1 &
1.3.2 Verification
$ kubectl get all
NAME                                                READY   STATUS    RESTARTS   AGE
pod/druid-1592364086-broker-76bf68c8bc-96d56        1/1     Running   0          2m36s
pod/druid-1592364086-coordinator-5f645bd5c8-rhhpz   1/1     Running   0          2m36s
pod/druid-1592364086-historical-0                   1/1     Running   0          2m36s
pod/druid-1592364086-middle-manager-0               1/1     Running   0          2m36s
pod/druid-1592364086-postgresql-0                   1/1     Running   0          2m36s
pod/druid-1592364086-router-67f678b6c5-mw6b4        1/1     Running   0          2m36s
pod/druid-1592364086-zookeeper-0                    1/1     Running   0          2m36s
pod/druid-1592364086-zookeeper-1                    1/1     Running   0          2m8s
pod/druid-1592364086-zookeeper-2                    1/1     Running   0          85s
pod/local-volume-provisioner-8sjtx                  1/1     Running   0          8m59s
pod/local-volume-provisioner-9z7mh                  1/1     Running   0          8m59s
pod/local-volume-provisioner-m2xrt                  1/1     Running   0          8m59s
pod/local-volume-provisioner-ptbqs                  1/1     Running   0          8m59s
pod/local-volume-provisioner-tw2fn                  1/1     Running   0          8m59s

NAME                                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/druid-1592364086-broker                ClusterIP   10.10.10.128     <none>        8082/TCP                     2m36s
service/druid-1592364086-coordinator           ClusterIP   10.10.10.195     <none>        8081/TCP                     2m36s
service/druid-1592364086-historical            ClusterIP   10.10.10.226     <none>        8083/TCP                     2m36s
service/druid-1592364086-middle-manager        ClusterIP   10.10.10.108     <none>        8091/TCP                     2m36s
service/druid-1592364086-postgresql            ClusterIP   10.10.10.155     <none>        5432/TCP                     2m36s
service/druid-1592364086-postgresql-headless   ClusterIP   None             <none>        5432/TCP                     2m36s
service/druid-1592364086-router                ClusterIP   10.10.10.29      <none>        8888/TCP                     2m36s
service/druid-1592364086-zookeeper             ClusterIP   10.10.10.122     <none>        2181/TCP                     2m36s
service/druid-1592364086-zookeeper-headless    ClusterIP   None             <none>        2181/TCP,3888/TCP,2888/TCP   2m36s
service/kubernetes                             ClusterIP   10.10.10.1       <none>        443/TCP                      30m

NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/local-volume-provisioner   5         5         5       5            5           <none>          9m

NAME                                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/druid-1592364086-broker        1/1     1            1           2m36s
deployment.apps/druid-1592364086-coordinator   1/1     1            1           2m36s
deployment.apps/druid-1592364086-router        1/1     1            1           2m36s

NAME                                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/druid-1592364086-broker-76bf68c8bc        1         1         1       2m36s
replicaset.apps/druid-1592364086-coordinator-5f645bd5c8   1         1         1       2m36s
replicaset.apps/druid-1592364086-router-67f678b6c5        1         1         1       2m36s

NAME                                               READY   AGE
statefulset.apps/druid-1592364086-historical       1/1     2m36s
statefulset.apps/druid-1592364086-middle-manager   1/1     2m36s
statefulset.apps/druid-1592364086-postgresql       1/1     2m36s
statefulset.apps/druid-1592364086-zookeeper        3/3     2m36s

The CLUSTER-IP values above have been anonymized

1.3.3 ZooKeeper Metadata
$ zkCli.sh
[zk: localhost:2181(CONNECTED) 0] ls -R /druid
/druid

/druid/announcements
/druid/coordinator
/druid/discovery
/druid/indexer
/druid/internal-discovery
/druid/loadQueue
/druid/overlord
/druid/segments
/druid/servedSegments

/druid/announcements/10.10.10.63:8083

/druid/coordinator/_COORDINATOR
/druid/coordinator/_COORDINATOR/_c_281fb87b-c40c-4d71-a657-8254cbcf3730-latch-0000000000

/druid/discovery/druid:broker
/druid/discovery/druid:coordinator
/druid/discovery/druid:overlord
/druid/discovery/druid:router

/druid/discovery/druid:broker/0e76bfc1-87f8-4799-9c36-0fb0e5617aef
/druid/discovery/druid:coordinator/035b1ada-531e-4a71-865b-7a1a6d6f1734
/druid/discovery/druid:overlord/a74523d6-1708-45b3-9c0b-87f438cda4e3
/druid/discovery/druid:router/c0bb18d3-51b1-4089-932b-a5d6e05ab91c

/druid/indexer/announcements
/druid/indexer/status
/druid/indexer/tasks

/druid/indexer/announcements/10.10.10.65:8091
/druid/indexer/status/10.10.10.65:8091
/druid/indexer/tasks/10.10.10.65:8091

/druid/internal-discovery/BROKER
/druid/internal-discovery/COORDINATOR
/druid/internal-discovery/HISTORICAL
/druid/internal-discovery/INDEXER
/druid/internal-discovery/MIDDLE_MANAGER
/druid/internal-discovery/OVERLORD
/druid/internal-discovery/PEON
/druid/internal-discovery/ROUTER

/druid/internal-discovery/BROKER/10.10.10.73:8082
/druid/internal-discovery/COORDINATOR/10.10.10.72:8081
/druid/internal-discovery/HISTORICAL/10.10.10.63:8083
/druid/internal-discovery/MIDDLE_MANAGER/10.10.10.65:8091
/druid/internal-discovery/OVERLORD/10.10.10.72:8081
/druid/internal-discovery/ROUTER/10.10.10.55:8888

/druid/loadQueue/10.10.10.63:8083

/druid/overlord/_OVERLORD
/druid/overlord/_OVERLORD/_c_ecacbc56-4d36-4ca0-ac1d-0df919c40bff-latch-0000000000

/druid/segments/10.10.10.63:8083

/druid/segments/10.10.10.63:8083/10.10.10.63:8083_historical__default_tier_2020-06-20T04:08:23.309Z_1b957acb6850491ca6ea885fca1b3c210
/druid/segments/10.10.10.63:8083/10.10.10.63:8083_historical__default_tier_2020-06-20T04:10:16.643Z_57c1f60104a94c459bf0331eb3c1f0a01

/druid/servedSegments/10.10.10.63:8083

The IP addresses above have been anonymized

1.3.4 Broker Health Check
$ kill `ps -ef | grep 8082 | grep -v grep | awk '{print $2}'`; export POD_NAME=$(kubectl get pods --namespace default -l "app=druid,release=`helm list | grep druid- | awk '{print $1}'`" | grep broker | awk '{print $1}') ; nohup kubectl port-forward $POD_NAME 8082:8082 --address 0.0.0.0 2>&1 &
$ curl localhost:8082/status/health
true
1.3.5 Historical Cache
$ cd /opt/druid/var/druid/segment-cache/info_dir
$ cat wikipedia_2016-06-27T00:00:00.000Z_2016-06-27T01:00:00.000Z_2020-06-20T04:10:01.833Z
{
  "dataSource": "wikipedia",
  "interval": "2016-06-27T00:00:00.000Z/2016-06-27T01:00:00.000Z",
  "version": "2020-06-20T04:10:01.833Z",
  "loadSpec": {
    "type": "hdfs",
    "path": "hdfs://10.10.10.44:8020/druid/segments/wikipedia/20160627T000000.000Z_20160627T010000.000Z/2020-06-20T04_10_01.833Z/0_index.zip"
  },
  "dimensions": "channel,cityName,comment,countryIsoCode,countryName,diffUrl,flags,isAnonymous,isMinor,isNew,isRobot,isUnpatrolled,namespace,page,regionIsoCode,regionName,user",
  "metrics": "count,sum_added,sum_commentLength,sum_deleted,sum_delta,sum_deltaBucket",
  "shardSpec": {
    "type": "numbered",
    "partitionNum": 0,
    "partitions": 0
  },
  "binaryVersion": 9,
  "size": 241189,
  "identifier": "wikipedia_2016-06-27T00:00:00.000Z_2016-06-27T01:00:00.000Z_2020-06-20T04:10:01.833Z"
}
$ cd /opt/druid/var/druid/segment-cache/wikipedia/2016-06-27T00:00:00.000Z_2016-06-27T01:00:00.000Z/2020-06-20T04:10:01.833Z/0
$ ls
00000.smoosh  factory.json  meta.smoosh   version.bin
1.3.6 Segment Files
$ kubectl exec -it hdfs-1593317115-namenode-0 bash
$ hdfs dfs -get /druid/segments/wikipedia/20160627T000000.000Z_20160627T010000.000Z/2020-06-28T04_10_01.833Z/0_index.zip /tmp/index/
$ exit
$ kubectl cp hdfs-1593317115-namenode-0:/tmp/index/0_index.zip /tmp/0_index.zip
$ unzip 0_index.zip
$ ls
00000.smoosh  0_index.zip  factory.json  meta.smoosh  version.bin
1.3.7 Coordinator Dynamic Configuration
$ kill `ps -ef | grep 8081 | grep -v grep | awk '{print $2}'`; export POD_NAME=$(kubectl get pods --namespace default -l "app=druid,release=`helm list | grep druid- | awk '{print $1}'`" | grep coordinator | awk '{print $1}') ; nohup kubectl port-forward $POD_NAME 8081:8081 --address 0.0.0.0 2>&1 &
$ curl localhost:8081/druid/coordinator/v1/config | python -m json.tool
{
  "balancerComputeThreads": 1,
  "decommissioningMaxPercentOfMaxSegmentsToMove": 70,
  "decommissioningNodes": [],
  "emitBalancingStats": false,
  "killAllDataSources": false,
  "killDataSourceWhitelist": [],
  "killPendingSegmentsSkipList": [],
  "maxSegmentsInNodeLoadingQueue": 0,
  "maxSegmentsToMove": 5,
  "mergeBytesLimit": 524288000,
  "mergeSegmentsLimit": 100,
  "millisToWaitBeforeDeleting": 900000,
  "pauseCoordination": false,
  "replicantLifetime": 15,
  "replicationThrottleLimit": 10
}
# Set maxSegmentsToMove to 50
$ curl -XPOST -H 'Content-Type:application/json' localhost:8081/druid/coordinator/v1/config -d '{"maxSegmentsToMove":50}'
$ curl localhost:8081/druid/coordinator/v1/config | python -m json.tool
{
  "balancerComputeThreads": 1,
  "decommissioningMaxPercentOfMaxSegmentsToMove": 70,
  "decommissioningNodes": [],
  "emitBalancingStats": false,
  "killAllDataSources": false,
  "killDataSourceWhitelist": [],
  "killPendingSegmentsSkipList": [],
  "maxSegmentsInNodeLoadingQueue": 0,
  "maxSegmentsToMove": 50,
  "mergeBytesLimit": 524288000,
  "mergeSegmentsLimit": 100,
  "millisToWaitBeforeDeleting": 900000,
  "pauseCoordination": false,
  "replicantLifetime": 15,
  "replicationThrottleLimit": 10
}

The dynamic configuration property maxSegmentsToMove controls how many Segments can be rebalanced at the same time

1.3.8 Druid SQL Queries
# Forward port 8082 of the Broker container
$ kill `ps -ef | grep 8082 | grep -v grep | awk '{print $2}'`; export POD_NAME=$(kubectl get pods --namespace default -l "app=druid,release=`helm list | grep druid- | awk '{print $1}'`" | grep broker | awk '{print $1}') ; nohup kubectl port-forward $POD_NAME 8082:8082 --address 0.0.0.0 2>&1 &
$ echo '{"query":"SELECT COUNT(*) as res FROM wikipedia"}' > druid_query.sql
$ curl -XPOST -H'Content-Type: application/json' http://localhost:8082/druid/v2/sql/ -d@druid_query.sql
[{"res":24433}]

2. Configuration

2.1 Common Ports

The commonly used ports (consistent with the Kubernetes service list in 1.3.2) are:

  • Coordinator - 8081
  • Broker - 8082
  • Historical - 8083
  • MiddleManager - 8091
  • Router - 8888
  • Overlord - 8090 (when run as a separate process; in the Helm chart it runs inside the Coordinator)
  • ZooKeeper - 2181
  • Metadata store (PostgreSQL) - 5432

In production, it is recommended to deploy ZooKeeper and the Metadata Storage on dedicated machines rather than co-locating them on the Coordinator nodes
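A minimal sketch of what that looks like in conf/druid/_common/common.runtime.properties, assuming a dedicated three-node ZooKeeper quorum and a PostgreSQL metadata store on a separate host (host names are placeholders, and the postgresql-metadata-storage extension must be loaded):

druid.zk.service.host=zk1:2181,zk2:2181,zk3:2181
druid.metadata.storage.type=postgresql
druid.metadata.storage.connector.connectURI=jdbc:postgresql://meta-db:5432/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=druid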

2.2 rollup

Since Apache Druid 0.9.2, rollup can be disabled entirely by setting "rollup": false in granularitySpec, i.e. no pre-aggregation is performed during ingestion and only the raw data points are kept. Even multiple identical data points with the same timestamp and the same dimension values are all stored; nothing is overwritten
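A hedged sketch of the relevant fragment of an ingestion spec (the granularities are placeholders):

"granularitySpec": {
  "type": "uniform",
  "segmentGranularity": "DAY",
  "queryGranularity": "NONE",
  "rollup": false
}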

2.3 selectStrategy

This parameter defaults to fillCapacity, meaning that when Tasks are assigned, one MiddleManager is filled to capacity before new Tasks go to the others. Consider using the equalDistribution strategy instead, which spreads Tasks evenly across the MiddleManagers

$ cd $DRUID_HOME
$ vim conf/druid/overlord/runtime.properties
druid.indexer.selectStrategy=equalDistribution

As of version 0.11.0, the default strategy has been changed to equalDistribution; see WorkerBehaviorConfig#DEFAULT_STRATEGY
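On recent releases the worker select strategy is usually managed through the Overlord dynamic configuration API rather than a properties file; a hedged example, assuming the Overlord is reachable on its default port 8090:

$ curl -XPOST -H 'Content-Type: application/json' \
    localhost:8090/druid/indexer/v1/worker \
    -d '{"selectStrategy":{"type":"equalDistribution"}}'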

2.4 maxRowsPerSegment

This parameter limits the maximum number of rows stored in each Segment (default 5,000,000) and only takes effect when dynamic secondary partitioning is used. If the partitioning type (spec.tuningConfig.partitionsSpec.type) is set to hashed, you need to specify the number of shards and which Dimensions are used for hashing (all Dimensions by default). In addition, with dynamic secondary partitioning there is also a maxTotalRows parameter (default 20,000,000) that limits the total number of rows across all segments not yet pushed to Deep Storage; once maxTotalRows is reached, a push is triggered immediately
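A hedged sketch of the corresponding tuningConfig fragment for a native batch task with dynamic partitioning (field layout per recent Druid releases; older versions accept maxRowsPerSegment / maxTotalRows directly under tuningConfig):

"tuningConfig": {
  "type": "index_parallel",
  "partitionsSpec": {
    "type": "dynamic",
    "maxRowsPerSegment": 5000000,
    "maxTotalRows": 20000000
  }
}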

If this parameter is set too low, many small Segment files are produced. On one hand, if Deep Storage is HDFS, this triggers the small-files problem and hurts cluster performance (reading many small files, unlike reading a few large ones, requires constantly hopping between DataNodes, so most of the time is spent starting and tearing down tasks; and once the NameNode has to track many more blocks, network bandwidth and memory usage rise and NameNode failure recovery slows down). On the other hand, if the operating system's vm.max_map_count is left at the default 65530, that threshold may be hit, causing mmap operations to fail and the Historical process to crash; and if Segments can never complete the handoff process, the Coordinator will end up killing the real-time tasks
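To check and, if necessary, raise the mmap limit on the Historical hosts (262144 is only an example value):

$ sysctl vm.max_map_count
vm.max_map_count = 65530
$ sudo sysctl -w vm.max_map_count=262144   # persist it in /etc/sysctl.conf to survive reboots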

2.5 druid.server.tier

This parameter is essentially a label on a Historical node: Historicals with the same Tier name are grouped together, which makes hot/cold tiering straightforward. It is set in historical/runtime.properties and defaults to _default_tier. For example, you can create two Tiers, hot and cold, where the hot Tier stores the last three months of data and the cold Tier stores the last year of data. Because the data on the cold group's Historical nodes only serves low-frequency queries, those nodes can use non-SSD disks to save hardware cost
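A hedged sketch of the property on a node in the hot group (the tier name is an example; druid.server.priority is optional and shown only for completeness):

# conf/druid/historical/runtime.properties on a "hot" node
druid.server.tier=hot
druid.server.priority=10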

2.6 tieredReplicants

This parameter sets, within a Rule, how many replicas of the data each Tier keeps. With tieredReplicants set to 2, for example, the data is stored once on each of two different Historical nodes, so that the failure of any single Historical does not affect queries
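A hedged example of a load rule carrying tieredReplicants, assuming the hot and _default_tier tiers from section 2.5:

{
  "type": "loadByPeriod",
  "period": "P3M",
  "tieredReplicants": {
    "hot": 2,
    "_default_tier": 1
  }
}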

2.7 Coordinator Rule Configuration

Retain the last 30 days of data

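A hedged sketch of a rule chain that keeps only the last 30 days: a loadByPeriod rule followed by a dropForever rule. Rules are evaluated top-down, so Segments older than 30 days fall through to the drop rule. The rules can be set in the web console or POSTed to the Coordinator (endpoint per the Druid docs; the data source name is an example):

$ curl -XPOST -H 'Content-Type: application/json' \
    localhost:8081/druid/coordinator/v1/rules/wikipedia \
    -d '[
          { "type": "loadByPeriod", "period": "P30D", "tieredReplicants": { "_default_tier": 2 } },
          { "type": "dropForever" }
        ]'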
