Deploying a Nebula Graph 3.6 Cluster with Docker

Environment and Versions

  1. Server information

No.   IP address     Nebula version
1     10.0.19.110    3.6.0
2     10.0.19.111    3.6.0
3     10.0.19.112    3.6.0

  • Other notes
    This article builds a three-node Nebula cluster with the following process layout:

IP address     metad processes   storaged processes   graphd processes
10.0.19.110    1                 1                    1
10.0.19.111    1                 1                    1
10.0.19.112    1                 1                    1

The cluster is deployed with docker-compose, using the host network mode; see the compose file below for details.
The console service in the docker-compose file only needs to be deployed on 10.0.19.110; the other two nodes do not deploy it. See the comments in the compose file below.
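Since host networking binds each service directly to the host's network stack, it is worth confirming on every node that the Nebula ports are still free before starting. A minimal check (assuming ss is available, as on most modern Linux distributions):

# Print any listener already occupying a Nebula port; silence means the ports are free
ss -lntp | grep -E ':(9559|9669|9779|19559|19669|19779)\b' || echo "nebula ports are free"

Any output from the grep means another process already occupies one of the ports and must be stopped or reconfigured first.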

Deployment Steps

The IP addresses above are abbreviated as s110, s111, and s112 below.

  1. Check the images
[root@nebula meta]# docker images
REPOSITORY                          TAG          IMAGE ID       CREATED        SIZE
python                              loader       0e3031408ada   8 days ago     79.3MB
nebula-dashboard-nebula-dashboard   latest       0eb8c927b29d   13 days ago    483MB
python                              alpine3.19   dc76baf701c3   6 weeks ago    51.7MB
quay.io/prometheus/node-exporter    latest       72c9c2088986   2 months ago   22.7MB
vesoft/nebula-graph-studio          v3.8.0       aedef1233adb   3 months ago   67.8MB
vesoft/nebula-importer              latest       89ec4ad69981   3 months ago   35MB
vesoft/nebula-metad                 v3.6.0       e160f99aa8b4   6 months ago   290MB
vesoft/nebula-storaged              v3.6.0       6121adf9100c   6 months ago   292MB
vesoft/nebula-graphd                v3.6.0       9ffa939c73f4   6 months ago   285MB
vesoft/nebula-console               v3.5         5e5b729a4d9c   8 months ago   15.3MB
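If any of the vesoft images are missing, they can be pulled from Docker Hub first; the tags below match the listing above:

docker pull vesoft/nebula-metad:v3.6.0
docker pull vesoft/nebula-storaged:v3.6.0
docker pull vesoft/nebula-graphd:v3.6.0
docker pull vesoft/nebula-console:v3.5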
  2. Configuration files
    Mount the local configuration files into the containers (the configuration files are identical on all three servers).
  • Directory structure
    common holds the configuration files
    data holds the data
    logs holds the service logs
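A scaffold matching the volume mounts in the compose file of step 3 can be created on each node, for example:

# Create the config, data, and log directories the compose file mounts
mkdir -p common/{metad,graphd,storaged} \
         data/{meta,storage} \
         logs/{meta,storage,graph}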

  • metad configuration file

cat  common/metad/nebula-metad.conf
########## basics ##########
# Whether to run as a daemon process
--daemonize=true
# The file to host the process id
--pid_file=pids/nebula-metad.pid

--timezone_name=UTC+08:00
########## logging ##########
# The directory to host logging files
--log_dir=logs
# Log level, 0, 1, 2, 3 for INFO, WARNING, ERROR, FATAL respectively
--minloglevel=0
# Verbose log level, 1, 2, 3, 4; the higher the level, the more verbose the logging
--v=0
# Maximum seconds to buffer the log messages
--logbufsecs=0
# Whether to redirect stdout and stderr to separate output files
--redirect_stdout=true
# Destination filename of stdout and stderr, which will also reside in log_dir.
--stdout_log_file=metad-stdout.log
--stderr_log_file=metad-stderr.log
# Copy log messages at or above this level to stderr in addition to logfiles. The numbers of severity levels INFO, WARNING, ERROR, and FATAL are 0, 1, 2, and 3, respectively.
--stderrthreshold=3
# Whether logging files' names contain a timestamp. If using logrotate to rotate logging files, this should be set to true.
--timestamp_in_logfile_name=true

########## networking ##########
# Comma separated Meta Server addresses
--meta_server_addrs=127.0.0.1:9559
# Local IP used to identify the nebula-metad process.
# Change it to an address other than loopback if the service is distributed or
# will be accessed remotely.
--local_ip=127.0.0.1
# Meta daemon listening port
--port=9559
# HTTP service ip
--ws_ip=0.0.0.0
# HTTP service port
--ws_http_port=19559
# Port to listen on Storage with HTTP protocol, it corresponds to ws_http_port in storage's configuration file
--ws_storage_http_port=19779

########## storage ##########
# Root data path, here should be only single path for metad
--data_path=data/meta

########## Misc #########
# The default number of parts when a space is created
--default_parts_num=100
# The default replica factor when a space is created
--default_replica_factor=1

--heartbeat_interval_secs=10
--agent_heartbeat_interval_secs=60
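Note that meta_server_addrs and local_ip are loopback placeholders here; the per-node values are passed as command-line flags in the compose file of step 3. Once the services are running (step 4), each metad can be probed through its HTTP port, which is also what the compose healthcheck does:

curl -sf http://10.0.19.110:19559/status

A healthy metad answers with a small status payload.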
  • graphd configuration file
cat common/graphd/nebula-graphd.conf
########## basics ##########
# Whether to run as a daemon process
--daemonize=true
# The file to host the process id
--pid_file=pids/nebula-graphd.pid
# Whether to enable optimizer
--enable_optimizer=true
# The default charset when a space is created
--default_charset=utf8
# The default collate when a space is created
--default_collate=utf8_bin
# Whether to use the configuration obtained from the configuration file
--local_config=true
--timezone_name=UTC+08:00
########## logging ##########
# The directory to host logging files
--log_dir=logs
# Log level, 0, 1, 2, 3 for INFO, WARNING, ERROR, FATAL respectively
--minloglevel=0
# Verbose log level, 1, 2, 3, 4; the higher the level, the more verbose the logging
--v=0
# Maximum seconds to buffer the log messages
--logbufsecs=0
# Whether to redirect stdout and stderr to separate output files
--redirect_stdout=true
# Destination filename of stdout and stderr, which will also reside in log_dir.
--stdout_log_file=graphd-stdout.log
--stderr_log_file=graphd-stderr.log
# Copy log messages at or above this level to stderr in addition to logfiles. The numbers of severity levels INFO, WARNING, ERROR, and FATAL are 0, 1, 2, and 3, respectively.
--stderrthreshold=3
# Whether logging files' names contain a timestamp.
--timestamp_in_logfile_name=true
--max_log_size=1024
########## query ##########
# Whether to treat partial success as an error.
# This flag is only used for Read-only access, and Modify access always treats partial success as an error.
--accept_partial_success=false
# Maximum sentence length, unit byte
--max_allowed_query_size=4194304

########## networking ##########
# Comma separated Meta Server Addresses
--meta_server_addrs=127.0.0.1:9559
# Local IP used to identify the nebula-graphd process.
# Change it to an address other than loopback if the service is distributed or
# will be accessed remotely.
--local_ip=127.0.0.1
# Network device to listen on
--listen_netdev=any
# Port to listen on
--port=9669
# To turn on SO_REUSEPORT or not
--reuse_port=false
# Backlog of the listen socket, adjust this together with net.core.somaxconn
--listen_backlog=1024
# The number of seconds Nebula service waits before closing the idle connections
--client_idle_timeout_secs=28800
# The number of seconds before idle sessions expire
# The range should be in [1, 604800]
--session_idle_timeout_secs=28800
# The number of threads to accept incoming connections
--num_accept_threads=1
# The number of networking IO threads, 0 for # of CPU cores
--num_netio_threads=0
# Max active connections for all networking threads. 0 means no limit.
# Max connections for each networking thread = num_max_connections / num_netio_threads
--num_max_connections=0
# The number of threads to execute user queries, 0 for # of CPU cores
--num_worker_threads=0
# HTTP service ip
--ws_ip=0.0.0.0
# HTTP service port
--ws_http_port=19669
# storage client timeout
--storage_client_timeout_ms=60000
# slow query threshold in us
--slow_query_threshold_us=200000
# Port to listen on Meta with HTTP protocol, it corresponds to ws_http_port in metad's configuration file
--ws_meta_http_port=19559

########## authentication ##########
# Enable authorization
--enable_authorize=true
--failed_login_attempts=3
--password_lock_time_in_secs=10
# User login authentication type, password for nebula authentication, ldap for ldap authentication, cloud for cloud authentication
--auth_type=password

########## memory ##########
# System memory high watermark ratio, cancel the memory checking when the ratio greater than 1.0
--system_memory_high_watermark_ratio=0.8

########## metrics ##########
--enable_space_level_metrics=false

########## experimental feature ##########
# if use experimental features
--enable_experimental_feature=false

# if use balance data feature, only work if enable_experimental_feature is true
--enable_data_balance=true

# enable udf, written in c++ only for now
--enable_udf=true

# set the directory where the .so files of udf are stored, when enable_udf is true
--udf_path=/home/nebula/dev/nebula/udf/

########## session ##########
# Maximum number of sessions that can be created per IP and per user
--max_sessions_per_ip_per_user=300

########## memory tracker ##########
# trackable memory ratio (trackable_memory / (total_memory - untracked_reserved_memory) )
--memory_tracker_limit_ratio=0.8
# untracked reserved memory in Mib
--memory_tracker_untracked_reserved_memory_mb=50

# enable log memory tracker stats periodically
--memory_tracker_detail_log=false
# log memory tracker stats interval in milliseconds
--memory_tracker_detail_log_interval_ms=60000

# enable memory background purge (if jemalloc is used)
--memory_purge_enabled=true
# memory background purge interval in seconds
--memory_purge_interval_seconds=10

########## performance optimization ##########
# The max job size in multi job mode
--max_job_size=1
# The min batch size for handling dataset in multi job mode, only enabled when max_job_size is greater than 1
--min_batch_size=8192
# if true, return directly without go through RPC
--optimize_appendvertices=false
# number of paths constructed by each thread
--path_batch_size=10000


# Adjust the number of vertices or edges written in a single batch
--max_plan_depth=2048
--max_allowed_statements=2048
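As with metad, the loopback networking values above are placeholders that the compose file overrides per node. To double-check which flag values a running graphd actually uses, its HTTP interface on the ws_http_port can be queried (a quick sketch):

curl -s http://10.0.19.110:19669/flags | grep -E 'local_ip|meta_server_addrs'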
  • storaged configuration file
 cat common/storaged/nebula-storaged.conf
########## basics ##########
# Whether to run as a daemon process
--daemonize=true
# The file to host the process id
--pid_file=pids/nebula-storaged.pid
# Whether to use the configuration obtained from the configuration file
--local_config=true
--timezone_name=UTC+08:00
########## logging ##########
# The directory to host logging files
--log_dir=logs
# Log level, 0, 1, 2, 3 for INFO, WARNING, ERROR, FATAL respectively
--minloglevel=0
# Verbose log level, 1, 2, 3, 4; the higher the level, the more verbose the logging
--v=0
# Maximum seconds to buffer the log messages
--logbufsecs=0
# Whether to redirect stdout and stderr to separate output files
--redirect_stdout=true
# Destination filename of stdout and stderr, which will also reside in log_dir.
--stdout_log_file=storaged-stdout.log
--stderr_log_file=storaged-stderr.log
# Copy log messages at or above this level to stderr in addition to logfiles. The numbers of severity levels INFO, WARNING, ERROR, and FATAL are 0, 1, 2, and 3, respectively.
--stderrthreshold=3
# Whether logging files' names contain a timestamp.
--timestamp_in_logfile_name=true

########## networking ##########
# Comma separated Meta server addresses
--meta_server_addrs=127.0.0.1:9559
# Local IP used to identify the nebula-storaged process.
# Change it to an address other than loopback if the service is distributed or
# will be accessed remotely.
--local_ip=127.0.0.1
# Storage daemon listening port
--port=9779
# HTTP service ip
--ws_ip=0.0.0.0
# HTTP service port
--ws_http_port=19779
# heartbeat with meta service
--heartbeat_interval_secs=10

######### Raft #########
# Raft election timeout
--raft_heartbeat_interval_secs=30
# RPC timeout for raft client (ms)
--raft_rpc_timeout_ms=500
## recycle Raft WAL
--wal_ttl=14400

########## Disk ##########
# Root data path. Split by comma. e.g. --data_path=/disk1/path1/,/disk2/path2/
# One path per Rocksdb instance.
--data_path=data/storage

# Minimum reserved bytes of each data path
--minimum_reserved_bytes=268435456

# The default reserved bytes for one batch operation
--rocksdb_batch_size=4096
# The default block cache size used in BlockBasedTable.
# The unit is MB.
--rocksdb_block_cache=4
# The type of storage engine, `rocksdb', `memory', etc.
--engine_type=rocksdb

# Compression algorithm, options: no,snappy,lz4,lz4hc,zlib,bzip2,zstd
# For the sake of binary compatibility, the default value is snappy.
# Recommend to use:
#   * lz4 to gain more CPU performance, with the same compression ratio with snappy
#   * zstd to occupy less disk space
#   * lz4hc for the read-heavy write-light scenario
--rocksdb_compression=lz4

# Set different compressions for different levels
# For example, if --rocksdb_compression is snappy,
# "no:no:lz4:lz4::zstd" is identical to "no:no:lz4:lz4:snappy:zstd:snappy"
# In order to disable compression for level 0/1, set it to "no:no"
--rocksdb_compression_per_level=

# Whether or not to enable rocksdb's statistics, disabled by default
--enable_rocksdb_statistics=false

# Statslevel used by rocksdb to collect statistics, optional values are
#   * kExceptHistogramOrTimers, disable timer stats, and skip histogram stats
#   * kExceptTimers, Skip timer stats
#   * kExceptDetailedTimers, Collect all stats except time inside mutex lock AND time spent on compression.
#   * kExceptTimeForMutex, Collect all stats except the counters requiring to get time inside the mutex lock.
#   * kAll, Collect all stats
--rocksdb_stats_level=kExceptHistogramOrTimers

# Whether or not to enable rocksdb's prefix bloom filter, enabled by default.
--enable_rocksdb_prefix_filtering=true
# Whether or not to enable rocksdb's whole key bloom filter, disabled by default.
--enable_rocksdb_whole_key_filtering=false

############## rocksdb Options ##############
# rocksdb DBOptions in json, each name and value of option is a string, given as "option_name":"option_value" separated by comma
--rocksdb_db_options={}
# rocksdb ColumnFamilyOptions in json, each name and value of option is string, given as "option_name":"option_value" separated by comma
# Disable auto compaction
--rocksdb_column_family_options={"disable_auto_compactions":"true","write_buffer_size":"67108864","max_write_buffer_number":"4","max_bytes_for_level_base":"268435456"}
# rocksdb BlockBasedTableOptions in json, each name and value of option is string, given as "option_name":"option_value" separated by comma
--rocksdb_block_based_table_options={"block_size":"8192"}

############### misc ####################
# Whether to run queries concurrently in multiple threads
--query_concurrently=true
# Whether to remove outdated space data
--auto_remove_invalid_space=true
# Network IO threads number
--num_io_threads=16
# Max active connections for all networking threads. 0 means no limit.
# Max connections for each networking thread = num_max_connections / num_netio_threads
--num_max_connections=0
# Worker threads number to handle request
--num_worker_threads=32
# Maximum subtasks to run admin jobs concurrently
--max_concurrent_subtasks=10
# The rate limit in bytes when leader synchronizes snapshot data
--snapshot_part_rate_limit=10485760
# The amount of data sent in each batch when leader synchronizes snapshot data
--snapshot_batch_size=1048576
# The rate limit in bytes when leader synchronizes rebuilding index
--rebuild_index_part_rate_limit=4194304
# The amount of data sent in each batch when leader synchronizes rebuilding index
--rebuild_index_batch_size=1048576

########## memory tracker ##########
# trackable memory ratio (trackable_memory / (total_memory - untracked_reserved_memory) )
--memory_tracker_limit_ratio=0.8
# untracked reserved memory in Mib
--memory_tracker_untracked_reserved_memory_mb=50

# enable log memory tracker stats periodically
--memory_tracker_detail_log=false
# log memory tracker stats interval in milliseconds
--memory_tracker_detail_log_interval_ms=60000

# enable memory background purge (if jemalloc is used)
--memory_purge_enabled=true
# memory background purge interval in seconds
--memory_purge_interval_seconds=10
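Because disable_auto_compactions is set to true above, RocksDB will not compact in the background, so compaction has to be triggered manually per space when appropriate, typically after a bulk import. A sketch using the console container from step 3 and a hypothetical space name my_space:

# Trigger a manual compaction for one space (my_space is a placeholder)
docker exec -it console nebula-console -addr 10.0.19.110 -port 9669 -u root -p nebula -e 'USE my_space; SUBMIT JOB COMPACT;'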
  3. docker-compose file
    Node s110 is used as the example below; on the other nodes, just replace local_ip and the IP under healthcheck.test with that node's actual IP.
cat docker-compose.yml

version: '3.4'
services:
  metad:
    image: docker.io/vesoft/nebula-metad:v3.6.0
    environment:
      USER: root
      TZ: Asia/Shanghai
    container_name: meta
    command:
      - --meta_server_addrs=10.0.19.110:9559,10.0.19.111:9559,10.0.19.112:9559
      - --local_ip=10.0.19.110
      - --ws_ip=0.0.0.0
      - --port=9559
      - --ws_http_port=19559
      - --data_path=/data/meta
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://10.0.19.110:19559/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - "9559:9559"
      - "19559:19559"
      - "19560:19560"
    volumes:
      - ./data/meta:/data/meta
      - ./logs/meta:/logs
      - ./common/metad:/usr/local/nebula/etc
    restart: on-failure
    cap_add:
      - SYS_PTRACE
    network_mode: "host"
  storaged:
    image: docker.io/vesoft/nebula-storaged:v3.6.0
    environment:
      USER: root
      TZ:   Asia/Shanghai
    container_name: storage
    privileged: true
    command:
      - --meta_server_addrs=10.0.19.110:9559,10.0.19.111:9559,10.0.19.112:9559
      - --local_ip=10.0.19.110
      - --ws_ip=0.0.0.0
      - --port=9779
      - --ws_http_port=19779
      - --data_path=/data/storage
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
    depends_on:
      - metad
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://10.0.19.110:19779/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - "9779:9779"
      - "19779:19779"
      - "19780:19780"
    volumes:
      - ./data/storage:/data/storage
      - ./logs/storage:/logs
      - ./common/storaged:/usr/local/nebula/etc
    restart: on-failure
    cap_add:
      - SYS_PTRACE
    network_mode: "host"
  graphd:
    image: docker.io/vesoft/nebula-graphd:v3.6.0
    environment:
      USER: root
      TZ:   Asia/Shanghai
    container_name: graph
    privileged: true
    command:
      - --meta_server_addrs=10.0.19.110:9559,10.0.19.111:9559,10.0.19.112:9559
      - --port=9669
      - --local_ip=10.0.19.110
      - --ws_ip=0.0.0.0
      - --ws_http_port=19669
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
    depends_on:
      - storaged
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://10.0.19.110:19669/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - "9669:9669"
      - "19669:19669"
      - "19670:19670"
    volumes:
      - ./logs/graph:/logs
      - ./common/graphd:/usr/local/nebula/etc
    restart: on-failure
    cap_add:
      - SYS_PTRACE
    network_mode: "host"
  console:              ### This container registers the storage hosts with the Meta service; it does not need to be deployed on the other nodes
    image: docker.io/vesoft/nebula-console:v3.5
    entrypoint: ""
    command:
      - sh
      - -c
      - |
        for i in `seq 1 60`;do
          var=`nebula-console -addr graphd -port 9669 -u root -p nebula -e 'ADD HOSTS "10.0.19.110":9779,"10.0.19.111":9779,"10.0.19.112":9779'`;
          if [[ $$? == 0 ]];then
            break;
          fi;
          sleep 1;
          echo "retry to add hosts.";
        done && tail -f /dev/null;
    container_name: console
    depends_on:
      - graphd
    network_mode: "host"
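
Before bringing the stack up, the compose file can be validated and its fully resolved form printed with:

docker-compose config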

  4. Start the services

Run the following commands on each of the three nodes:

[root@nebula nebula-3.6]# docker-compose up -d
[+] Running 4/4
 ✔ Container meta     Started                                                                                                                                         0.1s
 ✔ Container storage  Started                                                                                                                                         0.2s
 ✔ Container graph    Started                                                                                                                                         0.3s
 ✔ Container console  Started                                                                                                                                         0.3s
[root@nebula nebula-3.6]# docker-compose ps -a
NAME                IMAGE                                     COMMAND                  SERVICE             CREATED             STATUS                            PORTS
console             docker.io/vesoft/nebula-console:v3.5      "sh -c 'for i in `se…"   console             2 days ago          Up 6 seconds
graph               docker.io/vesoft/nebula-graphd:v3.6.0     "/usr/local/nebula/b…"   graphd              2 days ago          Up 6 seconds (health: starting)
meta                docker.io/vesoft/nebula-metad:v3.6.0      "/usr/local/nebula/b…"   metad               2 days ago          Up 6 seconds (health: starting)
storage             docker.io/vesoft/nebula-storaged:v3.6.0   "/usr/local/nebula/b…"   storaged            2 days ago          Up 6 seconds (health: starting)
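Once the containers report healthy, the console container on s110 can confirm that all three storage hosts registered with the Meta service:

docker exec -it console nebula-console -addr 10.0.19.110 -port 9669 -u root -p nebula -e 'SHOW HOSTS;'

All three storaged hosts should be listed with Status ONLINE.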
  5. Verify the services
  • View the cluster in nebula-graph-studio. If the Status column shows ONLINE for every host, the cluster was set up successfully.