bash - The difference between filename=${1:-/etc/hosts} and filename=/etc/hosts

This article walks through a short Bash snippet that checks whether a file is readable and non-empty before computing its MD5 checksum, and explains how Bash parameter expansion can be used to supply a default file path.

For example:

filename=/etc/hosts

if [ -r "$filename" ] && [ -s "$filename" ]; then
    md5sum "$filename"
else
    echo "$filename cannot be processed"
fi

filename=${1:-/etc/hosts}

if [ -r "$filename" ] && [ -s "$filename" ]; then
    md5sum "$filename"
else
    echo "$filename cannot be processed"
fi
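The practical difference is easiest to see by simulating the positional parameters directly. A minimal sketch (the `set --` calls stand in for how the script would be invoked):

```shell
# Simulate calling the script with one argument: $1 becomes /etc/passwd
set -- /etc/passwd

filename=/etc/hosts            # hard-coded: any argument is ignored
echo "$filename"               # /etc/hosts

filename=${1:-/etc/hosts}      # the argument wins when $1 is set and non-empty
echo "$filename"               # /etc/passwd

# Simulate calling the script with no arguments
set --
filename=${1:-/etc/hosts}      # falls back to the default
echo "$filename"               # /etc/hosts
```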

If $1 is unset or empty, /etc/hosts is assigned to the filename variable. The general rule is:

${parameter:-value} expands to value if parameter is unset or null; otherwise it expands to the value of parameter. The parameter itself is left unchanged.
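`:-` is one member of a family of related expansions; a quick sketch of the commonly confused ones (the variable names are illustrative). Note the colon: with it, both unset and empty trigger the fallback; without it, only unset does:

```shell
unset var
echo "${var:-fallback}"   # prints fallback; var itself stays unset
echo "${var:=fallback}"   # prints fallback AND assigns it to var
echo "$var"               # fallback
echo "${var:+alt}"        # prints alt, since var is now non-empty

empty=""
echo "${empty:-default}"  # default: empty counts as null when the colon is used
echo "${empty-default}"   # prints nothing: empty IS set, so no fallback applies
# "${var:?message}" would abort with an error if var were unset or empty
```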



For details, see the Bash Manual.

PLUGIN_VOLCENGINE_TOS_REGION: ${PLUGIN_VOLCENGINE_TOS_REGION:-} ENABLE_OTEL: ${ENABLE_OTEL:-false} OTLP_BASE_ENDPOINT: ${OTLP_BASE_ENDPOINT:-http://localhost:4318} OTLP_API_KEY: ${OTLP_API_KEY:-} OTEL_EXPORTER_OTLP_PROTOCOL: ${OTEL_EXPORTER_OTLP_PROTOCOL:-} OTEL_EXPORTER_TYPE: ${OTEL_EXPORTER_TYPE:-otlp} OTEL_SAMPLING_RATE: ${OTEL_SAMPLING_RATE:-0.1} OTEL_BATCH_EXPORT_SCHEDULE_DELAY: ${OTEL_BATCH_EXPORT_SCHEDULE_DELAY:-5000} OTEL_MAX_QUEUE_SIZE: ${OTEL_MAX_QUEUE_SIZE:-2048} OTEL_MAX_EXPORT_BATCH_SIZE: ${OTEL_MAX_EXPORT_BATCH_SIZE:-512} OTEL_METRIC_EXPORT_INTERVAL: ${OTEL_METRIC_EXPORT_INTERVAL:-60000} OTEL_BATCH_EXPORT_TIMEOUT: ${OTEL_BATCH_EXPORT_TIMEOUT:-10000} OTEL_METRIC_EXPORT_TIMEOUT: ${OTEL_METRIC_EXPORT_TIMEOUT:-30000} ALLOW_EMBED: ${ALLOW_EMBED:-false} QUEUE_MONITOR_THRESHOLD: ${QUEUE_MONITOR_THRESHOLD:-200} QUEUE_MONITOR_ALERT_EMAILS: ${QUEUE_MONITOR_ALERT_EMAILS:-} QUEUE_MONITOR_INTERVAL: ${QUEUE_MONITOR_INTERVAL:-30} services: # API service api: image: langgenius/dify-api:1.5.0 restart: always environment: # Use the shared environment variables. <<: *shared-api-worker-env # Startup mode, 'api' starts the API server. MODE: api SENTRY_DSN: ${API_SENTRY_DSN:-} SENTRY_TRACES_SAMPLE_RATE: ${API_SENTRY_TRACES_SAMPLE_RATE:-1.0} SENTRY_PROFILES_SAMPLE_RATE: ${API_SENTRY_PROFILES_SAMPLE_RATE:-1.0} PLUGIN_REMOTE_INSTALL_HOST: ${EXPOSE_PLUGIN_DEBUGGING_HOST:-localhost} PLUGIN_REMOTE_INSTALL_PORT: ${EXPOSE_PLUGIN_DEBUGGING_PORT:-5003} PLUGIN_MAX_PACKAGE_SIZE: ${PLUGIN_MAX_PACKAGE_SIZE:-52428800} INNER_API_KEY_FOR_PLUGIN: ${PLUGIN_DIFY_INNER_API_KEY:-QaHbTe77CtuXmsfyhR7+vRjI/+XbV1AaFy691iy+kGDv2Jvy0/eAh8Y1} depends_on: db: condition: service_healthy redis: condition: service_started volumes: # Mount the storage directory to the container, for storing user files. - ./volumes/app/storage:/app/api/storage networks: - ssrf_proxy_network - default # worker service # The Celery worker for processing the queue. 
worker: image: langgenius/dify-api:1.5.0 restart: always environment: # Use the shared environment variables. <<: *shared-api-worker-env # Startup mode, 'worker' starts the Celery worker for processing the queue. MODE: worker SENTRY_DSN: ${API_SENTRY_DSN:-} SENTRY_TRACES_SAMPLE_RATE: ${API_SENTRY_TRACES_SAMPLE_RATE:-1.0} SENTRY_PROFILES_SAMPLE_RATE: ${API_SENTRY_PROFILES_SAMPLE_RATE:-1.0} PLUGIN_MAX_PACKAGE_SIZE: ${PLUGIN_MAX_PACKAGE_SIZE:-52428800} INNER_API_KEY_FOR_PLUGIN: ${PLUGIN_DIFY_INNER_API_KEY:-QaHbTe77CtuXmsfyhR7+vRjI/+XbV1AaFy691iy+kGDv2Jvy0/eAh8Y1} depends_on: db: condition: service_healthy redis: condition: service_started volumes: # Mount the storage directory to the container, for storing user files. - ./volumes/app/storage:/app/api/storage networks: - ssrf_proxy_network - default # Frontend web application. web: image: langgenius/dify-web:1.5.0 restart: always environment: CONSOLE_API_URL: ${CONSOLE_API_URL:-} APP_API_URL: ${APP_API_URL:-} SENTRY_DSN: ${WEB_SENTRY_DSN:-} NEXT_TELEMETRY_DISABLED: ${NEXT_TELEMETRY_DISABLED:-0} TEXT_GENERATION_TIMEOUT_MS: ${TEXT_GENERATION_TIMEOUT_MS:-60000} CSP_WHITELIST: ${CSP_WHITELIST:-} ALLOW_EMBED: ${ALLOW_EMBED:-false} MARKETPLACE_API_URL: ${MARKETPLACE_API_URL:-https://marketplace.dify.ai} MARKETPLACE_URL: ${MARKETPLACE_URL:-https://marketplace.dify.ai} TOP_K_MAX_VALUE: ${TOP_K_MAX_VALUE:-} INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH: ${INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH:-} PM2_INSTANCES: ${PM2_INSTANCES:-2} LOOP_NODE_MAX_COUNT: ${LOOP_NODE_MAX_COUNT:-100} MAX_TOOLS_NUM: ${MAX_TOOLS_NUM:-10} MAX_PARALLEL_LIMIT: ${MAX_PARALLEL_LIMIT:-10} MAX_ITERATIONS_NUM: ${MAX_ITERATIONS_NUM:-99} ENABLE_WEBSITE_JINAREADER: ${ENABLE_WEBSITE_JINAREADER:-true} ENABLE_WEBSITE_FIRECRAWL: ${ENABLE_WEBSITE_FIRECRAWL:-true} ENABLE_WEBSITE_WATERCRAWL: ${ENABLE_WEBSITE_WATERCRAWL:-true} # The postgres database. 
db: image: postgres:15-alpine restart: always environment: POSTGRES_USER: ${POSTGRES_USER:-postgres} POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-difyai123456} POSTGRES_DB: ${POSTGRES_DB:-dify} PGDATA: ${PGDATA:-/var/lib/postgresql/data/pgdata} command: > postgres -c 'max_connections=${POSTGRES_MAX_CONNECTIONS:-100}' -c 'shared_buffers=${POSTGRES_SHARED_BUFFERS:-128MB}' -c 'work_mem=${POSTGRES_WORK_MEM:-4MB}' -c 'maintenance_work_mem=${POSTGRES_MAINTENANCE_WORK_MEM:-64MB}' -c 'effective_cache_size=${POSTGRES_EFFECTIVE_CACHE_SIZE:-4096MB}' volumes: - ./volumes/db/data:/var/lib/postgresql/data healthcheck: test: [ 'CMD', 'pg_isready', '-h', 'db', '-U', '${PGUSER:-postgres}', '-d', '${POSTGRES_DB:-dify}' ] interval: 1s timeout: 3s retries: 60 # The redis cache. redis: image: redis:6-alpine restart: always environment: REDISCLI_AUTH: ${REDIS_PASSWORD:-difyai123456} volumes: # Mount the redis data directory to the container. - ./volumes/redis/data:/data # Set the redis password when startup redis server. command: redis-server --requirepass ${REDIS_PASSWORD:-difyai123456} healthcheck: test: [ 'CMD', 'redis-cli', 'ping' ] # The DifySandbox sandbox: image: langgenius/dify-sandbox:0.2.12 restart: always environment: # The DifySandbox configurations # Make sure you are changing this key for your deployment with a strong key. # You can generate a strong key using `openssl rand -base64 42`. 
API_KEY: ${SANDBOX_API_KEY:-dify-sandbox} GIN_MODE: ${SANDBOX_GIN_MODE:-release} WORKER_TIMEOUT: ${SANDBOX_WORKER_TIMEOUT:-15} ENABLE_NETWORK: ${SANDBOX_ENABLE_NETWORK:-true} HTTP_PROXY: ${SANDBOX_HTTP_PROXY:-http://ssrf_proxy:3128} HTTPS_PROXY: ${SANDBOX_HTTPS_PROXY:-http://ssrf_proxy:3128} SANDBOX_PORT: ${SANDBOX_PORT:-8194} PIP_MIRROR_URL: ${PIP_MIRROR_URL:-} volumes: - ./volumes/sandbox/dependencies:/dependencies - ./volumes/sandbox/conf:/conf healthcheck: test: [ 'CMD', 'curl', '-f', 'http://localhost:8194/health' ] networks: - ssrf_proxy_network # plugin daemon plugin_daemon: image: langgenius/dify-plugin-daemon:0.1.2-local restart: always environment: # Use the shared environment variables. <<: *shared-api-worker-env DB_DATABASE: ${DB_PLUGIN_DATABASE:-dify_plugin} SERVER_PORT: ${PLUGIN_DAEMON_PORT:-5002} SERVER_KEY: ${PLUGIN_DAEMON_KEY:-lYkiYYT6owG+71oLerGzA7GXCgOT++6ovaezWAjpCjf+Sjc3ZtU+qUEi} MAX_PLUGIN_PACKAGE_SIZE: ${PLUGIN_MAX_PACKAGE_SIZE:-52428800} PPROF_ENABLED: ${PLUGIN_PPROF_ENABLED:-false} DIFY_INNER_API_URL: ${PLUGIN_DIFY_INNER_API_URL:-http://api:5001} DIFY_INNER_API_KEY: ${PLUGIN_DIFY_INNER_API_KEY:-QaHbTe77CtuXmsfyhR7+vRjI/+XbV1AaFy691iy+kGDv2Jvy0/eAh8Y1} PLUGIN_REMOTE_INSTALLING_HOST: ${PLUGIN_DEBUGGING_HOST:-0.0.0.0} PLUGIN_REMOTE_INSTALLING_PORT: ${PLUGIN_DEBUGGING_PORT:-5003} PLUGIN_WORKING_PATH: ${PLUGIN_WORKING_PATH:-/app/storage/cwd} FORCE_VERIFYING_SIGNATURE: ${FORCE_VERIFYING_SIGNATURE:-true} PYTHON_ENV_INIT_TIMEOUT: ${PLUGIN_PYTHON_ENV_INIT_TIMEOUT:-120} PLUGIN_MAX_EXECUTION_TIMEOUT: ${PLUGIN_MAX_EXECUTION_TIMEOUT:-600} PIP_MIRROR_URL: ${PIP_MIRROR_URL:-} PLUGIN_STORAGE_TYPE: ${PLUGIN_STORAGE_TYPE:-local} PLUGIN_STORAGE_LOCAL_ROOT: ${PLUGIN_STORAGE_LOCAL_ROOT:-/app/storage} PLUGIN_INSTALLED_PATH: ${PLUGIN_INSTALLED_PATH:-plugin} PLUGIN_PACKAGE_CACHE_PATH: ${PLUGIN_PACKAGE_CACHE_PATH:-plugin_packages} PLUGIN_MEDIA_CACHE_PATH: ${PLUGIN_MEDIA_CACHE_PATH:-assets} PLUGIN_STORAGE_OSS_BUCKET: ${PLUGIN_STORAGE_OSS_BUCKET:-} 
S3_USE_AWS_MANAGED_IAM: ${PLUGIN_S3_USE_AWS_MANAGED_IAM:-false} S3_USE_AWS: ${PLUGIN_S3_USE_AWS:-} S3_ENDPOINT: ${PLUGIN_S3_ENDPOINT:-} S3_USE_PATH_STYLE: ${PLUGIN_S3_USE_PATH_STYLE:-false} AWS_ACCESS_KEY: ${PLUGIN_AWS_ACCESS_KEY:-} AWS_SECRET_KEY: ${PLUGIN_AWS_SECRET_KEY:-} AWS_REGION: ${PLUGIN_AWS_REGION:-} AZURE_BLOB_STORAGE_CONNECTION_STRING: ${PLUGIN_AZURE_BLOB_STORAGE_CONNECTION_STRING:-} AZURE_BLOB_STORAGE_CONTAINER_NAME: ${PLUGIN_AZURE_BLOB_STORAGE_CONTAINER_NAME:-} TENCENT_COS_SECRET_KEY: ${PLUGIN_TENCENT_COS_SECRET_KEY:-} TENCENT_COS_SECRET_ID: ${PLUGIN_TENCENT_COS_SECRET_ID:-} TENCENT_COS_REGION: ${PLUGIN_TENCENT_COS_REGION:-} ALIYUN_OSS_REGION: ${PLUGIN_ALIYUN_OSS_REGION:-} ALIYUN_OSS_ENDPOINT: ${PLUGIN_ALIYUN_OSS_ENDPOINT:-} ALIYUN_OSS_ACCESS_KEY_ID: ${PLUGIN_ALIYUN_OSS_ACCESS_KEY_ID:-} ALIYUN_OSS_ACCESS_KEY_SECRET: ${PLUGIN_ALIYUN_OSS_ACCESS_KEY_SECRET:-} ALIYUN_OSS_AUTH_VERSION: ${PLUGIN_ALIYUN_OSS_AUTH_VERSION:-v4} ALIYUN_OSS_PATH: ${PLUGIN_ALIYUN_OSS_PATH:-} VOLCENGINE_TOS_ENDPOINT: ${PLUGIN_VOLCENGINE_TOS_ENDPOINT:-} VOLCENGINE_TOS_ACCESS_KEY: ${PLUGIN_VOLCENGINE_TOS_ACCESS_KEY:-} VOLCENGINE_TOS_SECRET_KEY: ${PLUGIN_VOLCENGINE_TOS_SECRET_KEY:-} VOLCENGINE_TOS_REGION: ${PLUGIN_VOLCENGINE_TOS_REGION:-} ports: - "${EXPOSE_PLUGIN_DEBUGGING_PORT:-5003}:${PLUGIN_DEBUGGING_PORT:-5003}" volumes: - ./volumes/plugin_daemon:/app/storage depends_on: db: condition: service_healthy # ssrf_proxy server # for more information, please refer to # https://docs.dify.ai/learn-more/faq/install-faq#18-why-is-ssrf-proxy-needed%3F ssrf_proxy: image: ubuntu/squid:latest restart: always volumes: - ./ssrf_proxy/squid.conf.template:/etc/squid/squid.conf.template - ./ssrf_proxy/docker-entrypoint.sh:/docker-entrypoint-mount.sh entrypoint: [ 'sh', '-c', "cp /docker-entrypoint-mount.sh /docker-entrypoint.sh && sed -i 's/\r$$//' /docker-entrypoint.sh && chmod +x /docker-entrypoint.sh && /docker-entrypoint.sh" ] environment: # pls clearly modify the squid env vars to fit your 
network environment. HTTP_PORT: ${SSRF_HTTP_PORT:-3128} COREDUMP_DIR: ${SSRF_COREDUMP_DIR:-/var/spool/squid} REVERSE_PROXY_PORT: ${SSRF_REVERSE_PROXY_PORT:-8194} SANDBOX_HOST: ${SSRF_SANDBOX_HOST:-sandbox} SANDBOX_PORT: ${SANDBOX_PORT:-8194} networks: - ssrf_proxy_network - default # Certbot service # use `docker-compose --profile certbot up` to start the certbot service. certbot: image: certbot/certbot profiles: - certbot volumes: - ./volumes/certbot/conf:/etc/letsencrypt - ./volumes/certbot/www:/var/www/html - ./volumes/certbot/logs:/var/log/letsencrypt - ./volumes/certbot/conf/live:/etc/letsencrypt/live - ./certbot/update-cert.template.txt:/update-cert.template.txt - ./certbot/docker-entrypoint.sh:/docker-entrypoint.sh environment: - CERTBOT_EMAIL=${CERTBOT_EMAIL} - CERTBOT_DOMAIN=${CERTBOT_DOMAIN} - CERTBOT_OPTIONS=${CERTBOT_OPTIONS:-} entrypoint: [ '/docker-entrypoint.sh' ] command: [ 'tail', '-f', '/dev/null' ] # The nginx reverse proxy. # used for reverse proxying the API service and Web service. 
nginx: image: nginx:latest restart: always volumes: - ./nginx/nginx.conf.template:/etc/nginx/nginx.conf.template - ./nginx/proxy.conf.template:/etc/nginx/proxy.conf.template - ./nginx/https.conf.template:/etc/nginx/https.conf.template - ./nginx/conf.d:/etc/nginx/conf.d - ./nginx/docker-entrypoint.sh:/docker-entrypoint-mount.sh - ./nginx/ssl:/etc/ssl # cert dir (legacy) - ./volumes/certbot/conf/live:/etc/letsencrypt/live # cert dir (with certbot container) - ./volumes/certbot/conf:/etc/letsencrypt - ./volumes/certbot/www:/var/www/html entrypoint: [ 'sh', '-c', "cp /docker-entrypoint-mount.sh /docker-entrypoint.sh && sed -i 's/\r$$//' /docker-entrypoint.sh && chmod +x /docker-entrypoint.sh && /docker-entrypoint.sh" ] environment: NGINX_SERVER_NAME: ${NGINX_SERVER_NAME:-_} NGINX_HTTPS_ENABLED: ${NGINX_HTTPS_ENABLED:-false} NGINX_SSL_PORT: ${NGINX_SSL_PORT:-443} NGINX_PORT: ${NGINX_PORT:-80} # You're required to add your own SSL certificates/keys to the `./nginx/ssl` directory # and modify the env vars below in .env if HTTPS_ENABLED is true. NGINX_SSL_CERT_FILENAME: ${NGINX_SSL_CERT_FILENAME:-dify.crt} NGINX_SSL_CERT_KEY_FILENAME: ${NGINX_SSL_CERT_KEY_FILENAME:-dify.key} NGINX_SSL_PROTOCOLS: ${NGINX_SSL_PROTOCOLS:-TLSv1.1 TLSv1.2 TLSv1.3} NGINX_WORKER_PROCESSES: ${NGINX_WORKER_PROCESSES:-auto} NGINX_CLIENT_MAX_BODY_SIZE: ${NGINX_CLIENT_MAX_BODY_SIZE:-15M} NGINX_KEEPALIVE_TIMEOUT: ${NGINX_KEEPALIVE_TIMEOUT:-65} NGINX_PROXY_READ_TIMEOUT: ${NGINX_PROXY_READ_TIMEOUT:-3600s} NGINX_PROXY_SEND_TIMEOUT: ${NGINX_PROXY_SEND_TIMEOUT:-3600s} NGINX_ENABLE_CERTBOT_CHALLENGE: ${NGINX_ENABLE_CERTBOT_CHALLENGE:-false} CERTBOT_DOMAIN: ${CERTBOT_DOMAIN:-} depends_on: - api - web ports: - '${EXPOSE_NGINX_PORT:-80}:${NGINX_PORT:-80}' - '${EXPOSE_NGINX_SSL_PORT:-443}:${NGINX_SSL_PORT:-443}' # The Weaviate vector store. weaviate: image: semitechnologies/weaviate:1.19.0 profiles: - '' - weaviate restart: always volumes: # Mount the Weaviate data directory to the con tainer. 
- ./volumes/weaviate:/var/lib/weaviate environment: # The Weaviate configurations # You can refer to the [Weaviate](https://weaviate.io/developers/weaviate/config-refs/env-vars) documentation for more information. PERSISTENCE_DATA_PATH: ${WEAVIATE_PERSISTENCE_DATA_PATH:-/var/lib/weaviate} QUERY_DEFAULTS_LIMIT: ${WEAVIATE_QUERY_DEFAULTS_LIMIT:-25} AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: ${WEAVIATE_AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED:-false} DEFAULT_VECTORIZER_MODULE: ${WEAVIATE_DEFAULT_VECTORIZER_MODULE:-none} CLUSTER_HOSTNAME: ${WEAVIATE_CLUSTER_HOSTNAME:-node1} AUTHENTICATION_APIKEY_ENABLED: ${WEAVIATE_AUTHENTICATION_APIKEY_ENABLED:-true} AUTHENTICATION_APIKEY_ALLOWED_KEYS: ${WEAVIATE_AUTHENTICATION_APIKEY_ALLOWED_KEYS:-WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih} AUTHENTICATION_APIKEY_USERS: ${WEAVIATE_AUTHENTICATION_APIKEY_USERS:-hello@dify.ai} AUTHORIZATION_ADMINLIST_ENABLED: ${WEAVIATE_AUTHORIZATION_ADMINLIST_ENABLED:-true} AUTHORIZATION_ADMINLIST_USERS: ${WEAVIATE_AUTHORIZATION_ADMINLIST_USERS:-hello@dify.ai} # Qdrant vector store. # (if used, you need to set VECTOR_STORE to qdrant in the api & worker service.) qdrant: image: langgenius/qdrant:v1.7.3 profiles: - qdrant restart: always volumes: - ./volumes/qdrant:/qdrant/storage environment: QDRANT_API_KEY: ${QDRANT_API_KEY:-difyai123456} # The Couchbase vector store. 
couchbase-server: build: ./couchbase-server profiles: - couchbase restart: always environment: - CLUSTER_NAME=dify_search - COUCHBASE_ADMINISTRATOR_USERNAME=${COUCHBASE_USER:-Administrator} - COUCHBASE_ADMINISTRATOR_PASSWORD=${COUCHBASE_PASSWORD:-password} - COUCHBASE_BUCKET=${COUCHBASE_BUCKET_NAME:-Embeddings} - COUCHBASE_BUCKET_RAMSIZE=512 - COUCHBASE_RAM_SIZE=2048 - COUCHBASE_EVENTING_RAM_SIZE=512 - COUCHBASE_INDEX_RAM_SIZE=512 - COUCHBASE_FTS_RAM_SIZE=1024 hostname: couchbase-server container_name: couchbase-server working_dir: /opt/couchbase stdin_open: true tty: true entrypoint: [ "" ] command: sh -c "/opt/couchbase/init/init-cbserver.sh" volumes: - ./volumes/couchbase/data:/opt/couchbase/var/lib/couchbase/data healthcheck: # ensure bucket was created before proceeding test: [ "CMD-SHELL", "curl -s -f -u Administrator:password http://localhost:8091/pools/default/buckets | grep -q '\\[{' || exit 1" ] interval: 10s retries: 10 start_period: 30s timeout: 10s # The pgvector vector database. pgvector: image: pgvector/pgvector:pg16 profiles: - pgvector restart: always environment: PGUSER: ${PGVECTOR_PGUSER:-postgres} # The password for the default postgres user. POSTGRES_PASSWORD: ${PGVECTOR_POSTGRES_PASSWORD:-difyai123456} # The name of the default postgres database. 
POSTGRES_DB: ${PGVECTOR_POSTGRES_DB:-dify} # postgres data directory PGDATA: ${PGVECTOR_PGDATA:-/var/lib/postgresql/data/pgdata} # pg_bigm module for full text search PG_BIGM: ${PGVECTOR_PG_BIGM:-false} PG_BIGM_VERSION: ${PGVECTOR_PG_BIGM_VERSION:-1.2-20240606} volumes: - ./volumes/pgvector/data:/var/lib/postgresql/data - ./pgvector/docker-entrypoint.sh:/docker-entrypoint.sh entrypoint: [ '/docker-entrypoint.sh' ] healthcheck: test: [ 'CMD', 'pg_isready' ] interval: 1s timeout: 3s retries: 30 # get image from https://www.vastdata.com.cn/ vastbase: image: vastdata/vastbase-vector profiles: - vastbase restart: always environment: - VB_DBCOMPATIBILITY=PG - VB_DB=dify - VB_USERNAME=dify - VB_PASSWORD=Difyai123456 ports: - '5434:5432' volumes: - ./vastbase/lic:/home/vastbase/vastbase/lic - ./vastbase/data:/home/vastbase/data - ./vastbase/backup:/home/vastbase/backup - ./vastbase/backup_log:/home/vastbase/backup_log healthcheck: test: [ 'CMD', 'pg_isready' ] interval: 1s timeout: 3s retries: 30 # pgvecto-rs vector store pgvecto-rs: image: tensorchord/pgvecto-rs:pg16-v0.3.0 profiles: - pgvecto-rs restart: always environment: PGUSER: ${PGVECTOR_PGUSER:-postgres} # The password for the default postgres user. POSTGRES_PASSWORD: ${PGVECTOR_POSTGRES_PASSWORD:-difyai123456} # The name of the default postgres database. 
POSTGRES_DB: ${PGVECTOR_POSTGRES_DB:-dify} # postgres data directory PGDATA: ${PGVECTOR_PGDATA:-/var/lib/postgresql/data/pgdata} volumes: - ./volumes/pgvecto_rs/data:/var/lib/postgresql/data healthcheck: test: [ 'CMD', 'pg_isready' ] interval: 1s timeout: 3s retries: 30 # Chroma vector database chroma: image: ghcr.io/chroma-core/chroma:0.5.20 profiles: - chroma restart: always volumes: - ./volumes/chroma:/chroma/chroma environment: CHROMA_SERVER_AUTHN_CREDENTIALS: ${CHROMA_SERVER_AUTHN_CREDENTIALS:-difyai123456} CHROMA_SERVER_AUTHN_PROVIDER: ${CHROMA_SERVER_AUTHN_PROVIDER:-chromadb.auth.token_authn.TokenAuthenticationServerProvider} IS_PERSISTENT: ${CHROMA_IS_PERSISTENT:-TRUE} # OceanBase vector database oceanbase: image: oceanbase/oceanbase-ce:4.3.5-lts container_name: oceanbase profiles: - oceanbase restart: always volumes: - ./volumes/oceanbase/data:/root/ob - ./volumes/oceanbase/conf:/root/.obd/cluster - ./volumes/oceanbase/init.d:/root/boot/init.d environment: OB_MEMORY_LIMIT: ${OCEANBASE_MEMORY_LIMIT:-6G} OB_SYS_PASSWORD: ${OCEANBASE_VECTOR_PASSWORD:-difyai123456} OB_TENANT_PASSWORD: ${OCEANBASE_VECTOR_PASSWORD:-difyai123456} OB_CLUSTER_NAME: ${OCEANBASE_CLUSTER_NAME:-difyai} OB_SERVER_IP: 127.0.0.1 MODE: mini ports: - "${OCEANBASE_VECTOR_PORT:-2881}:2881" healthcheck: test: [ 'CMD-SHELL', 'obclient -h127.0.0.1 -P2881 -uroot@test -p$${OB_TENANT_PASSWORD} -e "SELECT 1;"' ] interval: 10s retries: 30 start_period: 30s timeout: 10s # Oracle vector database oracle: image: container-registry.oracle.com/database/free:latest profiles: - oracle restart: always volumes: - source: oradata type: volume target: /opt/oracle/oradata - ./startupscripts:/opt/oracle/scripts/startup environment: ORACLE_PWD: ${ORACLE_PWD:-Dify123456} ORACLE_CHARACTERSET: ${ORACLE_CHARACTERSET:-AL32UTF8} # Milvus vector database services etcd: container_name: milvus-etcd image: quay.io/coreos/etcd:v3.5.5 profiles: - milvus environment: ETCD_AUTO_COMPACTION_MODE: 
${ETCD_AUTO_COMPACTION_MODE:-revision} ETCD_AUTO_COMPACTION_RETENTION: ${ETCD_AUTO_COMPACTION_RETENTION:-1000} ETCD_QUOTA_BACKEND_BYTES: ${ETCD_QUOTA_BACKEND_BYTES:-4294967296} ETCD_SNAPSHOT_COUNT: ${ETCD_SNAPSHOT_COUNT:-50000} volumes: - ./volumes/milvus/etcd:/etcd command: etcd -advertise-client-urls=http://127.0.0.1:2379 -listen-client-urls http://0.0.0.0:2379 --data-dir /etcd healthcheck: test: [ 'CMD', 'etcdctl', 'endpoint', 'health' ] interval: 30s timeout: 20s retries: 3 networks: - milvus minio: container_name: milvus-minio image: minio/minio:RELEASE.2023-03-20T20-16-18Z profiles: - milvus environment: MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY:-minioadmin} MINIO_SECRET_KEY: ${MINIO_SECRET_KEY:-minioadmin} volumes: - ./volumes/milvus/minio:/minio_data command: minio server /minio_data --console-address ":9001" healthcheck: test: [ 'CMD', 'curl', '-f', 'http://localhost:9000/minio/health/live' ] interval: 30s timeout: 20s retries: 3 networks: - milvus milvus-standalone: container_name: milvus-standalone image: milvusdb/milvus:v2.5.0-beta profiles: - milvus command: [ 'milvus', 'run', 'standalone' ] environment: ETCD_ENDPOINTS: ${ETCD_ENDPOINTS:-etcd:2379} MINIO_ADDRESS: ${MINIO_ADDRESS:-minio:9000} common.security.authorizationEnabled: ${MILVUS_AUTHORIZATION_ENABLED:-true} volumes: - ./volumes/milvus/milvus:/var/lib/milvus healthcheck: test: [ 'CMD', 'curl', '-f', 'http://localhost:9091/healthz' ] interval: 30s start_period: 90s timeout: 20s retries: 3 depends_on: - etcd - minio ports: - 19530:19530 - 9091:9091 networks: - milvus # Opensearch vector database opensearch: container_name: opensearch image: opensearchproject/opensearch:latest profiles: - opensearch environment: discovery.type: ${OPENSEARCH_DISCOVERY_TYPE:-single-node} bootstrap.memory_lock: ${OPENSEARCH_BOOTSTRAP_MEMORY_LOCK:-true} OPENSEARCH_JAVA_OPTS: -Xms${OPENSEARCH_JAVA_OPTS_MIN:-512m} -Xmx${OPENSEARCH_JAVA_OPTS_MAX:-1024m} OPENSEARCH_INITIAL_ADMIN_PASSWORD: 
${OPENSEARCH_INITIAL_ADMIN_PASSWORD:-Qazwsxedc!@#123} ulimits: memlock: soft: ${OPENSEARCH_MEMLOCK_SOFT:--1} hard: ${OPENSEARCH_MEMLOCK_HARD:--1} nofile: soft: ${OPENSEARCH_NOFILE_SOFT:-65536} hard: ${OPENSEARCH_NOFILE_HARD:-65536} volumes: - ./volumes/opensearch/data:/usr/share/opensearch/data networks: - opensearch-net opensearch-dashboards: container_name: opensearch-dashboards image: opensearchproject/opensearch-dashboards:latest profiles: - opensearch environment: OPENSEARCH_HOSTS: '["https://opensearch:9200"]' volumes: - ./volumes/opensearch/opensearch_dashboards.yml:/usr/share/opensearch-dashboards/config/opensearch_dashboards.yml networks: - opensearch-net depends_on: - opensearch # opengauss vector database. opengauss: image: opengauss/opengauss:7.0.0-RC1 profiles: - opengauss privileged: true restart: always environment: GS_USERNAME: ${OPENGAUSS_USER:-postgres} GS_PASSWORD: ${OPENGAUSS_PASSWORD:-Dify@123} GS_PORT: ${OPENGAUSS_PORT:-6600} GS_DB: ${OPENGAUSS_DATABASE:-dify} volumes: - ./volumes/opengauss/data:/var/lib/opengauss/data healthcheck: test: [ "CMD-SHELL", "netstat -lntp | grep tcp6 > /dev/null 2>&1" ] interval: 10s timeout: 10s retries: 10 ports: - ${OPENGAUSS_PORT:-6600}:${OPENGAUSS_PORT:-6600} # MyScale vector database myscale: container_name: myscale image: myscale/myscaledb:1.6.4 profiles: - myscale restart: always tty: true volumes: - ./volumes/myscale/data:/var/lib/clickhouse - ./volumes/myscale/log:/var/log/clickhouse-server - ./volumes/myscale/config/users.d/custom_users_config.xml:/etc/clickhouse-server/users.d/custom_users_config.xml ports: - ${MYSCALE_PORT:-8123}:${MYSCALE_PORT:-8123} # Matrixone vector store. 
matrixone: hostname: matrixone image: matrixorigin/matrixone:2.1.1 profiles: - matrixone restart: always volumes: - ./volumes/matrixone/data:/mo-data ports: - ${MATRIXONE_PORT:-6001}:${MATRIXONE_PORT:-6001} # https://www.elastic.co/guide/en/elasticsearch/reference/current/settings.html # https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#docker-prod-prerequisites elasticsearch: image: docker.elastic.co/elasticsearch/elasticsearch:8.14.3 container_name: elasticsearch profiles: - elasticsearch - elasticsearch-ja restart: always volumes: - ./elasticsearch/docker-entrypoint.sh:/docker-entrypoint-mount.sh - dify_es01_data:/usr/share/elasticsearch/data environment: ELASTIC_PASSWORD: ${ELASTICSEARCH_PASSWORD:-elastic} VECTOR_STORE: ${VECTOR_STORE:-} cluster.name: dify-es-cluster node.name: dify-es0 discovery.type: single-node xpack.license.self_generated.type: basic xpack.security.enabled: 'true' xpack.security.enrollment.enabled: 'false' xpack.security.http.ssl.enabled: 'false' ports: - ${ELASTICSEARCH_PORT:-9200}:9200 deploy: resources: limits: memory: 2g entrypoint: [ 'sh', '-c', "sh /docker-entrypoint-mount.sh" ] healthcheck: test: [ 'CMD', 'curl', '-s', 'http://localhost:9200/_cluster/health?pretty' ] interval: 30s timeout: 10s retries: 50 # https://www.elastic.co/guide/en/kibana/current/docker.html # https://www.elastic.co/guide/en/kibana/current/settings.html kibana: image: docker.elastic.co/kibana/kibana:8.14.3 container_name: kibana profiles: - elasticsearch depends_on: - elasticsearch restart: always environment: XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY: d1a66dfd-c4d3-4a0a-8290-2abcb83ab3aa NO_PROXY: localhost,127.0.0.1,elasticsearch,kibana XPACK_SECURITY_ENABLED: 'true' XPACK_SECURITY_ENROLLMENT_ENABLED: 'false' XPACK_SECURITY_HTTP_SSL_ENABLED: 'false' XPACK_FLEET_ISAIRGAPPED: 'true' I18N_LOCALE: zh-CN SERVER_PORT: '5601' ELASTICSEARCH_HOSTS: http://elasticsearch:9200 ports: - ${KIBANA_PORT:-5601}:5601 healthcheck: test: [ 'CMD-SHELL', 
'curl -s http://localhost:5601 >/dev/null || exit 1' ] interval: 30s timeout: 10s retries: 3 # unstructured . # (if used, you need to set ETL_TYPE to Unstructured in the api & worker service.) unstructured: image: downloads.unstructured.io/unstructured-io/unstructured-api:latest profiles: - unstructured restart: always volumes: - ./volumes/unstructured:/app/data networks: # create a network between sandbox, api and ssrf_proxy, and can not access outside. ssrf_proxy_network: driver: bridge internal: true milvus: driver: bridge opensearch-net: driver: bridge internal: true volumes: oradata: dify_es01_data:
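Every `KEY: ${VAR:-default}` entry in the compose file above relies on the same expansion rule this article describes: if the variable is unset or empty, the default after `:-` is substituted; otherwise the variable's own value is used. A minimal shell sketch of the behavior (the variable name `NGINX_PORT` is borrowed from the compose file purely for illustration):

```shell
# ${VAR:-default} falls back to the default when VAR is unset OR empty.
unset NGINX_PORT
echo "${NGINX_PORT:-80}"    # unset: falls back to 80

NGINX_PORT=""
echo "${NGINX_PORT:-80}"    # set but empty: still falls back to 80

NGINX_PORT=8080
echo "${NGINX_PORT:-80}"    # set and non-empty: uses 8080

# Compare ${VAR-default} (no colon), which falls back ONLY when VAR is unset:
NGINX_PORT=""
echo "${NGINX_PORT-80}"     # set but empty: expands to the empty string
```

Docker Compose interpolation supports the same `${VAR:-default}` and `${VAR-default}` forms in the compose file, which is why the file can ship working defaults while still letting a `.env` file or the environment override any value.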