DolphinScheduler Cluster Setup

1. Extract the installation package

[hadoop@hadoop101 package]$ tar -zxvf apache-dolphinscheduler-1.3.6-bin.tar.gz -C /opt/software/

2. Create the installation directory

Create a dolphinscheduler-1.3.6 directory under the same path on every server (a loop sketch for the remaining nodes follows the mkdir command below).

Note: this is the DolphinScheduler installation target directory; it must not be the same as the directory you extracted the package into.

[hadoop@hadoop101 software]$ mkdir dolphinscheduler-1.3.6
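
Since the directory has to exist on every node, here is a minimal loop sketch for the remaining servers (assuming passwordless SSH as the hadoop user; the IPs are the ones used later in install_config.conf):

for host in 192.168.2.134 192.168.2.135 192.168.2.136 192.168.2.137 192.168.2.138; do
    ssh hadoop@"$host" "mkdir -p /opt/software/dolphinscheduler-1.3.6"
done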

Then change into the extracted apache-dolphinscheduler-1.3.6-bin directory:

[hadoop@hadoop101 software]$ cd apache-dolphinscheduler-1.3.6-bin/

3. Initialize the database

[hadoop@hadoop101 dolphinscheduler]$ mysql -uroot -pRoot123456#

mysql> CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'dolphinscheduler'@'%' IDENTIFIED BY 'Ds123456#';
mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'dolphinscheduler'@'localhost' IDENTIFIED BY 'Ds123456#';
mysql> flush privileges;
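
Note: the GRANT ... IDENTIFIED BY syntax above only works on MySQL 5.x. If you happen to be on MySQL 8.0 or later (an assumption, the MySQL version is not stated in this guide), create the user explicitly first and then grant, roughly like this:

mysql> CREATE USER 'dolphinscheduler'@'%' IDENTIFIED BY 'Ds123456#';
mysql> CREATE USER 'dolphinscheduler'@'localhost' IDENTIFIED BY 'Ds123456#';
mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'dolphinscheduler'@'%';
mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'dolphinscheduler'@'localhost';
mysql> flush privileges;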

4. Configure conf/datasource.properties

We are using MySQL, so comment out the default PostgreSQL settings and fill in the MySQL ones:

[hadoop@hadoop101 conf]$ vim datasource.properties

#mysql
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://192.168.2.128:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8
spring.datasource.username=dolphinscheduler
spring.datasource.password=Ds123456#
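
Optionally, verify that the account and URL above actually work before going further (a quick sanity-check sketch, assuming the mysql client is available on this node):

[hadoop@hadoop101 conf]$ mysql -h 192.168.2.128 -udolphinscheduler -p'Ds123456#' -e "show databases;"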

5. Go to the script directory and run the schema creation script

Note: copy the MySQL JDBC driver JAR into the lib directory first, otherwise the script will fail.

[hadoop@hadoop101 script]$ ./create-dolphinscheduler.sh

If the driver is missing, you may see an error like the following:
11:50:00.177 [main] ERROR com.alibaba.druid.pool.DruidDataSource - {dataSource-1} init error
java.sql.SQLException: com.mysql.jdbc.Driver
        at com.alibaba.druid.util.JdbcUtils.createDriver(JdbcUtils.java:620)
        at com.alibaba.druid.pool.DruidDataSource.init(DruidDataSource.java:874)
        at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:1300)
        at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:1296)
        at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:109)
        at org.apache.dolphinscheduler.dao.upgrade.UpgradeDao.getCurrentDbType(UpgradeDao.java:79)
        at org.apache.dolphinscheduler.dao.upgrade.UpgradeDao.<clinit>(UpgradeDao.java:48)
        at org.apache.dolphinscheduler.dao.upgrade.DolphinSchedulerManager.initUpgradeDao(DolphinSchedulerManager.java:37)
        at org.apache.dolphinscheduler.dao.upgrade.DolphinSchedulerManager.<init>(DolphinSchedulerManager.java:57)
        at org.apache.dolphinscheduler.dao.upgrade.shell.CreateDolphinScheduler.main(CreateDolphinScheduler.java:36)
        
Copy the MySQL driver JAR into the lib directory and run ./create-dolphinscheduler.sh again; this time the schema will be created successfully.
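
A minimal sketch of putting the driver in place (the JAR name below is only an example; use whichever MySQL Connector/J version you downloaded):

[hadoop@hadoop101 apache-dolphinscheduler-1.3.6-bin]$ cp /path/to/mysql-connector-java-5.1.47.jar lib/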

6. Go to conf/env and edit dolphinscheduler_env.sh

Note: do not rename SPARK_HOME1 and SPARK_HOME2 to SPARK_HOME, otherwise Spark tasks will fail later.

export HADOOP_HOME=/opt/software/hadoop-3.1.3
export HADOOP_CONF_DIR=/opt/software/hadoop-3.1.3/etc/hadoop
export SPARK_HOME2=/opt/software/spark-2.4.5
#export PYTHON_HOME=/opt/soft/python
export JAVA_HOME=/opt/software/jdk1.8.0_212
export HIVE_HOME=/opt/software/hive-3.1.2
#export FLINK_HOME=/opt/soft/flink
export DATAX_HOME=/opt/software/datax

export PATH=$HADOOP_HOME/bin:$SPARK_HOME2/bin:$JAVA_HOME/bin:$HIVE_HOME/bin:$DATAX_HOME/bin:$PATH
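
A quick way to confirm these paths before continuing (a sanity-check sketch, run from the apache-dolphinscheduler-1.3.6-bin directory; not part of the official procedure):

source conf/env/dolphinscheduler_env.sh
ls -d "$HADOOP_HOME" "$SPARK_HOME2" "$JAVA_HOME" "$HIVE_HOME" "$DATAX_HOME"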

7. Go to conf/config and edit install_config.conf

# NOTICE :  If the following config has special characters in the variable `.*[]^${}\+?|()@#&`, Please escape, for example, `[` escape to `\[`
# postgresql or mysql
dbtype="mysql"

# db config
# db address and port
dbhost="192.168.2.128:3306"

# db username
username="dolphinscheduler"

# database name
dbname="dolphinscheduler"

# db password
# NOTICE: if there are special characters, please use the \ to escape, for example, `[` escape to `\[`
# the # in our password is a special character, so it is escaped with a leading \
password="Ds123456\#"

# zk cluster
zkQuorum="192.168.2.134:2181,192.168.2.135:2181,192.168.2.136:2181,192.168.2.137:2181,192.168.2.138:2181"

# Note: the target installation path for dolphinscheduler, please not config as the same as the current path (pwd)
installPath="/opt/software/dolphinscheduler-1.3.6"

# deployment user
# Note: the deployment user needs to have sudo privileges and permissions to operate hdfs. If hdfs is enabled, the root directory needs to be created by itself
deployUser="hadoop"


# alert config
# mail server host
mailServerHost="smtp.exmail.qq.com"

# mail server port
# note: Different protocols and encryption methods correspond to different ports, when SSL/TLS is enabled, make sure the port is correct.
mailServerPort="25"

# sender
mailSender="xxxxxxxxxx"

# user
mailUser="xxxxxxxxxx"

# sender password
# note: The mail.passwd is email service authorization code, not the email login password.
mailPassword="xxxxxxxxxx"

# TLS mail protocol support
starttlsEnable="true"

# SSL mail protocol support
# only one of TLS and SSL can be in the true state.
sslEnable="false"

#note: sslTrust is the same as mailServerHost
sslTrust="smtp.exmail.qq.com"


# resource storage type: HDFS, S3, NONE
resourceStorageType="HDFS"

# if resourceStorageType is HDFS,defaultFS write namenode address,HA you need to put core-site.xml and hdfs-site.xml in the conf directory.
# if S3,write S3 address,HA,for example :s3a://dolphinscheduler,
# Note,s3 be sure to create the root directory /dolphinscheduler
defaultFS="hdfs://mycluster:9000"

# if resourceStorageType is S3, the following three configuration is required, otherwise please ignore
s3Endpoint="http://192.168.xx.xx:9010"
s3AccessKey="xxxxxxxxxx"
s3SecretKey="xxxxxxxxxx"

# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty (we do not use RM HA, so it stays empty)
yarnHaIps=""

# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
singleYarnIp="192.168.2.128"

# resource store on HDFS/S3 path, resource file will store to this hadoop hdfs path, self configuration, please make sure the directory exists on hdfs and have read write permissions. "/dolphinscheduler" is recommended
resourceUploadPath="/dolphinscheduler"

# who have permissions to create directory under HDFS/S3 root path
# Note: if kerberos is enabled, please config hdfsRootUser=
hdfsRootUser="hadoop"

# kerberos config
# whether kerberos starts, if kerberos starts, following four items need to config, otherwise please ignore
kerberosStartUp="false"
# kdc krb5 config file path
krb5ConfPath="$installPath/conf/krb5.conf"
# keytab username
keytabUserName="hdfs-mycluster@ESZ.COM"
# username keytab path
keytabPath="$installPath/conf/hdfs.headless.keytab"

# api server port
apiServerPort="12345"

# install hosts
# Note: install the scheduled hostname list. If it is pseudo-distributed, just write a pseudo-distributed hostname
ips="192.168.2.128,192.168.2.134,192.168.2.135,192.168.2.136,192.168.2.137,192.168.2.138"

# ssh port, default 22
# Note: if ssh port is not default, modify here
sshPort="22"

# run master machine
# Note: list of hosts hostname for deploying master
masters="192.168.2.134,192.168.2.135"

# run worker machine
# note: need to write the worker group name of each worker, the default value is "default"
workers="192.168.2.134:default,192.168.2.135:default,192.168.2.136:default,192.168.2.137:default,192.168.2.138:default"

# run alert machine
# note: list of machine hostnames for deploying alert server
alertServer="192.168.2.128"

# run api machine
# note: list of machine hostnames for deploying api server
apiServers="192.168.2.128"

Note: if Hadoop is running in HA mode, copy Hadoop's core-site.xml and hdfs-site.xml into the conf directory.
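
A sketch of copying the two files, assuming the HADOOP_CONF_DIR configured in step 6:

[hadoop@hadoop101 apache-dolphinscheduler-1.3.6-bin]$ cp /opt/software/hadoop-3.1.3/etc/hadoop/core-site.xml /opt/software/hadoop-3.1.3/etc/hadoop/hdfs-site.xml conf/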

8. Change the DolphinScheduler log location

Go to the conf directory and change the following property in every logback-{name}.xml file:

<property name="log.base" value="/home/hadoop/logs/ds_logs"/>
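
To apply this to all logback files at once and make sure the target directory exists, a sketch run from the conf directory (assuming each file's default property line looks like <property name="log.base" value="logs"/>; adjust the pattern if yours differs):

mkdir -p /home/hadoop/logs/ds_logs
sed -i 's#<property name="log.base" value=".*"/>#<property name="log.base" value="/home/hadoop/logs/ds_logs"/>#' logback-*.xml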

9. Run install.sh

[hadoop@hadoop101 dolphinscheduler]$ ./install.sh

10. Start DolphinScheduler

[hadoop@hadoop101 bin]$ ./start-all.sh
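
After start-all.sh finishes, you can check on each node that the daemons are up (a quick check sketch; the names are the DolphinScheduler 1.3.x daemon process names, and which of them appear depends on the roles assigned to that node):

jps | grep -E "MasterServer|WorkerServer|LoggerServer|AlertServer|ApiApplicationServer"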

11. Then log in to the web UI at http://hadoop:12345/dolphinscheduler

Username: admin
Password: dolphinscheduler123
