1. Extract the installation package
[hadoop@hadoop101 package]$ tar -zxvf apache-dolphinscheduler-1.3.6-bin.tar.gz -C /opt/software/
2. Create the installation directory
Create dolphinscheduler-1.3.6 under the same path on every server.
Note: this is the DolphinScheduler install directory; it must not be the same as the directory you extracted into.
[hadoop@hadoop101 software]$ mkdir dolphinscheduler-1.3.6
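To create it on every node in one go, a loop like this works (just a sketch, assuming passwordless SSH is set up for the hadoop user; the IPs match the ips list configured in step 7):
[hadoop@hadoop101 software]$ for h in 192.168.2.128 192.168.2.134 192.168.2.135 192.168.2.136 192.168.2.137 192.168.2.138; do ssh hadoop@$h "mkdir -p /opt/software/dolphinscheduler-1.3.6"; done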
Then go into the extracted directory:
[hadoop@hadoop101 software]$ cd apache-dolphinscheduler-1.3.6-bin/
3. Initialize the database
[hadoop@hadoop101 dolphinscheduler]$ mysql -uroot -pRoot123456#
mysql> CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'dolphinscheduler'@'%' IDENTIFIED BY 'Ds123456#';
mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'dolphinscheduler'@'localhost' IDENTIFIED BY 'Ds123456#';
mysql> flush privileges;
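Note: the GRANT ... IDENTIFIED BY form above is MySQL 5.x syntax. MySQL 8.x removed it, so on 8.x you would create the user first (a sketch for 8.x only):
mysql> CREATE USER 'dolphinscheduler'@'%' IDENTIFIED BY 'Ds123456#';
mysql> CREATE USER 'dolphinscheduler'@'localhost' IDENTIFIED BY 'Ds123456#';
mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'dolphinscheduler'@'%';
mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'dolphinscheduler'@'localhost';
mysql> flush privileges;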
4. Configure the conf/datasource.properties file
We are using MySQL, so comment out the default PostgreSQL settings and fill in the MySQL ones:
[hadoop@hadoop101 conf]$ vim datasource.properties
#mysql
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://192.168.2.128:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8
spring.datasource.username=dolphinscheduler
spring.datasource.password=Ds123456#
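Before moving on, it is worth confirming this account can actually connect from the deploy host (a quick check, assuming the mysql client is installed here):
[hadoop@hadoop101 conf]$ mysql -udolphinscheduler -pDs123456# -h192.168.2.128 dolphinscheduler -e "select 1;"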
5. Go into the script directory and run the init script
Note: first put the MySQL JDBC driver JAR into the lib directory of the extracted package, otherwise the script will fail.
[hadoop@hadoop101 script]$ ./create-dolphinscheduler.sh
If the driver is missing you may get an error like:
11:50:00.177 [main] ERROR com.alibaba.druid.pool.DruidDataSource - {dataSource-1} init error
java.sql.SQLException: com.mysql.jdbc.Driver
at com.alibaba.druid.util.JdbcUtils.createDriver(JdbcUtils.java:620)
at com.alibaba.druid.pool.DruidDataSource.init(DruidDataSource.java:874)
at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:1300)
at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:1296)
at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:109)
at org.apache.dolphinscheduler.dao.upgrade.UpgradeDao.getCurrentDbType(UpgradeDao.java:79)
at org.apache.dolphinscheduler.dao.upgrade.UpgradeDao.<clinit>(UpgradeDao.java:48)
at org.apache.dolphinscheduler.dao.upgrade.DolphinSchedulerManager.initUpgradeDao(DolphinSchedulerManager.java:37)
at org.apache.dolphinscheduler.dao.upgrade.DolphinSchedulerManager.<init>(DolphinSchedulerManager.java:57)
at org.apache.dolphinscheduler.dao.upgrade.shell.CreateDolphinScheduler.main(CreateDolphinScheduler.java:36)
Put the MySQL driver JAR into lib/, then run ./create-dolphinscheduler.sh again.
The database tables will then be created successfully.
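For reference, placing the driver looks like this (the JAR name/version is only an example; use the connector that matches your MySQL server):
[hadoop@hadoop101 apache-dolphinscheduler-1.3.6-bin]$ cp /path/to/mysql-connector-java-5.1.47.jar lib/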
6. Go into conf/env and edit the dolphinscheduler_env.sh file
Note: keep the variable names SPARK_HOME1/SPARK_HOME2 here; do not rename them to SPARK_HOME, or Spark tasks will fail later.
export HADOOP_HOME=/opt/software/hadoop-3.1.3
export HADOOP_CONF_DIR=/opt/software/hadoop-3.1.3/etc/hadoop
export SPARK_HOME2=/opt/software/spark-2.4.5
#export PYTHON_HOME=/opt/soft/python
export JAVA_HOME=/opt/software/jdk1.8.0_212
export HIVE_HOME=/opt/software/hive-3.1.2
#export FLINK_HOME=/opt/soft/flink
export DATAX_HOME=/opt/software/datax
export PATH=$HADOOP_HOME/bin:$SPARK_HOME2/bin:$JAVA_HOME/bin:$HIVE_HOME/bin:$DATAX_HOME/bin:$PATH
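A quick sanity check that every path configured above actually exists on this machine (a sketch):
[hadoop@hadoop101 env]$ source dolphinscheduler_env.sh
[hadoop@hadoop101 env]$ for d in $HADOOP_HOME $SPARK_HOME2 $JAVA_HOME $HIVE_HOME $DATAX_HOME; do [ -d "$d" ] || echo "missing: $d"; done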
7. Go into conf/config and edit the install_config.conf file
# NOTICE : If the following config has special characters in the variable `.*[]^${}\+?|()@#&`, Please escape, for example, `[` escape to `\[`
# postgresql or mysql
dbtype="mysql"
# db config
# db address and port
dbhost="192.168.2.128:3306"
# db username
username="dolphinscheduler"
# database name
dbname="dolphinscheduler"
# db password
# NOTICE: if there are special characters, please use the \ to escape, for example, `[` escape to `\[` (the # here is special, so we put a \ in front of it)
password="Ds123456\#"
# zk cluster
zkQuorum="192.168.2.134:2181,192.168.2.135:2181,192.168.2.136:2181,192.168.2.137:2181,192.168.2.138:2181"
# Note: the target installation path for dolphinscheduler, please not config as the same as the current path (pwd)
installPath="/opt/software/dolphinscheduler-1.3.6"
# deployment user
# Note: the deployment user needs to have sudo privileges and permissions to operate hdfs. If hdfs is enabled, the root directory needs to be created by itself
deployUser="hadoop"
# alert config
# mail server host
mailServerHost="smtp.exmail.qq.com"
# mail server port
# note: Different protocols and encryption methods correspond to different ports, when SSL/TLS is enabled, make sure the port is correct.
mailServerPort="25"
# sender
mailSender="xxxxxxxxxx"
# user
mailUser="xxxxxxxxxx"
# sender password
# note: The mail.passwd is email service authorization code, not the email login password.
mailPassword="xxxxxxxxxx"
# TLS mail protocol support
starttlsEnable="true"
# SSL mail protocol support
# only one of TLS and SSL can be in the true state.
sslEnable="false"
#note: sslTrust is the same as mailServerHost
sslTrust="smtp.exmail.qq.com"
# resource storage type: HDFS, S3, NONE
resourceStorageType="HDFS"
# if resourceStorageType is HDFS,defaultFS write namenode address,HA you need to put core-site.xml and hdfs-site.xml in the conf directory.
# if S3,write S3 address,HA,for example :s3a://dolphinscheduler,
# Note,s3 be sure to create the root directory /dolphinscheduler
defaultFS="hdfs://mycluster:9000"
# if resourceStorageType is S3, the following three configuration is required, otherwise please ignore
s3Endpoint="http://192.168.xx.xx:9010"
s3AccessKey="xxxxxxxxxx"
s3SecretKey="xxxxxxxxxx"
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty (we are not using HA, so it stays empty)
yarnHaIps=""
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
singleYarnIp="192.168.2.128"
# resource store on HDFS/S3 path, resource file will store to this hadoop hdfs path, self configuration, please make sure the directory exists on hdfs and have read write permissions. "/dolphinscheduler" is recommended
resourceUploadPath="/dolphinscheduler"
# who have permissions to create directory under HDFS/S3 root path
# Note: if kerberos is enabled, please config hdfsRootUser=
hdfsRootUser="hadoop"
# kerberos config
# whether kerberos starts, if kerberos starts, following four items need to config, otherwise please ignore
kerberosStartUp="false"
# kdc krb5 config file path
krb5ConfPath="$installPath/conf/krb5.conf"
# keytab username
keytabUserName="hdfs-mycluster@ESZ.COM"
# username keytab path
keytabPath="$installPath/conf/hdfs.headless.keytab"
# api server port
apiServerPort="12345"
# install hosts
# Note: hostname list of the machines to install DolphinScheduler on. If it is pseudo-distributed, just write one hostname
ips="192.168.2.128,192.168.2.134,192.168.2.135,192.168.2.136,192.168.2.137,192.168.2.138"
# ssh port, default 22
# Note: if ssh port is not default, modify here
sshPort="22"
# run master machine
# Note: list of hosts hostname for deploying master
masters="192.168.2.134,192.168.2.135"
# run worker machine
# note: need to write the worker group name of each worker, the default value is "default"
workers="192.168.2.134:default,192.168.2.135:default,192.168.2.136:default,192.168.2.137:default,192.168.2.138:default"
# run alert machine
# note: list of machine hostnames for deploying alert server
alertServer="192.168.2.128"
# run api machine
# note: list of machine hostnames for deploying api server
apiServers="192.168.2.128"
Note: if Hadoop is running in HA mode, copy Hadoop's core-site.xml and hdfs-site.xml into the conf directory.
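For example, with the HADOOP_CONF_DIR from step 6 (a sketch):
[hadoop@hadoop101 apache-dolphinscheduler-1.3.6-bin]$ cp /opt/software/hadoop-3.1.3/etc/hadoop/{core-site.xml,hdfs-site.xml} conf/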
8. Change where DolphinScheduler writes its logs
Go into the conf directory and change the following property in every logback-{name}.xml file:
<property name="log.base" value="/home/hadoop/logs/ds_logs"/>
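Rather than editing each file by hand, one sed over all the logback files does it (a sketch; double-check the result, and remember to create the log directory on every node):
[hadoop@hadoop101 conf]$ sed -i 's#<property name="log.base" value=".*"/>#<property name="log.base" value="/home/hadoop/logs/ds_logs"/>#' logback-*.xml
[hadoop@hadoop101 conf]$ mkdir -p /home/hadoop/logs/ds_logs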
9. Run install.sh
[hadoop@hadoop101 dolphinscheduler]$ ./install.sh
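To confirm the deployment actually landed on the other nodes, a quick spot check on one of them (example for a single worker):
[hadoop@hadoop101 dolphinscheduler]$ ssh 192.168.2.134 "ls /opt/software/dolphinscheduler-1.3.6"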
10. Start DolphinScheduler
[hadoop@hadoop101 bin]$ ./start-all.sh
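You can verify the services with jps on each node; in 1.3.x the process names should be MasterServer/WorkerServer (plus LoggerServer) on the master/worker hosts, and ApiApplicationServer/AlertServer on 192.168.2.128:
[hadoop@hadoop101 bin]$ jps | grep -E "MasterServer|WorkerServer|LoggerServer|AlertServer|ApiApplicationServer"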
11. Log in to the web UI at http://hadoop101:12345/dolphinscheduler
Username: admin
Password: dolphinscheduler123