DolphinScheduler 3.2.0 Quick Setup

1. Environment:

1. Software versions

CentOS 7
jdk 1.8.0_212
mysql 5.7
zookeeper 3.8.1
dolphinscheduler 3.2.0
Packages to prepare: jdk 1.8.0_212 and dolphinscheduler 3.2.0; everything else is deployed with Docker.

2. Machine information

IP               Hostname   Services deployed
192.168.xxx.110  ds         api-server, master-server, worker-server, alert-server, zookeeper
192.168.xxx.111  hadoop001  worker-server, MySQL

2. Prerequisites:

    1. Configure the hostname-to-IP mappings on both machines

[root@hadoop001 ~]# cat /etc/hostname
hadoop001
[root@hadoop001 ~]# cat /etc/hosts
192.168.xxx.110 ds
192.168.xxx.111 hadoop001

 

2. Set up passwordless SSH between the machines

cd /root/.ssh
ssh-keygen -t rsa    # press Enter at every prompt to accept the defaults
ssh-copy-id hadoop001
ssh-copy-id ds
# run the same commands on the other machine
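Once the keys are distributed, it is worth confirming that both nodes really accept non-interactive logins, since install.sh relies on passwordless SSH. A minimal check, assuming the two hostnames used in this guide:

```shell
# Verify passwordless SSH from the current node to every cluster node.
# BatchMode forces a failure instead of a password prompt.
for host in ds hadoop001; do
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true; then
    echo "passwordless SSH to $host: OK"
  else
    echo "passwordless SSH to $host: FAILED (re-run ssh-copy-id $host)"
  fi
done
```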

3. Installation:

1. Install the JDK

[root@ds software]# tar -zxvf jdk-8u212-linux-x64.tar.gz -C /opt/module/
  1. Configure the JDK environment variables
  2. Create a new /etc/profile.d/my_env.sh file
[root@ds software]# vim /etc/profile.d/my_env.sh
#JAVA_HOME
export JAVA_HOME=/opt/module/jdk1.8.0_212
export PATH=$PATH:$JAVA_HOME/bin
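After reloading the profile with `source /etc/profile.d/my_env.sh`, a quick sanity check that JAVA_HOME really points at a JDK can save a confusing server-startup failure later. A small helper sketch (the helper name is illustrative; the path is the one configured above):

```shell
# Return 0 if the given JAVA_HOME contains an executable java binary.
check_java_home() {
  [ -x "$1/bin/java" ]
}

# usage, with the JAVA_HOME configured above:
# check_java_home /opt/module/jdk1.8.0_212 && java -version
```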

2. Install MySQL

    Deploy Docker
# install the yum-config-manager utility
yum -y install yum-utils


# the Aliyun yum mirror is recommended:
#yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo


# install docker-ce
yum install -y docker-ce
# start Docker now and enable it at boot
systemctl enable --now docker
docker --version
    Deploy docker-compose (the normal installation needs outbound internet access, so here it is installed from the yum repo instead)
yum install -y docker-compose-plugin
ln -s /usr/libexec/docker/cli-plugins/docker-compose /usr/local/bin/docker-compose
docker-compose version
    Install and configure MySQL
cd /opt/module/

git clone https://gitee.com/hadoop-bigdata/docker-compose-mysql.git

cd docker-compose-mysql

# create network
docker network create hadoop-network

# deploy
docker-compose -f docker-compose.yaml up -d

# check the containers
docker-compose -f docker-compose.yaml ps

# enter the container
docker exec -it mysql-test bash

# log in to mysql
mysql -uroot -p
# enter the password: 123456

# create the database
create database dolphinscheduler character set utf8;

CREATE USER 'dolphinscheduler'@'%' IDENTIFIED BY 'dolphinscheduler@123';
GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'dolphinscheduler'@'%';
FLUSH PRIVILEGES;
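Before moving on, you can confirm the new account actually works over the network. This sketch assumes a mysql client is available on the node you run it from and uses the host, port, and credentials from this guide (the same ones that go into SPRING_DATASOURCE_URL later):

```shell
# Log in as the dolphinscheduler user and confirm the database is visible.
mysql -h hadoop001 -P 3306 -udolphinscheduler -p'dolphinscheduler@123' \
  -e "SHOW DATABASES LIKE 'dolphinscheduler';"
```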

3. Install the registry center, ZooKeeper

git clone https://gitee.com/hadoop-bigdata/docker-compose-zookeeper.git

cd docker-compose-zookeeper

# deploy
docker-compose -f docker-compose.yaml up -d

# check the containers
docker-compose -f docker-compose.yaml ps
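Before wiring DolphinScheduler to the ensemble, you can probe each node with ZooKeeper's four-letter `ruok` command. The ports below are the ones used later in REGISTRY_ZOOKEEPER_CONNECT_STRING; whether `ruok` answers depends on the image's `4lw.commands.whitelist` setting, so treat this as a sketch:

```shell
# Probe each ZooKeeper node; a healthy server replies "imok".
for port in 31181 32181 33181; do
  resp=$(echo ruok | nc -w 2 ds "$port")
  echo "zookeeper on ds:$port -> ${resp:-no response}"
done
```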

4. Deploy DolphinScheduler 3.2.0

    Extract and rename
# upload the tarball to the ds node
tar -zxvf apache-dolphinscheduler-3.2.0-bin.tar.gz
mv apache-dolphinscheduler-3.2.0-bin dolphinscheduler
    Edit the installation config script install_env.sh
vim dolphinscheduler/bin/env/install_env.sh
Change it to the following:
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#


# ---------------------------------------------------------
# INSTALL MACHINE
# ---------------------------------------------------------
# A comma separated list of machine hostname or IP would be installed DolphinScheduler,
# including master, worker, api, alert. If you want to deploy in pseudo-distributed
# mode, just write a pseudo-distributed hostname
# Example for hostnames: ips="ds1,ds2,ds3,ds4,ds5", Example for IPs: ips="192.168.8.1,192.168.8.2,192.168.8.3,192.168.8.4,192.168.8.5"
ips="ds,hadoop001"


# Port of SSH protocol, default value is 22. For now we only support same port in all `ips` machine
# modify it if you use different ssh port
sshPort="22"


# A comma separated list of machine hostname or IP would be installed Master server, it
# must be a subset of configuration `ips`.
# Example for hostnames: masters="ds1,ds2", Example for IPs: masters="192.168.8.1,192.168.8.2"
masters="ds"


# A comma separated list of machine <hostname>:<workerGroup> or <IP>:<workerGroup>.All hostname or IP must be a
# subset of configuration `ips`, And workerGroup have default value as `default`, but we recommend you declare behind the hosts
# Example for hostnames: workers="ds1:default,ds2:default,ds3:default", Example for IPs: workers="192.168.8.1:default,192.168.8.2:default,192.168.8.3:default"
workers="ds:default,hadoop001:default"


# A comma separated list of machine hostname or IP would be installed Alert server, it
# must be a subset of configuration `ips`.
# Example for hostname: alertServer="ds3", Example for IP: alertServer="192.168.8.3"
alertServer="ds"


# A comma separated list of machine hostname or IP would be installed API server, it
# must be a subset of configuration `ips`.
# Example for hostname: apiServers="ds1", Example for IP: apiServers="192.168.8.1"
apiServers="ds"


# The directory to install DolphinScheduler for all machine we config above. It will automatically be created by `install.sh` script if not exists.
# Do not set this configuration same as the current path (pwd). Do not add quotes to it if you using related path.
installPath=/tmp/dolphinscheduler


# The user to deploy DolphinScheduler for all machine we config above. For now user must create by yourself before running `install.sh`
# script. The user needs to have sudo privileges and permissions to operate hdfs. If hdfs is enabled than the root directory needs
# to be created by this user
deployUser=root


# The root of zookeeper, for now DolphinScheduler default registry server is zookeeper.
zkRoot=/dolphinscheduler





    Edit the environment variable script (it sits next to install_env.sh)
vim dolphinscheduler/bin/env/dolphinscheduler_env.sh
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#


# JAVA_HOME, will use it to start DolphinScheduler server
export JAVA_HOME=/opt/module/jdk1.8.0_212


# Database related configuration, set database type, username and password
export DATABASE=${DATABASE:-mysql}
export SPRING_PROFILES_ACTIVE=${DATABASE}
export SPRING_DATASOURCE_URL="jdbc:mysql://hadoop001:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&useSSL=false"
export SPRING_DATASOURCE_USERNAME=dolphinscheduler
export SPRING_DATASOURCE_PASSWORD=dolphinscheduler@123


# DolphinScheduler server related configuration
export SPRING_CACHE_TYPE=${SPRING_CACHE_TYPE:-none}
export SPRING_JACKSON_TIME_ZONE=${SPRING_JACKSON_TIME_ZONE:-Asia/Shanghai}
export MASTER_FETCH_COMMAND_NUM=${MASTER_FETCH_COMMAND_NUM:-10}


# Registry center configuration, determines the type and link of the registry center
export REGISTRY_TYPE=${REGISTRY_TYPE:-zookeeper}
export REGISTRY_ZOOKEEPER_CONNECT_STRING="ds:31181,ds:32181,ds:33181"

# NOTE: the environment variables below must be added as the admin user under
# Security Center -> Environment Management; in 3.2.0, setting them here does
# not take effect. They are listed for reference only.
# Tasks related configurations, need to change the configuration if you use the related tasks.
export HADOOP_HOME=${HADOOP_HOME:-/opt/soft/hadoop}
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/opt/soft/hadoop/etc/hadoop}
export SPARK_HOME1=/opt/module/spark
export SPARK_HOME2=${SPARK_HOME2:-/opt/soft/spark2}
export PYTHON_LAUNCHER=/usr/bin/python
export HIVE_HOME=${HIVE_HOME:-/opt/soft/hive}
export FLINK_HOME=${FLINK_HOME:-/opt/soft/flink}
export DATAX_LAUNCHER=/opt/module/datax/bin/datax.py
export SEATUNNEL_HOME=${SEATUNNEL_HOME:-/opt/soft/seatunnel}
export CHUNJUN_HOME=${CHUNJUN_HOME:-/opt/soft/chunjun}


export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_LAUNCHER:$JAVA_HOME/bin:$HIVE_HOME/bin:$FLINK_HOME/bin:$DATAX_LAUNCHER:$SEATUNNEL_HOME/bin:$CHUNJUN_HOME/bin:$PATH
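install.sh ships dolphinscheduler_env.sh to every node, so a stray quote here breaks all services at once. A cheap guard is to syntax-check the file before installing; `bash -n` parses a script without executing it (the helper name is illustrative):

```shell
# Parse a shell script without executing it; prints OK only if it is valid.
validate_env() {
  bash -n "$1" && echo "syntax OK: $1"
}

# usage:
# validate_env dolphinscheduler/bin/env/dolphinscheduler_env.sh
```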
Copy the MySQL driver jar into each service's libs directory (run these from the dolphinscheduler directory)
cp /opt/software/mysql-connector-java-8.0.16.jar tools/libs/
cp tools/libs/mysql-connector-java-8.0.16.jar master-server/libs/
cp tools/libs/mysql-connector-java-8.0.16.jar worker-server/libs/
cp tools/libs/mysql-connector-java-8.0.16.jar alert-server/libs/
cp tools/libs/mysql-connector-java-8.0.16.jar api-server/libs/
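The five cp commands above can be collapsed into a loop, which also makes it harder to forget a service directory. A sketch; the function name is illustrative, and the jar and DS home paths are the ones used in this guide:

```shell
# Copy the MySQL driver jar into every DolphinScheduler service's libs/ dir.
copy_driver() {
  local jar=$1 ds_home=$2 svc
  for svc in tools master-server worker-server alert-server api-server; do
    cp "$jar" "$ds_home/$svc/libs/" || return 1
  done
}

# usage:
# copy_driver /opt/software/mysql-connector-java-8.0.16.jar /opt/module/dolphinscheduler
```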
Initialize the database
sh tools/bin/upgrade-schema.sh
Copy the configured directory to the other node
[root@ds bin]# scp -r /opt/module/dolphinscheduler hadoop001:/opt/module/
Run the installer
sh bin/install.sh
Log in to the web UI:
http://192.168.xxx.110:12345/dolphinscheduler/ui/login
Default account / password: admin / dolphinscheduler123
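If the page does not load, a curl from the shell helps distinguish a network problem from a service problem. This just checks that the api-server answers on the port from this guide (xxx is the elided octet of your IP):

```shell
# Expect HTTP 200 from the login page once api-server has fully started.
curl -s -o /dev/null -w '%{http_code}\n' \
  http://192.168.xxx.110:12345/dolphinscheduler/ui/login
```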

4. Other issues:

  1. How to make the DolphinScheduler resource center use HDFS

In 3.2.0 the resource center defaults to local storage. To switch it to HDFS, edit api-server/conf/common.properties and worker-server/conf/common.properties and set the resource storage to HDFS:

resource.storage.type=HDFS 
resource.storage.upload.base.path=/dolphinscheduler
resource.hdfs.root.user=root
resource.hdfs.fs.defaultFS=hdfs://hadoop001:8020
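After changing the properties and restarting the servers, it is worth confirming that the base path exists and the configured hdfs root user can write to it. A sketch using the values above (requires the hdfs client on the calling host):

```shell
# Create the resource base path (idempotent) and verify it is listable.
hdfs dfs -mkdir -p hdfs://hadoop001:8020/dolphinscheduler
hdfs dfs -ls hdfs://hadoop001:8020/
```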

   

2. How to configure a SQL Server data source

In 3.2.0, a SQL Server data source needs an extra JDBC connection parameter added in the data source connection settings:

{"trustServerCertificate":"true"}

   

3. After setup, the master/worker nodes register with the wrong host IP

This usually happens when several IPs on a machine map to the same hostname. List the host's IPs with:
hostname -I
Then find the network interface (via ifconfig) that carries the IP you want, set the interface preference in common.properties for the api-server, alert-server, worker-server, and master-server, and reinstall:
dolphin.scheduler.network.interface.preferred=eth0
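To find the value for that property, you need the name of the interface that carries the IP DolphinScheduler should advertise. A small helper sketch that filters the output of `ip -o -4 addr show` (the helper name is illustrative; dots in the IP are treated as regex wildcards, which is good enough here):

```shell
# Print the interface whose IPv4 address matches $1.
# Reads `ip -o -4 addr show` formatted lines on stdin.
iface_for_ip() {
  awk -v ip="$1" '$4 ~ ("^" ip "/") {print $2}'
}

# usage:
# ip -o -4 addr show | iface_for_ip 192.168.xxx.110
```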
