Setting up DolphinScheduler

The services are laid out across the nodes as follows:

Service          | node01 | node02 | node03
-----------------|--------|--------|-----------
master           |        |        |
worker/logServer |        |        |
alertServer      |        |        |
apiServer        | √      |        |
ui               |        |        | √ (Nginx)

(apiServer on node01 and the Nginx-served ui on node03 follow from the deployment steps below; the master/worker/alertServer placement is set in install.sh.)

Create the deployment user and passwordless SSH

Create a deployment user on every machine that will run the scheduler. Because the worker service executes jobs via sudo -u {linux-user}, the deployment user needs passwordless sudo. We simply reuse the hadoop user.
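A minimal sketch of that setup, assuming the hadoop user already exists on hadoop001–003 (run ssh-copy-id once per target host):

[root@hadoop001 ~]# echo 'hadoop ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/hadoop
[hadoop@hadoop001 ~]$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
[hadoop@hadoop001 ~]$ ssh-copy-id hadoop@hadoop001
[hadoop@hadoop001 ~]$ ssh-copy-id hadoop@hadoop002
[hadoop@hadoop001 ~]$ ssh-copy-id hadoop@hadoop003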

Create the metadata database and account in MySQL, relaxing the password-validation policy first so the short password is accepted:

mysql> set global validate_password_policy=0;
Query OK, 0 rows affected (0.00 sec)

mysql> set global validate_password_mixed_case_count=0;
Query OK, 0 rows affected (0.00 sec)

mysql> set global validate_password_number_count=3;
Query OK, 0 rows affected (0.00 sec)

mysql> set global validate_password_special_char_count=0;
Query OK, 0 rows affected (0.00 sec)

mysql> set global validate_password_length=3;
Query OK, 0 rows affected (0.00 sec)

mysql> CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
Query OK, 1 row affected (0.00 sec)

mysql> CREATE USER 'ds'@'%' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.01 sec)

mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'ds'@'%' IDENTIFIED BY '123456';
Query OK, 0 rows affected, 1 warning (0.01 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

Install pip and the kazoo module (DolphinScheduler's deployment scripts depend on kazoo for ZooKeeper access):

[hadoop@hadoop001 ~]$ curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py

[hadoop@hadoop001 ~]$ sudo python get-pip.py

[hadoop@hadoop001 ~]$ pip --version

[hadoop@hadoop001 ~]$ sudo pip install kazoo
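A quick import check to confirm kazoo is visible to Python (illustrative, not part of the original steps):

[hadoop@hadoop001 ~]$ python -c 'import kazoo; print(kazoo.__file__)'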

Extract the backend package on hadoop001 and make its scripts executable:

[hadoop@hadoop001 software]$ tar -zxvf apache-dolphinscheduler-incubating-1.2.0-hdp3.1.4.0-dolphinscheduler-backend-bin.tar.gz -C ~/app/

[hadoop@hadoop001 software]$ cd ../app/
[hadoop@hadoop001 app]$ ls
apache-dolphinscheduler-incubating-1.2.0-hdp3.1.4.0-dolphinscheduler-backend-bin  spark-2.3.2-bin-hadoop2.7
apache-kylin-2.6.4-bin                                                            spark-2.3.2-bin-hadoop2.7.tgz
kylin

[hadoop@hadoop001 app]$ ln -s apache-dolphinscheduler-incubating-1.2.0-hdp3.1.4.0-dolphinscheduler-backend-bin ds-backend

[hadoop@hadoop001 app]$ chmod ugo+x ds-backend/bin/*

[hadoop@hadoop001 app]$ chmod ugo+x ds-backend/script/*

[hadoop@hadoop001 app]$ chmod ugo+x ds-backend/install.sh

[hadoop@hadoop001 app]$ chmod ugo+x /home/hadoop/app/ds-backend/conf/env/.dolphinscheduler_env.sh

Point the data-access configuration at MySQL:

[hadoop@hadoop001 app]$ vi /home/hadoop/app/ds-backend/conf/application-dao.properties

#spring.datasource.driver-class-name=org.postgresql.Driver
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
#spring.datasource.url=jdbc:postgresql://192.168.xx.xx:5432/dolphinscheduler
spring.datasource.url=jdbc:mysql://hadoop001:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8
# mysql
#spring.datasource.driver-class-name=com.mysql.jdbc.Driver
#spring.datasource.url=jdbc:mysql://192.168.xx.xx:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8
spring.datasource.username=ds
spring.datasource.password=123456

Initialize the database

The backend package does not ship a MySQL driver, so link one into lib/ first:

[hadoop@hadoop001 app]$ cd /home/hadoop/app/ds-backend/lib/

[hadoop@hadoop001 lib]$ ln -s /usr/share/java/mysql-connector-java-8.0.18.jar mysql-connector-java-8.0.18.jar

[hadoop@hadoop001 lib]$ ll mysql-connector-java-8.0.18.jar
lrwxrwxrwx 1 hadoop hadoop 47 Jan 24 17:34 mysql-connector-java-8.0.18.jar -> /usr/share/java/mysql-connector-java-8.0.18.jar

[hadoop@hadoop001 lib]$ cd /home/hadoop/app/ds-backend

[hadoop@hadoop001 ds-backend]$ sh ./script/create-dolphinscheduler.sh
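If the script runs cleanly, the DolphinScheduler metadata tables now exist; a quick look (plain MySQL, not part of the original steps):

mysql> show tables from dolphinscheduler;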

[hadoop@hadoop001 ds-backend]$ vi /home/hadoop/app/ds-backend/conf/env/.dolphinscheduler_env.sh

export HADOOP_HOME=/usr/hdp/current/hadoop-client
export HADOOP_CONF_DIR=/etc/hadoop/conf
#export SPARK_HOME1=/opt/soft/spark1
export SPARK_HOME2=/usr/hdp/current/spark2-client
export PYTHON_HOME=/usr/bin/python
export JAVA_HOME=/usr/local/jdk
export HIVE_HOME=/usr/hdp/current/hive-client
#export FLINK_HOME=/home/hadoop/app/flink
#export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$PATH
export PATH=$HADOOP_HOME/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH

Edit install.sh to describe the cluster:

[hadoop@hadoop001 ds-backend]$ vi /home/hadoop/app/ds-backend/install.sh
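The file is long, but only the database, ZooKeeper, install-path and host-assignment variables normally need editing. A sketch with this cluster's values (variable names as in the 1.2.0 install.sh, including its misspelled passowrd; the masters/workers/alertServer assignments here are assumptions, and installPath is inferred from the /ds paths used later):

dbtype="mysql"
dbhost="hadoop001:3306"
dbname="dolphinscheduler"
username="ds"
passowrd="123456"
zkQuorum="hadoop001:2181,hadoop002:2181,hadoop003:2181"
installPath="/ds"
deployUser="hadoop"
ips="hadoop001,hadoop002,hadoop003"
masters="hadoop001"
workers="hadoop002,hadoop003"
alertServer="hadoop002"
apiServers="hadoop001"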

Copy the Hadoop client configs so the resource center can reach HDFS:

[hadoop@hadoop001 ds-backend]$ cp /etc/hadoop/conf/core-site.xml /home/hadoop/app/ds-backend/conf/

[hadoop@hadoop001 ds-backend]$ cp /etc/hadoop/conf/hdfs-site.xml /home/hadoop/app/ds-backend/conf/

[hadoop@hadoop001 ds-backend]$ vi /home/hadoop/app/ds-backend/bin/dolphinscheduler-daemon.sh
[hadoop@hadoop001 ds-backend]$ vi /home/hadoop/app/ds-backend/script/dolphinscheduler-daemon.sh

export JAVA_HOME=/usr/local/jdk
Adjust the JVM heap to fit your machine:
export DOLPHINSCHEDULER_OPTS="-server -Xmx2g -Xms2g -Xss512k -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70"

 

[hadoop@hadoop001 ds-backend]$ cd /home/hadoop/app/ds-backend/

[hadoop@hadoop001 ds-backend]$ ./install.sh

Success.
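Before moving on, it is worth confirming the processes are up; on 1.2.x, jps should show MasterServer, WorkerServer, LoggerServer, AlertServer and ApiApplicationServer on whichever hosts install.sh assigned them to:

[hadoop@hadoop001 ds-backend]$ jps | grep -E 'Master|Worker|Logger|Alert|ApiApplication'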

Next, deploy the frontend on hadoop003:

[hadoop@hadoop003 software]$ tar -zxvf apache-dolphinscheduler-incubating-1.2.0-hdp3.1.4.0-dolphinscheduler-front-bin.tar.gz -C ../app/

[hadoop@hadoop003 software]$ cd ../app/

[hadoop@hadoop003 app]$ cd apache-dolphinscheduler-incubating-1.2.0-hdp3.1.4.0-dolphinscheduler-front-bin/

[hadoop@hadoop003 apache-dolphinscheduler-incubating-1.2.0-hdp3.1.4.0-dolphinscheduler-front-bin]$ ls
DISCLAIMER-WIP  dist  install-dolphinscheduler-ui.sh  LICENSE  licenses  NOTICE

[hadoop@hadoop003 app]$ sudo yum install nginx -y

[hadoop@hadoop003 apache-dolphinscheduler-incubating-1.2.0-hdp3.1.4.0-dolphinscheduler-front-bin]$ sudo cp -r dist /usr/share/nginx/html/

Configure /etc/nginx/nginx.conf:

#user nginx;
worker_processes  1;
error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;
events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    '$status $body_bytes_sent "$http_referer" '
    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;
    client_max_body_size 2000M;
    keepalive_timeout  65;
    server {
        listen       8080;
        server_name  localhost;
        #charset utf-8;
        #access_log  logs/host.access.log  main;
        location / {
            root /usr/share/nginx/html/build;
            index index.html index.htm;
            try_files $uri /index.html;
            add_header Cache-Control "private, no-store, no-cache, must-revalidate, proxy-revalidate";
        }
        location /keeper/ {
            proxy_pass http://localhost:5090/v1/keeper/;
        }
        location /keeper-webSocket/ {
            proxy_pass http://localhost:8901/;
            proxy_read_timeout 60s;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }

    server {
        listen 8888;  # port the UI is served on
        server_name localhost;
        #charset koi8-r;
        #access_log /var/log/nginx/host.access.log main;
        location / {
            root /usr/share/nginx/html/dist; # the dist directory extracted above (adjust to your path)
            index  index.html index.htm;
        }
        location /dolphinscheduler {
            proxy_pass http://hadoop001:12345; # apiServer address (adjust as needed)
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header x_real_ip $remote_addr;
            proxy_set_header remote_addr $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_http_version 1.1;
            proxy_connect_timeout 4s;
            proxy_read_timeout 30s;
            proxy_send_timeout 12s;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
        #error_page  404              /404.html;
        # redirect server error pages to the static page /50x.html
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/html;
        }
    }
    include /etc/nginx/conf.d/*.conf;
}
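Validate the configuration and (re)start Nginx (standard nginx/systemctl usage, not from the original post):

[hadoop@hadoop003 app]$ sudo nginx -t
[hadoop@hadoop003 app]$ sudo systemctl restart nginx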

Opening the browser, the UI never reached the login page, which was baffling at first.

So, check the logs:

[hadoop@hadoop001 logs]$ vim dolphinscheduler-api-server.2020-01-24_17.0.log

Spot the clue? The jetty-runner jar in lib/ turned out to be the culprit, so move it out of the classpath:

[hadoop@hadoop001 logs]$ mv /ds/lib/jetty-runner-9.3.20.v20170531.jar /ds/lib/jetty-runner-9.3.20.v20170531.jar.bak

Restart DolphinScheduler and the login page finally loads.

Once in, change the admin password first (the default 1.2.0 account is admin / dolphinscheduler123).

Create a queue

Queues created here are used as the "queue" parameter when submitting Spark, MapReduce and similar programs (a queue cannot be deleted once created).
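The name is handed to YARN as the scheduler queue; conceptually, a Spark task submitted under the olap queue created below behaves like this (illustrative command; the class and jar are placeholders):

[hadoop@hadoop001 ~]$ spark-submit --master yarn --queue olap --class com.example.Main app.jar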

Create a tenant

A tenant corresponds to a Linux user: it is the account the worker uses to submit jobs. If the user does not exist on Linux, the worker creates it when executing the task script.
Tenant code: the Linux username; it must be unique.
Each new tenant gets a directory under $hdfsPath (default: "/dolphinscheduler") on HDFS, which holds the files and UDF functions the tenant uploads.
Tenant name: an alias for the tenant code; duplicates are allowed but best avoided to prevent confusion.
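Once a tenant has uploaded resources or run a task, its directory shows up on HDFS; a quick look, assuming the default $hdfsPath:

[hadoop@hadoop001 ~]$ hdfs dfs -ls /dolphinscheduler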

Create a new user, then log in as that user. When adding a MySQL data source for it, use these JDBC connection parameters:

{"useUnicode":"true","characterEncoding":"utf8","autoReconnect":"true","failOverReadOnly":"false","noAccessToProcedureBodies":"true","zeroDateTimeBehavior":"convertToNull","tinyInt1isBit":"false"}
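Connector/J appends these as query-string parameters, so the effective JDBC URL for the test_schema1 database used below looks like:

jdbc:mysql://hadoop001:3306/test_schema1?useUnicode=true&characterEncoding=utf8&autoReconnect=true&failOverReadOnly=false&noAccessToProcedureBodies=true&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false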

Create a project

Add an olap queue

Upload the jar package

Prepare a test table and data in MySQL:

CREATE TABLE `user` (
    `id` bigint(19) NOT NULL,
    `name` varchar(64) NOT NULL,
    `cardBank` varchar(64) NOT NULL,
    `phone` varchar(64) NOT NULL,
    `city` varchar(256) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;


INSERT INTO test_schema1.user (id, name, cardBank, phone, city) VALUES (5, "zhangsan", "CMB", "18074546423", "BEIJING");
INSERT INTO test_schema1.user (id, name, cardBank, phone, city) VALUES (6, "lisi", "ICBC", "18074546423", "BEIJING");
INSERT INTO test_schema1.user (id, name, cardBank, phone, city) VALUES (7, "wangwu", "CMB", "18074546423", "BEIJING");
INSERT INTO test_schema1.user (id, name, cardBank, phone, city) VALUES (8, "zhaoliu", "CMB", "18074546423", "BEIJING");
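A quick sanity check on the inserted rows (plain SQL, not part of the original post):

mysql> SELECT id, name, cardBank, phone, city FROM test_schema1.user;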
