How FlinkX starts up and runs: from which class to which class (a beginner's walkthrough)

The launcher entry point lives in the package com.dtstack.flinkx.launcher.

In local mode, startup goes through Launcher's main method, which calls:

com.dtstack.flinkx.Main.main(localArgs);
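Concretely, Launcher.main checks the run mode and, for local mode, hands the program arguments straight to Main in the same JVM (quoted from the full Launcher listing further down):

String mode = launcherOptions.getMode();
List<String> argList = optionParser.getProgramExeArgList();
if (mode.equals(ClusterMode.local.name())) {
    // local mode: run Main directly instead of submitting to a cluster
    String[] localArgs = argList.toArray(new String[argList.size()]);
    com.dtstack.flinkx.Main.main(localArgs);
}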

Inside the Main class, the main method builds the job and eventually reaches the statement that actually runs it:

JobExecutionResult result = env.execute(jobIdString); // the statement that executes the job

In local mode, env is a MyLocalStreamEnvironment, so this call is handled by that class's execute method.
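How does env end up being a MyLocalStreamEnvironment? Main.main picks the environment based on whether a -monitor URL was passed (see the full Main listing below); local mode passes none, so FlinkX's own environment is used:

StreamExecutionEnvironment env = (StringUtils.isNotBlank(monitor)) ?
        StreamExecutionEnvironment.getExecutionEnvironment() :
        new MyLocalStreamEnvironment(flinkConf);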

Here is the core of MyLocalStreamEnvironment.execute (my modified version; the change is explained below):

try {
    miniCluster.start();
    configuration.setInteger(RestOptions.PORT, miniCluster.getRestAddress().get().getPort());
    JobExecutionResult result = miniCluster.executeJobBlocking(jobGraph);
    return result;
} catch (Exception e) {
    // the added catch: print an all-zero result table instead of propagating the failure
    printResultForNull();
    return null;
} finally {
    transformations.clear();
    miniCluster.close();
}
From here on we are inside Flink's own jar. miniCluster.start() boots the embedded mini cluster, and once the data migration finishes, the migration statistics come back through:

JobExecutionResult result = miniCluster.executeJobBlocking(jobGraph);
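Those statistics travel back inside the result's accumulators. ResultPrintUtil.printResult(result) in Main presumably reads them; a minimal sketch of that idea using Flink's public JobExecutionResult API (the metric names are whatever the reader and writer registered):

// Dump every accumulator the finished job registered (e.g. numRead/numWrite style counters).
Map<String, Object> accumulators = result.getAllAccumulatorResults();
accumulators.forEach((name, value) -> System.out.println(name + " = " + value));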

Pay special attention to the following code comparison (the upstream version versus my version).

Upstream version:
try {
    miniCluster.start();
    configuration.setInteger(RestOptions.PORT, miniCluster.getRestAddress().get().getPort());
    JobExecutionResult result = miniCluster.executeJobBlocking(jobGraph);
    return result;
} finally {
    transformations.clear();
    miniCluster.close();
}
Modified version:
try {
    miniCluster.start();
    configuration.setInteger(RestOptions.PORT, miniCluster.getRestAddress().get().getPort());
    JobExecutionResult result = miniCluster.executeJobBlocking(jobGraph);
    return result;
} catch (Exception e) {
    printResultForNull();
    return null;
} finally {
    transformations.clear();
    miniCluster.close();
}

Note: the only difference is the added catch block. Why add it?
If the data migration fails, executeJobBlocking throws; the exception then propagates out of execute, the caller never receives a JobExecutionResult, and the error output does not include the final statistics printout.
[Screenshot: the final statistics table printed after a successful run]

That table is produced by a few lines near the end of Main.main, quoted next. This is exactly where the problem lies: once the migration throws inside env.execute, the lines after it never run, so there is no final printout.
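For reference, the printing code in Main (see the full listing below):

JobExecutionResult result = env.execute(jobIdString);
if (env instanceof MyLocalStreamEnvironment) {
    ResultPrintUtil.printResult(result);
}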
So, after catching the exception, I call a fallback method instead:

printResultForNull();
Its implementation:
public static void printResultForNull() {
    List<String> names = Lists.newArrayList();
    List<String> values = Lists.newArrayList();
    Map<String, Object> map = new HashMap<>();

    // All metrics are zeroed: the failed job produced no usable statistics.
    map.put("numWrite", 0);
    map.put("last_write_num_0", 0);
    map.put("conversionErrors", 0);
    map.put("writeDuration", 0);
    map.put("duplicateErrors", 0);
    map.put("numRead", 0);
    map.put("snapshotWrite", 0);
    map.put("otherErrors", 0);
    map.put("readDuration", 0);
    map.put("byteRead", 0);
    map.put("last_write_location_0", 0);
    map.put("byteWrite", 0);
    map.put("nullErrors", 0);
    map.put("nErrors", 0);

    map.forEach((name, val) -> {
        names.add(name);
        values.add(String.valueOf(val));
    });

    // Pad every metric name to a common width so the table columns line up.
    int maxLength = 0;
    for (String name : names) {
        maxLength = Math.max(maxLength, name.length());
    }
    maxLength += 5;

    StringBuilder builder = new StringBuilder();
    for (int i = 0; i < names.size(); i++) {
        String name = names.get(i);
        builder.append(name).append(StringUtils.repeat(" ", maxLength - name.length()));
        builder.append("|  ").append(values.get(i));

        if (i + 1 < names.size()) {
            builder.append("\n");
        }
    }

    System.out.println("---------------------------------");
    System.out.println(builder.toString());
    System.out.println("---------------------------------");
}
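With every counter zeroed, the fallback output looks roughly like this (row order may vary, since HashMap does not preserve insertion order):

---------------------------------
numRead                   |  0
numWrite                  |  0
byteRead                  |  0
byteWrite                 |  0
...
---------------------------------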

Full source of the classes involved

Launcher

/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.dtstack.flinkx.launcher;

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.LoggerContext;
import com.dtstack.flinkx.config.ContentConfig;
import com.dtstack.flinkx.config.DataTransferConfig;
import com.dtstack.flinkx.enums.ClusterMode;
import com.dtstack.flinkx.launcher.perjob.PerJobSubmitter;
import com.dtstack.flinkx.options.OptionParser;
import com.dtstack.flinkx.options.Options;
import com.dtstack.flinkx.util.SysUtil;
import org.apache.commons.lang.StringUtils;
import org.apache.flink.client.program.ClusterClient;
import org.apache.flink.client.program.PackagedProgram;
import org.apache.flink.runtime.jobgraph.SavepointRestoreSettings;
import org.apache.flink.util.Preconditions;
import org.slf4j.LoggerFactory;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FilenameFilter;
import java.net.MalformedURLException;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

/**
 * FlinkX commandline Launcher
 *
 * Company: www.dtstack.com
 * @author huyifan.zju@163.com
 */
public class Launcher {

    public static final String KEY_FLINKX_HOME = "FLINKX_HOME";
    public static final String KEY_FLINK_HOME = "FLINK_HOME";
    public static final String KEY_HADOOP_HOME = "HADOOP_HOME";

    public static final String PLUGINS_DIR_NAME = "plugins";

    public static final String CORE_JAR_NAME_PREFIX = "flinkx";

    private static List<URL> analyzeUserClasspath(String content, String pluginRoot) {

        List<URL> urlList = new ArrayList<>();

        String jobJson = readJob(content);
        DataTransferConfig config = DataTransferConfig.parse(jobJson);

        Preconditions.checkNotNull(pluginRoot);

        ContentConfig contentConfig = config.getJob().getContent().get(0);
        String readerName = contentConfig.getReader().getName().toLowerCase();
        File readerDir = new File(pluginRoot + File.separator + readerName);
        String writerName = contentConfig.getWriter().getName().toLowerCase();
        File writerDir = new File(pluginRoot + File.separator + writerName);
        File commonDir = new File(pluginRoot + File.separator + "common");

        try {
            urlList.addAll(SysUtil.findJarsInDir(readerDir));
            urlList.addAll(SysUtil.findJarsInDir(writerDir));
            urlList.addAll(SysUtil.findJarsInDir(commonDir));
        } catch (MalformedURLException e) {
            throw new RuntimeException(e);
        }

        return urlList;
    }

    public static void main(String[] args) throws Exception {
        setLogLevel(Level.INFO.toString());
        OptionParser optionParser = new OptionParser(args);
        Options launcherOptions = optionParser.getOptions();
        findDefaultConfigDir(launcherOptions);

        String mode = launcherOptions.getMode();
        List<String> argList = optionParser.getProgramExeArgList();
        if(mode.equals(ClusterMode.local.name())) {
            String[] localArgs = argList.toArray(new String[argList.size()]);
            com.dtstack.flinkx.Main.main(localArgs);
        } else {
            String pluginRoot = launcherOptions.getPluginRoot();
            String content = launcherOptions.getJob();
            String coreJarName = getCoreJarFileName(pluginRoot);
            File jarFile = new File(pluginRoot + File.separator + coreJarName);
            List<URL> urlList = analyzeUserClasspath(content, pluginRoot);
            if(!ClusterMode.yarnPer.name().equals(mode)){
                ClusterClient clusterClient = ClusterClientFactory.createClusterClient(launcherOptions);
                String monitor = clusterClient.getWebInterfaceURL();
                argList.add("-monitor");
                argList.add(monitor);

                String[] remoteArgs = argList.toArray(new String[0]);

                ClassLoaderType classLoaderType = ClassLoaderType.getByClassMode(launcherOptions.getPluginLoadMode());
                PackagedProgram program = new PackagedProgram(jarFile, urlList, classLoaderType, "com.dtstack.flinkx.Main", remoteArgs);

                if (StringUtils.isNotEmpty(launcherOptions.getS())){
                    program.setSavepointRestoreSettings(SavepointRestoreSettings.forPath(launcherOptions.getS()));
                }

                clusterClient.run(program, Integer.parseInt(launcherOptions.getParallelism()));
                clusterClient.shutdown();
            }else{
                String confProp = launcherOptions.getConfProp();
                if (StringUtils.isBlank(confProp)){
                    throw new IllegalArgumentException("per-job mode must have confProp!");
                }

                String libJar = launcherOptions.getFlinkLibJar();
                if (StringUtils.isBlank(libJar)){
                    throw new IllegalArgumentException("per-job mode must have flink lib path!");
                }

                argList.add("-monitor");
                argList.add("");

                // the JDK optimizes this internally; passing an empty array is more efficient
                String[] remoteArgs = argList.toArray(new String[0]);
                PerJobSubmitter.submit(launcherOptions, jarFile, remoteArgs);
            }
        }
    }

    private static void findDefaultConfigDir(Options launcherOptions) {
        findDefaultPluginRoot(launcherOptions);

        if (ClusterMode.local.name().equalsIgnoreCase(launcherOptions.getMode())) {
            return;
        }

        findDefaultFlinkConf(launcherOptions);
        findDefaultHadoopConf(launcherOptions);
    }

    private static void findDefaultHadoopConf(Options launcherOptions) {
        if (StringUtils.isNotEmpty(launcherOptions.getYarnconf())) {
            return;
        }

        String hadoopHome = getSystemProperty(KEY_HADOOP_HOME);
        if (StringUtils.isNotEmpty(hadoopHome)) {
            hadoopHome = hadoopHome.trim();
            if (hadoopHome.endsWith(File.separator)) {
                hadoopHome = hadoopHome.substring(0, hadoopHome.lastIndexOf(File.separator));
            }

            launcherOptions.setYarnconf(hadoopHome + "/etc/hadoop");
        }
    }

    private static void findDefaultFlinkConf(Options launcherOptions) {
        if (StringUtils.isNotEmpty(launcherOptions.getFlinkconf()) && StringUtils.isNotEmpty(launcherOptions.getFlinkLibJar())) {
            return;
        }

        String flinkHome = getSystemProperty(KEY_FLINK_HOME);
        if (StringUtils.isNotEmpty(flinkHome)) {
            flinkHome = flinkHome.trim();
            if (flinkHome.endsWith(File.separator)){
                flinkHome = flinkHome.substring(0, flinkHome.lastIndexOf(File.separator));
            }

            launcherOptions.setFlinkconf(flinkHome + "/conf");
            launcherOptions.setFlinkLibJar(flinkHome + "/lib");
        }
    }

    private static void findDefaultPluginRoot(Options launcherOptions) {
        String pluginRoot = launcherOptions.getPluginRoot();
        if (StringUtils.isNotEmpty(pluginRoot)) {
            return;
        }

        String flinkxHome = getSystemProperty(KEY_FLINKX_HOME);
        if (StringUtils.isNotEmpty(flinkxHome)) {
            flinkxHome = flinkxHome.trim();
            if (flinkxHome.endsWith(File.separator)) {
                pluginRoot = flinkxHome + PLUGINS_DIR_NAME;
            } else {
                pluginRoot = flinkxHome + File.separator + PLUGINS_DIR_NAME;
            }

            launcherOptions.setPluginRoot(pluginRoot);
        }
    }

    private static String getSystemProperty(String name) {
        String property = System.getenv(name);
        if (StringUtils.isEmpty(property)) {
            property = System.getProperty(name);
        }

        return property;
    }

    private static String getCoreJarFileName (String pluginRoot) throws FileNotFoundException{
        String coreJarFileName = null;
        File pluginDir = new File(pluginRoot);
        if (pluginDir.exists() && pluginDir.isDirectory()){
            File[] jarFiles = pluginDir.listFiles(new FilenameFilter() {
                @Override
                public boolean accept(File dir, String name) {
                    return name.toLowerCase().startsWith(CORE_JAR_NAME_PREFIX) && name.toLowerCase().endsWith(".jar");
                }
            });

            if (jarFiles != null && jarFiles.length > 0){
                coreJarFileName = jarFiles[0].getName();
            }
        }

        if (StringUtils.isEmpty(coreJarFileName)){
            throw new FileNotFoundException("Can not find core jar file in path:" + pluginRoot);
        }

        return coreJarFileName;
    }

    private static String readJob(String job) {
        try {
            File file = new File(job);
            FileInputStream in = new FileInputStream(file);
            byte[] fileContent = new byte[(int) file.length()];
            in.read(fileContent);
            in.close();
            return new String(fileContent, StandardCharsets.UTF_8);
        } catch (Exception e){
            throw new RuntimeException(e);
        }
    }

    private static void setLogLevel(String level){
        LoggerContext loggerContext = (LoggerContext) LoggerFactory.getILoggerFactory();
        // set the global log level
        ch.qos.logback.classic.Logger logger = loggerContext.getLogger("root");
        logger.setLevel(Level.toLevel(level));
    }
}
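A note on analyzeUserClasspath above: it collects the reader, writer, and common plugin jars through SysUtil.findJarsInDir, which is not shown here. A plausible sketch of what such a helper does (an assumption for illustration, not FlinkX's actual code):

// Hypothetical stand-in for SysUtil.findJarsInDir: list every *.jar in a directory as URLs.
public static List<URL> findJarsInDir(File dir) throws MalformedURLException {
    List<URL> jarUrls = new ArrayList<>();
    File[] files = dir.listFiles((d, name) -> name.toLowerCase().endsWith(".jar"));
    if (files != null) {
        for (File f : files) {
            jarUrls.add(f.toURI().toURL());
        }
    }
    return jarUrls;
}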

Main

/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.dtstack.flinkx;

import com.dtstack.flink.api.java.MyLocalStreamEnvironment;
import com.dtstack.flinkx.classloader.ClassLoaderManager;
import com.dtstack.flinkx.config.ContentConfig;
import com.dtstack.flinkx.config.DataTransferConfig;
import com.dtstack.flinkx.config.SpeedConfig;
import com.dtstack.flinkx.config.RestartConfig;
import com.dtstack.flinkx.config.TestConfig;
import com.dtstack.flinkx.constants.ConfigConstant;
import com.dtstack.flinkx.options.OptionParser;
import com.dtstack.flinkx.reader.BaseDataReader;
import com.dtstack.flinkx.reader.DataReaderFactory;
import com.dtstack.flinkx.util.ResultPrintUtil;
import com.dtstack.flinkx.writer.BaseDataWriter;
import com.dtstack.flinkx.writer.DataWriterFactory;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.commons.io.Charsets;
import org.apache.commons.lang.StringUtils;
import org.apache.flink.api.common.JobExecutionResult;
import org.apache.flink.api.common.restartstrategy.RestartStrategies;
import org.apache.flink.api.common.time.Time;
import org.apache.flink.client.program.ContextEnvironment;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.GlobalConfiguration;
import org.apache.flink.runtime.jobgraph.SavepointRestoreSettings;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.CheckpointConfig;
import org.apache.flink.streaming.api.environment.StreamContextEnvironment;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.types.Row;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.lang.reflect.Field;
import java.net.URL;
import java.net.URLDecoder;
import java.util.*;
import java.util.concurrent.TimeUnit;

/**
 * The main class entry
 *
 * Company: www.dtstack.com
 * @author huyifan.zju@163.com
 */
public class Main {

    public static Logger LOG = LoggerFactory.getLogger(Main.class);

    public static final String READER = "reader";
    public static final String WRITER = "writer";
    public static final String STREAM_READER = "streamreader";
    public static final String STREAM_WRITER = "streamwriter";

    private static final String CLASS_FILE_NAME_FMT = "class_path_%d";

    private static ObjectMapper objectMapper = new ObjectMapper();

    public static void main(String[] args) throws Exception {
        com.dtstack.flinkx.options.Options options = new OptionParser(args).getOptions();
        String job = options.getJob();
        String jobIdString = options.getJobid();
        String monitor = options.getMonitor();
        String pluginRoot = options.getPluginRoot();
        String savepointPath = options.getS();
        Properties confProperties = parseConf(options.getConfProp());

        // parse the job configuration file specified by the job path
        DataTransferConfig config = DataTransferConfig.parse(job);
        speedTest(config);

        if(StringUtils.isNotEmpty(monitor)) {
            config.setMonitorUrls(monitor);
        }

        if(StringUtils.isNotEmpty(pluginRoot)) {
            config.setPluginRoot(pluginRoot);
        }

        Configuration flinkConf = new Configuration();
        if (StringUtils.isNotEmpty(options.getFlinkconf())) {
            flinkConf = GlobalConfiguration.loadConfiguration(options.getFlinkconf());
        }

        StreamExecutionEnvironment env = (StringUtils.isNotBlank(monitor)) ?
                StreamExecutionEnvironment.getExecutionEnvironment() :
                new MyLocalStreamEnvironment(flinkConf);

        env = openCheckpointConf(env, confProperties);
        configRestartStrategy(env, config);

        SpeedConfig speedConfig = config.getJob().getSetting().getSpeed();

        env.setParallelism(speedConfig.getChannel());
        env.setRestartStrategy(RestartStrategies.noRestart());
        BaseDataReader dataReader = DataReaderFactory.getDataReader(config, env);
        DataStream<Row> dataStream = dataReader.readData();
        dataStream = ((DataStreamSource<Row>) dataStream).setParallelism(speedConfig.getReaderChannel());

        if (speedConfig.isRebalance()) {
            dataStream = dataStream.rebalance();
        }

        BaseDataWriter dataWriter = DataWriterFactory.getDataWriter(config);
        dataWriter.writeData(dataStream).setParallelism(speedConfig.getWriterChannel());

        if(env instanceof MyLocalStreamEnvironment) {
            if(StringUtils.isNotEmpty(savepointPath)){
                ((MyLocalStreamEnvironment) env).setSettings(SavepointRestoreSettings.forPath(savepointPath));
            }
        }

        addEnvClassPath(env, ClassLoaderManager.getClassPath());

        JobExecutionResult result = env.execute(jobIdString); // the statement that executes the job
        System.out.println("debug: checking that execution reaches this point");
        if(env instanceof MyLocalStreamEnvironment){
            ResultPrintUtil.printResult(result);
        }
    }

    private static void configRestartStrategy(StreamExecutionEnvironment env, DataTransferConfig config){
        if (needRestart(config)) {
            RestartConfig restartConfig = findRestartConfig(config);
            if (RestartConfig.STRATEGY_FIXED_DELAY.equalsIgnoreCase(restartConfig.getStrategy())) {
                env.setRestartStrategy(RestartStrategies.fixedDelayRestart(
                        restartConfig.getRestartAttempts(),
                        Time.of(restartConfig.getDelayInterval(), TimeUnit.SECONDS)
                ));
            } else if (RestartConfig.STRATEGY_FAILURE_RATE.equalsIgnoreCase(restartConfig.getStrategy())) {
                env.setRestartStrategy(RestartStrategies.failureRateRestart(
                        restartConfig.getFailureRate(),
                        Time.of(restartConfig.getFailureInterval(), TimeUnit.SECONDS),
                        Time.of(restartConfig.getDelayInterval(), TimeUnit.SECONDS)
                ));
            } else {
                env.setRestartStrategy(RestartStrategies.noRestart());
            }
        }
    }

    private static RestartConfig findRestartConfig(DataTransferConfig config) {
        RestartConfig restartConfig = config.getJob().getSetting().getRestartConfig();
        if (null != restartConfig) {
            return restartConfig;
        }

        Object restartConfigObj = config.getJob().getContent().get(0).getReader().getParameter().getVal(RestartConfig.KEY_STRATEGY);
        if (null != restartConfigObj) {
            return new RestartConfig((Map<String, Object>)restartConfigObj);
        }

        restartConfigObj = config.getJob().getContent().get(0).getWriter().getParameter().getVal(RestartConfig.KEY_STRATEGY);
        if (null != restartConfigObj) {
            return new RestartConfig((Map<String, Object>)restartConfigObj);
        }

        return RestartConfig.defaultConfig();
    }

    private static boolean needRestart(DataTransferConfig config){
        return config.getJob().getSetting().getRestoreConfig().isStream();
    }

    private static void speedTest(DataTransferConfig config) {
        TestConfig testConfig = config.getJob().getSetting().getTestConfig();
        if (READER.equalsIgnoreCase(testConfig.getSpeedTest())) {
            ContentConfig contentConfig = config.getJob().getContent().get(0);
            contentConfig.getWriter().setName(STREAM_WRITER);
        } else if (WRITER.equalsIgnoreCase(testConfig.getSpeedTest())){
            ContentConfig contentConfig = config.getJob().getContent().get(0);
            contentConfig.getReader().setName(STREAM_READER);
        }

        config.getJob().getSetting().getSpeed().setBytes(-1);
    }

    private static void addEnvClassPath(StreamExecutionEnvironment env, Set<URL> classPathSet) throws Exception{
        int i = 0;
        for(URL url : classPathSet){
            String classFileName = String.format(CLASS_FILE_NAME_FMT, i);
            env.registerCachedFile(url.getPath(),  classFileName, true);
            i++;
        }

        if(env instanceof MyLocalStreamEnvironment){
            ((MyLocalStreamEnvironment) env).setClasspaths(new ArrayList<>(classPathSet));
        } else if(env instanceof StreamContextEnvironment){
            Field field = env.getClass().getDeclaredField("ctx");
            field.setAccessible(true);
            ContextEnvironment contextEnvironment= (ContextEnvironment) field.get(env);

            List<String> originUrlList = new ArrayList<>();
            for (URL url : contextEnvironment.getClasspaths()) {
                originUrlList.add(url.toString());
            }

            for (URL url : classPathSet) {
                if (!originUrlList.contains(url.toString())){
                    contextEnvironment.getClasspaths().add(url);
                }
            }
        }
    }

    private static Properties parseConf(String confStr) throws Exception{
        if(StringUtils.isEmpty(confStr)){
            return new Properties();
        }

        confStr = URLDecoder.decode(confStr, Charsets.UTF_8.toString());
        return objectMapper.readValue(confStr, Properties.class);
    }

    private static StreamExecutionEnvironment openCheckpointConf(StreamExecutionEnvironment env, Properties properties){
        if(properties!=null){
            String interval = properties.getProperty(ConfigConstant.FLINK_CHECKPOINT_INTERVAL_KEY);
            if(StringUtils.isNotBlank(interval)){
                env.enableCheckpointing(Long.parseLong(interval.trim()));
                LOG.info("Open checkpoint with interval:" + interval);
            }
            String checkpointTimeoutStr = properties.getProperty(ConfigConstant.FLINK_CHECKPOINT_TIMEOUT_KEY);
            if(checkpointTimeoutStr != null){
                long checkpointTimeout = Long.parseLong(checkpointTimeoutStr.trim());
                // checkpoints must complete within this timeout, or they are discarded
                env.getCheckpointConfig().setCheckpointTimeout(checkpointTimeout);

                LOG.info("Set checkpoint timeout:" + checkpointTimeout);
            }
            env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
            env.getCheckpointConfig().enableExternalizedCheckpoints(
                    CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
        }
        return env;
    }
}
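One detail worth noting: the -confProp argument arrives URL-encoded, and parseConf turns it into a Properties object via Jackson. A minimal illustration (the checkpoint key name here is an assumption; the real key lives in ConfigConstant):

// Hypothetical confProp value {"flink.checkpoint.interval":"60000"}, URL-encoded by the caller.
String confStr = URLDecoder.decode(
        "%7B%22flink.checkpoint.interval%22%3A%2260000%22%7D", "UTF-8");
Properties props = new ObjectMapper().readValue(confStr, Properties.class);
// openCheckpointConf(env, props) would then enable checkpointing with that interval.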

MyLocalStreamEnvironment

/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.dtstack.flink.api.java;

import com.google.common.collect.Lists;
import org.apache.commons.lang.StringUtils;
import org.apache.flink.annotation.Public;
import org.apache.flink.api.common.InvalidProgramException;
import org.apache.flink.api.common.JobExecutionResult;
import org.apache.flink.api.common.JobSubmissionResult;
import org.apache.flink.api.common.accumulators.AccumulatorHelper;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.ConfigConstants;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.RestOptions;
import org.apache.flink.configuration.TaskManagerOptions;
import org.apache.flink.runtime.client.JobExecutionException;
import org.apache.flink.runtime.jobgraph.JobGraph;
import org.apache.flink.runtime.jobgraph.SavepointRestoreSettings;
import org.apache.flink.runtime.jobmaster.JobResult;
import org.apache.flink.runtime.minicluster.MiniCluster;
import org.apache.flink.runtime.minicluster.MiniClusterConfiguration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.graph.StreamGraph;

import org.apache.flink.util.ExceptionUtils;
import org.apache.flink.util.Preconditions;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import javax.annotation.Nonnull;
import java.io.IOException;
import java.net.URL;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

/**
 * The LocalStreamEnvironment is a StreamExecutionEnvironment that runs the program locally,
 * multi-threaded, in the JVM where the environment is instantiated. It spawns an embedded
 * Flink cluster in the background and executes the program on that cluster.
 *
 * <p>When this environment is instantiated, it uses a default parallelism of {@code 1}. The default
 * parallelism can be set via {@link #setParallelism(int)}.
 *
 * @author jiangbo
 */
@Public
public class MyLocalStreamEnvironment extends StreamExecutionEnvironment {

    private static final Logger LOG = LoggerFactory.getLogger(org.apache.flink.streaming.api.environment.LocalStreamEnvironment.class);

    private final Configuration configuration;

    public List<URL> getClasspaths() {
        return classpaths;
    }

    public void setClasspaths(List<URL> classpaths) {
        this.classpaths = classpaths;
    }

    private List<URL> classpaths = Collections.emptyList();

    private SavepointRestoreSettings settings;

    public void setSettings(SavepointRestoreSettings settings) {
        this.settings = settings;
    }

    /**
     * Creates a new mini cluster stream environment that uses the default configuration.
     */
    public MyLocalStreamEnvironment() {
        this(new Configuration());
    }

    /**
     * Creates a new mini cluster stream environment that configures its local executor with the given configuration.
     *
     * @param configuration The configuration used to configure the local executor.
     */
    public MyLocalStreamEnvironment(@Nonnull Configuration configuration) {
        if (!ExecutionEnvironment.areExplicitEnvironmentsAllowed()) {
            throw new InvalidProgramException(
                    "The LocalStreamEnvironment cannot be used when submitting a program through a client, " +
                            "or running in a TestEnvironment context.");
        }
        this.configuration = configuration;
        setParallelism(1);
    }

    protected Configuration getConfiguration() {
        return configuration;
    }

    /**
     * Executes the job graph on an embedded mini cluster, under a user-specified name.
     *
     * @param jobName name of the job
     * @return The result of the job execution, containing elapsed time and accumulators.
     */
    @Override
    public JobExecutionResult execute(String jobName) throws Exception {
        // transform the streaming program into a JobGraph
        StreamGraph streamGraph = getStreamGraph();
        streamGraph.setJobName(jobName);

        JobGraph jobGraph = streamGraph.getJobGraph();
        jobGraph.setClasspaths(classpaths);
        jobGraph.setAllowQueuedScheduling(true);

        if (settings != null){
            jobGraph.setSavepointRestoreSettings(settings);
        }

        Configuration configuration = new Configuration();
        configuration.addAll(jobGraph.getJobConfiguration());
        configuration.setString(TaskManagerOptions.MANAGED_MEMORY_SIZE, "0");
        configuration.setInteger(TaskManagerOptions.NUM_TASK_SLOTS.key(), jobGraph.getMaximumParallelism());

        // add (and override) the settings with what the user defined
        configuration.addAll(this.configuration);

        if (!configuration.contains(RestOptions.BIND_PORT)) {
            configuration.setString(RestOptions.BIND_PORT, "0");
        }

        int numSlotsPerTaskManager = configuration.getInteger(TaskManagerOptions.NUM_TASK_SLOTS, jobGraph.getMaximumParallelism());

        MiniClusterConfiguration cfg = new MiniClusterConfiguration.Builder()
                .setConfiguration(configuration)
                .setNumSlotsPerTaskManager(numSlotsPerTaskManager)
                .build();

        if (LOG.isInfoEnabled()) {
            LOG.info("Running job on local embedded Flink mini cluster");
        }

        MiniCluster miniCluster = new MiniCluster(cfg);

        try {
            miniCluster.start();
            configuration.setInteger(RestOptions.PORT, miniCluster.getRestAddress().get().getPort());
            JobExecutionResult result = miniCluster.executeJobBlocking(jobGraph);
            return result;
        } catch (Exception e) {
            // added catch: print an all-zero result table so a failed job still produces output
            printResultForNull();
            return null;
        } finally {
            transformations.clear();
            miniCluster.close();
        }
    }


    public static void printResultForNull() {
        List<String> names = Lists.newArrayList();
        List<String> values = Lists.newArrayList();
        Map<String, Object> map = new HashMap<>();

        // All metrics are zeroed: the failed job produced no usable statistics.
        map.put("numWrite", 0);
        map.put("last_write_num_0", 0);
        map.put("conversionErrors", 0);
        map.put("writeDuration", 0);
        map.put("duplicateErrors", 0);
        map.put("numRead", 0);
        map.put("snapshotWrite", 0);
        map.put("otherErrors", 0);
        map.put("readDuration", 0);
        map.put("byteRead", 0);
        map.put("last_write_location_0", 0);
        map.put("byteWrite", 0);
        map.put("nullErrors", 0);
        map.put("nErrors", 0);

        map.forEach((name, val) -> {
            names.add(name);
            values.add(String.valueOf(val));
        });

        // Pad every metric name to a common width so the table columns line up.
        int maxLength = 0;
        for (String name : names) {
            maxLength = Math.max(maxLength, name.length());
        }
        maxLength += 5;

        StringBuilder builder = new StringBuilder();
        for (int i = 0; i < names.size(); i++) {
            String name = names.get(i);
            builder.append(name).append(StringUtils.repeat(" ", maxLength - name.length()));
            builder.append("|  ").append(values.get(i));

            if (i + 1 < names.size()) {
                builder.append("\n");
            }
        }

        System.out.println("---------------------------------");
        System.out.println(builder.toString());
        System.out.println("---------------------------------");
    }
}
