Storm: Predicting Disease Outbreaks with a Trident Topology

Please credit the source when reposting: http://blog.csdn.net/l1028386804/article/details/79120204

In this post we will build a Trident topology that collects medical diagnosis reports and determines whether a disease outbreak is underway. Unlike previous posts, we will develop one concrete example from start to finish, breaking it down into individual topics as we go.

Note: this post is based on Storm 0.9.1-incubating / 0.9.2-incubating; other Storm versions may require minor code changes.

The medical diagnosis events processed by the topology contain the following information:

Latitude        Longitude        Timestamp                        Diagnosis Code(ICD9-CM)
39.9522            -75.1624        01/21/2018 at 11:00 AM            320.0(Hemophilus meningitis)
40.3588            -75.6269        01/21/2018 at 11:30 AM             324.0(Intracranial abscess)

Each event includes the GPS coordinates of the location where it occurred, with latitude and longitude expressed as decimal degrees. It also carries an ICD9-CM code identifying the diagnosis, plus a timestamp for when the event happened. The complete list of ICD9-CM codes is available at http://www.icd9data.com/
To decide whether an outbreak is occurring, the system counts the occurrences of each disease code by geographic area over a period of time. To keep the example simple, we group diagnoses by city; a real system would partition locations at a much finer granularity.
Events are also grouped into hourly buckets, and moving averages are used to compute trends.
Finally, we use a simple threshold to decide whether an outbreak exists: when the occurrence count for a given hour exceeds the threshold, the system raises an alert. To preserve history, we also persist the counts for each city, hour, and disease.

1. The Trident topology

To meet these requirements, the topology must count disease occurrences. Doing this with a standard Storm topology is difficult because tuples can be replayed, which leads to double counting. Trident provides primitives that solve exactly this problem. The topology we will use looks like this:

package com.lyz.storm.trident.topology;

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.generated.StormTopology;
import backtype.storm.tuple.Fields;

import com.lyz.storm.trident.operator.*;
import com.lyz.storm.trident.spout.DiagnosisEventSpout;
import com.lyz.storm.trident.state.OutbreakTrendFactory;

import storm.trident.Stream;
import storm.trident.TridentTopology;
import storm.trident.operation.builtin.Count;

public class OutbreakDetectionTopology {

    public static StormTopology buildTopology() {
        TridentTopology topology = new TridentTopology();
        DiagnosisEventSpout spout = new DiagnosisEventSpout();
        Stream inputStream = topology.newStream("event", spout);

        inputStream.each(new Fields("event"), new DiseaseFilter())
                .each(new Fields("event"), new CityAssignment(), new Fields("city"))
                .each(new Fields("event", "city"), new HourAssignment(), new Fields("hour", "cityDiseaseHour"))
                .groupBy(new Fields("cityDiseaseHour"))
                .persistentAggregate(new OutbreakTrendFactory(), new Count(), new Fields("count")).newValuesStream()
                .each(new Fields("cityDiseaseHour", "count"), new OutbreakDetector(), new Fields("alert"))
                .each(new Fields("alert"), new DispatchAlert(), new Fields());
        return topology.build();
    }
}
First, DiagnosisEventSpout emits disease events. DiseaseFilter then discards the disease events we do not care about. Next, CityAssignment tags each event with the name of the nearest city, and HourAssignment derives an hour-granularity timestamp and adds a key named cityDiseaseHour to the tuple's fields, combining city, hour, and disease code. Downstream operations group by this key and use persistentAggregate to persist the counts. The counts are passed to OutbreakDetector, which emits an alert message whenever a count exceeds the threshold. Finally, DispatchAlert receives the alert, logs it, and the flow ends.

2. The Trident spout

Compared with Storm, Trident introduces the notion of data batches: unlike a Storm spout, a Trident spout must emit tuples in batches.
Each batch is assigned a unique transaction identifier. How batches are composed depends on which of three contracts the spout follows: non-transactional, transactional, or opaque.
Non-transactional spout: makes no guarantees about batch composition and may produce duplicates; two different batches may contain the same tuples.
Transactional spout: guarantees batches are non-overlapping and that a given batch always contains the same tuples on replay.
Opaque spout: guarantees tuples are not duplicated across batches, but a replayed batch is not guaranteed to contain the same tuples.
The DiagnosisEventSpout code is as follows:

package com.lyz.storm.trident.spout;

import backtype.storm.task.TopologyContext;
import backtype.storm.tuple.Fields;
import storm.trident.spout.ITridentSpout;

import java.util.Map;

@SuppressWarnings("rawtypes")
public class DiagnosisEventSpout implements ITridentSpout<Long> {
    private static final long serialVersionUID = 1L;
    BatchCoordinator<Long> coordinator = new DefaultCoordinator();
    Emitter<Long> emitter = new DiagnosisEventEmitter();

    @Override
    public BatchCoordinator<Long> getCoordinator(String txStateId, Map conf, TopologyContext context) {
        return coordinator;
    }

    @Override
    public Emitter<Long> getEmitter(String txStateId, Map conf, TopologyContext context) {
        return emitter;
    }

    @Override
    public Map getComponentConfiguration() {
        return null;
    }

    @Override
    public Fields getOutputFields() {
        return new Fields("event");
    }
}
As the getOutputFields() method above shows, the spout in our example topology emits a single field named event, whose value is a DiagnosisEvent instance.
Next we create the DefaultCoordinator class to coordinate batches:
package com.lyz.storm.trident.spout;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import storm.trident.spout.ITridentSpout.BatchCoordinator;

import java.io.Serializable;

public class DefaultCoordinator implements BatchCoordinator<Long>, Serializable {
    private static final long serialVersionUID = 1L;
    private static final Logger LOG = LoggerFactory.getLogger(DefaultCoordinator.class);

    @Override
    public boolean isReady(long txid) {
        return true;
    }

    @Override
    public void close() {
    }

    @Override
    public Long initializeTransaction(long txid, Long prevMetadata, Long currMetadata) {
        LOG.info("Initializing Transaction [" + txid + "]");
        return null;
    }

    @Override
    public void success(long txid) {
        LOG.info("Successful Transaction [" + txid + "]");
    }
}
Next we create DiagnosisEventEmitter, which packages tuples into batches and emits them. To do this, the emitter receives the batch metadata produced by the coordinator, the transaction information, and the TridentCollector used to emit tuples:
package com.lyz.storm.trident.spout;

import storm.trident.operation.TridentCollector;
import storm.trident.spout.ITridentSpout.Emitter;
import storm.trident.topology.TransactionAttempt;

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

import com.lyz.storm.trident.model.DiagnosisEvent;

public class DiagnosisEventEmitter implements Emitter<Long>, Serializable {
    private static final long serialVersionUID = 1L;
    AtomicInteger successfulTransactions = new AtomicInteger(0);

    @Override
    public void emitBatch(TransactionAttempt tx, Long coordinatorMeta, TridentCollector collector) {
        for (int i = 0; i < 10000; i++) {
            List<Object> events = new ArrayList<Object>();
            // Random coordinates roughly covering the Americas.
            double lat = -30 + (int) (Math.random() * 75);
            double lng = -120 + (int) (Math.random() * 70);
            long time = System.currentTimeMillis();

            // Random diagnosis code in the range 320-327.
            String diag = Integer.toString(320 + (int) (Math.random() * 8));
            DiagnosisEvent event = new DiagnosisEvent(lat, lng, time, diag);
            events.add(event);
            collector.emit(events);
        }
    }

    @Override
    public void success(TransactionAttempt tx) {
        successfulTransactions.incrementAndGet();
    }

    @Override
    public void close() {
    }

}
The actual emitting happens in emitBatch(). We assign a random latitude and longitude to each event and generate the timestamp with System.currentTimeMillis().
For this example we use diagnosis codes between 320 and 327:
Code        Description
320         Bacterial meningitis
321         Meningitis due to other organisms
322         Meningitis of unspecified cause
323         Encephalitis and myelitis
324         Intracranial and intraspinal abscess
325         Phlebitis and thrombophlebitis of intracranial venous sinuses
326         Late effects of intracranial abscess or pyogenic infection
327         Organic sleep disorders
These codes are assigned to the events at random.
Next, we create the DiagnosisEvent class, which represents the key data the topology processes:

package com.lyz.storm.trident.model;

import java.io.Serializable;

public class DiagnosisEvent implements Serializable {
    private static final long serialVersionUID = 1L;
    public double lat;
    public double lng;
    public long time;
    public String diagnosisCode;

    public DiagnosisEvent(double lat, double lng, long time, String diagnosisCode) {
        super();
        this.time = time;
        this.lat = lat;
        this.lng = lng;
        this.diagnosisCode = diagnosisCode;
    }
}
This object is a simple JavaBean. The timestamp is stored as a long holding milliseconds since the Unix epoch, and latitude and longitude are stored as doubles. The diagnosisCode field is a String rather than a number in case the system ever needs to handle non-ICD-9 data, such as codes containing letters.

3. Trident filters

With timestamps in place, the next step is to add the components that process events. In Trident these components are called operations, and there are two kinds: filters and functions.
To screen events by disease code we use a Trident filter. Trident provides the BaseFilter class; by subclassing it we can conveniently filter out tuples the system does not need. Here we create the DiseaseFilter class to filter events:

package com.lyz.storm.trident.operator;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.lyz.storm.trident.model.DiagnosisEvent;

import storm.trident.operation.BaseFilter;
import storm.trident.tuple.TridentTuple;

public class DiseaseFilter extends BaseFilter {
    private static final long serialVersionUID = 1L;
    private static final Logger LOG = LoggerFactory.getLogger(DiseaseFilter.class);

    @Override
    public boolean isKeep(TridentTuple tuple) {
        DiagnosisEvent diagnosis = (DiagnosisEvent) tuple.getValue(0);
        Integer code = Integer.parseInt(diagnosis.diagnosisCode);
        if (code.intValue() <= 322) {
            LOG.debug("Emitting disease [" + diagnosis.diagnosisCode + "]");
            return true;
        } else {
            LOG.debug("Filtering disease [" + diagnosis.diagnosisCode + "]");
            return false;
        }
    }
}
Tuples for which the filter returns true are passed downstream for further processing; tuples for which it returns false are dropped. In our topology, we apply the filter to every tuple in the stream with the each(inputFields, filter) method:
inputStream.each(new Fields("event"), new DiseaseFilter())

4. Trident functions

Besides filters, Trident provides a more general interface: the function. A function is similar to a Storm bolt in that it reads tuples and emits new ones, with one key difference: a Trident function is additive. The values a function emits are appended to the tuple as new fields; it cannot remove or change existing fields. Like a bolt, a function implements an execute() method containing its logic, and it may optionally use the TridentCollector to emit tuples downstream; used this way, a function can also drop tuples and thus act as a filter.
Here we create the CityAssignment class:

package com.lyz.storm.trident.operator;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.lyz.storm.trident.model.DiagnosisEvent;

import storm.trident.operation.BaseFunction;
import storm.trident.operation.TridentCollector;
import storm.trident.tuple.TridentTuple;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;

public class CityAssignment extends BaseFunction {
	private static final long serialVersionUID = 1L;
	private static final Logger LOG = LoggerFactory.getLogger(CityAssignment.class);

	private static final Map<String, double[]> CITIES = new HashMap<String, double[]>();

	static { // Initialize the cities we care about: { latitude, longitude }.
		double[] phl = { 39.875365, -75.249524 };
		CITIES.put("PHL", phl);
		double[] nyc = { 40.71448, -74.00598 };
		CITIES.put("NYC", nyc);
		double[] sf = { 37.77493, -122.41942 };
		CITIES.put("SF", sf);
		double[] la = { 34.05223, -118.24368 };
		CITIES.put("LA", la);
	}

	@Override
	public void execute(TridentTuple tuple, TridentCollector collector) {
		DiagnosisEvent diagnosis = (DiagnosisEvent) tuple.getValue(0);
		double leastDistance = Double.MAX_VALUE;
		String closestCity = "NONE";
		for (Entry<String, double[]> city : CITIES.entrySet()) {
			double R = 6371; // Earth radius in km
			// Equirectangular approximation: convert to radians and compare
			// latitude with latitude, longitude with longitude.
			double cityLat = Math.toRadians(city.getValue()[0]);
			double cityLng = Math.toRadians(city.getValue()[1]);
			double eventLat = Math.toRadians(diagnosis.lat);
			double eventLng = Math.toRadians(diagnosis.lng);
			double x = (cityLng - eventLng) * Math.cos((cityLat + eventLat) / 2);
			double y = (cityLat - eventLat);
			double d = Math.sqrt(x * x + y * y) * R;
			if (d < leastDistance) {
				leastDistance = d;
				closestCity = city.getKey();
			}
		}
		List<Object> values = new ArrayList<Object>();
		values.add(closestCity);
		LOG.debug("Closest city to lat=[" + diagnosis.lat + "], lng=[" + diagnosis.lng + "] == [" + closestCity
				+ "], d=[" + leastDistance + "]");
		collector.emit(values);
	}

}
Here we build a static map of the cities we care about, keyed by name, with each city's coordinates as the value. In execute(), the function iterates over the cities, computes the approximate distance between the event and each city, and assigns the closest one. Note that the number of fields a function declares must match the number of values it emits.
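The distance calculation can be illustrated in isolation. The sketch below assumes coordinates in decimal degrees and uses the same equirectangular approximation; the class and method names are illustrative, not part of the topology:

```java
// Standalone sketch of the equirectangular great-circle approximation used to
// pick the closest city. Accurate enough for city-level assignment; names here
// (EquirectDistance, distanceKm) are hypothetical helpers for illustration.
public class EquirectDistance {

    static final double EARTH_RADIUS_KM = 6371.0;

    // Approximate distance in km between two (lat, lng) points in decimal degrees.
    public static double distanceKm(double lat1, double lng1, double lat2, double lng2) {
        double phi1 = Math.toRadians(lat1);
        double phi2 = Math.toRadians(lat2);
        double dLambda = Math.toRadians(lng2 - lng1);
        double x = dLambda * Math.cos((phi1 + phi2) / 2); // longitude delta, scaled by latitude
        double y = phi2 - phi1;                            // latitude delta
        return Math.sqrt(x * x + y * y) * EARTH_RADIUS_KM;
    }
}
```

One degree of longitude at the equator comes out to roughly 111 km, which is a quick sanity check for the formula.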
Next we create HourAssignment, which converts the Unix timestamp into an hourly bucket so that events can be grouped by time. The HourAssignment code is as follows:
package com.lyz.storm.trident.operator;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.lyz.storm.trident.model.DiagnosisEvent;

import storm.trident.operation.BaseFunction;
import storm.trident.operation.TridentCollector;
import storm.trident.tuple.TridentTuple;

import java.util.ArrayList;
import java.util.List;

public class HourAssignment extends BaseFunction {
    private static final long serialVersionUID = 1L;
    private static final Logger LOG = LoggerFactory.getLogger(HourAssignment.class);

    @Override
    public void execute(TridentTuple tuple, TridentCollector collector) {
        DiagnosisEvent diagnosis = (DiagnosisEvent) tuple.getValue(0);
        String city = (String) tuple.getValue(1);
        long timestamp = diagnosis.time;
        long hourSinceEpoch = timestamp / 1000 / 60 / 60;
        LOG.debug("Key =  [" + city + ":" + hourSinceEpoch + "]");
        String key = city + ":" + diagnosis.diagnosisCode + ":" + hourSinceEpoch;

        List<Object> values = new ArrayList<Object>();
        values.add(hourSinceEpoch);
        values.add(key);
        collector.emit(values);
    }
}
This function emits both the hour and a key composed of the city, disease code, and hour. In effect, this composite value acts as the unique identifier for the aggregated counts.
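The bucketing scheme can be sketched on its own. This is a minimal, Storm-free example; the class and method names (HourBucket, hourSinceEpoch, key) are illustrative only:

```java
// Sketch of the hour-bucketing scheme used by HourAssignment: millisecond
// timestamps are truncated to whole hours since the Unix epoch, and the
// aggregation key combines city, diagnosis code, and hour bucket.
public class HourBucket {

    // Truncate a millisecond timestamp to hours since the epoch.
    public static long hourSinceEpoch(long timestampMillis) {
        return timestampMillis / 1000 / 60 / 60;
    }

    // Build the grouping key the same way HourAssignment does: city:code:hour.
    public static String key(String city, String diagnosisCode, long timestampMillis) {
        return city + ":" + diagnosisCode + ":" + hourSinceEpoch(timestampMillis);
    }
}
```

All events within the same clock hour for the same city and disease collapse onto one key, which is what lets the grouped aggregation count them together.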
Now we create the last two functions, which detect outbreaks and dispatch alerts. The OutbreakDetector class looks like this:
package com.lyz.storm.trident.operator;

import storm.trident.operation.BaseFunction;
import storm.trident.operation.TridentCollector;
import storm.trident.tuple.TridentTuple;

import java.util.ArrayList;
import java.util.List;

public class OutbreakDetector extends BaseFunction {
    private static final long serialVersionUID = 1L;
    public static final int THRESHOLD = 10000;

    @Override
    public void execute(TridentTuple tuple, TridentCollector collector) {
        String key = (String) tuple.getValue(0);
        Long count = (Long) tuple.getValue(1);
        if (count > THRESHOLD) {
            List<Object> values = new ArrayList<Object>();
            values.add("Outbreak detected for [" + key + "]!");
            collector.emit(values);
        }
    }
}
This function extracts the occurrence count for a particular city, disease, and hour, and checks whether it exceeds the configured threshold. If it does, the function emits a new field containing an alert message.
Finally, we create a function whose job is to publish the alert (and halt the program):
package com.lyz.storm.trident.operator;

import com.esotericsoftware.minlog.Log;
import storm.trident.operation.BaseFunction;
import storm.trident.operation.TridentCollector;
import storm.trident.tuple.TridentTuple;

public class DispatchAlert extends BaseFunction {
    private static final long serialVersionUID = 1L;

    @Override
    public void execute(TridentTuple tuple, TridentCollector collector) {
        String alert = (String) tuple.getValue(0);
        Log.error("ALERT RECEIVED [" + alert + "]");
        Log.error("Dispatch the national guard!");
        System.exit(0);
    }
}

5. Trident state

Next we implement persistence. First, we create the OutbreakTrendFactory class:

package com.lyz.storm.trident.state;

import backtype.storm.task.IMetricsContext;
import storm.trident.state.State;
import storm.trident.state.StateFactory;

import java.util.Map;

@SuppressWarnings("rawtypes")
public class OutbreakTrendFactory implements StateFactory {
    private static final long serialVersionUID = 1L;

    @Override
    public State makeState(Map conf, IMetricsContext metrics, int partitionIndex, int numPartitions) {
        return new OutbreakTrendState(new OutbreakTrendBackingMap());
    }
}
The factory returns a State object, which Storm uses to persist information. Storm provides three kinds of state:
1) Non-transactional: no rollback capability; updates are permanent and commit operations are ignored.
2) Repeat-transactional: applying the results of the same batch of tuples is idempotent.
3) Opaque-transactional: updates are applied relative to the previous value, so the persisted data stays correct even when a replayed batch contains different tuples.
Because data may be replayed in a distributed environment, Trident sequences state updates and relies on these update patterns to tolerate replays and failures while still supporting counting and state updates.
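The opaque-transactional pattern is the subtlest of the three, and its core idea fits in a few lines. The sketch below is not Storm's actual implementation; the class and field names are illustrative, chosen to show why storing the previous value makes replays safe:

```java
// Sketch of opaque-transactional update semantics: each stored value keeps the
// txid that wrote it plus the value that preceded that write. If the same txid
// is applied again (a replayed batch, possibly with different contents), the
// update recomputes from prevValue instead of double counting.
public class OpaqueValue {
    public final long txid;
    public final long prevValue;
    public final long currValue;

    public OpaqueValue(long txid, long prevValue, long currValue) {
        this.txid = txid;
        this.prevValue = prevValue;
        this.currValue = currValue;
    }

    // Apply a batch's partial count, tolerating replays of the same txid.
    public OpaqueValue apply(long newTxid, long delta) {
        if (newTxid == this.txid) {
            // Replay of the batch we already applied: restart from prevValue.
            return new OpaqueValue(newTxid, this.prevValue, this.prevValue + delta);
        }
        // A new batch: the current value becomes the new "previous".
        return new OpaqueValue(newTxid, this.currValue, this.currValue + delta);
    }
}
```

A repeat-transactional state can skip the prevValue bookkeeping because a replayed batch is guaranteed to contain exactly the same tuples; an opaque state cannot make that assumption.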
Here we create OutbreakTrendState:
package com.lyz.storm.trident.state;

import storm.trident.state.map.NonTransactionalMap;

public class OutbreakTrendState extends NonTransactionalMap<Long> {
    protected OutbreakTrendState(OutbreakTrendBackingMap outbreakBackingMap) {
        super(outbreakBackingMap);
    }
}
Next we create OutbreakTrendBackingMap:
package com.lyz.storm.trident.state;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import storm.trident.state.map.IBackingMap;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class OutbreakTrendBackingMap implements IBackingMap<Long> {
    private static final Logger LOG = LoggerFactory.getLogger(OutbreakTrendBackingMap.class);
    Map<String, Long> storage = new ConcurrentHashMap<String, Long>();

    @Override
    public List<Long> multiGet(List<List<Object>> keys) {
        List<Long> values = new ArrayList<Long>();
        for (List<Object> key : keys) {
            Long value = storage.get(key.get(0));
            if (value == null) {
                values.add(new Long(0));
            } else {
                values.add(value);
            }
        }
        return values;
    }

    @Override
    public void multiPut(List<List<Object>> keys, List<Long> vals) {
        for (int i = 0; i < keys.size(); i++) {
            LOG.info("Persisting [" + keys.get(i).get(0) + "] ==> [" + vals.get(i) + "]");
            storage.put((String) keys.get(i).get(0), vals.get(i));
        }
    }
}
In our topology we never actually write the data to durable storage; we simply put it into a ConcurrentHashMap, which obviously would not work across multiple machines. However, IBackingMap is an abstraction: to swap in a real persistence layer, we only need to replace the backing map instance passed into the MapState object.
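To see that contract in isolation, here is a Storm-free sketch of the multiGet/multiPut behavior OutbreakTrendBackingMap implements. The class name is illustrative, and keys are simplified to plain strings rather than Trident's List-of-Object keys:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the backing-map contract: multiGet returns one value per requested
// key (zero when the key is absent), and multiPut stores updated counts.
// Replacing the storage field with a Redis or Cassandra client would change
// only the persistence layer, not the contract.
public class InMemoryCountMap {
    private final Map<String, Long> storage = new ConcurrentHashMap<String, Long>();

    public List<Long> multiGet(List<String> keys) {
        List<Long> values = new ArrayList<Long>();
        for (String key : keys) {
            Long value = storage.get(key);
            // Missing keys read as zero so aggregation can start from scratch.
            values.add(value == null ? Long.valueOf(0) : value);
        }
        return values;
    }

    public void multiPut(List<String> keys, List<Long> vals) {
        for (int i = 0; i < keys.size(); i++) {
            storage.put(keys.get(i), vals.get(i));
        }
    }
}
```

The batched signatures matter: a real store can satisfy one multiGet with a single round trip instead of one query per key.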

6. Running the topology

The complete topology, including the main() method that runs it on a local cluster, is as follows:

package com.lyz.storm.trident.topology;

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.generated.StormTopology;
import backtype.storm.tuple.Fields;

import com.lyz.storm.trident.operator.*;
import com.lyz.storm.trident.spout.DiagnosisEventSpout;
import com.lyz.storm.trident.state.OutbreakTrendFactory;

import storm.trident.Stream;
import storm.trident.TridentTopology;
import storm.trident.operation.builtin.Count;

public class OutbreakDetectionTopology {

    public static StormTopology buildTopology() {
        TridentTopology topology = new TridentTopology();
        DiagnosisEventSpout spout = new DiagnosisEventSpout();
        Stream inputStream = topology.newStream("event", spout);

        inputStream.each(new Fields("event"), new DiseaseFilter())
                .each(new Fields("event"), new CityAssignment(), new Fields("city"))
                .each(new Fields("event", "city"), new HourAssignment(), new Fields("hour", "cityDiseaseHour"))
                .groupBy(new Fields("cityDiseaseHour"))
                .persistentAggregate(new OutbreakTrendFactory(), new Count(), new Fields("count")).newValuesStream()
                .each(new Fields("cityDiseaseHour", "count"), new OutbreakDetector(), new Fields("alert"))
                .each(new Fields("alert"), new DispatchAlert(), new Fields());
        return topology.build();
    }

    public static void main(String[] args) throws Exception {
        Config conf = new Config();
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("cdc", conf, buildTopology());
        Thread.sleep(200000);
        cluster.shutdown();
    }
}

7. pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

	<modelVersion>4.0.0</modelVersion>
	<groupId>com.lyz</groupId>
	<artifactId>storm</artifactId>
	<version>1.0.0-SNAPSHOT</version>
	<packaging>jar</packaging>
	   
	<dependencies>
		 <dependency>
            <groupId>org.apache.storm</groupId>
            <artifactId>storm-core</artifactId>
            <version>0.9.1-incubating</version>
            <scope>provided</scope>
        </dependency>
		
	     <dependency>
            <groupId>com.google.guava</groupId>
            <artifactId>guava</artifactId>
            <version>13.0.1</version>
        </dependency>
        <dependency>
            <groupId>commons-collections</groupId>
            <artifactId>commons-collections</artifactId>
            <version>3.2.1</version>
        </dependency>
   </dependencies>
   
  <build>
		<finalName>storm</finalName>
		<resources>
			<resource>
				<targetPath>${project.build.directory}/classes</targetPath>
				<directory>src/main/resources</directory>
				<filtering>true</filtering>
				<includes>
					<include>**/*.xml</include>
					<include>**/*.properties</include>
				</includes>
			</resource>
			<!-- Used together with com.alibaba.dubbo.container.Main -->
			<resource>
				<targetPath>${project.build.directory}/classes/META-INF/spring</targetPath>
				<directory>src/main/resources/spring</directory>
				<filtering>true</filtering>
				<includes>
					<include>spring-context.xml</include>
				</includes>
			</resource>
		</resources>
		
		<pluginManagement>
			<plugins>
				<!-- Work around conflicts caused by m2e executing Maven plugin lifecycle phases inside Eclipse -->
				<plugin>
					<groupId>org.eclipse.m2e</groupId>
					<artifactId>lifecycle-mapping</artifactId>
					<version>1.0.0</version>
					<configuration>
						<lifecycleMappingMetadata>
							<pluginExecutions>
								<pluginExecution>
									<pluginExecutionFilter>
										<groupId>org.apache.maven.plugins</groupId>
										<artifactId>maven-dependency-plugin</artifactId>
										  <versionRange>[2.0,)</versionRange>  
										<goals>
											<goal>copy-dependencies</goal>
										</goals>
									</pluginExecutionFilter>
									<action>
										<ignore />
									</action>
								</pluginExecution>
							</pluginExecutions>
						</lifecycleMappingMetadata>
					</configuration>
				</plugin>
			</plugins>
		</pluginManagement>
		<plugins>
			<!-- When packaging the jar, configure the manifest to reference the dependency jars under lib/ -->
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-jar-plugin</artifactId>
				<configuration>
					<classesDirectory>target/classes/</classesDirectory>
					<archive>
						<manifest>
							<mainClass>com.lyz.chapter1.main.WordCountTopology</mainClass>
							<!-- Do not record timestamped snapshot versions in MANIFEST.MF -->
							<useUniqueVersions>false</useUniqueVersions>
							<addClasspath>true</addClasspath>
							<classpathPrefix>lib/</classpathPrefix>
						</manifest>
						<manifestEntries>
							<Class-Path>.</Class-Path>
						</manifestEntries>
					</archive>
				</configuration>
			</plugin>
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-dependency-plugin</artifactId>
				<executions>
					<execution>
						<id>copy-dependencies</id>
						<phase>package</phase>
						<goals>
							<goal>copy-dependencies</goal>
						</goals>
						<configuration>
							<type>jar</type>
							<includeTypes>jar</includeTypes>
							<useUniqueVersions>false</useUniqueVersions>
							<outputDirectory>
								${project.build.directory}/lib
							</outputDirectory>
						</configuration>
					</execution>
				</executions>
			</plugin>
			
			 <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
			
		</plugins>

</build>
</project>

8. Source code

The complete source code for this outbreak-prediction Trident topology example can be downloaded from http://download.csdn.net/download/l1028386804/10216613

