Storm Trident
Author: jiangzz
Phone: 15652034180
WeChat: jiangzz_wx
WeChat Official Account: jiangzz_wy
Baizhi Education
Trident is a high-level abstraction for doing realtime computing on top of Storm. It lets you seamlessly intermix high-throughput (millions of messages per second) stateful stream processing with low-latency distributed querying. If you are familiar with high-level batch processing tools such as Pig or Cascading, Trident's concepts will feel very familiar: Trident has joins, aggregations, grouping, functions, and filters. Beyond these, Trident adds primitives for doing stateful, incremental processing on top of any database or persistence store. Trident has consistent, exactly-once semantics, which makes Trident topologies easy to reason about.
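The stateful primitives mentioned above can be sketched with the classic Trident word count (a sketch following the pattern in the Storm documentation; the spout data and field names are illustrative, and MemoryMapState keeps counts in memory rather than in a real database):

```java
import org.apache.storm.trident.TridentTopology;
import org.apache.storm.trident.operation.BaseFunction;
import org.apache.storm.trident.operation.TridentCollector;
import org.apache.storm.trident.operation.builtin.Count;
import org.apache.storm.trident.testing.FixedBatchSpout;
import org.apache.storm.trident.testing.MemoryMapState;
import org.apache.storm.trident.tuple.TridentTuple;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

public class TridentWordCountSketch {

    // splits a sentence tuple into one tuple per word
    public static class Split extends BaseFunction {
        public void execute(TridentTuple tuple, TridentCollector collector) {
            for (String word : tuple.getString(0).split(" ")) {
                collector.emit(new Values(word));
            }
        }
    }

    public static void main(String[] args) {
        // illustrative test spout emitting fixed sentences in batches of 3
        FixedBatchSpout spout = new FixedBatchSpout(new Fields("sentence"), 3,
                new Values("the cow jumped over the moon"),
                new Values("how many apples can you eat"));
        spout.setCycle(true);

        TridentTopology topology = new TridentTopology();
        topology.newStream("sentences", spout)
                .each(new Fields("sentence"), new Split(), new Fields("word"))
                .groupBy(new Fields("word"))
                // stateful, incremental aggregation; swapping the state factory
                // would target a real database instead of memory
                .persistentAggregate(new MemoryMapState.Factory(), new Count(), new Fields("count"));
    }
}
```

Because `persistentAggregate` stores each word's running count through a State implementation, Trident can apply its exactly-once update protocol no matter which store backs it.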
Trident Kafka Integration
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-kafka-client</artifactId>
    <version>2.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.2.0</version>
</dependency>
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.storm.kafka.spout.FirstPollOffsetStrategy;
import org.apache.storm.kafka.spout.Func;
import org.apache.storm.kafka.spout.KafkaSpout;
import org.apache.storm.kafka.spout.KafkaSpoutConfig;
import org.apache.storm.kafka.spout.trident.KafkaTridentSpoutConfig;
import org.apache.storm.kafka.spout.trident.KafkaTridentSpoutOpaque;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;
import java.util.List;

public class KafkaSpoutUtils {

    public static KafkaSpout<String, String> buildKafkaSpout(String bootstrapServers, String topic) {
        KafkaSpoutConfig<String, String> kafkaSpoutConfig = KafkaSpoutConfig.builder(bootstrapServers, topic)
                .setProp(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer")
                .setProp(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer")
                .setProp(ConsumerConfig.GROUP_ID_CONFIG, "g1")
                .setEmitNullTuples(false)
                .setFirstPollOffsetStrategy(FirstPollOffsetStrategy.LATEST)
                .setProcessingGuarantee(KafkaSpoutConfig.ProcessingGuarantee.AT_LEAST_ONCE)
                .setMaxUncommittedOffsets(10) // once a partition backlogs 10 uncommitted offsets, the spout stops polling, easing Storm backpressure
                .build();
        return new KafkaSpout<String, String>(kafkaSpoutConfig);
    }

    // guarantees exactly-once state updates; recommended
    public static KafkaTridentSpoutOpaque<String, String> buildKafkaSpoutOpaque(String bootstrapServers, String topic) {
        KafkaTridentSpoutConfig<String, String> kafkaOpaqueSpoutConfig = KafkaTridentSpoutConfig.builder(bootstrapServers, topic)
                .setProp(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer")
                .setProp(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer")
                .setProp(ConsumerConfig.GROUP_ID_CONFIG, "g1")
                .setFirstPollOffsetStrategy(FirstPollOffsetStrategy.LATEST)
                .setRecordTranslator(new Func<ConsumerRecord<String, String>, List<Object>>() {
                    public List<Object> apply(ConsumerRecord<String, String> record) {
                        return new Values(record.key(), record.value(), record.timestamp());
                    }
                }, new Fields("key", "value", "timestamp"))
                .build();
        return new KafkaTridentSpoutOpaque<String, String>(kafkaOpaqueSpoutConfig);
    }
}
public static void main(String[] args) throws Exception {
    TridentTopology tridentTopology = new TridentTopology();
    tridentTopology.newStream("KafkaSpoutOpaque", KafkaSpoutUtils.buildKafkaSpoutOpaque("CentOSA:9092,CentOSB:9092,CentOSC:9092", "topic01"))
            .peek((TridentTuple input) -> {
                System.out.println(input);
            });
    new LocalCluster().submitTopology("tridentTopology", new Config(), tridentTopology.build());
}
Common Operators
Map Operator
Transforms one tuple into another. If the transformation changes the number of elements in the tuple, you must declare the output Fields.
tridentTopology.newStream("KafkaSpoutOpaque", KafkaSpoutUtils.buildKafkaSpoutOpaque("CentOSA:9092,CentOSB:9092,CentOSC:9092", "topic01"))
        .map((tuple) -> new Values("Hello~" + tuple.getStringByField("value")), new Fields("name"))
        .peek((tuple) -> System.out.println(tuple));
Filter Operator
Filters the tuples coming from upstream, emitting downstream only those that satisfy the condition.
tridentTopology.newStream("KafkaSpoutOpaque", KafkaSpoutUtils.buildKafkaSpoutOpaque("CentOSA:9092,CentOSB:9092,CentOSC:9092", "topic01"))
        .filter(new Fields("value"), new BaseFilter() {
            @Override
            public boolean isKeep(TridentTuple tuple) {
                System.out.println(tuple);
                // keep only tuples with a non-empty value (example condition)
                return !tuple.getStringByField("value").isEmpty();
            }
        })
        .peek((tuple) -> System.out.println(tuple));