Flafka: Apache Flume Meets Apache Kafka for Event Processing

The new integration between Flume and Kafka offers sub-second-latency event processing without the need for dedicated infrastructure.

In a previous post, you learned some Apache Kafka basics and explored a scenario for using Kafka in an online application. This post takes you a step further and highlights the integration of Kafka with Apache Hadoop, demonstrating both a basic ingestion capability and how different open-source components can be easily combined to create a near-real-time stream processing workflow using Kafka, Apache Flume, and Hadoop.

The Case for Flafka

One key feature of Kafka is its functional simplicity. While there is a lot of sophisticated engineering under the covers, Kafka’s general functionality is relatively straightforward. Part of this simplicity comes from its independence from any other applications (excepting Apache ZooKeeper). As a consequence, however, the responsibility falls on the developer to write code to produce or consume messages from Kafka. While a number of Kafka clients support this process, for the most part custom coding is required.

Cloudera engineers and other open source community members have recently committed code for Kafka-Flume integration, informally called “Flafka,” to the Flume project. Flume is a distributed, reliable, and available system for efficiently collecting, aggregating, and moving large amounts of data from many different sources to a centralized data store. Flume provides a tested, production-hardened framework for implementing ingest and real-time processing pipelines. Using the new Flafka source and sink, now available in CDH 5.2, Flume can both read and write messages with Kafka.

Flume can act as both a consumer and a producer for Kafka.

Flume-Kafka integration offers the following functionality that Kafka, absent custom coding, does not.

  • Producers – Use Flume sources to write to Kafka
  • Consumers – Use Flume sinks to read from Kafka
  • A combination of the above
  • In-flight transformations and processing

This functionality expands your ability to use the full feature set of Flume, such as bucketing, event modification and routing, Kite SDK Morphline integration, and NRT indexing with Cloudera Search.

Next, we’ll walk you through an example application using the ingestion of credit-card data as the use case. All example code and configuration involved are available here. A detailed walkthrough of the setup and example code is in the readme.

Example: Transaction Ingest

Assume that you are ingesting transaction data from a card processing system, and want to pull the transactions directly from Kafka and write them into HDFS.

The record simply contains a UUID for a transaction_id, a dummy credit-card number, timestamp, amount, and store_id for the transaction.
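For illustration only (the delimiter, field order, and values below are assumptions rather than the example project's actual format), a single record might look like this:

888f1f1c-89ae-4b49-b0b0-2f54b4b5b555,4111111111111111,2014-10-14 01:15:22,42.42,1337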


To import this data directly into HDFS, you could use the following Flume configuration.
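A minimal sketch of such an agent follows; the agent name (tier1), ZooKeeper host, topic name, and HDFS path are illustrative stand-ins rather than values from the example project.

tier1.sources  = source1
tier1.channels = channel1
tier1.sinks    = sink1

# Kafka source: pulls transactions from the Kafka topic
tier1.sources.source1.type = org.apache.flume.source.kafka.KafkaSource
tier1.sources.source1.zookeeperConnect = zk01.example.com:2181
tier1.sources.source1.topic = transactions
tier1.sources.source1.batchSize = 1000
tier1.sources.source1.channels = channel1

# Memory channel buffering events between source and sink
tier1.channels.channel1.type = memory
tier1.channels.channel1.capacity = 10000
tier1.channels.channel1.transactionCapacity = 1000

# HDFS sink: persists the raw transactions as plain text
tier1.sinks.sink1.type = hdfs
tier1.sinks.sink1.hdfs.path = /flafka/transactions/%y-%m-%d
tier1.sinks.sink1.hdfs.fileType = DataStream
tier1.sinks.sink1.hdfs.useLocalTimeStamp = true
tier1.sinks.sink1.channel = channel1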


This configuration defines an agent using the Kafka Source and a standard HDFS sink. Connecting to Kafka from Flume is as simple as setting the topic, ZooKeeper server, and channel. Your generated transactions will be persisted to HDFS with no coding necessary.

The Kafka Source allows for a number of different configuration options.

  • type* (no default) – Must be set to org.apache.flume.source.kafka.KafkaSource.
  • topic* (no default) – The Kafka topic from which this source reads messages. Flume supports only one topic per source.
  • zookeeperConnect* (no default) – The URI of the ZooKeeper server or quorum used by Kafka. This URI can be a single node (for example, zk01.example.com:2181) or a comma-separated list of nodes in a ZooKeeper quorum (for example, zk01.example.com:2181,zk02.example.com:2181,zk03.example.com:2181). If you have created a path in ZooKeeper for storing Kafka data, specify the path in the last entry in the list (for example, zk01.example.com:2181,zk02.example.com:2181,zk03.example.com:2181/kafka). Use the /kafka ZooKeeper path for Cloudera Labs Kafka, because it is created automatically at installation.
  • batchSize (default: 1000) – The maximum number of messages that can be written to a channel in a single batch.
  • batchDurationMillis (default: 1000) – The maximum time (in ms) before a batch is written to the channel. The batch is written when the batchSize limit or the batchDurationMillis limit is reached, whichever comes first.
  • consumer.timeout.ms (default: 10) – Kafka's kafka.consumer.timeout.ms (the polling interval for new data when building a batch).
  • auto.commit.enabled (default: false) – If true, periodically commit to ZooKeeper the offsets of messages already fetched by the consumer. After a failure, the new consumer begins from this committed offset.
  • groupId (default: flume) – The unique identifier of the Kafka consumer group. Set the same groupId in all sources to indicate that they belong to the same consumer group.

*Required

Any other properties of the Kafka consumer can be passed by adding the kafka. prefix to the property name.
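For example, a standard consumer setting such as auto.offset.reset can be passed through as follows (the value smallest, which makes a new consumer group start from the beginning of the topic, is illustrative):

tier1.sources.source1.kafka.auto.offset.reset = smallest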

You can declare the batch size in one of two ways: by specifying the size of the batch in terms of number of events (batchSize), or as a number of milliseconds (batchDurationMillis) to wait while receiving events from Kafka. In this manner, latency-based SLAs can be maintained for lower-volume flows.

Note: With any real-time ingestion or processing system there is a tradeoff between throughput and single-event processing latency. Processing a batch of events incurs some fixed overhead, so decreasing the batch size means that overhead is incurred more frequently; increasing it means events wait longer for the batch to fill, so per-event latency can suffer. You should experiment with different batch sizes to meet your latency and throughput SLAs.
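As a minimal illustration (the values are arbitrary), a lower-volume flow with a tight latency SLA might shrink both limits:

tier1.sources.source1.batchSize = 100
tier1.sources.source1.batchDurationMillis = 250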

By default, Flume uses the groupId “flume” when reading from Kafka. Adding multiple Flume sources with the same groupId means that each Flume agent receives a subset of the messages, which can increase throughput. It is best to have any other consumers outside of Flume use a separate groupId so as to avoid message loss.

Example: Event Processing During Ingest

Let’s take our example further and assume that you not only want to use Hadoop as a long-term persistence layer, but would also like to build a pipeline for performing arbitrary event processing. For this, Flume provides a key component called the interceptor, part of the Flume extensibility model. Interceptors have the following characteristics: they can

  • Inspect events as they pass between source and channel
  • Modify or drop events as required
  • Be chained together to form a processing pipeline
  • Execute any custom code within the event processing

You can use Flume interceptors to do a variety of processing against incoming events as they pass through the system. In this example, you’ll be calculating a simple “Travel Score” to attempt to identify whether a banking customer is traveling while using their debit card. The exact use case is fabricated, but the architecture can be used to apply virtually any online model or scoring while returning results in sub-second times. Other uses of the interceptor could include:

  • Inspecting the content of the message for proper routing to a particular location such as by geo region
  • Calculating a streaming TopN list
  • Calling out to a machine-learning serving layer
  • Event enrichment / augmentation
  • In-flight data masking

Thus you can essentially deploy a Hadoop-enabled Kafka consumer group with built-in metrics and manageability via Cloudera Manager, since any Java code, such as a Spring Integration or Apache Camel flow, can be dropped into the interceptor.

(Note: For complex stream processing use cases, Spark Streaming provides the most flexible and feature-rich execution engine. Flume interceptors provide a great way to process events with very low latency and minimal complexity. For per-event response latencies under 50 ms, building a custom application is the right choice.)

To do any meaningful processing of the event as it arrives, you need to enrich the incoming transaction with information from your other systems. For that, call Apache HBase to get additional values related to the transaction and modify the record to reflect the results of the processing performed by the interceptor.
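Concretely, an interceptor is a small Java class implementing org.apache.flume.interceptor.Interceptor. The sketch below is illustrative only: the package, class name, response topic, and the stubbed-out HBase lookup and scoring logic are assumptions rather than the example project's actual code.

package com.example.flafka; // hypothetical package

import java.nio.charset.StandardCharsets;
import java.util.List;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

public class TransactionEnrichmentInterceptor implements Interceptor {

  @Override
  public void initialize() {
    // Open the HBase connection used for customer lookups (omitted here).
  }

  @Override
  public Event intercept(Event event) {
    String txn = new String(event.getBody(), StandardCharsets.UTF_8);
    // Enrich the transaction with reference data from HBase and append a
    // computed travel score; the lookup and the model are stubbed out.
    String enriched = txn + "," + travelScore(txn);
    event.setBody(enriched.getBytes(StandardCharsets.UTF_8));
    // The Flume Kafka sink honors a per-event "topic" header, so the
    // interceptor can route results to a response topic (name illustrative).
    event.getHeaders().put("topic", "transaction-results");
    return event;
  }

  @Override
  public List<Event> intercept(List<Event> events) {
    for (Event event : events) {
      intercept(event);
    }
    return events;
  }

  @Override
  public void close() {
    // Close the HBase connection (omitted).
  }

  private double travelScore(String txn) {
    return 0.0; // placeholder for the scoring logic
  }

  /** Flume instantiates interceptors through a Builder. */
  public static class Builder implements Interceptor.Builder {
    @Override
    public Interceptor build() {
      return new TransactionEnrichmentInterceptor();
    }

    @Override
    public void configure(Context context) {
      // Read any interceptor settings from the agent configuration if needed.
    }
  }
}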

Now you can write your event directly to HDFS as before, or back to Kafka, where it could be picked up by other systems or used for more comprehensive stream processing. In this case, you’ll return it directly back to Kafka so that the authorization result can be immediately returned to the client.

The updated Flume configuration looks like this:
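A sketch of such a configuration is shown below; the interceptor class, broker hostnames, and topic names continue the illustrative stand-ins used earlier rather than the example project's actual values.

tier1.sources  = source1
tier1.channels = channel1
tier1.sinks    = sink1

# Kafka source reading the raw transactions
tier1.sources.source1.type = org.apache.flume.source.kafka.KafkaSource
tier1.sources.source1.zookeeperConnect = zk01.example.com:2181
tier1.sources.source1.topic = transactions
tier1.sources.source1.channels = channel1

# Interceptor performing the HBase lookup and travel-score calculation
tier1.sources.source1.interceptors = i1
tier1.sources.source1.interceptors.i1.type = com.example.flafka.TransactionEnrichmentInterceptor$Builder

tier1.channels.channel1.type = memory
tier1.channels.channel1.capacity = 10000
tier1.channels.channel1.transactionCapacity = 1000

# Kafka sink returning the authorization result to Kafka
tier1.sinks.sink1.type = org.apache.flume.sink.kafka.KafkaSink
tier1.sinks.sink1.brokerList = kafka01.example.com:9092,kafka02.example.com:9092
# Default topic; a per-event "topic" header set by the interceptor overrides it
tier1.sinks.sink1.topic = transaction-results
tier1.sinks.sink1.requiredAcks = -1
tier1.sinks.sink1.batchSize = 100
tier1.sinks.sink1.channel = channel1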


Configuring the Flafka sink is as easy as configuring the source, with just a few declarations needed. The interceptor also needs only a few lines of configuration. Once configuration is done, place the project JAR in the Flume classpath, restart the agent, and the pipeline is ready to go.

Like the source, the sink supports passing additional configuration to the Kafka producer by using the kafka. prefix. The sink supports the following properties:

  • type* (no default) – Must be set to org.apache.flume.sink.kafka.KafkaSink.
  • brokerList* (no default) – The brokers the Kafka sink uses to discover topic partitions, formatted as a comma-separated list of hostname:port entries. You do not need to specify the entire list of brokers, but Cloudera recommends that you specify at least two for HA.
  • topic (default: default-flume-topic) – The Kafka topic to which messages are published by default. If the event header contains a topic field, the event is published to the designated topic, overriding the configured topic.
  • batchSize (default: 100) – The number of messages to process in a single batch. Specifying a larger batchSize can improve throughput but increases latency.
  • requiredAcks (default: 1) – The number of replicas that must acknowledge a message before it is written successfully. Possible values are 0 (do not wait for an acknowledgement), 1 (wait for the leader to acknowledge only), and -1 (wait for all replicas to acknowledge). To avoid potential loss of data in case of a leader failure, set this to -1.

*Required

Furthermore, the sink supports the addition of per-event topic and key headers as set in the interceptor. As mentioned previously, if the source of the message is the Kafka source, the topic header will be set to the topic of the Flume source.
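For example, an interceptor could tag each event as follows; the header names "topic" and "key" are the ones described above, while the helper class and values are purely illustrative:

import org.apache.flume.Event;

// Illustrative helper (not from the example project): tags an event with the
// per-event "topic" and "key" headers that the Flume Kafka sink inspects.
public final class KafkaHeaders {
  public static Event tag(Event event, String topic, String key) {
    event.getHeaders().put("topic", topic); // overrides the sink's configured default topic
    event.getHeaders().put("key", key);     // Kafka uses the key to choose a partition
    return event;
  }
}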

In testing this simple scenario, we were able to achieve sub-150ms latency with one Flume agent, one Kafka partition, and one broker on a small three-node m2.2xlarge cluster in AWS.

Flume’s Kafka Channel

The recent commit of FLUME-2500 introduces Kafka as a channel in Flume in addition to the traditional file and memory channels. This functionality will be available in CDH 5.3/Flume 1.6, and provides the ability to:

  • Write to Hadoop directly from Kafka without using a source
  • Be used as a reliable and highly available channel for any source/sink combination

The Flume memory channel does not protect against data loss in the event of agent failure, and when using the file channel, any data not yet written to a sink is unavailable until the agent is recovered. The Kafka channel addresses both of these limitations.

Utilizing a Flume source in front of the Kafka channel allows you to use interceptors and selectors before writing to Kafka, but the channel can also be used without a source at all, with producers writing straight into the channel’s topic and a sink draining it into Hadoop.

Building on our example to instead use the Kafka channel, the configuration might look like this:
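A sketch along those lines follows; as before, the hostnames, topic names, and interceptor class are illustrative stand-ins.

tier1.sources  = source1
tier1.channels = channel1
tier1.sinks    = sink1

# Source and interceptor as before
tier1.sources.source1.type = org.apache.flume.source.kafka.KafkaSource
tier1.sources.source1.zookeeperConnect = zk01.example.com:2181
tier1.sources.source1.topic = transactions
tier1.sources.source1.interceptors = i1
tier1.sources.source1.interceptors.i1.type = com.example.flafka.TransactionEnrichmentInterceptor$Builder
tier1.sources.source1.channels = channel1

# Kafka channel: enriched events are persisted to a Kafka topic...
# (parseAsFlumeEvent stays true because a Flume source writes into the channel)
tier1.channels.channel1.type = org.apache.flume.channel.kafka.KafkaChannel
tier1.channels.channel1.brokerList = kafka01.example.com:9092,kafka02.example.com:9092
tier1.channels.channel1.zookeeperConnect = zk01.example.com:2181
tier1.channels.channel1.topic = enriched-transactions
tier1.channels.channel1.parseAsFlumeEvent = true

# ...and drained from that topic into HDFS
tier1.sinks.sink1.type = hdfs
tier1.sinks.sink1.hdfs.path = /flafka/enriched/%y-%m-%d
tier1.sinks.sink1.hdfs.fileType = DataStream
tier1.sinks.sink1.hdfs.useLocalTimeStamp = true
tier1.sinks.sink1.channel = channel1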


Using this configuration, your enriched transaction would go directly to Kafka and then on to HDFS using the HDFS sink.

The Kafka channel implements both a Kafka consumer and producer and is configured as follows.

  • type* (no default) – Must be set to org.apache.flume.channel.kafka.KafkaChannel.
  • brokerList* (no default) – The brokers the Kafka channel uses to discover topic partitions, formatted as a comma-separated list of hostname:port entries. You do not need to specify the entire list of brokers, but Cloudera recommends that you specify at least two for HA.
  • zookeeperConnect* (no default) – The URI of the ZooKeeper server or quorum used by Kafka. This can be a single node (for example, zk01.example.com:2181) or a comma-separated list of nodes in a ZooKeeper quorum (for example, zk01.example.com:2181,zk02.example.com:2181,zk03.example.com:2181). If you have created a path in ZooKeeper for storing Kafka data, specify the path in the last entry in the list (for example, zk01.example.com:2181,zk02.example.com:2181,zk03.example.com:2181/kafka). Use the /kafka ZooKeeper path for Cloudera Labs Kafka, because it is created automatically at installation.
  • topic (default: flume-channel) – The Kafka topic the channel will use.
  • groupId (default: flume) – The consumer group ID the channel uses to register with Kafka.
  • parseAsFlumeEvent (default: true) – Set to true if a Flume source is writing to the channel and expects Avro datums with the FlumeEvent schema (org.apache.flume.source.avro.AvroFlumeEvent) in the channel. Set to false if other producers are writing to the topic that the channel is using.
  • readSmallestOffset (default: false) – If true, reads all data in the topic; if false, reads only data written after the channel has started. Only relevant when parseAsFlumeEvent is false.
  • consumer.timeout.ms (default: 100) – Kafka's kafka.consumer.timeout.ms (the polling interval when writing to the sink).

*Required

As with the source and sink, other Kafka properties can be overridden by supplying them with the kafka. prefix.

When parseAsFlumeEvent is set to true, any other consumers reading from the channel’s topic will need the FlumeEvent class mentioned in the table above, because the channel serializes each event as an AvroFlumeEvent. To provide reliability, configure multiple agents with the same topic and groupId for the channel, so that when one agent fails, the others can continue to drain events from the channel. The producer mode is always set to sync (required acks -1), and auto.commit.enabled is always overridden to false.

As the Kafka sink and the Kafka channel provide overlapping functionality, our recommendations are as follows:

  • If you are ingesting from Kafka to Hadoop and need the capabilities of an interceptor or selector, use the Kafka source with a file or Kafka channel and whatever standard Flume sink you require.
  • If you want to ingest directly from Kafka to HDFS, then the Kafka channel by itself is recommended (a minimal source-less configuration is sketched after this list).
  • For writing events to Kafka from Flume, the Kafka channel is recommended.
  • If you can’t wait until CDH 5.3/Flume 1.6, the Kafka sink provides this functionality today.
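As a sketch of that source-less pattern (hostnames, topic, and HDFS path are illustrative), an agent can consist of nothing but a Kafka channel and an HDFS sink; parseAsFlumeEvent is set to false because ordinary Kafka producers, not a Flume source, write the events:

tier1.channels = channel1
tier1.sinks    = sink1

# No source: Kafka producers write straight into the channel's topic
tier1.channels.channel1.type = org.apache.flume.channel.kafka.KafkaChannel
tier1.channels.channel1.brokerList = kafka01.example.com:9092,kafka02.example.com:9092
tier1.channels.channel1.zookeeperConnect = zk01.example.com:2181
tier1.channels.channel1.topic = transactions
tier1.channels.channel1.parseAsFlumeEvent = false

tier1.sinks.sink1.type = hdfs
tier1.sinks.sink1.hdfs.path = /flafka/transactions/%y-%m-%d
tier1.sinks.sink1.hdfs.fileType = DataStream
tier1.sinks.sink1.hdfs.useLocalTimeStamp = true
tier1.sinks.sink1.channel = channel1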

Conclusion

Flafka provides a lot of flexibility in pipeline architecture. The right combination of options will depend on your requirements.

We hope this post demonstrates the ease of use of Flafka, and that implementing fairly sophisticated event processing with sub-second latencies doesn’t necessarily require a dedicated stream-processing system.

Gwen Shapira is a Software Engineer at Cloudera, and a Kafka contributor.

Jeff Holoman is a Systems Engineer at Cloudera.
