Flink Official Documentation Notes 12: Data Pipelines & ETL

The official docs cover an enormous amount of material, but I'm not worried: I expect this series to run to at least a hundred posts. :)

Data Pipelines & ETL 数据管道和ETL工作

One very common use case for Apache Flink is to implement ETL (extract, transform, load) pipelines that take data from one or more sources, perform some transformations and/or enrichments, and then store the results somewhere.

In this section we are going to look at how to use Flink’s DataStream API to implement this kind of application.

Note that Flink’s Table and SQL APIs are well suited for many ETL use cases.

But regardless of whether you ultimately use the DataStream API directly, or not, having a solid understanding of the basics presented here will prove valuable.

Stateless Transformations

This section covers map() and flatmap(), the basic operations used to implement stateless transformations.

The examples in this section assume you are familiar with the Taxi Ride data used in the hands-on exercises in the flink-training repo. (That link is dead but still posted on the official site, which is annoying.)

map()

In the first exercise you filtered a stream of taxi ride events.

In that same code base there’s a GeoUtils class that provides a static method GeoUtils.mapToGridCell(float lon, float lat) which maps a location (longitude, latitude) to a grid cell that refers to an area that is approximately 100x100 meters in size.
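The real implementation lives in the flink-training repo; a minimal sketch of the idea, with an assumed NYC bounding box and illustrative cell counts (not the actual GeoUtils code), might look like this:

public class GridCells {
    // assumed NYC bounding box and grid resolution, illustrative only
    private static final double LON_WEST = -74.05, LON_EAST = -73.70;
    private static final double LAT_SOUTH = 40.50, LAT_NORTH = 40.92;
    private static final int GRID_X = 300; // columns of roughly 100m-wide cells
    private static final int GRID_Y = 450; // rows of roughly 100m-tall cells

    public static int mapToGridCell(float lon, float lat) {
        int x = (int) ((lon - LON_WEST) / (LON_EAST - LON_WEST) * GRID_X);
        int y = (int) ((LAT_NORTH - lat) / (LAT_NORTH - LAT_SOUTH) * GRID_Y);
        return y * GRID_X + x; // a unique id per cell, row-major
    }
}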

Now let’s enrich our stream of taxi ride objects by adding startCell and endCell fields to each event.

You can create an EnrichedRide object that extends TaxiRide, adding these fields:

public static class EnrichedRide extends TaxiRide {
    public int startCell;
    public int endCell;

    public EnrichedRide() {}

    public EnrichedRide(TaxiRide ride) {
        this.rideId = ride.rideId;
        this.isStart = ride.isStart;
        ...
        this.startCell = GeoUtils.mapToGridCell(ride.startLon, ride.startLat);
        this.endCell = GeoUtils.mapToGridCell(ride.endLon, ride.endLat);
    }

    public String toString() {
        return super.toString() + "," +
            Integer.toString(this.startCell) + "," +
            Integer.toString(this.endCell);
    }
}

You can then create an application that transforms the stream

DataStream<TaxiRide> rides = env.addSource(new TaxiRideSource(...));

DataStream<EnrichedRide> enrichedNYCRides = rides
    .filter(new RideCleansingSolution.NYCFilter())
    .map(new Enrichment());

enrichedNYCRides.print();

with this MapFunction:

public static class Enrichment implements MapFunction<TaxiRide, EnrichedRide> {

    @Override
    public EnrichedRide map(TaxiRide taxiRide) throws Exception {
        return new EnrichedRide(taxiRide);
    }
}

flatmap()

A MapFunction is suitable only when performing a one-to-one transformation: for each and every stream element coming in, map() will emit one transformed element. Otherwise, you will want to use flatmap(). With the Collector provided in the FlatMapFunction interface, the flatmap() method can emit as many stream elements as you like, including none at all.
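For example, the NYCEnrichment used below can be written as a FlatMapFunction that filters and enriches in a single step. A minimal sketch, assuming the NYCFilter from the ride-cleansing exercise:

public static class NYCEnrichment implements FlatMapFunction<TaxiRide, EnrichedRide> {

    @Override
    public void flatMap(TaxiRide taxiRide, Collector<EnrichedRide> out) throws Exception {
        FilterFunction<TaxiRide> valid = new RideCleansingSolution.NYCFilter();
        // emit zero or one element per input: drop rides outside NYC,
        // otherwise emit an enriched copy
        if (valid.filter(taxiRide)) {
            out.collect(new EnrichedRide(taxiRide));
        }
    }
}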

Keyed Streams

keyBy()

It is often very useful to be able to partition a stream around one of its attributes, so that all events with the same value of that attribute are grouped together.

For example, suppose you wanted to find the longest taxi rides starting in each of the grid cells.

Thinking in terms of a SQL query, this would mean doing some sort of GROUP BY with the startCell, while in Flink this is done with keyBy(KeySelector):

rides
    .flatMap(new NYCEnrichment())
    .keyBy("startCell")

Every keyBy causes a network shuffle that repartitions the stream.

In general this is pretty expensive, since it involves network communication along with serialization and deserialization.
[figure: keyBy() repartitioning the stream across the network]

In the example above, the key has been specified by a field name, “startCell”.

This style of key selection has the drawback that the compiler is unable to infer the type of the field being used for keying, and so Flink will pass around the key values as Tuples, which can be awkward.

It is better to use a properly typed KeySelector, e.g.,

rides
    .flatMap(new NYCEnrichment())
    .keyBy(
        new KeySelector<EnrichedRide, int>() {

            @Override
            public int getKey(EnrichedRide enrichedRide) throws Exception {
                return enrichedRide.startCell;
            }
        })

which can be more succinctly expressed with a lambda:

rides
    .flatMap(new NYCEnrichment())
    .keyBy(enrichedRide -> enrichedRide.startCell)

Keys are computed

KeySelectors aren’t limited to extracting a key from your events.

They can, instead, compute the key in whatever way you want, so long as the resulting key is deterministic, and has valid implementations of hashCode() and equals().

This restriction rules out KeySelectors that generate random numbers, or that return Arrays or Enums, but you can have composite keys using Tuples or POJOs, for example, so long as their elements follow these same rules.
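For instance, a composite key of (startCell, isStart) could be built with a Tuple2. A sketch (an anonymous KeySelector is used so Flink can infer the key type, which Java's erasure hides from lambdas):

enrichedNYCRides
    .keyBy(new KeySelector<EnrichedRide, Tuple2<Integer, Boolean>>() {
        @Override
        public Tuple2<Integer, Boolean> getKey(EnrichedRide ride) {
            // both elements are deterministic and have valid hashCode()/equals()
            return Tuple2.of(ride.startCell, ride.isStart);
        }
    })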

The keys must be produced in a deterministic way, because they are recomputed whenever they are needed, rather than being attached to the stream records.

For example, rather than creating a new EnrichedRide class with a startCell field that we then use as a key via

keyBy(enrichedRide -> enrichedRide.startCell)

we could do this, instead:

keyBy(ride -> GeoUtils.mapToGridCell(ride.startLon, ride.startLat))

Aggregations on Keyed Streams

This bit of code creates a new stream of tuples containing the startCell and duration (in minutes) for each end-of-ride event:

import org.joda.time.Interval;
import org.joda.time.Minutes;

DataStream<Tuple2<Integer, Minutes>> minutesByStartCell = enrichedNYCRides
    .flatMap(new FlatMapFunction<EnrichedRide, Tuple2<Integer, Minutes>>() {

        @Override
        public void flatMap(EnrichedRide ride,
                            Collector<Tuple2<Integer, Minutes>> out) throws Exception {
            if (!ride.isStart) {
                Interval rideInterval = new Interval(ride.startTime, ride.endTime);
                Minutes duration = rideInterval.toDuration().toStandardMinutes();
                out.collect(new Tuple2<>(ride.startCell, duration));
            }
        }
    });

Now it is possible to produce a stream that contains only those rides that are the longest rides ever seen (to that point) for each startCell.

There are a variety of ways that the field to use as the key can be expressed.

Earlier you saw an example with an EnrichedRide POJO, where the field to use as the key was specified with its name.

This case involves Tuple2 objects, and the index within the tuple (starting from 0) is used to specify the key.

minutesByStartCell
  .keyBy(0) // startCell
  .maxBy(1) // duration
  .print();

The output stream now contains a record for each key every time the duration reaches a new maximum – as shown here with cell 50797:

...
4> (64549,5M)
4> (46298,18M)
1> (51549,14M)
1> (53043,13M)
1> (56031,22M)
1> (50797,6M)
...
1> (50797,8M)
...
1> (50797,11M)
...
1> (50797,12M)

(Implicit) State

This is the first example in this training that involves stateful streaming.

Though the state is being handled transparently, Flink has to keep track of the maximum duration for each distinct key.

Whenever state gets involved in your application, you should think about how large the state might become.

Whenever the key space is unbounded, then so is the amount of state Flink will need.

When working with streams, it generally makes more sense to think in terms of aggregations over finite windows, rather than over the entire stream.
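For instance, the running-maximum job above could instead compute the longest ride per startCell within each hour, which bounds the state. A rough sketch (windows are covered properly in a later section; processing-time windows are used here purely for illustration):

minutesByStartCell
    .keyBy(0) // startCell
    .window(TumblingProcessingTimeWindows.of(Time.hours(1)))
    .maxBy(1) // longest ride within each one-hour window
    .print();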

reduce() and other aggregators

maxBy(), used above, is just one example of a number of aggregator functions available on Flink’s KeyedStreams.

There is also a more general purpose reduce() function that you can use to implement your own custom aggregations.
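As a sketch, the running maximum computed with maxBy(1) above could also be expressed with reduce():

minutesByStartCell
    .keyBy(0) // startCell
    // keep whichever tuple has the longer duration for each key
    .reduce((t1, t2) -> t1.f1.isGreaterThan(t2.f1) ? t1 : t2)
    .print();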

Stateful Transformations

Why is Flink Involved in Managing State?

Your applications are certainly capable of using state without getting Flink involved in managing it – but Flink offers some compelling features for the state it manages:

  • local: Flink state is kept local to the machine that processes it, and can be accessed at memory speed
  • durable: Flink state is fault-tolerant, i.e., it is automatically checkpointed at regular intervals, and is restored upon failure
  • vertically scalable: Flink state can be kept in embedded RocksDB instances that scale by adding more local disk
  • horizontally scalable: Flink state is redistributed as your cluster grows and shrinks
  • queryable: Flink state can be queried externally via the Queryable State API.

In this section you will learn how to work with Flink’s APIs that manage keyed state.

Rich Functions

At this point you have already seen several of Flink’s function interfaces, including FilterFunction, MapFunction, and FlatMapFunction.

These are all examples of the Single Abstract Method pattern.

For each of these interfaces, Flink also provides a so-called “rich” variant, e.g., RichFlatMapFunction, which has some additional methods, including:

  • open(Configuration c)
  • close()
  • getRuntimeContext()

open() is called once, during operator initialization.

This is an opportunity to load some static data, or to open a connection to an external service, for example.
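A sketch of that pattern, with a purely hypothetical lookup table (nothing like this appears in the exercises):

public static class CellNameEnricher extends RichMapFunction<EnrichedRide, String> {
    private transient Map<Integer, String> cellNames; // hypothetical reference data

    @Override
    public void open(Configuration conf) {
        // called once per parallel instance, before any elements are processed;
        // a good place to load static data or open a connection
        cellNames = new HashMap<>();
        cellNames.put(50797, "Midtown"); // illustrative entry only
    }

    @Override
    public String map(EnrichedRide ride) {
        return cellNames.getOrDefault(ride.startCell, "unknown cell");
    }
}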

getRuntimeContext() provides access to a whole suite of potentially interesting things, but most notably it is how you can create and access state managed by Flink.

An Example with Keyed State

In this example, imagine you have a stream of events that you want to de-duplicate, so that you only keep the first event with each key.

Here’s an application that does that, using a RichFlatMapFunction called Deduplicator:

private static class Event {
    public final String key;
    public final long timestamp;
    ...
}

public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
  
    env.addSource(new EventSource())
        .keyBy(e -> e.key)
        .flatMap(new Deduplicator())
        .print();
  
    env.execute();
}

To accomplish this, Deduplicator will need to somehow remember, for each key, whether or not there has already been an event for that key.

It will do so using Flink’s keyed state interface.

When you are working with a keyed stream like this one, Flink will maintain a key/value store for each item of state being managed.

Flink supports several different types of keyed state, and this example uses the simplest one, namely ValueState.

This means that for each key, Flink will store a single object – in this case, an object of type Boolean.

Our Deduplicator class has two methods: open() and flatMap().

The open method establishes the use of managed state by defining a ValueStateDescriptor<Boolean>.

The arguments to the constructor specify a name for this item of keyed state (“keyHasBeenSeen”), and provide information that can be used to serialize these objects (in this case, Types.BOOLEAN).

public static class Deduplicator extends RichFlatMapFunction<Event, Event> {
    ValueState<Boolean> keyHasBeenSeen;

    @Override
    public void open(Configuration conf) {
        ValueStateDescriptor<Boolean> desc = new ValueStateDescriptor<>("keyHasBeenSeen", Types.BOOLEAN);
        keyHasBeenSeen = getRuntimeContext().getState(desc);
    }

    @Override
    public void flatMap(Event event, Collector<Event> out) throws Exception {
        if (keyHasBeenSeen.value() == null) {
            out.collect(event);
            keyHasBeenSeen.update(true);
        }
    }
}

When the flatMap method calls keyHasBeenSeen.value(), Flink’s runtime looks up the value of this piece of state for the key in context, and only if it is null does it go ahead and collect the event to the output.

It also updates keyHasBeenSeen to true in this case.

This mechanism for accessing and updating key-partitioned state may seem rather magical, since the key is not explicitly visible in the implementation of our Deduplicator.

When Flink’s runtime calls the open method of our RichFlatMapFunction, there is no event, and thus no key in context at that moment.

But when it calls the flatMap method, the key for the event being processed is available to the runtime, and is used behind the scenes to determine which entry in Flink’s state backend is being operated on.

When deployed to a distributed cluster, there will be many instances of this Deduplicator, each of which will be responsible for a disjoint subset of the entire keyspace.

Thus, when you see a single item of ValueState, such as

ValueState<Boolean> keyHasBeenSeen;

understand that this represents not just a single Boolean, but rather a distributed, sharded, key/value store.

Clearing State

There’s a potential problem with the example above: What will happen if the key space is unbounded?

Flink is storing somewhere an instance of Boolean for every distinct key that is used.

If there’s a bounded set of keys then this will be fine, but in applications where the set of keys is growing in an unbounded way, it’s necessary to clear the state for keys that are no longer needed.

This is done by calling clear() on the state object, as in:

keyHasBeenSeen.clear()

You might want to do this, for example, after a period of inactivity for a given key.

You’ll see how to use Timers to do this when you learn about ProcessFunctions in the section on event-driven applications.

There’s also a State Time-to-Live (TTL) option that you can configure with the state descriptor that specifies when you want the state for stale keys to be automatically cleared.
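A sketch of what that might look like for the Deduplicator's descriptor (the one-day TTL is an assumed value):

StateTtlConfig ttlConfig = StateTtlConfig
    .newBuilder(Time.days(1)) // entries expire one day after they were last written
    .build();

ValueStateDescriptor<Boolean> desc = new ValueStateDescriptor<>("keyHasBeenSeen", Types.BOOLEAN);
desc.enableTimeToLive(ttlConfig);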

Non-keyed State

It is also possible to work with managed state in non-keyed contexts.

This is sometimes called operator state.

The interfaces involved are somewhat different, and since it is unusual for user-defined functions to need non-keyed state, it is not covered here.

This feature is most often used in the implementation of sources and sinks.

Connected Streams

Sometimes instead of applying a pre-defined transformation like this:
[figure: a single stream flowing through one pre-defined operator]
you want to be able to dynamically alter some aspects of the transformation – by streaming in thresholds, or rules, or other parameters.

The pattern in Flink that supports this is something called connected streams, wherein a single operator has two input streams, like this:
[figure: a single operator with two input streams (connected streams)]
Connected streams can also be used to implement streaming joins.

Example

In this example, a control stream is used to specify words which must be filtered out of the streamOfWords.

A RichCoFlatMapFunction called ControlFunction is applied to the connected streams to get this done.

public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    DataStream<String> control = env.fromElements("DROP", "IGNORE").keyBy(x -> x);
    DataStream<String> streamOfWords = env.fromElements("Apache", "DROP", "Flink", "IGNORE").keyBy(x -> x);
  
    control
        .connect(streamOfWords)
        .flatMap(new ControlFunction())
        .print();

    env.execute();
}

Note that the two streams being connected must be keyed in compatible ways.

The role of a keyBy is to partition a stream’s data, and when keyed streams are connected, they must be partitioned in the same way.

This ensures that all of the events from both streams with the same key are sent to the same instance. This makes it possible, then, to join the two streams on that key, for example.

In this case the streams are both of type DataStream<String>, and both streams are keyed by the string.

As you will see below, this RichCoFlatMapFunction is storing a Boolean value in keyed state, and this Boolean is shared by the two streams.

public static class ControlFunction extends RichCoFlatMapFunction<String, String, String> {
    private ValueState<Boolean> blocked;
      
    @Override
    public void open(Configuration config) {
        blocked = getRuntimeContext().getState(new ValueStateDescriptor<>("blocked", Boolean.class));
    }
      
    @Override
    public void flatMap1(String control_value, Collector<String> out) throws Exception {
        blocked.update(Boolean.TRUE);
    }
      
    @Override
    public void flatMap2(String data_value, Collector<String> out) throws Exception {
        if (blocked.value() == null) {
            out.collect(data_value);
        }
    }
}

A RichCoFlatMapFunction is a kind of FlatMapFunction that can be applied to a pair of connected streams, and it has access to the rich function interface.

This means that it can be made stateful.

The blocked Boolean is being used to remember the keys (words, in this case) that have been mentioned on the control stream, and those words are being filtered out of the streamOfWords stream.

This is keyed state, and it is shared between the two streams, which is why the two streams have to share the same keyspace.

flatMap1 and flatMap2 are called by the Flink runtime with elements from each of the two connected streams – in our case, elements from the control stream are passed into flatMap1, and elements from streamOfWords are passed into flatMap2.

This was determined by the order in which the two streams were connected, with control.connect(streamOfWords).

It is important to recognize that you have no control over the order in which the flatMap1 and flatMap2 callbacks are called.

These two input streams are racing against each other, and the Flink runtime will do what it wants to regarding consuming events from one stream or the other.

In cases where timing and/or ordering matter, you may find it necessary to buffer events in managed Flink state until your application is ready to process them.
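A sketch of that buffering pattern (hypothetical, not from this example): data elements that arrive before any control element are parked in a ListState and flushed once the control stream catches up.

public static class BufferingControlFunction extends RichCoFlatMapFunction<String, String, String> {
    private ValueState<Boolean> ready;
    private ListState<String> buffer;

    @Override
    public void open(Configuration config) {
        ready = getRuntimeContext().getState(new ValueStateDescriptor<>("ready", Boolean.class));
        buffer = getRuntimeContext().getListState(new ListStateDescriptor<>("buffer", String.class));
    }

    @Override
    public void flatMap1(String controlValue, Collector<String> out) throws Exception {
        ready.update(Boolean.TRUE);
        for (String parked : buffer.get()) { // flush anything that arrived early
            out.collect(parked);
        }
        buffer.clear();
    }

    @Override
    public void flatMap2(String dataValue, Collector<String> out) throws Exception {
        if (ready.value() != null) {
            out.collect(dataValue);
        } else {
            buffer.add(dataValue); // not ready yet; park it in managed state
        }
    }
}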

(Note: if you are truly desperate, it is possible to exert some limited control over the order in which a two-input operator consumes its inputs by using a custom Operator that implements the InputSelectable interface.)

Hands-on

The hands-on exercise that goes with this section is the Rides and Fares Exercise.

Basically all of these links are dead, but the official site hasn't updated them yet. Still, we're only in the introductory stage of Flink here; what you've read so far covers maybe less than 10% of it, so keep going.
