Building Flink 1.10 from Source (CDH Version)

Flink 1.10 ships a number of new features.

Flink 1.10.0 has been officially released! As the community's largest release to date, Flink 1.10 incorporates the work of more than 200 contributors across more than 1200 issues, including significant improvements to the overall performance and stability of Flink jobs, an initial integration with native Kubernetes, and major improvements to Python support (PyFlink).

Flink 1.10 also marks the completion of the Blink[1] integration: with production-grade Hive integration and full TPC-DS coverage, Flink now pairs its strengthened streaming SQL with mature batch-processing capabilities.

As a real-time computing enthusiast, I wanted to try out Flink on Kubernetes (beta) and the Flink/Hive integration, so let's start with compiling the source.

Part 1: Build Environment

1. Java (version 1.8 or later; here is my environment, as reported by java -version):

java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)

2. Maven (at least Maven 3; here is my environment, as reported by mvn -version):

Apache Maven 3.5.0 (ff8f5e7444045639af65f6095c62210b5713f426; 2017-04-04T03:39:06+08:00)
Maven home: D:\maven\apache-maven-3.5.0
Java version: 1.8.0_144, vendor: Oracle Corporation
Java home: C:\Program Files\Java\jdk1.8.0_144\jre
Default locale: zh_CN, platform encoding: GBK
OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"

It's best to point the mirror repositories in apache-maven-3.5.0\conf\settings.xml at mirrors located in China. Below are the ones I use (some of them never get hit, but better safe than sorry):

<mirrors>
	<mirror>
		<id>alimaven</id>
		<mirrorOf>central</mirrorOf>
		<name>aliyun maven</name>
		<url>http://maven.aliyun.com/nexus/content/repositories/central/</url>
	</mirror>
	<mirror>
		<id>alimaven-public</id>
		<name>aliyun maven</name>
		<url>http://maven.aliyun.com/nexus/content/groups/public/</url>
		<mirrorOf>central</mirrorOf>
	</mirror>
	<mirror>
		<id>central</id>
		<name>Maven Repository Switchboard</name>
		<url>http://repo1.maven.org/maven2/</url>
		<mirrorOf>central</mirrorOf>
	</mirror>
	<mirror>
		<id>repo2</id>
		<mirrorOf>central</mirrorOf>
		<name>Human Readable Name for this Mirror.</name>
		<url>http://repo2.maven.org/maven2/</url>
	</mirror>
	<mirror>
		<id>ibiblio</id>
		<mirrorOf>central</mirrorOf>
		<name>Human Readable Name for this Mirror.</name>
		<url>http://mirrors.ibiblio.org/pub/mirrors/maven2/</url>
	</mirror>
	<mirror>
		<id>jboss-public-repository-group</id>
		<mirrorOf>central</mirrorOf>
		<name>JBoss Public Repository Group</name>
		<url>http://repository.jboss.org/nexus/content/groups/public</url>
	</mirror>
	<mirror>
		<id>google-maven-central</id>
		<name>Google Maven Central</name>
		<url>https://maven-central.storage.googleapis.com</url>
		<mirrorOf>central</mirrorOf>
	</mirror>
	<!-- Chinese mirror of the central repository -->
	<mirror>
		<id>maven.net.cn</id>
		<name>one of the central mirrors in China</name>
		<url>http://maven.net.cn/content/groups/public/</url>
		<mirrorOf>central</mirrorOf>
	</mirror>
</mirrors>
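
To confirm Maven actually picked these mirrors up, you can dump the effective settings with the standard maven-help-plugin goal; the output should show the mirrors merged into the active configuration:

mvn help:effective-settings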

Part 2: Building the Source

1. Download the source from GitHub:

git clone https://github.com/apache/flink.git

2. The default branch is master; switch to the community release branch:

git checkout release-1.10

3. Compile and package.

Note: on Windows, the source must live in a path containing only ASCII (English) characters, and it's best to run the mvn command from Git Bash.

mvn clean install -DskipTests -Dfast -Drat.skip=true -Dhadoop.version=2.6.0-cdh5.7.0 -Pvendor-repos -Pinclude-hadoop -Dscala-2.11 -T2C

What the flags mean (see the profile sketch after this list):

-Dfast                           activates the "fast" profile in Flink's root pom.xml, which bundles a number of
                                 build-time skips: the Apache RAT license-header check, code-style checks, javadoc
                                 generation, and so on (see pom.xml for details)
install                          Maven's install command: installs the built artifacts into the local repository
-T2C                             parallel build using 2 threads per CPU core; recommended with Maven 3.3 or later
-Pinclude-hadoop                 bundles the shaded Hadoop jar into lib/
-Pvendor-repos                   adds the vendor Maven repositories; required when building against CDH or HDP Hadoop
-Dscala-2.11                     pins the Scala version to 2.11
-Dhadoop.version=2.6.0-cdh5.7.0  specifies the Hadoop version to build against
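
Incidentally, fast and vendor-repos are property-activated profiles, which is why -Dfast behaves like -Pfast: defining the property activates the profile. A minimal sketch of the pattern (the property name matches Flink's; the skip properties shown are illustrative, the authoritative list lives in Flink's own pom.xml):

<profile>
	<id>fast</id>
	<activation>
		<property>
			<name>fast</name>
		</property>
	</activation>
	<properties>
		<!-- illustrative build-time skips; Flink's pom defines the real set -->
		<rat.skip>true</rat.skip>
		<checkstyle.skip>true</checkstyle.skip>
		<maven.javadoc.skip>true</maven.javadoc.skip>
	</properties>
</profile>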

This first build attempt fails. I didn't save my own error output in time, so here is essentially the same error as reported elsewhere online:

[ERROR] Failed to execute goal on project flink-hadoop-fs: Could not resolve dependencies for project org.apache.flink:flink-hadoop-fs:jar:1.9-SNAPSHOT: Failed to collect dependencies at org.apache.flink:flink-shaded-hadoop-2:jar:2.6.0-cdh5.16.1-7.0: Failed to read artifact descriptor for org.apache.flink:flink-shaded-hadoop-2:jar:2.6.0-cdh5.16.1-7.0: Could not transfer artifact org.apache.flink:flink-shaded-hadoop-2:pom:2.6.0-cdh5.16.1-7.0 from/to mapr-releases (https://repository.mapr.com/maven/): sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :flink-hadoop-fs

The official documentation explains the cause clearly:

If the used Hadoop version is not listed on the download page (possibly due to being a Vendor-specific version), then it is necessary to build flink-shaded against this version. You can find the source code for this project in the Additional Components section of the download page.

Unlike Flink 1.7, recent Flink releases no longer offer pre-built downloads for specific Hadoop versions, so to build against a vendor's Hadoop you first have to build and install the flink-shaded project.
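
Before cloning, you can confirm that the shaded artifact really is unresolvable from the configured repositories, using the standard maven-dependency-plugin get goal (substitute the exact version string from your own error message; mine was 2.6.0-cdh5.7.0-9.0):

mvn dependency:get -Dartifact=org.apache.flink:flink-shaded-hadoop-2:2.6.0-cdh5.7.0-9.0

When the lookup fails, clone the project and build it yourself: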

git clone https://github.com/apache/flink-shaded.git

The suffix in the failing version string tells you which flink-shaded release you are missing: in 2.6.0-cdh5.7.0-9.0, the trailing -9.0 is the flink-shaded release. In my case that was 9.0, so I checked out the matching branch:

git checkout release-9.0
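
If you're unsure which releases exist, list the remote branches and tags first and pick the one matching the suffix from your error message:

git branch -r
git tag -l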

Next, modify the pom.xml of the flink-shaded project to add the vendor repositories (Cloudera and friends); without them the build may not find the CDH artifacts. Add the following inside <profiles></profiles>:

<profile>
	<id>vendor-repos</id>
	<activation>
		<property>
			<name>vendor-repos</name>
		</property>
	</activation>
	<!-- Add vendor maven repositories -->
	<repositories>
		<!-- Cloudera -->
		<repository>
			<id>cloudera-releases</id>
			<url>https://repository.cloudera.com/artifactory/cloudera-repos</url>
			<releases>
				<enabled>true</enabled>
			</releases>
			<snapshots>
				<enabled>false</enabled>
			</snapshots>
		</repository>
		<!-- Hortonworks -->
		<repository>
			<id>HDPReleases</id>
			<name>HDP Releases</name>
			<url>https://repo.hortonworks.com/content/repositories/releases/</url>
			<snapshots><enabled>false</enabled></snapshots>
			<releases><enabled>true</enabled></releases>
		</repository>
		<repository>
			<id>HortonworksJettyHadoop</id>
			<name>HDP Jetty</name>
			<url>https://repo.hortonworks.com/content/repositories/jetty-hadoop</url>
			<snapshots><enabled>false</enabled></snapshots>
			<releases><enabled>true</enabled></releases>
		</repository>
		<!-- MapR -->
		<repository>
			<id>mapr-releases</id>
			<url>https://repository.mapr.com/maven/</url>
			<snapshots><enabled>false</enabled></snapshots>
			<releases><enabled>true</enabled></releases>
		</repository>
	</repositories>
</profile>

Build and install flink-shaded:

mvn -T2C clean install -DskipTests -Pvendor-repos -Dhadoop.version=2.6.0-cdh5.7.0 -Dscala-2.11 -Drat.skip=true
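
If the install succeeds, the shaded uber artifact should now be in the local repository; the path below assumes Maven's default ~/.m2 layout and the version string from my build:

ls ~/.m2/repository/org/apache/flink/flink-shaded-hadoop-2-uber/2.6.0-cdh5.7.0-9.0/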

Building Flink again still fails, this time with:

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.commons.cli.Option.builder(Ljava/lang/String;)Lorg/apache/commons/cli/Option$Builder;

The cause: the packaged result depends on commons-cli 1.2, and the Option.builder(String) method does not exist in that version (it was only added in commons-cli 1.3), so the Option builder cannot be found. The fix is to add the following dependency to the pom.xml under .\flink-shaded\flink-shaded-hadoop-2-uber in the flink-shaded project:

<dependency>
	<groupId>commons-cli</groupId>
	<artifactId>commons-cli</artifactId>
	<version>1.3.1</version>
</dependency>
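
To double-check which commons-cli version the uber module now resolves, you can run the standard dependency:tree goal from the flink-shaded-hadoop-2-uber directory; it should now report 1.3.1 instead of 1.2:

mvn dependency:tree -Dincludes=commons-cli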

Build and install flink-shaded once more, then build Flink again.

After a long wait, the build finally succeeds!

[INFO] force-shading ...................................... SUCCESS [  3.343 s]
[INFO] flink .............................................. SUCCESS [  3.328 s]
[INFO] flink-annotations .................................. SUCCESS [ 10.915 s]
[INFO] flink-shaded-curator ............................... SUCCESS [ 12.723 s]
[INFO] flink-test-utils-parent ............................ SUCCESS [  2.037 s]
[INFO] flink-test-utils-junit ............................. SUCCESS [ 12.610 s]
[INFO] flink-metrics ...................................... SUCCESS [  2.053 s]
[INFO] flink-metrics-core ................................. SUCCESS [  7.314 s]
[INFO] flink-core ......................................... SUCCESS [ 54.281 s]
[INFO] flink-java ......................................... SUCCESS [ 28.574 s]
[INFO] flink-queryable-state .............................. SUCCESS [  0.713 s]
[INFO] flink-queryable-state-client-java .................. SUCCESS [  7.552 s]
[INFO] flink-filesystems .................................. SUCCESS [  0.445 s]
[INFO] flink-hadoop-fs .................................... SUCCESS [ 11.991 s]
[INFO] flink-runtime ...................................... SUCCESS [02:15 min]
[INFO] flink-scala ........................................ SUCCESS [03:21 min]
[INFO] flink-mapr-fs ...................................... SUCCESS [  9.544 s]
[INFO] flink-filesystems :: flink-fs-hadoop-shaded ........ SUCCESS [ 24.518 s]
[INFO] flink-s3-fs-base ................................... SUCCESS [ 11.900 s]
[INFO] flink-s3-fs-hadoop ................................. SUCCESS [ 34.749 s]
[INFO] flink-s3-fs-presto ................................. SUCCESS [ 50.441 s]
[INFO] flink-swift-fs-hadoop .............................. SUCCESS [01:29 min]
[INFO] flink-oss-fs-hadoop ................................ SUCCESS [ 51.787 s]
[INFO] flink-azure-fs-hadoop .............................. SUCCESS [01:13 min]
[INFO] flink-optimizer .................................... SUCCESS [ 13.774 s]
[INFO] flink-clients ...................................... SUCCESS [ 13.645 s]
[INFO] flink-streaming-java ............................... SUCCESS [ 51.465 s]
[INFO] flink-test-utils ................................... SUCCESS [ 22.206 s]
[INFO] flink-runtime-web .................................. SUCCESS [06:03 min]
[INFO] flink-examples ..................................... SUCCESS [  1.362 s]
[INFO] flink-examples-batch ............................... SUCCESS [01:53 min]
[INFO] flink-connectors ................................... SUCCESS [  1.493 s]
[INFO] flink-hadoop-compatibility ......................... SUCCESS [ 56.124 s]
[INFO] flink-state-backends ............................... SUCCESS [  2.543 s]
[INFO] flink-statebackend-rocksdb ......................... SUCCESS [ 24.221 s]
[INFO] flink-tests ........................................ SUCCESS [01:05 min]
[INFO] flink-streaming-scala .............................. SUCCESS [02:05 min]
[INFO] flink-table ........................................ SUCCESS [  2.068 s]
[INFO] flink-table-common ................................. SUCCESS [ 25.012 s]
[INFO] flink-table-api-java ............................... SUCCESS [ 15.671 s]
[INFO] flink-table-api-java-bridge ........................ SUCCESS [ 14.925 s]
[INFO] flink-table-api-scala .............................. SUCCESS [ 48.459 s]
[INFO] flink-table-api-scala-bridge ....................... SUCCESS [ 40.836 s]
[INFO] flink-sql-parser ................................... SUCCESS [ 39.344 s]
[INFO] flink-libraries .................................... SUCCESS [  1.694 s]
[INFO] flink-cep .......................................... SUCCESS [01:03 min]
[INFO] flink-table-planner ................................ SUCCESS [08:15 min]
[INFO] flink-table-runtime-blink .......................... SUCCESS [ 43.890 s]
[INFO] flink-table-planner-blink .......................... SUCCESS [10:40 min]
[INFO] flink-jdbc ......................................... SUCCESS [  6.761 s]
[INFO] flink-hbase ........................................ SUCCESS [ 33.155 s]
[INFO] flink-hcatalog ..................................... SUCCESS [ 55.609 s]
[INFO] flink-metrics-jmx .................................. SUCCESS [ 11.073 s]
[INFO] flink-formats ...................................... SUCCESS [  0.775 s]
[INFO] flink-json ......................................... SUCCESS [  9.052 s]
[INFO] flink-connector-kafka-base ......................... SUCCESS [ 14.071 s]
[INFO] flink-connector-kafka-0.9 .......................... SUCCESS [ 15.904 s]
[INFO] flink-connector-kafka-0.10 ......................... SUCCESS [  8.821 s]
[INFO] flink-connector-kafka-0.11 ......................... SUCCESS [  9.534 s]
[INFO] flink-connector-elasticsearch-base ................. SUCCESS [ 34.457 s]
[INFO] flink-connector-elasticsearch2 ..................... SUCCESS [ 52.031 s]
[INFO] flink-connector-elasticsearch5 ..................... SUCCESS [ 52.003 s]
[INFO] flink-connector-elasticsearch6 ..................... SUCCESS [ 24.629 s]
[INFO] flink-connector-elasticsearch7 ..................... SUCCESS [16:27 min]
[INFO] flink-orc .......................................... SUCCESS [ 14.825 s]
[INFO] flink-csv .......................................... SUCCESS [  7.521 s]
[INFO] flink-connector-hive ............................... SUCCESS [ 46.159 s]
[INFO] flink-connector-rabbitmq ........................... SUCCESS [ 12.222 s]
[INFO] flink-connector-twitter ............................ SUCCESS [ 21.067 s]
[INFO] flink-connector-nifi ............................... SUCCESS [  3.687 s]
[INFO] flink-connector-cassandra .......................... SUCCESS [ 32.981 s]
[INFO] flink-avro ......................................... SUCCESS [ 24.088 s]
[INFO] flink-connector-filesystem ......................... SUCCESS [ 24.861 s]
[INFO] flink-connector-kafka .............................. SUCCESS [ 16.213 s]
[INFO] flink-connector-gcp-pubsub ......................... SUCCESS [ 23.427 s]
[INFO] flink-connector-kinesis ............................ SUCCESS [01:07 min]
[INFO] flink-sql-connector-elasticsearch7 ................. SUCCESS [  8.729 s]
[INFO] flink-sql-connector-elasticsearch6 ................. SUCCESS [ 21.027 s]
[INFO] flink-sql-connector-kafka-0.9 ...................... SUCCESS [  3.010 s]
[INFO] flink-sql-connector-kafka-0.10 ..................... SUCCESS [  2.831 s]
[INFO] flink-sql-connector-kafka-0.11 ..................... SUCCESS [  4.504 s]
[INFO] flink-sql-connector-kafka .......................... SUCCESS [  5.899 s]
[INFO] flink-connector-kafka-0.8 .......................... SUCCESS [ 12.933 s]
[INFO] flink-avro-confluent-registry ...................... SUCCESS [ 22.432 s]
[INFO] flink-parquet ...................................... SUCCESS [ 24.844 s]
[INFO] flink-sequence-file ................................ SUCCESS [ 12.293 s]
[INFO] flink-compress ..................................... SUCCESS [ 11.920 s]
[INFO] flink-examples-streaming ........................... SUCCESS [ 49.247 s]
[INFO] flink-examples-table ............................... SUCCESS [ 38.438 s]
[INFO] flink-examples-build-helper ........................ SUCCESS [  1.239 s]
[INFO] flink-examples-streaming-twitter ................... SUCCESS [  1.339 s]
[INFO] flink-examples-streaming-state-machine ............. SUCCESS [  0.892 s]
[INFO] flink-examples-streaming-gcp-pubsub ................ SUCCESS [  7.097 s]
[INFO] flink-container .................................... SUCCESS [ 11.617 s]
[INFO] flink-queryable-state-runtime ...................... SUCCESS [ 25.045 s]
[INFO] flink-end-to-end-tests ............................. SUCCESS [  2.139 s]
[INFO] flink-cli-test ..................................... SUCCESS [  7.297 s]
[INFO] flink-parent-child-classloading-test-program ....... SUCCESS [  7.494 s]
[INFO] flink-parent-child-classloading-test-lib-package ... SUCCESS [ 12.396 s]
[INFO] flink-dataset-allround-test ........................ SUCCESS [  3.435 s]
[INFO] flink-dataset-fine-grained-recovery-test ........... SUCCESS [  4.543 s]
[INFO] flink-datastream-allround-test ..................... SUCCESS [ 19.708 s]
[INFO] flink-batch-sql-test ............................... SUCCESS [  5.365 s]
[INFO] flink-stream-sql-test .............................. SUCCESS [  0.971 s]
[INFO] flink-bucketing-sink-test .......................... SUCCESS [  4.735 s]
[INFO] flink-distributed-cache-via-blob ................... SUCCESS [  7.361 s]
[INFO] flink-high-parallelism-iterations-test ............. SUCCESS [ 14.616 s]
[INFO] flink-stream-stateful-job-upgrade-test ............. SUCCESS [  9.280 s]
[INFO] flink-queryable-state-test ......................... SUCCESS [ 30.531 s]
[INFO] flink-local-recovery-and-allocation-test ........... SUCCESS [  9.527 s]
[INFO] flink-elasticsearch2-test .......................... SUCCESS [  8.645 s]
[INFO] flink-elasticsearch5-test .......................... SUCCESS [  9.080 s]
[INFO] flink-elasticsearch6-test .......................... SUCCESS [ 14.832 s]
[INFO] flink-quickstart ................................... SUCCESS [  4.619 s]
[INFO] flink-quickstart-java .............................. SUCCESS [ 10.722 s]
[INFO] flink-quickstart-scala ............................. SUCCESS [  9.455 s]
[INFO] flink-quickstart-test .............................. SUCCESS [  1.729 s]
[INFO] flink-confluent-schema-registry .................... SUCCESS [  8.752 s]
[INFO] flink-stream-state-ttl-test ........................ SUCCESS [ 28.726 s]
[INFO] flink-sql-client-test .............................. SUCCESS [  5.881 s]
[INFO] flink-streaming-file-sink-test ..................... SUCCESS [  6.598 s]
[INFO] flink-state-evolution-test ......................... SUCCESS [ 11.763 s]
[INFO] flink-rocksdb-state-memory-control-test ............ SUCCESS [  7.848 s]
[INFO] flink-mesos ........................................ SUCCESS [03:08 min]
[INFO] flink-kubernetes ................................... SUCCESS [01:08 min]
[INFO] flink-yarn ......................................... SUCCESS [ 20.863 s]
[INFO] flink-gelly ........................................ SUCCESS [ 59.235 s]
[INFO] flink-gelly-scala .................................. SUCCESS [01:06 min]
[INFO] flink-gelly-examples ............................... SUCCESS [ 49.456 s]
[INFO] flink-metrics-dropwizard ........................... SUCCESS [  5.215 s]
[INFO] flink-metrics-graphite ............................. SUCCESS [  1.976 s]
[INFO] flink-metrics-influxdb ............................. SUCCESS [  8.786 s]
[INFO] flink-metrics-prometheus ........................... SUCCESS [  7.207 s]
[INFO] flink-metrics-statsd ............................... SUCCESS [  5.106 s]
[INFO] flink-metrics-datadog .............................. SUCCESS [  6.658 s]
[INFO] flink-metrics-slf4j ................................ SUCCESS [  5.159 s]
[INFO] flink-cep-scala .................................... SUCCESS [ 47.012 s]
[INFO] flink-table-uber ................................... SUCCESS [ 18.249 s]
[INFO] flink-table-uber-blink ............................. SUCCESS [ 12.126 s]
[INFO] flink-sql-client ................................... SUCCESS [ 25.557 s]
[INFO] flink-state-processor-api .......................... SUCCESS [  4.940 s]
[INFO] flink-python ....................................... SUCCESS [01:08 min]
[INFO] flink-scala-shell .................................. SUCCESS [01:17 min]
[INFO] flink-dist ......................................... SUCCESS [ 22.082 s]
[INFO] flink-end-to-end-tests-common ...................... SUCCESS [  2.414 s]
[INFO] flink-metrics-availability-test .................... SUCCESS [  1.547 s]
[INFO] flink-metrics-reporter-prometheus-test ............. SUCCESS [  1.346 s]
[INFO] flink-heavy-deployment-stress-test ................. SUCCESS [ 43.268 s]
[INFO] flink-connector-gcp-pubsub-emulator-tests .......... SUCCESS [ 24.450 s]
[INFO] flink-streaming-kafka-test-base .................... SUCCESS [  7.905 s]
[INFO] flink-streaming-kafka-test ......................... SUCCESS [ 37.587 s]
[INFO] flink-streaming-kafka011-test ...................... SUCCESS [ 31.779 s]
[INFO] flink-streaming-kafka010-test ...................... SUCCESS [ 35.216 s]
[INFO] flink-plugins-test ................................. SUCCESS [  1.468 s]
[INFO] dummy-fs ........................................... SUCCESS [  3.177 s]
[INFO] another-dummy-fs ................................... SUCCESS [  3.209 s]
[INFO] flink-tpch-test .................................... SUCCESS [ 15.200 s]
[INFO] flink-streaming-kinesis-test ....................... SUCCESS [ 53.499 s]
[INFO] flink-elasticsearch7-test .......................... SUCCESS [  5.254 s]
[INFO] flink-end-to-end-tests-common-kafka ................ SUCCESS [  4.626 s]
[INFO] flink-tpcds-test ................................... SUCCESS [  5.091 s]
[INFO] flink-statebackend-heap-spillable .................. SUCCESS [  6.477 s]
[INFO] flink-contrib ...................................... SUCCESS [  0.996 s]
[INFO] flink-connector-wikiedits .......................... SUCCESS [ 10.093 s]
[INFO] flink-yarn-tests ................................... SUCCESS [  8.997 s]
[INFO] flink-fs-tests ..................................... SUCCESS [ 11.276 s]
[INFO] flink-docs ......................................... SUCCESS [  9.426 s]
[INFO] flink-ml-parent .................................... SUCCESS [  2.627 s]
[INFO] flink-ml-api ....................................... SUCCESS [  5.934 s]
[INFO] flink-ml-lib ....................................... SUCCESS [ 13.574 s]
[INFO] flink-walkthroughs ................................. SUCCESS [  3.133 s]
[INFO] flink-walkthrough-common ........................... SUCCESS [  6.456 s]
[INFO] flink-walkthrough-table-java ....................... SUCCESS [  7.219 s]
[INFO] flink-walkthrough-table-scala ...................... SUCCESS [  4.170 s]
[INFO] flink-walkthrough-datastream-java .................. SUCCESS [  4.685 s]
[INFO] flink-walkthrough-datastream-scala ................. SUCCESS [  4.413 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 41:37 min (Wall Clock)
[INFO] Finished at: 2020-02-26T14:01:39+08:00
[INFO] Final Memory: 578M/2161M
[INFO] ------------------------------------------------------------------------

The compiled binaries end up in the following directory:

flink/flink-dist/target/flink-1.10-SNAPSHOT-bin
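
As a quick sanity check that the Hadoop jar was actually bundled into lib/ (the folder name inside the -bin directory is what my build produced; yours may differ):

ls flink/flink-dist/target/flink-1.10-SNAPSHOT-bin/flink-1.10-SNAPSHOT/lib | grep hadoop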