Trying Android's NNAPI ML Accelerator with Object Detection on a Pixel 4 XL

As the demand for more private, fast, low-latency machine learning increases, so does the need for more accessible, on-device solutions capable of performing well on the so-called “edge.” Two of these solutions are the Pixel Neural Core (PNC) hardware and its Edge TPU architecture, currently available on the Google Pixel 4 mobile phone, and the Android Neural Networks API (NNAPI), an API designed for executing machine learning operations on Android devices.


In this article, I will show how I modified the TensorFlow Lite Object Detection demo for Android to use an Edge TPU optimized model running under the NNAPI on a Pixel 4 XL. Additionally, I will present the changes I made to log the prediction latencies and compare the ones obtained with the default TensorFlow Lite API against those obtained with the NNAPI. But before that, let me give a brief overview of the terms I’ve introduced so far.


Pixel Neural Core, Edge TPU and NNAPI

The Pixel Neural Core, the successor of the previous Pixel Visual Core, is a domain-specific chip that’s part of the Pixel 4 hardware. Its architecture follows that of the Edge TPU (tensor processing unit), Google’s machine learning accelerator for edge computing devices. Being a chip designed for “the edge” means that it is smaller and more energy-efficient (it can perform 4 trillion operations per second while consuming just 2W) than its larger counterparts found in Google’s cloud platform.


The Edge TPU, however, is not an overall accelerator for all kinds of machine learning. The hardware is designed to improve forward-pass operations, meaning that it excels as an inference engine and not as a tool for training. That’s why you will mostly find applications where the model used on the device was trained somewhere else.


On the software side of things, we have the NNAPI. This Android API, written in C, provides acceleration for TensorFlow Lite models on devices that employ hardware accelerators such as the Pixel Visual Core and GPUs. The TensorFlow Lite framework for Android includes an NNAPI delegate, so don’t worry, we won’t write any C code.

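To make the delegate part concrete, below is a minimal sketch of how a TensorFlow Lite Interpreter can be pointed at the NNAPI from Java. This is not code from the demo app (which toggles the API through its Classifier wrapper); the helper class name and the modelBuffer parameter are assumptions for illustration only.

import java.nio.MappedByteBuffer;

import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.nnapi.NnApiDelegate;

// Hypothetical helper, not part of the example app: builds an interpreter
// that hands supported operations to the NNAPI.
public class NnapiInterpreterFactory {

  // modelBuffer is assumed to be a memory-mapped .tflite model file.
  static Interpreter create(MappedByteBuffer modelBuffer, boolean useNnapi) {
    Interpreter.Options options = new Interpreter.Options();
    if (useNnapi) {
      // Attach the NNAPI delegate explicitly; remember to close() it when
      // the interpreter is no longer needed.
      options.addDelegate(new NnApiDelegate());
    }
    return new Interpreter(modelBuffer, options);
  }
}

The example app reaches the same result through detector.setUseNNAPI(isChecked), which the TensorFlow Lite helper classes forward to the underlying interpreter.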

Figure 1. System architecture for the Android Neural Networks API. Source: https://developer.android.com/ndk/guides/neuralnetworks

The model

The model we will use for this project is the float32 version of the MobileDet object detection model optimized for the Edge TPU and trained on the COCO dataset (link). Let me quickly explain what these terms mean. MobileDet (Xiong et al.) is a very recent state-of-the-art family of lightweight object detection models for low computational power devices like mobile phones. The float32 variant means that it is not a quantized model, that is, a model that has been transformed to reduce its size at the cost of some accuracy. A fully quantized model, on the other hand, uses smaller weights based on 8-bit integers (source). Then, we have the COCO dataset, short for “Common Objects in Context” (Lin et al.). This collection has over 200k labeled images separated across 90 classes that include “bird,” “cat,” “person,” and “car.”

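To see why the float32 versus quantized distinction matters in practice, here is a rough sketch of how input pixels are usually fed to the two kinds of models. The demo’s TFLiteObjectDetectionAPIModel does something along these lines, but the class, method, and normalization constants below are simplified assumptions rather than the exact file:

import java.nio.ByteBuffer;

// Hypothetical illustration of how a pixel is written into the model's
// input buffer depending on whether the model is quantized.
public class InputEncoding {

  // Assumed normalization constants for the float model.
  private static final float IMAGE_MEAN = 127.5f;
  private static final float IMAGE_STD = 127.5f;

  static void putPixel(ByteBuffer imgData, int pixelValue, boolean isModelQuantized) {
    int r = (pixelValue >> 16) & 0xFF;
    int g = (pixelValue >> 8) & 0xFF;
    int b = pixelValue & 0xFF;
    if (isModelQuantized) {
      // Quantized model: raw 8-bit channel values, one byte per channel.
      imgData.put((byte) r);
      imgData.put((byte) g);
      imgData.put((byte) b);
    } else {
      // Float32 model (our case): normalized floats, four bytes per channel.
      imgData.putFloat((r - IMAGE_MEAN) / IMAGE_STD);
      imgData.putFloat((g - IMAGE_MEAN) / IMAGE_STD);
      imgData.putFloat((b - IMAGE_MEAN) / IMAGE_STD);
    }
  }
}

This is also why the TF_OD_API_IS_QUANTIZED flag shown later has to match the model; otherwise the input buffer size won’t match what the interpreter expects.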

Now, after that bit of theory, let’s take a look at the app.


The app

The app I used is based on the object detection example app for Android provided in the TensorFlow repository. However, I altered it to use the NNAPI and to log the inference times to a file, data I used to compare the prediction times of the NNAPI and the default TFLITE API. Below is the DetectorActivity.java file, responsible for producing the detections; the complete source code is on my GitHub, and I’m only showing this file since it has the most changes. In this file, I changed the name of the model (after adding the MobileDet model to the assets directory), changed the variable TF_OD_API_INPUT_SIZE to reflect the input size of MobileDet, and set TF_OD_API_IS_QUANTIZED to false since the model is not quantized. Besides this, I added two lists to collect the inference times of the predictions (one list per API), and an overridden onStop method that dumps the lists to files once the user closes the app. Other small changes included changing NUM_DETECTIONS in TFLiteObjectDetectionAPIModel.java from 10 to 100 and adding the WRITE_EXTERNAL_STORAGE permission to the Android manifest so that the app could write the files to the Documents directory.


/*
 * Copyright 2019 The TensorFlow Authors. All Rights Reserved.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *       http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */


package org.tensorflow.lite.examples.detection;


import android.graphics.Bitmap;
import android.graphics.Bitmap.Config;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Matrix;
import android.graphics.Paint;
import android.graphics.Paint.Style;
import android.graphics.RectF;
import android.graphics.Typeface;
import android.media.ImageReader.OnImageAvailableListener;
import android.os.Environment;
import android.os.SystemClock;
import android.util.Size;
import android.util.TypedValue;
import android.widget.Toast;


import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.io.Writer;
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;


import org.json.JSONObject;
import org.tensorflow.lite.examples.detection.customview.OverlayView;
import org.tensorflow.lite.examples.detection.customview.OverlayView.DrawCallback;
import org.tensorflow.lite.examples.detection.env.BorderedText;
import org.tensorflow.lite.examples.detection.env.ImageUtils;
import org.tensorflow.lite.examples.detection.env.Logger;
import org.tensorflow.lite.examples.detection.tflite.Classifier;
import org.tensorflow.lite.examples.detection.tflite.TFLiteObjectDetectionAPIModel;
import org.tensorflow.lite.examples.detection.tracking.MultiBoxTracker;


/**
 * An activity that uses a TensorFlowMultiBoxDetector and ObjectTracker to detect and then track
 * objects.
 */
public class DetectorActivity extends CameraActivity implements OnImageAvailableListener {
  private static final Logger LOGGER = new Logger();


  // Configuration values for the prepackaged SSD model.


  private static final int TF_OD_API_INPUT_SIZE = 320; //new
  private static final boolean TF_OD_API_IS_QUANTIZED = false; //new


  private static final String TF_OD_API_MODEL_FILE = "md_non_quant.tflite";


  private static final String TF_OD_API_LABELS_FILE = "file:///android_asset/labelmap.txt";
  private static final DetectorMode MODE = DetectorMode.TF_OD_API;
  // Minimum detection confidence to track a detection.
  private static final float MINIMUM_CONFIDENCE_TF_OD_API = 0.5f;
  private static final boolean MAINTAIN_ASPECT = false;
  private static final Size DESIRED_PREVIEW_SIZE = new Size(640, 480);
  private static final boolean SAVE_PREVIEW_BITMAP = false;
  private static final float TEXT_SIZE_DIP = 10;
  OverlayView trackingOverlay;
  private Integer sensorOrientation;


  private Classifier detector;


  private long lastProcessingTimeMs;
  private Bitmap rgbFrameBitmap = null;
  private Bitmap croppedBitmap = null;
  private Bitmap cropCopyBitmap = null;


  private boolean computingDetection = false;


  private long timestamp = 0;


  private Matrix frameToCropTransform;
  private Matrix cropToFrameTransform;


  private MultiBoxTracker tracker;


  private BorderedText borderedText;


  private boolean isUsingNNAPI = false;
  // Log detections
  private Map<String, Integer> DETECTIONS_OUTPUT_MAP = new HashMap<>();
  private ArrayList<Long> nnapiTimes = new ArrayList<Long>();
  private ArrayList<Long> nonNNAPITimes = new ArrayList<Long>();


  @Override
  public void onPreviewSizeChosen(final Size size, final int rotation) {
    final float textSizePx =
        TypedValue.applyDimension(
            TypedValue.COMPLEX_UNIT_DIP, TEXT_SIZE_DIP, getResources().getDisplayMetrics());
    borderedText = new BorderedText(textSizePx);
    borderedText.setTypeface(Typeface.MONOSPACE);


    tracker = new MultiBoxTracker(this);


    int cropSize = TF_OD_API_INPUT_SIZE;


    try {
      detector =
          TFLiteObjectDetectionAPIModel.create(
              getAssets(),
              TF_OD_API_MODEL_FILE,
              TF_OD_API_LABELS_FILE,
              TF_OD_API_INPUT_SIZE,
              TF_OD_API_IS_QUANTIZED);
      cropSize = TF_OD_API_INPUT_SIZE;
    } catch (final IOException e) {
      e.printStackTrace();
      LOGGER.e(e, "Exception initializing classifier!");
      Toast toast =
          Toast.makeText(
              getApplicationContext(), "Classifier could not be initialized", Toast.LENGTH_SHORT);
      toast.show();
      finish();
    }


    previewWidth = size.getWidth();
    previewHeight = size.getHeight();


    sensorOrientation = rotation - getScreenOrientation();
    LOGGER.i("Camera orientation relative to screen canvas: %d", sensorOrientation);


    LOGGER.i("Initializing at size %dx%d", previewWidth, previewHeight);
    rgbFrameBitmap = Bitmap.createBitmap(previewWidth, previewHeight, Config.ARGB_8888);
    croppedBitmap = Bitmap.createBitmap(cropSize, cropSize, Config.ARGB_8888);


    frameToCropTransform =
        ImageUtils.getTransformationMatrix(
            previewWidth, previewHeight,
            cropSize, cropSize,
            sensorOrientation, MAINTAIN_ASPECT);


    cropToFrameTransform = new Matrix();
    frameToCropTransform.invert(cropToFrameTransform);


    trackingOverlay = (OverlayView) findViewById(R.id.tracking_overlay);
    trackingOverlay.addCallback(
        new DrawCallback() {
          @Override
          public void drawCallback(final Canvas canvas) {
            tracker.draw(canvas);
            if (isDebug()) {
              tracker.drawDebug(canvas);
            }
          }
        });


    tracker.setFrameConfiguration(previewWidth, previewHeight, sensorOrientation);
  }


  @Override
  protected void processImage() {
    ++timestamp;
    final long currTimestamp = timestamp;
    trackingOverlay.postInvalidate();


    // No mutex needed as this method is not reentrant.
    if (computingDetection) {
      readyForNextImage();
      return;
    }
    computingDetection = true;


    rgbFrameBitmap.setPixels(getRgbBytes(), 0, previewWidth, 0, 0, previewWidth, previewHeight);


    readyForNextImage();


    final Canvas canvas = new Canvas(croppedBitmap);
    canvas.drawBitmap(rgbFrameBitmap, frameToCropTransform, null);
    // For examining the actual TF input.
    if (SAVE_PREVIEW_BITMAP) {
      ImageUtils.saveBitmap(croppedBitmap);
    }


    runInBackground(
        new Runnable() {
          @Override
          public void run() {
            //LOGGER.i("Running detection on image " + currTimestamp);
            final long startTime = SystemClock.uptimeMillis();
            final List<Classifier.Recognition> results = detector.recognizeImage(croppedBitmap);
            lastProcessingTimeMs = SystemClock.uptimeMillis() - startTime;


            cropCopyBitmap = Bitmap.createBitmap(croppedBitmap);
            final Canvas canvas = new Canvas(cropCopyBitmap);
            final Paint paint = new Paint();
            paint.setColor(Color.RED);
            paint.setStyle(Style.STROKE);
            paint.setStrokeWidth(2.0f);


            float minimumConfidence = MINIMUM_CONFIDENCE_TF_OD_API;
            switch (MODE) {
              case TF_OD_API:
                minimumConfidence = MINIMUM_CONFIDENCE_TF_OD_API;
                break;
            }


            final List<Classifier.Recognition> mappedRecognitions =
                new LinkedList<Classifier.Recognition>();


            for (final Classifier.Recognition result : results) {
              final RectF location = result.getLocation();
              if (location != null && result.getConfidence() >= minimumConfidence) {
                canvas.drawRect(location, paint);


                cropToFrameTransform.mapRect(location);


                result.setLocation(location);
                mappedRecognitions.add(result);


                if (!DETECTIONS_OUTPUT_MAP.containsKey(result.getTitle())) {
                  DETECTIONS_OUTPUT_MAP.put(result.getTitle(), 0);
                }


                DETECTIONS_OUTPUT_MAP.put(result.getTitle(), DETECTIONS_OUTPUT_MAP.get(result.getTitle()) + 1);
              }
            }


            tracker.trackResults(mappedRecognitions, currTimestamp);
            trackingOverlay.postInvalidate();


            computingDetection = false;
            if (isUsingNNAPI) {
              nnapiTimes.add(lastProcessingTimeMs);
            } else {
              nonNNAPITimes.add(lastProcessingTimeMs);
            }


            runOnUiThread(
                new Runnable() {
                  @Override
                  public void run() {
                    showFrameInfo(previewWidth + "x" + previewHeight);
                    showCropInfo(cropCopyBitmap.getWidth() + "x" + cropCopyBitmap.getHeight());
                    showInference(lastProcessingTimeMs + "ms");
                  }
                });
          }
        });
  }


  @Override
  protected int getLayoutId() {
    return R.layout.tfe_od_camera_connection_fragment_tracking;
  }


  @Override
  protected Size getDesiredPreviewFrameSize() {
    return DESIRED_PREVIEW_SIZE;
  }


  // Which detection model to use: by default uses Tensorflow Object Detection API frozen
  // checkpoints.
  private enum DetectorMode {
    TF_OD_API;
  }


  @Override
  protected void setUseNNAPI(final boolean isChecked) {
    isUsingNNAPI = isChecked;
    runInBackground(() -> detector.setUseNNAPI(isChecked));
  }


  @Override
  protected void setNumThreads(final int numThreads) {
    runInBackground(() -> detector.setNumThreads(numThreads));
  }


  @Override
  public synchronized void onStop() {
    JSONObject obj = new JSONObject(DETECTIONS_OUTPUT_MAP);
    try {
      Calendar cal = Calendar.getInstance();
      Date date = cal.getTime();
      SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd_HH:mm:ss");
      String formattedDate = dateFormat.format(date);
      String filename = String.format("%s_%s.json", "detections", formattedDate);


      File f = new File(
              Environment.getExternalStoragePublicDirectory(
                      Environment.DIRECTORY_DOCUMENTS), "/TensorFlowLiteDetections");


      if (!f.exists()) {
        f.mkdirs();
      }


      // Write detections logs
      File file = new File(f, filename);
      Writer output = null;
      output = new BufferedWriter(new FileWriter(file));
      output.write(obj.toString());
      output.close();


      // Write NNAPI times
      filename = String.format("%s_%s.txt", "nnapi_times", formattedDate);
      file = new File(f, filename);
      output = new BufferedWriter(new FileWriter(file));
      for(Long l: nnapiTimes) {
        output.write(l + System.lineSeparator());
      }
      output.close();




      // Write non-NNAPI times
      filename = String.format("%s_%s.txt", "non_nnapi_times", formattedDate);
      file = new File(f, filename);
      output = new BufferedWriter(new FileWriter(file));
      for(Long l: nonNNAPITimes) {
        output.write(l + System.lineSeparator());
      }
      output.close();
    } catch (Exception e) {
      LOGGER.d("DetectorActivity", "Couldn't write file." + e);
    }
    LOGGER.d("Data written to directory");


    super.onStop();
  }
}

Another important change I made was uncommenting, and thus enabling, the toggle button that allows us to run the app using the NNAPI. By default, that part of the code is commented out, so it is not possible to activate the API from within the app; I might be wrong, though (please correct me if you find otherwise).

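For reference, the uncommented part boils down to a checked-change listener on the switch that forwards its state to the detector. The fragment below is a simplified sketch of that idea; the view id and variable name are assumptions and may differ slightly from the example’s CameraActivity:

// Inside CameraActivity (sketch; identifiers are assumed, not verbatim).
// import androidx.appcompat.widget.SwitchCompat;
SwitchCompat apiSwitchCompat = findViewById(R.id.api_info_switch);
apiSwitchCompat.setOnCheckedChangeListener(
    (buttonView, isChecked) ->
        // Forwarded to DetectorActivity#setUseNNAPI, which calls
        // detector.setUseNNAPI(isChecked) on the background handler.
        setUseNNAPI(isChecked));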

You can find the complete source code behind the app in my GitHub repo at https://github.com/juandes/mobiledet-tflite-nnapi. To run it, open Android Studio, select “open an existing Android Studio project,” and select the project’s root directory. Once the project is opened, click the small green hammer icon to build it. Then, click the play icon to run it either on a virtual device or on an actual device (if one is connected). If possible, use a real device.


Measuring the latency

So, how fast is the app at detecting objects? To measure the latency, I added a small piece of functionality that writes to files the prediction times (in milliseconds) of the inferences made with the default TFLITE API and with the NNAPI. After that, I took the files and performed a small analysis in R to get insights from the data. Below are the results.

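The R analysis itself only computes simple summaries of the logged values. As a point of reference, here is a small, hypothetical Java helper (not part of the app) that would produce the same mean and median directly from one of the collected lists:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical helper for summarizing logged inference times (milliseconds).
public class LatencyStats {

  // Arithmetic mean of the logged times.
  static double mean(List<Long> times) {
    double sum = 0;
    for (long t : times) {
      sum += t;
    }
    return times.isEmpty() ? 0 : sum / times.size();
  }

  // Median of the logged times.
  static double median(List<Long> times) {
    if (times.isEmpty()) {
      return 0;
    }
    List<Long> sorted = new ArrayList<>(times);
    Collections.sort(sorted);
    int mid = sorted.size() / 2;
    return sorted.size() % 2 == 0
        ? (sorted.get(mid - 1) + sorted.get(mid)) / 2.0
        : sorted.get(mid);
  }
}

Feeding it the nnapiTimes or nonNNAPITimes list from DetectorActivity would reproduce the summary numbers discussed below.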

Figure 2: Inference times of predictions done with the TFLITE API
Figure 3: Inference times of predictions done with the NNAPI

Figures 2 and 3 are histograms of the inference times under each API. The first one (n=909), corresponding to the default TFLITE API, has a peak around the 100 ms mark and several extreme outliers at the higher end of the visualization. Figure 3 (n=1169), corresponding to predictions done using the NNAPI, has its peak around 50 ms. However, those extreme outlier values shift the mean and the distribution towards the right. So, to better visualize the times, I removed these values and drew the same visualizations without them. Now, they look as follows:


Figure 4: Inference times (without outliers) of predictions done with the TFLITE API
Figure 5: Inference times (without outliers) of predictions done with the NNAPI

Better, right? The black vertical lines on both plots indicate the mean value. For the TFLITE graph, the mean inference time is 103 ms and the median is 100 ms. On the NNAPI side, the average prediction takes 55 ms, with a median of 54 ms. Almost twice as fast.


The following video shows the app in action. Here, I’m simply pointing the phone at my computer to detect objects from a video:


Recap and conclusion

The advances of machine learning and, more generally, AI are truly fascinating. First, they took over our computers and the cloud, and now they are on their way to our mobile devices. Yet, there’s a big difference between the latter and the platforms we traditionally use to deploy machine learning systems; smaller processors and battery constraints are some of these differences. As a result, the development of frameworks and hardware specialized for this sort of device is increasing rapidly.


Several of these tools are the Pixel Neural Core, Edge TPU, and NNAPI. This combination of hardware and software aims to bring highly efficient and accurate AI to our mobile devices. In this article, I presented an overview of these. Then, I showed how to update the TensorFlow Lite object detection example for Android to enable the NNAPI and write the inference times to a file. Using these files, I did a small analysis in R to visualize them and discovered that the predictions done with the NNAPI took around half the time of those done with the default API.


Thanks for reading :)


Translated from: https://towardsdatascience.com/trying-androids-nnapi-ml-accelerator-with-object-detection-on-a-pixel-4-xl-5217caea64d4
