Computer Vision Papers - 2021-06-25

This column is an ongoing collection of computer vision papers. Date: June 25, 2021. Source: Paper Digest.

You are welcome to follow the original WeChat public account 【计算机视觉联盟】; reply with 【西瓜书手推笔记】 to receive my fully handwritten machine learning notes!

Direct link to the notes: handwritten machine learning notes (GitHub repository)

1, TITLE: Advancing Biological Super-resolution Microscopy Through Deep Learning: A Brief Review
AUTHORS: Tianjie Yang ; Yaoru Luo ; Wei Ji ; Ge Yang
CATEGORY: physics.bio-ph [physics.bio-ph, cs.CV, eess.IV]
HIGHLIGHT: In this brief Review, we survey recent advances in using deep learning to enhance performance of super-resolution microscopy.

2, TITLE: When Differential Privacy Meets Interpretability: A Case Study
AUTHORS: RAKSHIT NAIDU et al.
CATEGORY: cs.CV [cs.CV, cs.CR]
HIGHLIGHT: We propose an extensive study of the effects of DP training on DNNs, particularly for medical imaging applications, using the APTOS dataset.

3, TITLE: Deep Fake Detection: Survey of Facial Manipulation Detection Solutions
AUTHORS: Samay Pashine ; Sagar Mandiya ; Praveen Gupta ; Rashid Sheikh
CATEGORY: cs.CV [cs.CV, cs.LG]
HIGHLIGHT: In this paper, we analyze several such state-of-the-art neural networks (MesoNet, ResNet-50, VGG-19, and Xception Net) and compare them against each other to find an optimal solution for various scenarios, such as real-time deepfake detection deployed on online social media platforms, where classification should be as fast as possible, or a small news agency, where classification need not be real-time but requires the utmost accuracy.

4, TITLE: Handwritten Digit Recognition Using Machine and Deep Learning Algorithms
AUTHORS: Samay Pashine ; Ritik Dixit ; Rishika Kushwah
CATEGORY: cs.CV [cs.CV, cs.AI, cs.LG]
HIGHLIGHT: In this paper, we perform handwritten digit recognition on the MNIST dataset using Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), and Convolutional Neural Network (CNN) models.
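
For readers who want a quick baseline to reproduce, here is a minimal scikit-learn sketch of the SVM and MLP classifiers; it uses sklearn's small built-in digits dataset as a stand-in for MNIST, and the hyperparameters are illustrative rather than those used in the paper.

```python
# Minimal sketch, not the paper's exact setup: SVM and MLP baselines on
# sklearn's built-in 8x8 digits dataset (a small stand-in for MNIST).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

classifiers = [
    ("SVM", SVC(kernel="rbf", gamma=0.001)),  # illustrative hyperparameters
    ("MLP", MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)),
]
for name, clf in classifiers:
    clf.fit(X_train, y_train)
    print(name, "test accuracy:", clf.score(X_test, y_test))
```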

5, TITLE: IA-RED$^2$: Interpretability-Aware Redundancy Reduction for Vision Transformers
AUTHORS: BOWEN PAN et al.
CATEGORY: cs.CV [cs.CV]
HIGHLIGHT: To address this limitation, this paper presents an Interpretability-Aware REDundancy REDuction framework (IA-RED$^2$).

6, TITLE: Florida Wildlife Camera Trap Dataset
AUTHORS: Crystal Gagne ; Jyoti Kini ; Daniel Smith ; Mubarak Shah
CATEGORY: cs.CV [cs.CV, eess.IV]
HIGHLIGHT: We introduce a challenging wildlife camera trap classification dataset collected from two different locations in Southwestern Florida, consisting of 104,495 images featuring visually similar species, varying illumination conditions, skewed class distribution, and including samples of endangered species, i.e. Florida panthers.

7, TITLE: A Simple and Strong Baseline: Progressively Region-based Scene Text Removal Networks
AUTHORS: Yuxin Wang ; Hongtao Xie ; Shancheng Fang ; Yadong Qu ; Yongdong Zhang
CATEGORY: cs.CV [cs.CV]
HIGHLIGHT: To handle these issues, this paper presents a novel ProgrEssively Region-based scene Text eraser (PERT), which introduces a region-based modification strategy to progressively erase pixels only in the text regions.

8, TITLE: Conditional Deformable Image Registration with Convolutional Neural Network
AUTHORS: Tony C. W. Mok ; Albert C. S. Chung
CATEGORY: cs.CV [cs.CV, eess.IV]
HIGHLIGHT: In this paper, we propose a conditional image registration method and a new self-supervised learning paradigm for deep deformable image registration.

9, TITLE: A Transformer-based Cross-modal Fusion Model with Adversarial Training for VQA Challenge 2021
AUTHORS: Ke-Han Lu ; Bo-Han Fang ; Kuan-Yu Chen
CATEGORY: cs.CV [cs.CV, cs.CL]
HIGHLIGHT: In this paper, inspired by the successes of vision-language pre-trained models and the benefits of training with adversarial attacks, we present a novel transformer-based cross-modal fusion model that incorporates both notions for the VQA Challenge 2021.

10, TITLE: FaDIV-Syn: Fast Depth-Independent View Synthesis
AUTHORS: Andre Rochow ; Max Schwarz ; Michael Weinmann ; Sven Behnke
CATEGORY: cs.CV [cs.CV]
HIGHLIGHT: We introduce FaDIV-Syn, a fast depth-independent view synthesis method.

11, TITLE: Unsupervised Learning of Depth and Depth-of-Field Effect from Natural Images with Aperture Rendering Generative Adversarial Networks
AUTHORS: Takuhiro Kaneko
CATEGORY: cs.CV [cs.CV, cs.LG, eess.IV, stat.ML]
HIGHLIGHT: To complement these approaches, we propose aperture rendering generative adversarial networks (AR-GANs), which equip aperture rendering on top of GANs, and adopt focus cues to learn the depth and depth-of-field (DoF) effect of unlabeled natural images.

12, TITLE: Regularisation for PCA- and SVD-type Matrix Factorisations
AUTHORS: Abdolrahman Khoshrou ; Eric J. Pauwels
CATEGORY: cs.CV [cs.CV, cs.CE]
HIGHLIGHT: In this paper, we take another look at the problem of regularisation and show that different formulations of the minimisation problem lead to qualitatively different solutions.
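
As a generic illustration of that claim (these are standard textbook formulations, not necessarily the ones analysed in the paper), the sketch below compares three regularised variants of an SVD-based low-rank approximation, each of which behaves qualitatively differently: hard rank truncation, nuclear-norm soft-thresholding, and a Frobenius (ridge) penalty that uniformly shrinks all singular values.

```python
# Sketch under generic assumptions: three regularised SVD-type approximations
# of a noisy low-rank matrix behave qualitatively differently.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 40))  # rank-8 signal
A += 0.1 * rng.standard_normal(A.shape)                          # additive noise

U, s, Vt = np.linalg.svd(A, full_matrices=False)
reconstruct = lambda sigma: (U * sigma) @ Vt

k, lam = 8, 1.0
X_trunc = reconstruct(np.where(np.arange(s.size) < k, s, 0.0))  # hard rank-k truncation
X_soft = reconstruct(np.maximum(s - lam, 0.0))                  # nuclear-norm penalty: soft-thresholding
X_ridge = reconstruct(s / (1.0 + lam))                          # Frobenius penalty: uniform shrinkage

for name, X in [("truncated", X_trunc), ("soft-thresholded", X_soft), ("ridge-shrunk", X_ridge)]:
    print(f"{name:16s} rank={np.linalg.matrix_rank(X, tol=1e-8)} error={np.linalg.norm(A - X):.3f}")
```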

13, TITLE: Sparse Needlets for Lighting Estimation with Spherical Transport Loss
AUTHORS: FANGNENG ZHAN et al.
CATEGORY: cs.CV [cs.CV]
HIGHLIGHT: This paper presents NeedleLight, a new lighting estimation model that represents illumination with needlets and allows lighting estimation in both frequency domain and spatial domain jointly.

14, TITLE: Continual Novelty Detection
AUTHORS: Rahaf Aljundi ; Daniel Olmeda Reino ; Nikolay Chumerin ; Richard E. Turner
CATEGORY: cs.CV [cs.CV]
HIGHLIGHT: We formulate the Continual Novelty Detection problem and present a benchmark, where we compare several Novelty Detection methods under different Continual Learning settings.

15, TITLE: All You Need Is A Second Look: Towards Arbitrary-Shaped Text Detection
AUTHORS: Meng Cao ; Can Zhang ; Dongming Yang ; Yuexian Zou
CATEGORY: cs.CV [cs.CV]
HIGHLIGHT: In this paper, we propose a two-stage segmentation-based detector, termed as NASK (Need A Second looK), for arbitrary-shaped text detection.

16, TITLE: Class Agnostic Moving Target Detection By Color and Location Prediction of Moving Area
AUTHORS: Zhuang He ; Qi Li ; Huajun Feng ; Zhihai Xu
CATEGORY: cs.CV [cs.CV]
HIGHLIGHT: Therefore, we propose a model-free moving target detection algorithm.

17, TITLE: Human Activity Recognition Using Continuous Wavelet Transform and Convolutional Neural Networks
AUTHORS: Anna Nedorubova ; Alena Kadyrova ; Aleksey Khlyupin
CATEGORY: cs.CV [cs.CV, cs.AI]
HIGHLIGHT: The model we suggest is based on continuous wavelet transform (CWT) and convolutional neural networks (CNNs).
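
As a rough illustration of this kind of pipeline (the signal, wavelet, and network below are placeholders, not the paper's configuration), a 1D sensor channel can be converted into a 2D scalogram with the CWT and then classified with a small CNN:

```python
# Hedged sketch of a CWT-scalogram + CNN pipeline; shapes and hyperparameters are illustrative.
import numpy as np
import pywt
import torch
import torch.nn as nn

# Synthetic 1D accelerometer-like signal standing in for a real HAR recording.
fs = 50
t = np.linspace(0, 2.56, 128)
signal = np.sin(2 * np.pi * 3 * t) + 0.3 * np.random.randn(t.size)

# Continuous wavelet transform -> 2D time-frequency scalogram.
scales = np.arange(1, 65)
coeffs, _ = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)
scalogram = torch.tensor(np.abs(coeffs), dtype=torch.float32)[None, None]  # (1, 1, 64, 128)

# Small CNN classifier over scalograms (architecture chosen for illustration only).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 6),  # e.g. 6 activity classes
)
print(model(scalogram).shape)  # torch.Size([1, 6])
```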

18, TITLE: ChaLearn Looking at People: Inpainting and Denoising Challenges
AUTHORS: SERGIO ESCALERA et al.
CATEGORY: cs.CV [cs.CV]
HIGHLIGHT: This chapter describes the design of an academic competition focusing on inpainting of images and video sequences that was part of the competition program of WCCI2018 and had a satellite event collocated with ECCV2018.

19, TITLE: VOLO: Vision Outlooker for Visual Recognition
AUTHORS: Li Yuan ; Qibin Hou ; Zihang Jiang ; Jiashi Feng ; Shuicheng Yan
CATEGORY: cs.CV [cs.CV]
HIGHLIGHT: In this work, we aim to close the performance gap and demonstrate that attention-based models are indeed able to outperform CNNs.

20, TITLE: Exploring Corruption Robustness: Inductive Biases in Vision Transformers and MLP-Mixers
AUTHORS: Katelyn Morrison ; Benjamin Gilby ; Colton Lipchak ; Adam Mattioli ; Adriana Kovashka
CATEGORY: cs.CV [cs.CV, cs.LG]
HIGHLIGHT: Despite some works proposing that data augmentation remains essential for a model to be robust against corruptions, we propose to explore the impact that the architecture has on corruption robustness.

21, TITLE: What Makes Visual Place Recognition Easy or Hard?
AUTHORS: Stefan Schubert ; Peer Neubert
CATEGORY: cs.CV [cs.CV, cs.RO]
HIGHLIGHT: Visual place recognition is an active field of research, and many different approaches have been proposed and evaluated in a wide variety of experiments.

22, TITLE: Feature Completion for Occluded Person Re-Identification
AUTHORS: RUIBING HOU et al.
CATEGORY: cs.CV [cs.CV]
HIGHLIGHT: In this work, we propose an occlusion-robust block, Region Feature Completion (RFC), for occluded reID.

23, TITLE: MatchVIE: Exploiting Match Relevancy Between Entities for Visual Information Extraction
AUTHORS: GUOZHI TANG et al.
CATEGORY: cs.CV [cs.CV, cs.AI]
HIGHLIGHT: To address this issue, in this paper we propose a novel key-value matching model based on a graph neural network for VIE (MatchVIE).

24, TITLE: Topological Semantic Mapping By Consolidation of Deep Visual Features
AUTHORS: Ygor C. N. Sousa ; Hansenclever F. Bassani
CATEGORY: cs.CV [cs.CV, cs.RO]
HIGHLIGHT: Many works in the recent literature introduce semantic mapping methods that use CNNs (Convolutional Neural Networks) to recognize semantic properties in images.

25, TITLE: Fast Monte Carlo Rendering Via Multi-Resolution Sampling
AUTHORS: Qiqi Hou ; Zhan Li ; Carl S Marshall ; Selvakumar Panneer ; Feng Liu
CATEGORY: cs.CV [cs.CV, cs.GR]
HIGHLIGHT: In this paper, we present a hybrid rendering method to speed up Monte Carlo rendering algorithms.

26, TITLE: Planetary UAV Localization Based on Multi-modal Registration with Pre-existing Digital Terrain Model
AUTHORS: Xue Wan ; Yuanbin Shao ; Shengyang Li
CATEGORY: cs.CV [cs.CV]
HIGHLIGHT: In this paper, we propose a multi-modal registration-based SLAM algorithm that estimates the location of a planetary UAV by comparing imagery from a nadir-view camera on the UAV with a pre-existing digital terrain model.

27, TITLE: Driver-centric Risk Object Identification
AUTHORS: Chengxi Li ; Stanley H. Chan ; Yi-Ting Chen
CATEGORY: cs.CV [cs.CV, cs.RO]
HIGHLIGHT: In this work, we propose a novel driver-centric definition of risk, i.e., risky objects influence driver behavior.

28, TITLE: Handling Data Heterogeneity with Generative Replay in Collaborative Learning for Medical Imaging
AUTHORS: Liangqiong Qu ; Niranjan Balachandar ; Miao Zhang ; Daniel Rubin
CATEGORY: cs.CV [cs.CV]
HIGHLIGHT: In this paper, we present a novel generative replay strategy to address the challenge of data heterogeneity in collaborative learning methods.

29, TITLE: Depth Confidence-aware Camouflaged Object Detection
AUTHORS: JING ZHANG et al.
CATEGORY: cs.CV [cs.CV]
HIGHLIGHT: To explore the contribution of depth for camouflage detection, we present a depth-guided camouflaged object detection network with pre-computed depth maps from existing monocular depth estimation methods.

30, TITLE: GaussiGAN: Controllable Image Synthesis with 3D Gaussians from Unposed Silhouettes
AUTHORS: YOUSSEF A. MEJJATI et al.
CATEGORY: cs.CV [cs.CV]
HIGHLIGHT: We present an algorithm that learns a coarse 3D representation of objects from unposed multi-view 2D mask supervision, then uses it to generate detailed mask and image texture.

31, TITLE: Detection of Deepfake Videos Using Long Distance Attention
AUTHORS: WEI LU et al.
CATEGORY: cs.CV [cs.CV]
HIGHLIGHT: In this paper, the problem is treated as a special fine-grained classification problem since the differences between fake and real faces are very subtle.

32, TITLE: Frequency Domain Convolutional Neural Network: Accelerated CNN for Large Diabetic Retinopathy Image Classification
AUTHORS: Ee Fey Goh ; ZhiYuan Chen ; Wei Xiang Lim
CATEGORY: cs.CV [cs.CV, cs.LG]
HIGHLIGHT: This research proposes Frequency Domain Convolution (FDC) and Frequency Domain Pooling (FDP) layers, built with the RFFT, a kernel initialization strategy, convolution artifact removal, and Channel Independent Convolution (CIC), to replace the conventional convolution and pooling layers.
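
The underlying idea is the convolution theorem: spatial convolution becomes element-wise multiplication after an FFT. The sketch below demonstrates that equivalence with NumPy's RFFT for a circular convolution; it does not reproduce the paper's FDC/FDP layers, kernel initialization, or artifact removal.

```python
# Convolution-theorem sketch: RFFT-based multiplication equals circular convolution.
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))
kernel = rng.standard_normal((3, 3))

# Frequency domain: zero-pad the kernel to image size, multiply RFFTs, invert.
k_pad = np.zeros_like(image)
k_pad[:3, :3] = kernel
freq_out = np.fft.irfft2(np.fft.rfft2(image) * np.fft.rfft2(k_pad), s=image.shape)

# Spatial-domain reference: explicit circular convolution over the 3x3 taps.
ref = np.zeros_like(image)
for i in range(3):
    for j in range(3):
        ref += kernel[i, j] * np.roll(np.roll(image, i, axis=0), j, axis=1)

print(np.allclose(freq_out, ref))  # True: both compute the same circular convolution
```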

33, TITLE: Multi-Modal 3D Object Detection in Autonomous Driving: A Survey
AUTHORS: YINGJIE WANG et al.
CATEGORY: cs.CV [cs.CV]
HIGHLIGHT: After a detailed review, we discuss open challenges and point out possible solutions.

34, TITLE: Differential Morph Face Detection Using Discriminative Wavelet Sub-bands
AUTHORS: Baaria Chaudhary ; Poorya Aghdaie ; Sobhan Soleymani ; Jeremy Dawson ; Nasser M. Nasrabadi
CATEGORY: cs.CV [cs.CV]
HIGHLIGHT: In this paper, we propose a morph attack detection algorithm that leverages an undecimated 2D Discrete Wavelet Transform (DWT) for identifying morphed face images.

35, TITLE: Video Super-Resolution with Long-Term Self-Exemplars
AUTHORS: Guotao Meng ; Yue Wu ; Sijin Li ; Qifeng Chen
CATEGORY: cs.CV [cs.CV]
HIGHLIGHT: Based on this observation, we propose a video super-resolution method with long-term cross-scale aggregation that leverages similar patches (self-exemplars) across distant frames.

36, TITLE: Unsupervised Deep Image Stitching: Reconstructing Stitched Features to Images
AUTHORS: Lang Nie ; Chunyu Lin ; Kang Liao ; Shuaicheng Liu ; Yao Zhao
CATEGORY: cs.CV [cs.CV]
HIGHLIGHT: To address the above limitations, we propose an unsupervised deep image stitching framework consisting of two stages: unsupervised coarse image alignment and unsupervised image reconstruction.

37, TITLE: Learning By Planning: Language-Guided Global Image Editing
AUTHORS: JING SHI et al.
CATEGORY: cs.CV [cs.CV]
HIGHLIGHT: Hence, we propose a novel operation planning algorithm to generate possible editing sequences from the target image as pseudo ground truth.

38, TITLE: Exploring Stronger Feature for Temporal Action Localization
AUTHORS: ZHIWU QING et al.
CATEGORY: cs.CV [cs.CV]
HIGHLIGHT: In this technical report, we explored classic convolution-based backbones and the recent surge of transformer-based backbones.

39, TITLE: HyperNeRF: A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields
AUTHORS: KEUNHONG PARK et al.
CATEGORY: cs.CV [cs.CV, cs.GR]
HIGHLIGHT: We evaluate our method on two tasks: (i) interpolating smoothly between "moments", i.e., configurations of the scene, seen in the input images while maintaining visual plausibility, and (ii) novel-view synthesis at fixed moments.

40, TITLE: AutoAdapt: Automated Segmentation Network Search for Unsupervised Domain Adaptation
AUTHORS: Xueqing Deng ; Yi Zhu ; Yuxin Tian ; Shawn Newsam
CATEGORY: cs.CV [cs.CV]
HIGHLIGHT: In this paper, we perform neural architecture search (NAS) to provide architecture-level perspective and analysis for domain adaptation.

41, TITLE: Towards Automatic Speech to Sign Language Generation
AUTHORS: Parul Kapoor ; Rudrabha Mukhopadhyay ; Sindhu B Hegde ; Vinay Namboodiri ; C V Jawahar
CATEGORY: cs.CV [cs.CV]
HIGHLIGHT: We aim to solve the highly challenging task of generating continuous sign language videos solely from speech segments for the first time. Since the current datasets are inadequate for generating sign language directly from speech, we collect and release the first Indian sign language dataset comprising speech-level annotations, text transcripts, and the corresponding sign-language videos.

42, TITLE: FitVid: Overfitting in Pixel-Level Video Prediction
AUTHORS: MOHAMMAD BABAEIZADEH et al.
CATEGORY: cs.CV [cs.CV, cs.LG]
HIGHLIGHT: In this paper, we argue that the inefficient use of parameters in the current video models is the main reason for underfitting.

43, TITLE: Video Swin Transformer
AUTHORS: ZE LIU et al.
CATEGORY: cs.CV [cs.CV, cs.AI, cs.LG]
HIGHLIGHT: In this paper, we instead advocate an inductive bias of locality in video Transformers, which leads to a better speed-accuracy trade-off compared to previous approaches which compute self-attention globally even with spatial-temporal factorization.

44, TITLE: Attention Toward Neighbors: A Context Aware Framework for High Resolution Image Segmentation
AUTHORS: Fahim Faisal Niloy ; M. Ashraful Amin ; Amin Ahsan Ali ; AKM Mahbubur Rahman
CATEGORY: cs.CV [cs.CV]
HIGHLIGHT: To overcome these limitations, in this paper, we propose a novel framework to segment a particular patch by incorporating contextual information from its neighboring patches.

45, TITLE: SGTBN: Generating Dense Depth Maps from Single-Line LiDAR
AUTHORS: Hengjie Lu ; Shugong Xu ; Shan Cao
CATEGORY: cs.CV [cs.CV]
HIGHLIGHT: Therefore, we propose a method to tackle the problem of single-line depth completion, in which we aim to generate a dense depth map from the single-line LiDAR info and the aligned RGB image.

46, TITLE: Evaluation of Deep Lift Pose Models for 3D Rodent Pose Estimation Based on Geometrically Triangulated Data
AUTHORS: INDRANI SARKAR et al.
CATEGORY: cs.CV [cs.CV, q-bio.NC, q-bio.QM]
HIGHLIGHT: Here we propose the use of lift-pose models that allow robust 3D pose estimation of freely moving rodents from a single camera view.

47, TITLE: Self-Supervised Monocular Depth Estimation of Untextured Indoor Rotated Scenes
AUTHORS: Benjamin Keltjens ; Tom van Dijk ; Guido de Croon
CATEGORY: cs.CV [cs.CV, cs.LG]
HIGHLIGHT: In an effort to extend self-supervised learning to more generalised environments we propose two additions.

48, TITLE: Relationship Between Pulmonary Nodule Malignancy and Surrounding Pleurae, Airways and Vessels: A Quantitative Study Using The Public LIDC-IDRI Dataset
AUTHORS: YULEI QIN et al.
CATEGORY: cs.CV [cs.CV, eess.IV, physics.med-ph, stat.AP]
HIGHLIGHT: Computer algorithms were developed to segment pulmonary structures and quantify the distances to the pleural surface, airways, and vessels, as well as the count and normalized volume of airways and vessels near a nodule.

49, TITLE: Symmetric Wasserstein Autoencoders
AUTHORS: Sun Sun ; Hongyu Guo
CATEGORY: cs.LG [cs.LG, cs.AI, cs.CV]
HIGHLIGHT: Leveraging the framework of Optimal Transport, we introduce a new family of generative autoencoders with a learnable prior, called Symmetric Wasserstein Autoencoders (SWAEs).

50, TITLE: DCoM: A Deep Column Mapper for Semantic Data Type Detection
AUTHORS: Subhadip Maji ; Swapna Sourav Rout ; Sudeep Choudhary
CATEGORY: cs.LG [cs.LG, cs.AI, cs.CV, stat.ML]
HIGHLIGHT: In this paper, we introduce DCoM, a collection of multi-input NLP-based deep neural networks for detecting semantic data types, where instead of extracting a large number of features from the data, we feed the raw values of columns (or instances) to the model as texts.

51, TITLE: Towards Fully Interpretable Deep Neural Networks: Are We There Yet?
AUTHORS: Sandareka Wickramanayake ; Wynne Hsu ; Mong Li Lee
CATEGORY: cs.LG [cs.LG, cs.CV]
HIGHLIGHT: This paper provides a review of existing methods to develop DNNs with intrinsic interpretability, with a focus on Convolutional Neural Networks (CNNs).

52, TITLE: AudioCLIP: Extending CLIP to Image, Text and Audio
AUTHORS: Andrey Guzhov ; Federico Raue ; Jörn Hees ; Andreas Dengel
CATEGORY: cs.SD [cs.SD, cs.CV, eess.AS]
HIGHLIGHT: In this work, we present an extension of the CLIP model that handles audio in addition to text and images.

53, TITLE: A Systematic Collection of Medical Image Datasets for Deep Learning
AUTHORS: JOHANN LI et al.
CATEGORY: eess.IV [eess.IV, cs.CV, cs.LG]
HIGHLIGHT: Thus, this paper provides a collection of medical image datasets, as comprehensive as possible, together with their associated challenges for deep learning research.

54, TITLE: VinDr-SpineXR: A Deep Learning Framework for Spinal Lesions Detection and Classification from Radiographs
AUTHORS: HIEU T. NGUYEN et al.
CATEGORY: eess.IV [eess.IV, cs.CV, cs.LG]
HIGHLIGHT: This work aims at developing and evaluating a deep learning-based framework, named VinDr-SpineXR, for the classification and localization of abnormalities from spine X-rays. First, we build a large dataset, comprising 10,468 spine X-ray images from 5,000 studies, each of which is manually annotated by an experienced radiologist with bounding boxes around abnormal findings in 13 categories.

55, TITLE: Q-space Conditioned Translation Networks for Directional Synthesis of Diffusion Weighted Images from Multi-modal Structural MRI
AUTHORS: Mengwei Ren ; Heejong Kim ; Neel Dey ; Guido Gerig
CATEGORY: eess.IV [eess.IV, cs.CV, cs.LG]
HIGHLIGHT: We propose a generative adversarial translation framework for high-quality DWI synthesis with arbitrary $q$-space sampling given commonly acquired structural images (e.g., B0, T1, T2).

56, TITLE: A Global Appearance and Local Coding Distortion Based Fusion Framework for CNN Based Filtering in Video Coding
AUTHORS: Jian Yue ; Yanbo Gao ; Shuai Li ; Hui Yuan ; Frédéric Dufaux
CATEGORY: eess.IV [eess.IV, cs.CV]
HIGHLIGHT: Therefore, in this paper, we address the filtering problem from two aspects: global appearance restoration for disrupted texture, and restoration of local coding distortion caused by the fixed coding pipeline.

57, TITLE: AVHYAS: A Free and Open Source QGIS Plugin for Advanced Hyperspectral Image Analysis
AUTHORS: ROSLY BOY LYNGDOH et al.
CATEGORY: eess.IV [eess.IV, cs.CV]
HIGHLIGHT: AVHYAS: A Free and Open Source QGIS Plugin for Advanced Hyperspectral Image Analysis

58, TITLE: High-resolution Image Registration of Consecutive and Re-stained Sections in Histopathology
AUTHORS: Johannes Lotz ; Nick Weiss ; Jeroen van der Laak ; Stefan Heldmann
CATEGORY: eess.IV [eess.IV, cs.CV]
HIGHLIGHT: We present a fully-automatic algorithm for non-parametric (nonlinear) image registration and apply it to a previously existing dataset from the ANHIR challenge (230 slide pairs, consecutive sections) and a new dataset (hybrid re-stained and consecutive, 81 slide pairs, ca. 3000 landmarks) which is made publicly available.

59, TITLE: Rate Distortion Characteristic Modeling for Neural Image Compression
AUTHORS: Chuanmin Jia ; Ziqing Ge ; Shanshe Wang ; Siwei Ma ; Wen Gao
CATEGORY: eess.IV [eess.IV, cs.CV, cs.LG]
HIGHLIGHT: In this paper, we consider the problem of R-D characteristic analysis and modeling for NIC.

60, TITLE: ATP-Net: An Attention-based Ternary Projection Network For Compressed Sensing
AUTHORS: Guanxiong Nie ; Yajian Zhou
CATEGORY: eess.SP [eess.SP, cs.CV, eess.IV]
HIGHLIGHT: In this paper, a ternary sampling matrix-based method with an attention mechanism is proposed to address the problem that CS sampling matrices are, in most cases, random matrices that are unrelated to the sampled signal and require a large amount of storage space.
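
For context, the measurement model behind such a method looks roughly like the sketch below: a ternary {-1, 0, +1} sensing matrix can be stored in two bits per entry and still compresses a sparse signal. The matrix here is drawn at random purely for illustration; the paper's contribution (learning the ternary matrix with an attention mechanism) is not reproduced.

```python
# Illustrative compressed-sensing measurement with a random ternary matrix
# (the paper learns its ternary matrix with attention; this shows only the sampling model).
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 256, 64, 0.5                      # signal length, measurements, nonzero probability
Phi = rng.choice([-1.0, 0.0, 1.0], size=(m, n), p=[p / 2, 1 - p, p / 2])

x = np.zeros(n)
support = rng.choice(n, size=8, replace=False)
x[support] = rng.standard_normal(8)         # sparse signal with 8 nonzeros

y = Phi @ x                                 # compressed measurements (64 instead of 256 values)
print(Phi.shape, y.shape)
```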
