AI | ShowMeAI News Daily #2022.06.21

The ShowMeAI Daily series has been fully upgraded! It covers AI topics across Tools & Frameworks | Projects & Code | Blog Posts & Sharing | Data & Resources | Research & Papers. Click to view the article archive, and subscribe to the topic #ShowMeAI资讯日报 in the official account to receive the latest daily updates. Click Collections & Monthly Digest to quickly browse the full set of each topic. Click here and reply with the keyword 日报 to get the AI monthly digest and resource pack for free.

1. Tools & Frameworks

Tool: MiaoYan (妙言) - a lightweight and elegant Markdown notebook

GitHub: https://github.com/tw93/MiaoYan

Tool: ktop - a top-like resource viewer for Kubernetes clusters

‘ktop - A top-like tool for your Kubernetes clusters’ by Vladimir Vivien

GitHub: https://github.com/vladimirvivien/ktop

Tool: PyScript CLI - a command-line interface for PyScript

PyScript is a JavaScript library that lets you embed and run Python code in HTML pages.

‘PyScript CLI - A CLI for PyScript’

GitHub: https://github.com/pyscript/pyscript-cli

Tool: NoiseTorch - real-time microphone noise suppression on Linux

‘NoiseTorch - Real-time microphone noise suppression on Linux.’ by lawl

GitHub: https://github.com/noisetorch/NoiseTorch

Library: morfeus - a Python package for calculating molecular features

‘morfeus - A Python package for calculating molecular features’ by Kjell Jorner

GitHub: https://github.com/kjelljorner/morfeus

2. Blog Posts & Sharing

Video series: Mu Li's project "Deep Learning Paper Reading" (深度学习论文精读)

GitHub: https://github.com/mli/paper-reading

The videos are also updated on Bilibili and Zhihu. Papers are selected for being influential (must-read) deep learning work from the past 10 years, or recent papers that are particularly interesting.

3. Data & Resources

Resource: AndroidReverseStudy - learning Android reverse engineering

GitHub: https://github.com/heyhu/AndroidReverseStudy

Resource list: literature and resources on few-shot class-incremental learning

‘Awesome Few-Shot Class-Incremental Learning’ by Da-Wei Zhou

GitHub: https://github.com/zhoudw-zdw/Awesome-Few-Shot-Class-Incremental-Learning

Resource list: resources for software engineering interviews

‘Awesome Software Engineering Interview’ by imkgarg

GitHub: https://github.com/imkgarg/Awesome-Software-Engineering-Interview

Resource list: AI-research-tools - a curated list of research tools

GitHub: https://github.com/bighuang624/AI-research-tools

It covers tools for finding, reading, and writing papers. Some are general-purpose; the domain-specific ones are mainly for computer science.

4. Research & Papers

Click here and reply with the keyword 日报 to get the curated June paper collection for free.

Paper: Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt

Title: Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt

Date: 14 Jun 2022

Field: deep learning

Task: loss-function optimization

Paper link: https://arxiv.org/abs/2206.07137

Code: https://github.com/oatml/rho-loss

Authors: Sören Mindermann, Jan Brauner, Muhammed Razzak, Mrinank Sharma, Andreas Kirsch, Winnie Xu, Benedikt Höltgen, Aidan N. Gomez, Adrien Morisot, Sebastian Farquhar, Yarin Gal

Summary: But most computation and time is wasted on redundant and noisy points that are already learnt or not learnable.

Abstract: Training on web-scale data can take months. But most computation and time is wasted on redundant and noisy points that are already learnt or not learnable. To accelerate training, we introduce Reducible Holdout Loss Selection (RHO-LOSS), a simple but principled technique which selects approximately those points for training that most reduce the model's generalization loss. As a result, RHO-LOSS mitigates the weaknesses of existing data selection methods: techniques from the optimization literature typically select 'hard' (e.g. high loss) points, but such points are often noisy (not learnable) or less task-relevant. Conversely, curriculum learning prioritizes 'easy' points, but such points need not be trained on once learned. In contrast, RHO-LOSS selects points that are learnable, worth learning, and not yet learnt. RHO-LOSS trains in far fewer steps than prior art, improves accuracy, and speeds up training on a wide range of datasets, hyperparameters, and architectures (MLPs, CNNs, and BERT). On the large web-scraped image dataset Clothing-1M, RHO-LOSS trains in 18x fewer steps and reaches 2% higher final accuracy than uniform data shuffling.
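The selection rule is easy to prototype: the reducible holdout loss of a point is its training loss minus an "irreducible" loss estimated by a small model trained on holdout data, and each step trains only on the top-scoring points of a large candidate batch. Below is a minimal, hedged PyTorch sketch of that scoring step; the model objects, batch sizes, and loop structure are illustrative placeholders, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def rho_loss_select(model, irreducible_model, xb, yb, n_select):
    """Score a large candidate batch and keep the points with the highest
    reducible holdout loss (training loss minus irreducible holdout loss).
    Simplified sketch of the RHO-LOSS idea, not the official code."""
    with torch.no_grad():
        # Per-example loss under the current model being trained.
        train_loss = F.cross_entropy(model(xb), yb, reduction="none")
        # Per-example loss under a small model trained on holdout data,
        # used as an estimate of the irreducible (unlearnable) loss.
        irreducible_loss = F.cross_entropy(irreducible_model(xb), yb, reduction="none")
    reducible_loss = train_loss - irreducible_loss
    top_idx = torch.topk(reducible_loss, k=n_select).indices
    return xb[top_idx], yb[top_idx]

# Usage inside a training loop (sizes are illustrative):
# xb, yb = next(candidate_loader)          # e.g. 320 candidate points
# x_sel, y_sel = rho_loss_select(model, irreducible_model, xb, yb, n_select=32)
# loss = F.cross_entropy(model(x_sel), y_sel)
# loss.backward(); optimizer.step(); optimizer.zero_grad()
```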

Paper: Online Segmentation of LiDAR Sequences: Dataset and Algorithm

Title: Online Segmentation of LiDAR Sequences: Dataset and Algorithm

Date: 16 Jun 2022

Field: computer vision

Tasks: Autonomous Vehicles, LiDAR Semantic Segmentation, Semantic Segmentation

Paper link: https://arxiv.org/abs/2206.08194

Code: https://github.com/romainloiseau/Helix4D

Authors: Romain Loiseau, Mathieu Aubry, Loïc Landrieu

Summary: Helix4D operates on acquisition slices that correspond to a fraction of a full rotation of the sensor, significantly reducing the total latency.

Abstract: Roof-mounted spinning LiDAR sensors are widely used by autonomous vehicles, driving the need for real-time processing of 3D point sequences. However, most LiDAR semantic segmentation datasets and algorithms split these acquisitions into 360° frames, leading to acquisition latency that is incompatible with realistic real-time applications and evaluations. We address this issue with two key contributions. First, we introduce HelixNet, a 10 billion point dataset with fine-grained labels, timestamps, and sensor rotation information that allows an accurate assessment of real-time readiness of segmentation algorithms. Second, we propose Helix4D, a compact and efficient spatio-temporal transformer architecture specifically designed for rotating LiDAR point sequences. Helix4D operates on acquisition slices that correspond to a fraction of a full rotation of the sensor, significantly reducing the total latency. We present an extensive benchmark of the performance and real-time readiness of several state-of-the-art models on HelixNet and SemanticKITTI. Helix4D reaches accuracy on par with the best segmentation algorithms with a reduction of more than 5× in terms of latency and 50× in model size. Code and data are available at https://romainloiseau.fr/helixnet
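The latency-saving ingredient, processing a fraction of a rotation at a time, can be illustrated with a simple azimuth-based slicing of a point cloud. The sketch below only conveys what "acquisition slices" means; the real Helix4D pipeline uses timestamps and sensor rotation metadata from HelixNet, and the slice count and array layout here are assumptions.

```python
import numpy as np

def split_into_slices(points, n_slices=12):
    """Split one LiDAR sweep into angular slices covering 1/n_slices of a
    full rotation each, so downstream processing can start before the
    360° sweep is complete. `points` is an (N, 3) array of x, y, z."""
    azimuth = np.arctan2(points[:, 1], points[:, 0])               # in [-pi, pi]
    bins = np.floor((azimuth + np.pi) / (2 * np.pi) * n_slices).astype(int)
    bins = np.clip(bins, 0, n_slices - 1)
    return [points[bins == i] for i in range(n_slices)]

# Example: 100k random points split into 12 slices of ~30° each.
sweep = np.random.randn(100_000, 3).astype(np.float32)
slices = split_into_slices(sweep, n_slices=12)
print([len(s) for s in slices])
```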

Paper: Trajectory-guided Control Prediction for End-to-end Autonomous Driving: A Simple yet Strong Baseline

Title: Trajectory-guided Control Prediction for End-to-end Autonomous Driving: A Simple yet Strong Baseline

Date: 16 Jun 2022

Field: computer vision

Tasks: Autonomous Driving, Trajectory Planning

Paper link: https://arxiv.org/abs/2206.08129

Code: https://github.com/OpenPerceptionX/TCP

Authors: Penghao Wu, Xiaosong Jia, Li Chen, Junchi Yan, Hongyang Li, Yu Qiao

Summary: The two branches are connected so that the control branch receives corresponding guidance from the trajectory branch at each time step.

Abstract: Current end-to-end autonomous driving methods either run a controller based on a planned trajectory or perform control prediction directly, which have spanned two separately studied lines of research. Seeing their potential mutual benefits to each other, this paper takes the initiative to explore the combination of these two well-developed worlds. Specifically, our integrated approach has two branches for trajectory planning and direct control, respectively. The trajectory branch predicts the future trajectory, while the control branch involves a novel multi-step prediction scheme such that the relationship between current actions and future states can be reasoned. The two branches are connected so that the control branch receives corresponding guidance from the trajectory branch at each time step. The outputs from two branches are then fused to achieve complementary advantages. Our results are evaluated in the closed-loop urban driving setting with challenging scenarios using the CARLA simulator. Even with a monocular camera input, the proposed approach ranks first on the official CARLA Leaderboard, outperforming other complex candidates with multiple sensors or fusion mechanisms by a large margin. The source code and data will be made publicly available at https://github.com/OpenPerceptionX/TCP
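The two-branch design can be summarized in a few lines: a trajectory branch predicts future waypoints, a control branch predicts multi-step actions while receiving guidance features from the trajectory branch, and the two outputs are fused. The sketch below is a hypothetical, heavily simplified PyTorch module that conveys the wiring only; the layer sizes, guidance mechanism, and fusion rule are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TwoBranchDriver(nn.Module):
    """Schematic trajectory-guided control prediction: the control branch is
    conditioned on a feature produced by the trajectory branch."""
    def __init__(self, feat_dim=256, horizon=4):
        super().__init__()
        self.horizon = horizon
        self.backbone = nn.Linear(512, feat_dim)            # stand-in for an image encoder
        self.traj_head = nn.Linear(feat_dim, horizon * 2)   # (x, y) waypoints
        self.ctrl_head = nn.Linear(feat_dim * 2, 3)         # steer, throttle, brake per step

    def forward(self, obs_feat):
        h = torch.relu(self.backbone(obs_feat))
        waypoints = self.traj_head(h).view(-1, self.horizon, 2)
        guidance = h                                         # trajectory-branch feature as guidance
        controls = []
        for _ in range(self.horizon):                        # multi-step control prediction
            controls.append(self.ctrl_head(torch.cat([h, guidance], dim=-1)))
        controls = torch.stack(controls, dim=1)
        return waypoints, controls

model = TwoBranchDriver()
wp, ctrl = model(torch.randn(2, 512))
print(wp.shape, ctrl.shape)  # torch.Size([2, 4, 2]) torch.Size([2, 4, 3])
```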

Paper: General-purpose, long-context autoregressive modeling with Perceiver AR

Title: General-purpose, long-context autoregressive modeling with Perceiver AR

Date: 15 Feb 2022

Field: deep learning

Tasks: Density Estimation, autoregressive modeling

Paper link: https://arxiv.org/abs/2202.07765

Code: https://github.com/google-research/perceiver-ar

Authors: Curtis Hawthorne, Andrew Jaegle, Cătălina Cangea, Sebastian Borgeaud, Charlie Nash, Mateusz Malinowski, Sander Dieleman, Oriol Vinyals, Matthew Botvinick, Ian Simon, Hannah Sheahan, Neil Zeghidour, Jean-Baptiste Alayrac, João Carreira, Jesse Engel

Summary: Real-world data is high-dimensional: a book, image, or musical performance can easily contain hundreds of thousands of elements even after compression.

Abstract: Real-world data is high-dimensional: a book, image, or musical performance can easily contain hundreds of thousands of elements even after compression. However, the most commonly used autoregressive models, Transformers, are prohibitively expensive to scale to the number of inputs and layers needed to capture this long-range structure. We develop Perceiver AR, an autoregressive, modality-agnostic architecture which uses cross-attention to map long-range inputs to a small number of latents while also maintaining end-to-end causal masking. Perceiver AR can directly attend to over a hundred thousand tokens, enabling practical long-context density estimation without the need for hand-crafted sparsity patterns or memory mechanisms. When trained on images or music, Perceiver AR generates outputs with clear long-term coherence and structure. Our architecture also obtains state-of-the-art likelihood on long-sequence benchmarks, including 64 x 64 ImageNet images and PG-19 books.
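The central trick is a single cross-attention step that maps a very long input sequence onto a much smaller set of latents (one per target position), with a causal mask so each latent only sees earlier tokens; the latents are then processed by a regular Transformer stack. Below is a hedged sketch of that cross-attention bottleneck only; the dimensions and masking details are simplified assumptions rather than the published model.

```python
import torch
import torch.nn as nn

def causal_cross_attend(x, n_latents, attn):
    """Map a long sequence x of shape (B, T, D) to (B, n_latents, D) latents.
    Latent i is aligned with input position T - n_latents + i and may only
    attend to inputs at or before that position (causal masking)."""
    B, T, D = x.shape
    queries = x[:, T - n_latents:, :]                  # latents seeded from the last positions
    q_pos = torch.arange(T - n_latents, T)             # position of each latent
    k_pos = torch.arange(T)
    mask = k_pos[None, :] > q_pos[:, None]             # True = not allowed to attend
    out, _ = attn(queries, x, x, attn_mask=mask)
    return out

attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
x = torch.randn(2, 1024, 64)                           # long input, e.g. 1024 tokens
latents = causal_cross_attend(x, n_latents=16, attn=attn)
print(latents.shape)                                   # torch.Size([2, 16, 64])
```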

Paper: Spatially-Adaptive Multilayer Selection for GAN Inversion and Editing

Title: Spatially-Adaptive Multilayer Selection for GAN Inversion and Editing

Date: CVPR 2022

Field: computer vision

Paper link: https://arxiv.org/abs/2206.08357

Code: https://github.com/adobe-research/sam_inversion

Authors: Gaurav Parmar, Yijun Li, Jingwan Lu, Richard Zhang, Jun-Yan Zhu, Krishna Kumar Singh

Summary: We propose a new method to invert and edit such complex images in the latent space of GANs, such as StyleGAN2.

Abstract: Existing GAN inversion and editing methods work well for aligned objects with a clean background, such as portraits and animal faces, but often struggle for more difficult categories with complex scene layouts and object occlusions, such as cars, animals, and outdoor images. We propose a new method to invert and edit such complex images in the latent space of GANs, such as StyleGAN2. Our key idea is to explore inversion with a collection of layers, spatially adapting the inversion process to the difficulty of the image. We learn to predict the "invertibility" of different image segments and project each segment into a latent layer. Easier regions can be inverted into an earlier layer in the generator's latent space, while more challenging regions can be inverted into a later feature space. Experiments show that our method obtains better inversion results compared to the recent approaches on complex categories, while maintaining downstream editability. Please refer to our project page at https://www.cs.cmu.edu/~SAMInversion
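The core decision, routing each image segment to an earlier or later latent layer depending on how "invertible" it is, can be summarized as a simple thresholding step. The sketch below is purely illustrative: the segment scores, thresholds, and layer names are made-up placeholders, and the actual method learns the invertibility predictor and then performs per-layer optimization.

```python
def assign_latent_layers(segment_scores, thresholds=(0.75, 0.5, 0.25)):
    """Map per-segment invertibility scores in [0, 1] to latent layers:
    easy (high-score) segments go to early, more editable layers, while
    hard segments fall back to later feature layers."""
    layers = ["W+", "F4", "F8", "F16"]            # hypothetical early-to-late layer names
    assignment = {}
    for seg_id, score in segment_scores.items():
        idx = sum(score < t for t in thresholds)  # count thresholds the score falls below
        assignment[seg_id] = layers[idx]
    return assignment

# Example: a car image segmented into sky, car body, and a heavily occluded wheel.
scores = {"sky": 0.9, "car_body": 0.6, "occluded_wheel": 0.1}
print(assign_latent_layers(scores))
# {'sky': 'W+', 'car_body': 'F4', 'occluded_wheel': 'F16'}
```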

Paper: Heterogeneous Information Network based Default Analysis on Banking Micro and Small Enterprise Users

Title: Heterogeneous Information Network based Default Analysis on Banking Micro and Small Enterprise Users

Date: 24 Apr 2022

Field: deep learning

Task: Feature Engineering

Paper link: https://arxiv.org/abs/2204.11849

Code: https://github.com/adlington/hidam

Authors: Zheng Zhang, Yingsheng Ji, Jiachen Shen, Xi Zhang, Guangwen Yang

Summary: Risk assessment is a substantial problem for financial institutions that has been extensively studied both for its methodological richness and its various practical applications.

Abstract: Risk assessment is a substantial problem for financial institutions that has been extensively studied both for its methodological richness and its various practical applications. With the expansion of inclusive finance, recent attentions are paid to micro and small-sized enterprises (MSEs). Compared with large companies, MSEs present a higher exposure rate to default owing to their insecure financial stability. Conventional efforts learn classifiers from historical data with elaborate feature engineering. However, the main obstacle for MSEs involves severe deficiency in credit-related information, which may degrade the performance of prediction. Besides, financial activities have diverse explicit and implicit relations, which have not been fully exploited for risk judgement in commercial banks. In particular, the observations on real data show that various relationships between company users have additional power in financial risk analysis. In this paper, we consider a graph of banking data, and propose a novel HIDAM model for the purpose. Specifically, we attempt to incorporate heterogeneous information network with rich attributes on multi-typed nodes and links for modeling the scenario of business banking service. To enhance feature representation of MSEs, we extract interactive information through meta-paths and fully exploit path information. Furthermore, we devise a hierarchical attention mechanism respectively to learn the importance of contents inside each meta-path and the importance of different meta-paths. Experimental results verify that HIDAM outperforms state-of-the-art competitors on real-world banking data.
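The hierarchical attention described in the abstract, first attending over the contents inside each meta-path and then over the meta-paths themselves, is the part that is easiest to sketch in code. The module below is a generic two-level attention pooling written from the abstract's description; the dimensions and scoring functions are assumptions, not the released HIDAM code.

```python
import torch
import torch.nn as nn

class HierarchicalAttention(nn.Module):
    """Two-level attention: pool the features inside each meta-path,
    then weight the resulting meta-path embeddings against each other."""
    def __init__(self, dim=64):
        super().__init__()
        self.node_score = nn.Linear(dim, 1)   # importance of contents inside a meta-path
        self.path_score = nn.Linear(dim, 1)   # importance of each meta-path

    def forward(self, metapath_feats):
        # metapath_feats: list of tensors, each (num_instances_p, dim)
        path_embs = []
        for feats in metapath_feats:
            w = torch.softmax(self.node_score(feats), dim=0)   # (n_p, 1)
            path_embs.append((w * feats).sum(dim=0))           # (dim,)
        path_embs = torch.stack(path_embs)                     # (num_paths, dim)
        a = torch.softmax(self.path_score(path_embs), dim=0)   # (num_paths, 1)
        return (a * path_embs).sum(dim=0)                      # fused user representation

enc = HierarchicalAttention(dim=64)
feats = [torch.randn(5, 64), torch.randn(3, 64)]  # two meta-paths with 5 and 3 instances
print(enc(feats).shape)                           # torch.Size([64])
```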

Paper: HaGRID – HAnd Gesture Recognition Image Dataset

Title: HaGRID – HAnd Gesture Recognition Image Dataset

Date: 16 Jun 2022

Field: computer vision

Tasks: Gesture Recognition, Hand Detection, Hand Gesture Recognition

Paper link: https://arxiv.org/abs/2206.08219

Code: https://github.com/hukenovs/hagrid

Authors: Alexander Kapitanov, Andrew Makhlyarchuk, Karina Kvanchiani

Summary: In this paper, we introduce an enormous dataset HaGRID (HAnd Gesture Recognition Image Dataset) for hand gesture recognition (HGR) systems.

Abstract: In this paper, we introduce an enormous dataset HaGRID (HAnd Gesture Recognition Image Dataset) for hand gesture recognition (HGR) systems. This dataset contains 552,992 samples divided into 18 classes of gestures. The annotations consist of bounding boxes of hands with gesture labels and markups of leading hands. The proposed dataset allows for building HGR systems, which can be used in video conferencing services, home automation systems, the automotive sector, services for people with speech and hearing impairments, etc. We are especially focused on interaction with devices to manage them. That is why all 18 chosen gestures are functional, familiar to the majority of people, and may be an incentive to take some action. In addition, we used crowdsourcing platforms to collect the dataset and took into account various parameters to ensure data diversity. We describe the challenges of using existing HGR datasets for our task and provide a detailed overview of them. Furthermore, the baselines for the hand detection and gesture classification tasks are proposed.

Paper: Exploring Smoothness and Class-Separation for Semi-supervised Medical Image Segmentation

Title: Exploring Smoothness and Class-Separation for Semi-supervised Medical Image Segmentation

Date: 2 Mar 2022

Field: medical technology

Tasks: Medical Image Segmentation, Semantic Segmentation, Semi-supervised Medical Image Segmentation

Paper link: https://arxiv.org/abs/2203.01324

Code: https://github.com/HiLab-git/SSL4MIS , https://github.com/ycwu1997/ss-net

Authors: Yicheng Wu, Zhonghua Wu, Qianyi Wu, ZongYuan Ge, Jianfei Cai

Summary: The pixel-level smoothness forces the model to generate invariant results under adversarial perturbations.

Abstract: Semi-supervised segmentation remains challenging in medical imaging since the amount of annotated medical data is often limited and there are many blurred pixels near the adhesive edges or low-contrast regions. To address the issues, we advocate to firstly constrain the consistency of samples with and without strong perturbations to apply sufficient smoothness regularization and further encourage the class-level separation to exploit the unlabeled ambiguous pixels for the model training. Particularly, in this paper, we propose the SS-Net for semi-supervised medical image segmentation tasks, via exploring the pixel-level Smoothness and inter-class Separation at the same time. The pixel-level smoothness forces the model to generate invariant results under adversarial perturbations. Meanwhile, the inter-class separation constrains individual class features should approach their corresponding high-quality prototypes, in order to make each class distribution compact and separate different classes. We evaluated our SS-Net against five recent methods on the public LA and ACDC datasets. The experimental results under two semi-supervised settings demonstrate the superiority of our proposed SS-Net, achieving new state-of-the-art (SOTA) performance on both datasets. The code is available at https://github.com/ycwu1997/SS-Net
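The two ingredients named in the abstract, consistency under adversarial perturbations (pixel-level smoothness) and pulling class features toward per-class prototypes (inter-class separation), translate naturally into two auxiliary loss terms. The snippet below is a loose sketch assembled from the abstract's wording; the perturbation generator, prototype update rule, and weighting are assumptions rather than the official SS-Net losses.

```python
import torch
import torch.nn.functional as F

def smoothness_loss(model, x, x_adv):
    """Pixel-level smoothness: predictions should stay invariant when the
    input is replaced by an adversarially perturbed copy x_adv."""
    p_clean = torch.softmax(model(x), dim=1)
    p_adv = torch.softmax(model(x_adv), dim=1)
    return F.mse_loss(p_adv, p_clean.detach())

def separation_loss(features, labels, prototypes):
    """Inter-class separation: pull each pixel's feature toward the
    high-quality prototype of its (pseudo-)class.
    features: (N, D), labels: (N,), prototypes: (num_classes, D)."""
    target = prototypes[labels]                      # prototype assigned to each pixel
    return F.mse_loss(features, target)

# Toy check of the separation term: 6 pixels, 4 classes, 16-dim features.
feats = torch.randn(6, 16)
labs = torch.randint(0, 4, (6,))
protos = torch.randn(4, 16)
print(separation_loss(feats, labs, protos))

# total = supervised_loss + lam1 * smoothness_loss(...) + lam2 * separation_loss(...)
```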

We are ShowMeAI, dedicated to spreading high-quality AI content and sharing industry solutions, using knowledge to accelerate every step of technical growth! Click to view the article archive, and subscribe to the topic #ShowMeAI资讯日报 in the official account to receive the latest daily updates. Click Collections & Monthly Digest to quickly browse the full set of each topic. Click here and reply with the keyword 日报 to get the AI monthly digest and resource pack for free.
