A survey paper on SLAM: A Survey of Simultaneous Localization and Mapping (Part 1) — the LiDAR portion

A Survey of Simultaneous Localization and Mapping

Simultaneous Localization and Mapping (SLAM) achieves the purpose of simultaneous localization and map construction based on self-perception. This paper presents an overview of SLAM, covering LiDAR SLAM, visual SLAM, and their fusion. For both LiDAR and visual SLAM, the survey illustrates the basic types and products of sensors, the categories and history of open-source systems, embedded deep learning, challenges, and the future. In addition, an introduction to visual-inertial odometry is supplemented. For the fusion of LiDAR and vision, the paper highlights multi-sensor calibration and fusion in the hardware, data, and task layers. The paper closes with open questions and forward-looking thoughts. The contributions can be summarized as follows: the paper provides a high-quality, full-scale overview of SLAM; it is very friendly to new researchers, helping them grasp the development of SLAM and learn it thoroughly; and it can also serve experienced researchers as a dictionary for research and for finding new directions of interest.

Index Terms — survey, SLAM (simultaneous localization and mapping), LiDAR SLAM, visual SLAM, fusion of LiDAR and vision, user guide.

1. Introduction

SLAM is the abbreviation of Simultaneous Localization and Mapping, and it contains two main tasks: localization and mapping. It is a significant open problem in mobile robotics: to move precisely, a mobile robot must have an accurate map of the environment; however, to build an accurate map, the mobile robot's sensing locations must be known precisely [1]. In this way, simultaneous mapping and localization can be seen as a classic chicken-or-egg question: which comes first, the map or the motion?

In 1990, [2] first proposed using the EKF (Extended Kalman Filter) to incrementally estimate the posterior distribution over the robot pose together with the positions of landmarks. In fact, starting from an unknown location in an unknown environment, the robot localizes itself through repeated observations of environmental features during motion, and then builds an incremental map of the surroundings based on its own position, thereby achieving simultaneous localization and mapping. Localization has been a complicated and hot research topic in recent years. Localization technology depends on the environment and on the demands for cost, accuracy, frequency, and robustness, and it can be realized by GPS (Global Positioning System), IMU (Inertial Measurement Unit), wireless signals, and so on [3] [4]. But GPS can only work outdoors, IMUs suffer from accumulated error [5], and wireless technology, as an active system, cannot balance cost and accuracy. With rapid development, SLAM equipped with LiDAR, cameras, IMUs, and other sensors has sprung up in recent years.
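
To make the filtering formulation concrete, here is a minimal sketch of a single EKF-SLAM prediction/update cycle for a 2D robot observing point landmarks with a range-bearing sensor. The state layout, unicycle motion model, and function name are illustrative assumptions for this note, not the exact formulation of [2]:

```python
import numpy as np

def ekf_slam_step(mu, Sigma, u, z, lm_idx, dt, R, Q):
    """One EKF-SLAM cycle. State mu = [x, y, theta, lx0, ly0, lx1, ly1, ...],
    control u = (v, w), measurement z = (range, bearing) of landmark lm_idx."""
    n = mu.size
    th = mu[2]
    v, w = u

    # --- Prediction with a unicycle motion model (landmarks are static) ---
    mu_bar = mu.copy()
    mu_bar[0] += v * dt * np.cos(th)
    mu_bar[1] += v * dt * np.sin(th)
    mu_bar[2] += w * dt
    G = np.eye(n)                       # motion Jacobian
    G[0, 2] = -v * dt * np.sin(th)
    G[1, 2] =  v * dt * np.cos(th)
    Sigma_bar = G @ Sigma @ G.T
    Sigma_bar[:3, :3] += R              # process noise acts on the pose only

    # --- Update with a range-bearing observation of landmark lm_idx ---
    j = 3 + 2 * lm_idx
    dx = mu_bar[j] - mu_bar[0]
    dy = mu_bar[j + 1] - mu_bar[1]
    q = dx ** 2 + dy ** 2
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - mu_bar[2]])
    H = np.zeros((2, n))                # measurement Jacobian
    H[0, 0], H[0, 1] = -dx / np.sqrt(q), -dy / np.sqrt(q)
    H[0, j], H[0, j + 1] = dx / np.sqrt(q), dy / np.sqrt(q)
    H[1, 0], H[1, 1], H[1, 2] = dy / q, -dx / q, -1.0
    H[1, j], H[1, j + 1] = -dy / q, dx / q
    K = Sigma_bar @ H.T @ np.linalg.inv(H @ Sigma_bar @ H.T + Q)
    innov = z - z_hat
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi  # wrap the bearing
    mu_new = mu_bar + K @ innov
    Sigma_new = (np.eye(n) - K @ H) @ Sigma_bar
    return mu_new, Sigma_new
```

Because the joint covariance over the pose and all landmarks is maintained, the cost per update grows quadratically with the number of landmarks, which is one motivation for the particle-filter and graph-based alternatives discussed next.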

Starting from filter-based SLAM, graph-based SLAM now plays a dominant role. Algorithms have evolved from the KF (Kalman Filter), EKF, and PF (Particle Filter) to graph-based optimization, and single-thread systems have been replaced by multi-thread ones. The technology of SLAM has also shifted from the earliest prototypes for military use to later multi-sensor-fusion robot applications.
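
As a brief note on what "graph-based" means here: poses (and possibly landmarks) become nodes of a graph, relative measurements become edge constraints, and the trajectory is recovered by nonlinear least squares. The objective below is the standard textbook formulation of graph SLAM, given here for orientation rather than taken from any specific surveyed system:

$$
\mathbf{x}^{*} = \operatorname*{arg\,min}_{\mathbf{x}} \sum_{(i,j)} \mathbf{e}_{ij}(\mathbf{x}_i, \mathbf{x}_j)^{\top}\, \Omega_{ij}\, \mathbf{e}_{ij}(\mathbf{x}_i, \mathbf{x}_j)
$$

where $\mathbf{e}_{ij}$ measures the mismatch between the predicted and observed relative transform between nodes $i$ and $j$, and $\Omega_{ij}$ is the information matrix of that measurement. Solvers typically iterate Gauss-Newton or Levenberg-Marquardt steps on this cost, exploiting the sparsity of the graph.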

The organization of this paper can be summarized as follows. Section II introduces LiDAR SLAM, including LiDAR sensors, open-source LiDAR SLAM systems, deep learning in LiDAR, and challenges and the future. Section III focuses on visual SLAM, including camera sensors, open-source visual SLAM systems of different densities, visual-inertial odometry SLAM, deep learning in visual SLAM, and the future. Section IV demonstrates the fusion of LiDAR and vision. Finally, the paper identifies several directions for future SLAM research and provides a high-quality, full-scale user guide for new SLAM researchers.

2. LiDAR SLAM

In 1991, [1] used multiple servo-mounted sonar sensors and an EKF to equip a robot with a SLAM system. Starting from sonar sensors, the birth of LiDAR has made SLAM systems more reliable and robust.

A. LiDAR Sensors

LiDAR sensors can be divided into 2D LiDAR and 3D LiDAR, defined by the number of laser beams. In terms of production technology, LiDAR can also be divided into mechanical LiDAR, hybrid solid-state LiDAR such as MEMS (Micro-Electro-Mechanical Systems), and solid-state LiDAR. Solid-state LiDAR can be produced with phased-array and flash technology.

  • Velodyne: in mechanical LiDAR it offers the VLP-16, HDL-32E, and HDL-64E; in hybrid solid-state LiDAR it offers the Ultra Puck Auto (32E).
  • SLAMTEC: it offers low-cost LiDAR and robot platforms such as the RPLIDAR A1, A2, and R3.
  • Ouster: it offers mechanical LiDAR with 16 to 128 channels.
  • Quanergy: the S3 was the world's first announced solid-state LiDAR, the M8 is a mechanical LiDAR, and the S3-QI is a micro solid-state LiDAR.
  • Ibeo: it offers the Lux 4L and Lux 8L in mechanical LiDAR and, in cooperation with Valeo, has released a hybrid solid-state LiDAR named Scala.

The trend today is that small, lightweight solid-state LiDAR will occupy the market and satisfy most applications. Other LiDAR companies include, but are not limited to, SICK, Hokuyo, HESAI, RoboSense, LeddarTech, ISureStar, Benewake, Livox, Innovusion, Innoviz, Trimble, and Leishen Intelligent System.

B. LiDAR SLAM Systems

LiDAR SLAM systems are reliable both in theory and in technology. [6] illustrates the mathematical theory of how to perform simultaneous localization and mapping with a 2D LiDAR based on probability, and [7] surveys 2D LiDAR SLAM systems.

1) 2D SLAM

  • Gmapping: the most widely used SLAM package in robotics, based on the RBPF (Rao-Blackwellized Particle Filter) method. It adds a scan-matching step to estimate the pose [8] [6], and it is an improved version of FastSLAM [9] [10] with grid maps (a minimal particle-filter sketch follows this list).
  • HectorSLAM: it combines a 2D SLAM system based on scan matching with 3D navigation using an inertial sensing system [11].
  • KartoSLAM: a graph-based SLAM system [12].
  • LagoSLAM: its basis is graph-based SLAM, posed as the minimization of a nonlinear, non-convex cost function [13].
  • CoreSLAM: a SLAM algorithm designed for simplicity with minimal loss of performance [14].
  • Cartographer: a SLAM system from Google [15]. It adopts submaps and loop closure to achieve better performance at product level, and the algorithm works across multiple platforms and sensor configurations, providing both 2D and 3D SLAM.
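
For intuition about the RBPF idea behind Gmapping, below is a minimal sketch of one particle-filter localization step against a known occupancy grid: each particle is a pose hypothesis, propagated through a noisy motion model, weighted by a toy scan-match score, and resampled. The function names, noise levels, and the endpoint-counting score are illustrative assumptions; a real RBPF SLAM system additionally maintains one map per particle.

```python
import numpy as np

rng = np.random.default_rng(0)

def motion_update(poses, v, w, dt, noise=(0.05, 0.02)):
    """Propagate particle poses (N, 3) = (x, y, theta) with a noisy unicycle model."""
    n = len(poses)
    v_n = v + rng.normal(0, noise[0], n)
    w_n = w + rng.normal(0, noise[1], n)
    poses = poses.copy()
    poses[:, 0] += v_n * dt * np.cos(poses[:, 2])
    poses[:, 1] += v_n * dt * np.sin(poses[:, 2])
    poses[:, 2] += w_n * dt
    return poses

def scan_match_weight(pose, scan, grid, res):
    """Toy likelihood: mean occupancy of the cells hit by the scan endpoints."""
    angles = np.linspace(-np.pi, np.pi, len(scan), endpoint=False) + pose[2]
    ex = pose[0] + scan * np.cos(angles)
    ey = pose[1] + scan * np.sin(angles)
    ix = np.clip((ex / res).astype(int), 0, grid.shape[0] - 1)
    iy = np.clip((ey / res).astype(int), 0, grid.shape[1] - 1)
    return grid[ix, iy].mean() + 1e-9   # grid holds occupancy in [0, 1]

def pf_step(poses, scan, grid, res, v, w, dt):
    """One predict-weight-resample cycle over all particles."""
    poses = motion_update(poses, v, w, dt)
    weights = np.array([scan_match_weight(p, scan, grid, res) for p in poses])
    weights /= weights.sum()
    idx = rng.choice(len(poses), size=len(poses), p=weights)  # resampling
    return poses[idx]
```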

2) 3D SLAM

  • LOAM: a real-time method for state estimation and mapping using a 3D LiDAR [16]. It also has a back-and-forth spinning version and a continuous-scanning 2D LiDAR version.
  • LeGO-LOAM: it takes the point cloud from a horizontally mounted Velodyne VLP-16 LiDAR, plus optional IMU data, as input. The system outputs 6-DoF pose estimates in real time and has global optimization and loop closure [17].
  • Cartographer: it supports both 2D and 3D SLAM [15].
  • IMLS-SLAM: it presents a new low-drift SLAM algorithm using only 3D LiDAR data, based on a scan-to-model matching framework [18] (a minimal rigid-alignment sketch follows this list).
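
The scan-matching core shared by these systems is rigid alignment of a new scan to a reference. Below is a minimal sketch of point-to-point ICP using the SVD-based (Kabsch) closed form for each iteration; it is a generic illustration under simplifying assumptions (nearest-neighbor correspondences, no outlier rejection), not the edge/plane features of LOAM or the implicit surface model of IMLS-SLAM.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_iteration(src, ref):
    """One ICP step: returns R (3, 3) and t (3,) that move source points (N, 3)
    toward reference points (M, 3) under nearest-neighbor correspondences."""
    tree = cKDTree(ref)
    _, idx = tree.query(src)            # nearest reference point per source point
    matched = ref[idx]

    # Closed-form rigid alignment (Kabsch): center, then SVD of cross-covariance.
    mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
    H = (src - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    t = mu_m - R @ mu_s
    return R, t

def icp(src, ref, iters=20):
    """Iterate the alignment (fixed iteration count for simplicity)."""
    cur = src.copy()
    for _ in range(iters):
        R, t = icp_iteration(cur, ref)
        cur = cur @ R.T + t             # apply x' = R x + t to row-vector points
    return cur
```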

3) Deep Learning in LiDAR

  • Feature and Detection: PointNetVLAD [19] enables end-to-end training and inference to extract a global descriptor from a given 3D point cloud, tackling retrieval-based point cloud place recognition. VoxelNet [20] is a generic 3D detection network that unifies feature extraction and bounding-box prediction into a single-stage, end-to-end trainable deep network. Other work can be seen in BirdNet [21]. LMNet [22] describes an efficient single-stage deep convolutional neural network that detects objects and outputs an objectness map and bounding-box offset values for each point. PIXOR [23] is a proposal-free single-stage detector that outputs oriented 3D object estimates decoded from pixel-wise neural network predictions. YOLO3D [24] builds on the success of single-shot regression meta-architectures in 2D perspective image space and extends them to LiDAR point clouds to generate oriented 3D object bounding boxes. PointCNN [25] proposes to learn an X-transformation from the input points; the X-transformation is applied with the element-wise product and sum operations of typical convolution operators. MV3D [26] is a sensor-fusion framework that takes LiDAR point clouds and RGB images as input and predicts oriented 3D bounding boxes. PU-GAN [27] presents a new point cloud upsampling network based on a GAN (Generative Adversarial Network). Other similar work can be seen in the best paper of CVPR 2018, but is not limited to [28].
  • Recognition and Segmentation: in fact, segmentation methods for 3D point clouds can be divided into edge-based, region-growing, model-fitting, hybrid, machine-learning, and deep-learning approaches [29]; the focus here is on deep learning. PointNet [30] designs a novel type of neural network that directly consumes point clouds, with functions for classification, segmentation, and semantic parsing (a minimal PointNet-style sketch follows this list). PointNet++ [31] learns hierarchical features with increasing contextual scales. VoteNet [32] builds a 3D detection pipeline for point clouds on top of PointNet++ as an end-to-end 3D object detection network. SegMap [33] is a map-representation solution to the localization and mapping problem based on segment extraction from 3D point clouds. SqueezeSeg [34] [35] [36] is a convolutional neural network with a recurrent CRF (Conditional Random Field) for real-time road-object segmentation from 3D LiDAR point clouds. PointSIFT [37] is a semantic segmentation framework for 3D point clouds, based on a simple module that extracts features from neighboring points in eight directions. PointWise [38] presents a convolutional neural network for semantic segmentation and object recognition with 3D point clouds. 3P-RNN [39] is a novel end-to-end approach for unstructured point cloud semantic segmentation along two horizontal directions to exploit inherent contextual features. Other similar work can be seen in, but is not limited to, SPG [40] and the review [29]. SegMatch [41] is a loop-closure method based on the detection and matching of 3D segments. Kd-Network [42] is designed for 3D model recognition tasks and works with unstructured point clouds. DeepTemporalSeg [43] proposes a deep convolutional neural network (DCNN) for the temporally consistent semantic segmentation of LiDAR scans. LU-Net [44] achieves semantic segmentation instead of applying some global 3D segmentation method. Other similar work can be seen in, but is not limited to, PointRCNN [45].
  • Localization: L3-Net [46] is a novel learning-based LiDAR localization system that achieves centimeter-level localization accuracy. SuMa++ [47] computes semantic segmentation results as point-wise labels over the whole scan, which allows building a semantically enriched map with labeled surfels and improving projective scan matching with semantic constraints.
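
To make the "directly consumes point clouds" idea concrete, here is a minimal PointNet-style classifier: a shared per-point MLP followed by a symmetric max pooling that makes the output invariant to the ordering of the points. It is a stripped-down sketch with illustrative layer sizes and no input/feature transform networks, not the full architecture of [30].

```python
import torch
import torch.nn as nn

class MiniPointNet(nn.Module):
    """Stripped-down PointNet classifier: shared MLP + max pool + FC head."""
    def __init__(self, num_classes=10):
        super().__init__()
        # Shared per-point MLP, implemented as 1x1 convolutions over the points.
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.BatchNorm1d(1024), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, num_classes),
        )

    def forward(self, pts):               # pts: (batch, 3, num_points)
        feats = self.point_mlp(pts)       # (batch, 1024, num_points)
        glob = feats.max(dim=2).values    # symmetric function: order-invariant
        return self.head(glob)            # (batch, num_classes) logits

logits = MiniPointNet()(torch.randn(2, 3, 1024))  # e.g. 2 clouds of 1024 points
```

The max pooling is the key design choice: because it is a symmetric function over the point dimension, permuting the input points leaves the global feature, and hence the prediction, unchanged.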

C. Challenges and Future

1) Cost and Adaptability
The advantage of LiDAR is that it provides 3D information and is not affected by night or changing illumination; in addition, its field of view is comparatively large and can reach 360 degrees. However, the technical threshold of LiDAR is very high, which leads to a long development cycle and high cost. In the future, the trends are miniaturization, reasonable cost, solid-state designs, and the achievement of high reliability and adaptability.

2) Low-Texture and Dynamic Environments
Most SLAM systems can only work in a fixed environment, yet things change constantly. Furthermore, low-texture environments such as long corridors and big pipes cause trouble for LiDAR SLAM. [48] uses an IMU to assist 2D SLAM in overcoming these obstacles, and [49] incorporates the time dimension into the mapping process so that the robot can maintain an accurate map while operating in a dynamic environment. How to make LiDAR SLAM more robust to low-texture and dynamic environments, and how to keep maps up to date, deserve deeper consideration.

3) Adversarial Sensor Attacks
Deep neural networks are vulnerable to adversarial examples, which has been demonstrated in camera-based perception; for LiDAR-based perception it is highly important yet largely unexplored. With a relay attack, [50] first spoofed the LiDAR, interfering with the output data and distance estimation, and a novel saturation attack completely incapacitated a Velodyne VLP-16 from sensing in a certain direction. [51] explores the possibility of strategically controlling spoofed attack signals to fool machine-learning models: it formulates the task as an optimization problem, designs modeling methods for the input perturbation function and the objective function, and improves the attack success rate to around 75%. Adversarial sensor attacks can deceive a SLAM system based on LiDAR point clouds, and such attacks are hard to detect and defend against, i.e., they are stealthy. Under these circumstances, research on how to protect LiDAR SLAM systems from adversarial sensor attacks should become a new topic.

References

[1] John J. Leonard and Hugh F. Durrant-Whyte. Simultaneous map building and localization for an autonomous mobile robot. In Proceedings IROS '91: IEEE/RSJ International Workshop on Intelligent Robots and Systems, pages 1442–1447. IEEE, 1991.
[2] Randall Smith, Matthew Self, and Peter Cheeseman. Estimating uncertain spatial relationships in robotics. In Autonomous Robot Vehicles, pages 167–193. Springer, 1990.
[3] Baichuan Huang, Jingbin Liu, Wei Sun, and Fan Yang. A robust indoor positioning method based on bluetooth low energy with separate channel information. Sensors, 19(16):3487, 2019.
[4] Jingbin Liu, Ruizhi Chen, Yuwei Chen, Ling Pei, and Liang Chen. iParking: An intelligent indoor location-based smartphone parking service. Sensors, 12(11):14612–14629, 2012.
[5] Jingbin Liu, Ruizhi Chen, Ling Pei, Robert Guinness, and Heidi Kuusniemi. A hybrid smartphone indoor positioning solution for mobile LBS. Sensors, 12(12):17208–17233, 2012.
[6] Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic Robotics. MIT Press, 2005.
[7] João Machado Santos, David Portugal, and Rui P. Rocha. An evaluation of 2D SLAM techniques available in Robot Operating System. In 2013 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), pages 1–6. IEEE, 2013.
[8] Giorgio Grisetti, Cyrill Stachniss, Wolfram Burgard, et al. Improved techniques for grid mapping with Rao-Blackwellized particle filters. IEEE Transactions on Robotics, 23(1):34, 2007.
[9] Michael Montemerlo, Sebastian Thrun, Daphne Koller, Ben Wegbreit, et al. FastSLAM: A factored solution to the simultaneous localization and mapping problem. In AAAI/IAAI, pages 593–598, 2002.
[10] Michael Montemerlo, Sebastian Thrun, Daphne Koller, Ben Wegbreit, et al. FastSLAM 2.0: An improved particle filtering algorithm for simultaneous localization and mapping that provably converges. In IJCAI, pages 1151–1156, 2003.
[11] Stefan Kohlbrecher, Oskar von Stryk, Johannes Meyer, and Uwe Klingauf. A flexible and scalable SLAM system with full 3D motion estimation. In 2011 IEEE International Symposium on Safety, Security, and Rescue Robotics, pages 155–160. IEEE, 2011.
[12] Kurt Konolige, Giorgio Grisetti, Rainer Kümmerle, Wolfram Burgard, Benson Limketkai, and Regis Vincent. Efficient sparse pose adjustment for 2D mapping. In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 22–29. IEEE, 2010.
[13] Luca Carlone, Rosario Aragues, José A. Castellanos, and Basilio Bona. A linear approximation for graph-based simultaneous localization and mapping. Robotics: Science and Systems VII, pages 41–48, 2012.
[14] Bruno Steux and Oussama El Hamzaoui. tinySLAM: A SLAM algorithm in less than 200 lines C-language program. In Proceedings of the Control Automation Robotics & Vision (ICARCV), Singapore, pages 7–10, 2010.
[15] Wolfgang Hess, Damon Kohler, Holger Rapp, and Daniel Andor. Real-time loop closure in 2D lidar SLAM. In 2016 IEEE International Conference on Robotics and Automation (ICRA), pages 1271–1278. IEEE, 2016.
[16] Ji Zhang and Sanjiv Singh. LOAM: Lidar odometry and mapping in real-time. In Robotics: Science and Systems, volume 2, page 9, 2014.
[17] Tixiao Shan and Brendan Englot. LeGO-LOAM: Lightweight and ground-optimized lidar odometry and mapping on variable terrain. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 4758–4765. IEEE, 2018.
[18] Jean-Emmanuel Deschaud. IMLS-SLAM: Scan-to-model matching based on 3D data. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 2480–2485. IEEE, 2018.
[19] Mikaela Angelina Uy and Gim Hee Lee. PointNetVLAD: Deep point cloud based retrieval for large-scale place recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4470–4479, 2018.
[20] Yin Zhou and Oncel Tuzel. VoxelNet: End-to-end learning for point cloud based 3D object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4490–4499, 2018.
[21] Jorge Beltrán, Carlos Guindel, Francisco Miguel Moreno, Daniel Cruzado, Fernando García, and Arturo de la Escalera. BirdNet: A 3D object detection framework from lidar information. In 2018 21st International Conference on Intelligent Transportation Systems (ITSC), pages 3517–3523. IEEE, 2018.
[22] Kazuki Minemura, Hengfui Liau, Abraham Monrroy, and Shinpei Kato. LMNet: Real-time multiclass object detection on CPU using 3D lidar. In 2018 3rd Asia-Pacific Conference on Intelligent Robot Systems (ACIRS), pages 28–34. IEEE, 2018.
[23] Bin Yang, Wenjie Luo, and Raquel Urtasun. PIXOR: Real-time 3D object detection from point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7652–7660, 2018.
[24] Waleed Ali, Sherif Abdelkarim, Mahmoud Zidan, Mohamed Zahran, and Ahmad El Sallab. YOLO3D: End-to-end real-time 3D oriented object bounding box detection from lidar point cloud. In Proceedings of the European Conference on Computer Vision (ECCV), 2018.
[25] Yangyan Li, Rui Bu, Mingchao Sun, Wei Wu, Xinhan Di, and Baoquan Chen. PointCNN: Convolution on X-transformed points. In Advances in Neural Information Processing Systems, pages 820–830, 2018.
[26] Xiaozhi Chen, Huimin Ma, Ji Wan, Bo Li, and Tian Xia. Multi-view 3D object detection network for autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1907–1915, 2017.
[27] Ruihui Li, Xianzhi Li, Chi-Wing Fu, Daniel Cohen-Or, and Pheng-Ann Heng. PU-GAN: A point cloud upsampling adversarial network. In Proceedings of the IEEE International Conference on Computer Vision, pages 7203–7212, 2019.
[28] Hang Su, Varun Jampani, Deqing Sun, Subhransu Maji, Evangelos Kalogerakis, Ming-Hsuan Yang, and Jan Kautz. SPLATNet: Sparse lattice networks for point cloud processing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2530–2539, 2018.
[29] E. Grilli, F. Menna, and F. Remondino. A review of point clouds segmentation and classification algorithms. The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 42:339, 2017.
[30] Charles R. Qi, Hao Su, Kaichun Mo, and Leonidas J. Guibas. PointNet: Deep learning on point sets for 3D classification and segmentation. arXiv preprint arXiv:1612.00593, 2016.
[31] Charles R. Qi, Li Yi, Hao Su, and Leonidas J. Guibas. PointNet++: Deep hierarchical feature learning on point sets in a metric space. arXiv preprint arXiv:1706.02413, 2017.
[32] Charles R. Qi, Or Litany, Kaiming He, and Leonidas J. Guibas. Deep Hough voting for 3D object detection in point clouds. arXiv preprint arXiv:1904.09664, 2019.
[33] Renaud Dubé, Andrei Cramariuc, Daniel Dugas, Juan Nieto, Roland Siegwart, and Cesar Cadena. SegMap: 3D segment mapping using data-driven descriptors. In Robotics: Science and Systems (RSS), 2018.
[34] Bichen Wu, Alvin Wan, Xiangyu Yue, and Kurt Keutzer. SqueezeSeg: Convolutional neural nets with recurrent CRF for real-time road-object segmentation from 3D lidar point cloud. In ICRA, 2018.
[35] Bichen Wu, Xuanyu Zhou, Sicheng Zhao, Xiangyu Yue, and Kurt Keutzer. SqueezeSegV2: Improved model structure and unsupervised domain adaptation for road-object segmentation from a lidar point cloud. In ICRA, 2019.
[36] Xiangyu Yue, Bichen Wu, Sanjit A. Seshia, Kurt Keutzer, and Alberto L. Sangiovanni-Vincentelli. A lidar point cloud generator: From a virtual world to autonomous driving. In ICMR, pages 458–464. ACM, 2018.
[37] Mingyang Jiang, Yiran Wu, Tianqi Zhao, Zelin Zhao, and Cewu Lu. PointSIFT: A SIFT-like network module for 3D point cloud semantic segmentation. arXiv preprint arXiv:1807.00652, 2018.
[38] Binh-Son Hua, Minh-Khoi Tran, and Sai-Kit Yeung. Pointwise convolutional neural networks. In Computer Vision and Pattern Recognition (CVPR), 2018.
[39] Xiaoqing Ye, Jiamao Li, Hexiao Huang, Liang Du, and Xiaolin Zhang. 3D recurrent neural networks with context fusion for point cloud semantic segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 403–417, 2018.
[40] Loic Landrieu and Martin Simonovsky. Large-scale point cloud semantic segmentation with superpoint graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4558–4567, 2018.
[41] Renaud Dubé, Daniel Dugas, Elena Stumm, Juan Nieto, Roland Siegwart, and Cesar Cadena. SegMatch: Segment based place recognition in 3D point clouds. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 5266–5272. IEEE, 2017.
[42] Roman Klokov and Victor Lempitsky. Escape from cells: Deep Kd-networks for the recognition of 3D point cloud models. In Proceedings of the IEEE International Conference on Computer Vision, pages 863–872, 2017.
[43] Ayush Dewan and Wolfram Burgard. DeepTemporalSeg: Temporally consistent semantic segmentation of 3D lidar scans. arXiv preprint arXiv:1906.06962, 2019.
[44] Pierre Biasutti, Vincent Lepetit, Jean-François Aujol, Mathieu Brédif, and Aurélie Bugeau. LU-Net: An efficient network for 3D lidar point cloud semantic segmentation based on end-to-end-learned 3D features and U-Net. August 2019.
[45] Shaoshuai Shi, Xiaogang Wang, and Hongsheng Li. PointRCNN: 3D object proposal generation and detection from point cloud. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–779, 2019.
[46] Weixin Lu, Yao Zhou, Guowei Wan, Shenhua Hou, and Shiyu Song. L3-Net: Towards learning based lidar localization for autonomous driving. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[47] Xieyuanli Chen, Andres Milioto, and Emanuele Palazzolo. SuMa++: Efficient lidar-based semantic SLAM. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019.
[48] Zhongli Wang, Yan Chen, Yue Mei, Kuo Yang, and Baigen Cai. IMU-assisted 2D SLAM method for low-texture and dynamic environments. Applied Sciences, 8(12):2534, 2018.
[49] Aisha Walcott-Bryant, Michael Kaess, Hordur Johannsson, and John J. Leonard. Dynamic pose graph SLAM: Long-term mapping in low dynamic environments. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1871–1878. IEEE, 2012.
[50] Hocheol Shin, Dohyun Kim, Yujin Kwon, and Yongdae Kim. Illusion and dazzle: Adversarial optical channel exploits against lidars for automotive applications. In International Conference on Cryptographic Hardware and Embedded Systems, pages 445–467. Springer, 2017.
[51] Yulong Cao, Chaowei Xiao, Benjamin Cyr, Yimeng Zhou, Won Park, Sara Rampazzi, Qi Alfred Chen, Kevin Fu, and Z. Morley Mao. Adversarial sensor attack on lidar-based perception in autonomous driving. arXiv preprint arXiv:1907.06826, 2019.
