[YOLOv8 Improvement] AKConv (Alterable Kernel Convolution): plug-and-play convolution with an arbitrary number of parameters and arbitrary sampling shapes (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/135661842 | CONV |
[YOLOv8 Improvement] Dynamic Snake Convolution for tubular structure segmentation (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/135668961 | CONV |
[YOLOv8 Improvement] SCConv: plug-and-play Spatial and Channel Reconstruction Convolution (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/135742727 | CONV |
[YOLOv8 Improvement] RFAConv: Receptive-Field Attention Convolution, an innovative spatial attention (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/135815075 | CONV |
[YOLOv8 Improvement] Backbone: SwinTransformer (Hierarchical Vision Transformer using Shifted Windows) (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/135867187 | Backbone |
[YOLOv8 Improvement] Inner-IoU: IoU loss based on auxiliary bounding boxes (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/135904930 | Loss Function |
[YOLOv8 Improvement] Shape-IoU: a metric that accounts for bounding-box shape and scale (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/135927712 | Loss Function |
[YOLOv8 Improvement] MPDIoU: an effective and accurate bounding-box regression loss (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/135948703 | Loss Function |
[YOLOv8 Improvement] BiFPN: weighted Bidirectional Feature Pyramid Network (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/136021981 | Feature Fusion |
[YOLOv8 Improvement] AFPN: Asymptotic Feature Pyramid Network (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/136025499 | Feature Fusion |
[YOLOv8 Improvement] SPD-Conv: space-to-depth convolution for low-resolution images and small objects (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/136051327 | CONV |
[YOLOv8 Improvement] MSCA: Multi-Scale Convolutional Attention (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/136057088 | Attention |
[YOLOv8 Improvement] Replacing the YOLOv8 backbone with GhostNet: more features from cheap operations (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/136151800 | Backbone |
[YOLOv8 Improvement] Replacing the YOLOv8 backbone with GhostNetV2: enhancing cheap operations with long-range attention for a stronger lightweight edge backbone (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/136170972 | Backbone |
[YOLOv8 Improvement] MCA: Multidimensional Collaborative Attention in deep CNNs for image recognition (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/136205065 | Attention |
[YOLOv8 Improvement] MSDA: Multi-Scale Dilated Attention (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/136215149 | Attention |
[YOLOv8 Improvement] iRMB: Inverted Residual Mobile Block (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/136658166 | |
[YOLOv8 Improvement] CoordAttention: efficient coordinate attention for mobile networks (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/136824282 | Attention |
[YOLOv8 Improvement] Replacing the backbone with MobileNetV3 (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/136891204 | Backbone |
[YOLOv8 Improvement] Replacing the backbone with MobileViT: a lightweight, general-purpose, mobile-friendly vision transformer (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/136962297 | Backbone |
[YOLOv8 Improvement] MSBlock: hierarchical feature fusion strategy (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/137029177 | CONV |
[YOLOv8 Improvement] Polarized Self-Attention (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/137295765 | Attention |
[YOLOv8 Improvement] LSKNet (Large Selective Kernel Network): spatial selective attention (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/137614259 | Attention |
[YOLOv8 Improvement] Explicit Visual Center: Centralized Feature Pyramid module (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/137645622 | Feature Fusion |
[YOLOv8 Improvement] Non-Local: self-attention model based on non-local means denoising (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/139105131 | Attention |
[YOLOv8 Improvement] STA (Super Token Attention) (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/139113660 | Attention |
[YOLOv8 Improvement] HAT (Hybrid Attention Transformer) (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/139142532 | Attention |
[YOLOv8 Improvement] ACmix (Mixed Self-Attention and Convolution) (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/139167656 | Conv-Attention Hybrid |
[YOLOv8 Improvement] EMA (Efficient Multi-Scale Attention): efficient multi-scale attention based on cross-spatial learning (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/139160226 | Attention |
[YOLOv8 Improvement] CPCA (Channel Prior Convolutional Attention): channel attention that strengthens feature representation (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/139186904 | Attention |
[YOLOv8 Improvement] DAT (Deformable Attention) (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/139193465 | Attention |
[YOLOv8 Improvement] D-LKA Attention: Deformable Large-Kernel Attention (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/139212227 | Attention |
[YOLOv8 Improvement] LSKA (Large Separable Kernel Attention): large separable-kernel attention module (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/139249202 | Attention |
[YOLOv8 Improvement] CoTAttention: Contextual Transformer attention (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/139261641 | Attention |
[YOLOv8 Improvement] MLCA (Mixed Local Channel Attention) (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/139279527 | Attention |
[YOLOv8 Improvement] CAFM (Convolution and Attention Fusion Module) | https://blog.csdn.net/shangyanaf/article/details/139305822 | Conv-Attention Hybrid |
[YOLOv8 Improvement] MSFN (Multi-Scale Feed-Forward Network) | https://blog.csdn.net/shangyanaf/article/details/139306250 | Other |
[YOLOv8 Improvement] BRA (Bi-level Routing Attention) (paper notes + integration code) | https://blog.csdn.net/shangyanaf/article/details/139307690 | Attention |
[YOLOv8 Improvement] ODConv (Omni-Dimensional Dynamic Convolution) | https://blog.csdn.net/shangyanaf/article/details/139389091 | CONV |
[YOLOv8 Improvement] SAConv (Switchable Atrous Convolution) | https://blog.csdn.net/shangyanaf/article/details/139393928 | CONV |
[YOLOv8 Improvement] ParameterNet: DynamicConv (Dynamic Convolution), a 2024 dynamic convolution | https://blog.csdn.net/shangyanaf/article/details/139395420 | CONV |
[YOLOv8 Improvement] RFB (Receptive Field Block): multi-branch convolution block | https://blog.csdn.net/shangyanaf/article/details/139431807 | CONV |
[YOLOv8 Improvement] OREPA (Online Convolutional Re-parameterization) | https://blog.csdn.net/shangyanaf/article/details/139465775 | CONV |
[YOLOv8 Improvement] DualConv (Dual Convolution): dual convolutional kernels for lightweight deep neural networks | https://blog.csdn.net/shangyanaf/article/details/139477420 | CONV |
[YOLOv8 Improvement] SlideLoss: a loss function that addresses sample imbalance | https://blog.csdn.net/shangyanaf/article/details/139483941 | Loss Function |
[YOLOv8 Improvement] YOLOv8's built-in CIoU / DIoU / GIoU losses explained, and how to switch between them | https://blog.csdn.net/shangyanaf/article/details/139509783 | Loss Function |
[YOLOv8 Improvement] Switching the YOLOv8 loss function to SIoU / EIoU / WIoU / Focal_*IoU / CIoU / DIoU / ShapeIoU / MPDIoU | https://blog.csdn.net/shangyanaf/article/details/139512620 | Loss Function |
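The last few entries all build on IoU-family bounding-box losses. As background, here is a minimal sketch (not taken from the linked posts) of the plain IoU metric and its GIoU extension; the `(x1, y1, x2, y2)` corner-coordinate box format and the function names are assumptions for illustration only:

```python
# Minimal IoU / GIoU sketch. Boxes are (x1, y1, x2, y2) tuples in absolute
# coordinates; this box format is an assumption, not the posts' convention.

def _inter_union(a, b):
    # Intersection rectangle of the two boxes (clamped to zero if disjoint).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter, area_a + area_b - inter

def iou(a, b):
    # IoU = intersection / union, in [0, 1].
    inter, union = _inter_union(a, b)
    return inter / union if union > 0 else 0.0

def giou(a, b):
    # GIoU = IoU - (C - union) / C, where C is the area of the smallest
    # enclosing box. Unlike IoU, it stays informative for disjoint boxes.
    _, union = _inter_union(a, b)
    c = (max(a[2], b[2]) - min(a[0], b[0])) * (max(a[3], b[3]) - min(a[1], b[1]))
    return (iou(a, b) - (c - union) / c) if c > 0 else iou(a, b)
```

DIoU, CIoU, SIoU, and the other variants above add further penalty terms (center distance, aspect ratio, angle) on top of this same intersection/union computation; the corresponding loss is typically `1 - metric`.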