Batch-inserting captions for the pictures in a chapter with a Word macro

Define the caption style in advance; it usually takes the form 图 <chapter number>-<sequence number>, followed by the caption title.

Place the cursor in the first body paragraph of a chapter, then run the macro below. It automatically generates a caption for each picture in that chapter: the caption title is taken from the chapter heading, and a sequence number is appended according to the picture's order.

The macro script is as follows:

' Get the heading of the chapter that contains the current selection

Function GetListString()

    Dim lngNumOfParagraphs As Long
    Dim strListValue As String
     
    On Error Resume Next
     
    Do
        ' Stop when there are no more preceding paragraphs.
        If Err.Number Then Exit Do
         
        lngNumOfParagraphs = lngNumOfParagraphs + 1
        
        ' The first preceding paragraph whose outline level is not body
        ' text is the chapter heading.
        If (Selection.Previous(wdParagraph, lngNumOfParagraphs).Paragraphs.OutlineLevel <> wdOutlineLevelBodyText) Then
            strListValue = Selection.Previous(wdParagraph, lngNumOfParagraphs).Paragraphs(1).Range.Text
            GoTo Report_ListValue
        End If
    Loop
     
    Exit Function
     
Report_ListValue:
   ' MsgBox "The current selection is in chapter: " & strListValue
    strListValue = Left(strListValue, Len(strListValue) - 1) ' strip the trailing paragraph mark
    GetListString = strListValue
    
End Function

Sub Example()

    Dim titleA As String ' heading of the starting chapter
    Dim titleB As String ' heading of the chapter containing the current picture
    Dim s As Long
    Dim t As Long
    
    t = 0
    titleA = GetListString()
    
    ' Process up to 10 pictures; raise the bound for longer chapters.
    For s = 1 To 10
    
        ' Find the next inline picture.
        With Selection.Find
            .ClearFormatting
            .Text = "^g"
            .Execute Forward:=True
        End With
        
        titleB = GetListString()
        
        ' Stop once the search has crossed into the next chapter.
        If (titleB <> titleA) Then
            Exit For
        End If
        
        ' Make sure the found item really is an inline picture.
        If Selection.Type = wdSelectionInlineShape Then
        
            ' Start a new line below the picture.
            Selection.MoveRight
            Selection.TypeParagraph
            
            ' Type the caption title: chapter heading plus sequence number.
            t = t + 1
            Selection.TypeText titleA & t
            
            ' Insert the caption label and number.
            Selection.HomeKey Unit:=wdLine
            Selection.InsertCaption Label:="图", TitleAutoText:="InsertCaption3", _
                Title:="", Position:=wdCaptionPositionBelow, ExcludeLabel:=0
                 
            ' Centre the caption paragraph.
            Selection.ParagraphFormat.Alignment = wdAlignParagraphCenter
            
        End If
        
    Next s

End Sub
 

### Context Aggregation Technique for Improving YOLOv10 Performance

Context aggregation refers to methods that enhance a model's ability to capture contextual information from an image, which is crucial for accurate object detection. In earlier versions of YOLO (e.g., YOLOv3), context aggregation techniques were not fully exploited compared to more recent architectures such as YOLOv4 or EfficientDet[^1].

To improve the performance of YOLOv10 using context aggregation, several strategies can be employed:

#### 1. Feature Pyramid Networks (FPN)

Feature Pyramid Networks are widely used in modern object detectors such as Faster R-CNN and RetinaNet. Integrating an FPN into YOLOv10 produces multi-scale feature maps at different levels of abstraction, which lets the network detect objects across various scales while maintaining high-speed inference. The integration involves adding lateral connections between the top-down and bottom-up pathways of the backbone network[^3].

```python
def build_fpn(features):
    # Upsample the coarsest level and merge it into the next-finer level,
    # then repeat, producing a top-down pathway with lateral additions.
    p5_upsampled = tf.keras.layers.UpSampling2D(size=(2, 2))(features['p5'])
    p4_combined = tf.keras.layers.Add()([features['p4'], p5_upsampled])
    p4_upsampled = tf.keras.layers.UpSampling2D(size=(2, 2))(p4_combined)
    p3_combined = tf.keras.layers.Add()([features['p3'], p4_upsampled])
    outputs = {
        'p3': p3_combined,
        'p4': p4_combined,
        'p5': features['p5']
    }
    return outputs
```

#### 2. Spatial Attention Mechanism

Spatial attention mechanisms focus on the spatial relationships among pixels in the input image. They allow the detector to emphasize important regions while suppressing irrelevant ones. For instance, CBAM (Convolutional Block Attention Module) has been applied successfully in many computer vision tasks thanks to its simplicity and effectiveness.
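As an illustration, a minimal CBAM-style spatial attention layer might look like the sketch below. This is a generic sketch of the mechanism, not code from any YOLOv10 release; the kernel size is the value commonly used in the CBAM paper.

```python
import tensorflow as tf

class SpatialAttention(tf.keras.layers.Layer):
    """CBAM-style spatial attention: weight each spatial position by a
    mask derived from channel-wise average- and max-pooled descriptors."""

    def __init__(self, kernel_size=7, **kwargs):
        super().__init__(**kwargs)
        # A single conv collapses the 2-channel pooled descriptor to a
        # sigmoid mask in [0, 1], one weight per spatial position.
        self.conv = tf.keras.layers.Conv2D(
            1, kernel_size, padding='same', activation='sigmoid')

    def call(self, inputs):
        # Pool across the channel axis to get two H x W descriptors.
        avg_pool = tf.reduce_mean(inputs, axis=-1, keepdims=True)
        max_pool = tf.reduce_max(inputs, axis=-1, keepdims=True)
        mask = self.conv(tf.concat([avg_pool, max_pool], axis=-1))
        # Re-weight the input feature map position by position.
        return inputs * mask
```

Because the mask is computed from pooled descriptors with one small convolution, the layer adds very little overhead relative to the backbone.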
Incorporating spatial attention modules could significantly boost the accuracy of small-object detection without an excessive increase in computational cost.

#### 3. Global Context Information via Non-local Blocks

Non-local blocks model long-range dependencies by computing pairwise interactions over all positions in the feature map. Adding non-local operations aggregates global context information effectively, improving localization precision, especially for occluded or partially visible targets. Note, however, that introducing too much complexity can degrade real-time processing speed, a critical design constraint for lightweight models such as the YOLO series.

#### Summary Code Example Combining the Techniques Above

The sketch below shows how these three approaches could coexist in one unified pipeline:

```python
import tensorflow as tf

class ContextAggregator(tf.keras.Model):
    def __init__(self, **kwargs):
        super(ContextAggregator, self).__init__(**kwargs)
        # Define components here...

    def call(self, inputs):
        fpn_output = self.build_fpn(inputs["backbone_features"])
        attended_maps = self.apply_spatial_attention(fpn_output)
        final_context = self.integrate_global_context(attended_maps)
        return final_context

    @staticmethod
    def apply_spatial_attention(feature_dict):
        pass  # Implement this function based on the chosen mechanism.

    @staticmethod
    def integrate_global_context(spatially_attentive_map):
        pass  # Introduce non-local block logic accordingly.
```
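One possible way to fill in the global-context stub is a simplified non-local block, sketched below. The layer structure and the half-channel embedding size are illustrative assumptions in the spirit of the original non-local neural networks design, not a YOLOv10 implementation.

```python
import tensorflow as tf

class NonLocalBlock(tf.keras.layers.Layer):
    """Simplified non-local block: every position attends to every other
    position, aggregating global context into each location."""

    def __init__(self, channels, **kwargs):
        super().__init__(**kwargs)
        self.half = channels // 2
        self.theta = tf.keras.layers.Conv2D(self.half, 1)  # query projection
        self.phi = tf.keras.layers.Conv2D(self.half, 1)    # key projection
        self.g = tf.keras.layers.Conv2D(self.half, 1)      # value projection
        self.out = tf.keras.layers.Conv2D(channels, 1)     # restore depth

    def call(self, x):
        shape = tf.shape(x)
        h, w = shape[1], shape[2]
        # Flatten the spatial dimensions: (B, H*W, C/2).
        q = tf.reshape(self.theta(x), [shape[0], h * w, -1])
        k = tf.reshape(self.phi(x), [shape[0], h * w, -1])
        v = tf.reshape(self.g(x), [shape[0], h * w, -1])
        # Pairwise affinities over all positions, normalised to weights.
        attn = tf.nn.softmax(tf.matmul(q, k, transpose_b=True), axis=-1)
        ctx = tf.matmul(attn, v)  # (B, H*W, C/2)
        ctx = tf.reshape(ctx, [shape[0], h, w, self.half])
        # Residual connection keeps the original signal intact.
        return x + self.out(ctx)
```

The attention matrix is quadratic in the number of spatial positions, so in a real-time detector a block like this is typically applied only to the coarsest feature level.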