Paper Writing, Topic 2: Logical Relationships and Expression Between Sentences (2)


Logical Relationships and Expression Between Sentences (continued)

I. Overview

1) When writing a document (including a paper), my habit is to dump my ideas into the first draft. After all, making every sentence concise and precise in the very first version is a hard thing to do.
2) When revising the first draft, I suggest working in this order: the framework and layout of the whole article -> how paragraphs serve one another (see the story line of the introduction) -> how each sentence serves its context (this post).
3) Whether in English or in Chinese, writing every sentence well is not easy; it takes careful word choice and repeated polishing. Do not assume that others get it right in one pass; it is hard for everyone.
4) To keep your sentences from becoming convoluted, avoid packing too many ideas into a single sentence. Learning to split long sentences takes real effort.
5) Viewed from the perspective of the whole paragraph or even the whole article, another important prerequisite for writing the current sentence well is to look both backward and forward.

II. Examples (continued)


Figure 1: Paragraph 5 of the original draft. Based on the shortcomings mentioned in paragraph 4, this paragraph introduces the work of the paper.
7. To alleviate this issue, in this paper, we propose a novel algorithm called attention and self-attention fusion for multiple instance learning (ASMI).
Sentence ① in Figure 1:

1) When presenting your own work, just state it directly. A scientific paper needs neither flowery language nor a roundabout build-up.
2) "a novel algorithm" is also somewhat redundant here: what you propose is by definition a new algorithm.
3) Two prepositional phrases in one sentence; avoid this where possible.
Revision:
  ->
  In this paper, we propose an attention and self-attention fusion for multiple instance learning algorithm (ASMI) to address this issue.
  -> "issue" is fairly neutral, while "drawback" and "problem" carry a negative connotation.
   So the usual collocations are "to address this issue" or "to alleviate this drawback".

8. Specifically, we firstly obtain bag fused vectors using traditional attention block.
Furthermore, we exploit the higher level information of original instances to enhance the representativeness of bag fused vectors with a new self-attention block.
Therefore, the output of self-attention block will be more representative than that of the attention block.

Problems:
1) Compare sentences ②-⑤ in Figure 1 (paragraph 5) with Figure 2 (paragraph 6).
  This is a mistake in "paragraphs serving one another and providing logical support".
  Paragraph 5 says: this paper proposes ASMI to solve the above problem, and this is how you do it…
  Paragraph 6 says: you drew Fig. 1 to explain the workflow of ASMI.
  Clearly both of them briefly describe the ASMI strategy; they repeat each other.


Figure 2: Paragraph 6 of the original draft, which explains the opening figure Fig. 1.

Revision
  ->
  With the bag fused vectors, we introduce a new self-attention block and two fully-connected networks to achieve the desired bag classification accuracy.
Intent:
  a) This paper proposes the ASMI algorithm to address the shortcomings mentioned in paragraph 4 and introduces two new components to obtain better performance (i.e., from a structural perspective, it points out the difference from the attention network mentioned in paragraph 4). The concrete procedure is then left to paragraph 6.
  b) "with the bag fused vectors": this reflects the transition between paragraphs 4 and 5.
  That is, using the bag fused vectors output by the attention network mentioned in paragraph 4 (see Figure 3), we introduce …

Figure 3: Paragraph 4 of the original draft mentions that the attention network eventually converts a bag into a fused vector.

9. And the whole process can be simply divided into the following three parts:
Sentence ① in Figure 2:
1) There is no inherent logical connection to the previous sentence, so "and" is unnecessary.
2) "whole" sounds a bit colloquial.
3) "process" pairs with "stage"; "part" does not fit here.
Yang's revision
  ->
  The process can be simply divided into the following three stages:
Min's revision
  ->
  The process can be divided into the following three stages:
  "simply" is generally dropped as well; I later came to see that "simply" is somewhat redundant.

10. In the constructed attention layer, a learnable linear transformation is applied to each input instance and the weights (attention coefficients) of each instance are computed by a single layer network.
Part ② of Figure 2:
1) The long-sentence problem.
2) "layer" in the sentence does not match "block" used in your Fig. 1.
   Note that the names of blocks, variables, and so on in a paper's figures should stay consistent with the descriptions in the body text.

Figure 4: Part of Fig. 1.
  Revision
  ->
    In the attention block, a learnable linear transformation is first applied to the instances in a given bag.
    Then a single layer network is employed to learn the instance attention weights.

11. Then the bag fused vector can be calculated as the sum of dot products between transformed instances and their corresponding weights.

Part ② of Figure 2, continued.
Following on from the previous sentence, this one is changed as follows:
  -> Accordingly, a fused vector is calculated as the sum of dot products between transformed instances and their corresponding weights.
  "Accordingly" follows on from the previous sentence and indicates that a fused vector is computed further from the output of the attention block, so it acts as a bridge between the two sentences. At the same time, by not restating the weights and the linearly transformed vectors, the sentence becomes shorter, and a reader can still follow it clearly.
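As an aside, for readers who want to see concretely what the revised sentences in items 10 and 11 describe, here is a minimal PyTorch-style sketch of such an attention block, assuming the familiar attention-based MIL pooling formulation. The class name, the tanh activation, and the hidden size are my own illustrative choices and are not taken from the paper being edited.

```python
import torch
import torch.nn as nn


class AttentionBlock(nn.Module):
    """Attention-based MIL pooling, sketched from the revised sentences above."""

    def __init__(self, in_dim: int, hidden_dim: int = 128):
        super().__init__()
        # "a learnable linear transformation is first applied to the instances"
        self.transform = nn.Linear(in_dim, hidden_dim)
        # "a single layer network is employed to learn the instance attention weights"
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, bag: torch.Tensor):
        # bag: (n_instances, in_dim), all instances of one bag
        h = torch.tanh(self.transform(bag))        # transformed instances, (n, hidden_dim)
        w = torch.softmax(self.score(h), dim=0)    # attention weights, (n, 1), sum to 1
        # "Accordingly, a fused vector is calculated as the sum of dot products
        #  between transformed instances and their corresponding weights."
        fused = (w * h).sum(dim=0)                 # fused vector, (hidden_dim,)
        return fused, w


# Toy usage: one bag with 7 instances of 64 features each.
bag = torch.randn(7, 64)
fused, weights = AttentionBlock(in_dim=64)(bag)
```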

12. Two instance-level networks serve as feature extraction parts to obtain both higher level information of the fused vector and original instances by embedding them into a relatively low dimensional space.
Problem: the sentence is too long; it tries to say too much in one breath.
Yang's revision
  ->
  Two instance-level networks serve as feature extraction parts to embed each bag and its fused vector to a relatively low dimensional space.
  They are merged to form the input of the self-attention block.
Min's revision
  ->
  In fact, it can be split further into three sentences:
   One instance-level network …
   Another network …
   They are merged to form the input of the self-attention block.
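To make the structure of those three short sentences concrete, here is a small sketch of what the two instance-level feature extraction networks might look like. The names, layer sizes, and the choice of concatenation as the "merge" step are my own assumptions for illustration; the paper may combine the embeddings differently.

```python
import torch
import torch.nn as nn


class InstanceEmbedding(nn.Module):
    """One instance-level feature extraction network (illustrative layer sizes)."""

    def __init__(self, in_dim: int, embed_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, embed_dim), nn.ReLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# One network embeds the original instances into a relatively low-dimensional space,
# another embeds the bag fused vector; the two outputs are merged to form the input
# of the self-attention block (concatenation is only one possible reading).
embed_instances = InstanceEmbedding(in_dim=64, embed_dim=64)
embed_fused = InstanceEmbedding(in_dim=128, embed_dim=64)   # fused vector from the attention block

bag = torch.randn(7, 64)                                    # 7 instances, 64 features each
fused = torch.randn(128)                                    # output of the attention block
merged = torch.cat([embed_instances(bag),                   # (7, 64)
                    embed_fused(fused).unsqueeze(0)],       # (1, 64)
                   dim=0)
# merged: (8, 64), the embedded instances plus the embedded fused vector
```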

13. Self-attention is performed to utilize embedded instances to enhance the representativeness of embed bag features by aggregating them into an enhanced fused vector.
Notably, this aggregation process also demands weights which can be calculated by the same form in attention layer but using different data input method.

Part ④ of Figure 2
Self-attention is the innovation of this paper, but the description here is muddled and completely forgets the abstract that Min had already coached.
  ->
  With embedded information, self-attention mechanism introduces two advantages.
  One is the ability to reduce the impact of inconsistent instances that are assigned large weights.
  The other is the high-quality feature extraction capability obtained by the influence of each instance to its fused vector.
  Therefore, the output of self-attention block is the enhanced fused vector which has higher representation power than that of the attention block.
  Ideally the wording should be varied somewhat, but Min polished this part of the abstract so well that I really could not think of replacement sentences with the same effect.
  Fortunately, Min's feedback was that the body text may appropriately repeat some wording from the abstract, heh.
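For readers who want to picture part ④ in code, below is one plausible reading of it: standard scaled dot-product attention in which the embedded fused vector acts as the query over the embedded instances, so the weights have "the same form as in the attention layer but use a different data input". This is only an illustrative sketch with made-up names and sizes, not the paper's actual formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttentionBlock(nn.Module):
    """Scaled dot-product attention used to enhance the fused vector (illustrative)."""

    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.q = nn.Linear(embed_dim, embed_dim)
        self.k = nn.Linear(embed_dim, embed_dim)
        self.v = nn.Linear(embed_dim, embed_dim)
        self.scale = embed_dim ** 0.5

    def forward(self, fused_embed: torch.Tensor, instance_embeds: torch.Tensor) -> torch.Tensor:
        # fused_embed: (embed_dim,)        the embedded bag fused vector
        # instance_embeds: (n, embed_dim)  the embedded original instances
        q = self.q(fused_embed).unsqueeze(0)              # (1, embed_dim)
        k = self.k(instance_embeds)                       # (n, embed_dim)
        v = self.v(instance_embeds)                       # (n, embed_dim)
        # Softmax weights of the same form as in the attention block, but computed
        # from different inputs: the influence of each instance on the fused vector.
        attn = F.softmax(q @ k.T / self.scale, dim=-1)    # (1, n)
        enhanced = (attn @ v).squeeze(0)                  # enhanced fused vector, (embed_dim,)
        return enhanced


# Toy usage with the shapes from the previous sketch.
enhanced = SelfAttentionBlock(embed_dim=64)(torch.randn(64), torch.randn(7, 64))
```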

14. We propose to apply two types of attention mechanisms to MIL scenario for the very first time and depict our motivation in detail.
Contribution 1 as described in the paper:
Problems:
1) MIL is not a "scenario"; it is a research direction within machine learning.
2) "depict our motivation in detail": describing the motivation does not count as a contribution.
Revision
  ->
  a) We propose a framework that fuses the attention and self-attention mechanisms for the MIL task.
  b) Then expand the description a little yourself, e.g., how self-attention exploits the information from attention. The earlier part of the introduction has in fact already covered this (for example, the revised sentences in item 10 and the descriptions of Fig. 1 related to it); just rework and simplify them here.

15. We analyze the rationality and benefits of employing these two attention mechanisms in terms of characteristics and meaning, i.e., one for aggregation and another for modification and enhancement.
Contribution 2 as described in the paper:
I suggest that the second contribution should respond to the following problem stated in the abstract:
  However, a large weight may be assigned to an instance whose information is inconsistent with its bag.
  Consequently, the representativeness of the fused vector will be decreased.
Revised as follows:
  ->
  We alleviate the weak representativeness of the bag fused vector.
  This is caused by the fact that a single attention network may assign large weights to instances whose information is inconsistent with their bags.
  The introduced self-attention block effectively weakens this influence.
  Furthermore, using the influence of each instance on its fused vector, ASMI has a high-quality feature extraction capability.
Of course, after the revision you still need to read the whole paragraph again and straighten out the logical relationships between the sentences.

16. The generality and effectiveness of our proposed algorithm is demonstrated by extensive experiments on multiple datasets, and the performance will be evaluated using accuracy and F1-score.
Contribution 3 as described in the paper:
1) Verifying an algorithm's effectiveness through experiments does not count as a contribution.
2) Suggested third contribution: ASMI improves performance by adding network layers, so the algorithm is not affected by the dimensionality or scale of the data.
  ->
  The ASMI algorithm has excellent versatility and adaptability.
  By the strategy of adding a self-attention and a feature extraction layer, ASMI is not sensitive to the dimensionality and size of the data set.

III. Closing Remarks

1. Article layout -> mutual echo and support between paragraphs -> mutual echo and support between sentences: achieving all of this in one article is not something you manage in one or two passes. It takes repeated deliberation and pondering from the author, with heart, feeling, wisdom, and plenty of patience!
2. On turning long sentences into short ones, my takeaways from this round of revision are:
   a) Information already made clear in the previous sentence should not be repeated in the next one; this reduces the information load of the next sentence. For example, use pronouns (this, it, they, but not too many of them), adverbial constructions (with, …), and connective adverbs (accordingly, consequently, specifically, …) to link sentences.
   b) If you really must convey that much information, then the only option is to find a way to split it up as much as possible.
3. After each pass of revision, you still need to read the whole thing through again, then follow the threads and revise the full text, so that each version of the paper gradually improves.
4. I suggest reading "Paper Writing, Topic 2: Logical Relationships and Expression Between Sentences (1)" and "(2)" together with Tao's 17th draft of the paper, for a better appreciation of these points.
