4-Method

Methods

This section can be titled Method or Methods (with or without the "s"), or alternatively Our Approach.

The first paragraph should give a brief overview of the section's structure.

Emphasize the distinctive features of your method, what sets it apart, so that reviewers are sure to notice them.

First sentence: warm the reader up.
We now present our xx framework for xx.
In this section, we provide a more detailed description of the presented approach, xxx.
Given xx, we aim to achieve xxx. (State what your paper does.)
Our goal is to …
This section introduces the overall working mechanism and specific technical implementations of the proposed xx.
If your method consists of multiple parts:
Our xx process is split into two parts: a xx branch, which xxx, and a xx branch that xxx.
Laying out the section structure:
In the following, we first introduce xx in Sec. 3.1, before proceeding to xx in Sec. 3.2.
The organization of this section is as follows: We introduce xx in Sec. 3.1, including xx. In Sec. 3.2, we first describe xx. Then we present xx.
To begin with, we review the basic idea and pipeline of xxx.
To facilitate understanding, we begin with an overview of our framework xx.

A useful connective to sprinkle in: Thereafter.

Last sentence: point to the core pipeline figure.
Fig. x gives an overview of the training pipeline.
Our model is illustrated in Fig. x.
In essence, our method extends …, as demonstrated in Fig. 1.
Fig. 1 depicts xx.
Fig. 1 shows the framework of the proposed method.
A schematic illustrating xx.

Writing the method itself

This part is specific to each paper, so it is not covered in more detail here.

Training Details

We inherit the training procedure from xx with minimal changes.
The optimization is performed by Adam with a learning rate of 0.002 and betas of (0, 0.99).
Further details can be found in the source code.
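To make the quoted hyperparameters concrete, here is a minimal, framework-free sketch of the Adam update in plain Python. The quadratic toy objective, parameter names, and loop length are illustrative assumptions, not from any particular codebase; note that beta1 = 0 effectively disables the first-moment (momentum) average.

```python
def adam_step(theta, grad, m, v, t, lr=0.002, beta1=0.0, beta2=0.99, eps=1e-8):
    """One Adam step on a scalar parameter; returns updated (theta, m, v)."""
    m = beta1 * m + (1 - beta1) * grad        # first moment (just grad when beta1 = 0)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (smoothed squared gradient)
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v

# Toy example: minimize f(theta) = theta^2 starting from theta = 1.0.
theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 1001):   # Adam's bias correction indexes steps from t = 1
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t)
```

Since the per-step update magnitude is roughly lr, the parameter drifts toward the minimum at about 0.002 per step and then hovers near it.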

Miscellaneous

Sometimes, for the sake of simplicity, some notation is omitted:
For the sake of simplicity, we …
For ease of notation, we will use latent codes c to denote the concatenation of all latent variables ci.
Describing an equation:
where λ is a positive constant trading off the importance of the first and second terms of the loss.
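A sentence like the one above typically follows a composite objective of the form below; this is a generic sketch, and the subscripts are placeholders rather than terms from any specific paper:

```latex
\mathcal{L} = \mathcal{L}_{1} + \lambda \, \mathcal{L}_{2},
\qquad \lambda > 0
```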
Some good turns of phrase:
We want to avoid investing effort into a component that is not a part of the actual solution.

Mathematical notation

Just show many examples:

$$f_{j}^{\prime}=\left[\nabla_{\mathbf{x}} f_{j}\left(\mathbf{x}_{0}\right)\right]^{t} \mathbf{d}$$

where $\nabla_{\mathbf{x}}$ is the symbol for the gradient w.r.t. $\mathbf{x}$ and the superscript $^{t}$ stands for transposition. One seeks a vector $\mathbf{d}$ such that the scalar product of any objective gradient $\nabla_{\mathbf{x}} f_{j}(\mathbf{x}_{0})$ with the vector $\mathbf{d}$ remains strictly positive: $f_{j}^{\prime} > 0$.
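As a toy illustration of that condition (the two gradient vectors are made-up numbers, not gradients of any actual objective): taking d as the sum of the normalized objective gradients yields a strictly positive scalar product with each gradient whenever the gradients are not exactly opposed.

```python
import math

def normalize(g):
    """Scale a vector to unit Euclidean norm."""
    n = math.sqrt(sum(x * x for x in g))
    return [x / n for x in g]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Two hypothetical objective gradients at x0 (illustrative values).
g1 = [1.0, 0.0]
g2 = [0.6, 0.8]

# Candidate direction d: sum of the normalized gradients.
u1, u2 = normalize(g1), normalize(g2)
d = [a + b for a, b in zip(u1, u2)]

# f_j' = [grad f_j(x0)]^t d should be strictly positive for every j.
f1_prime = dot(g1, d)
f2_prime = dot(g2, d)
```

For unit gradients u and v, d = u + v gives d·u = 1 + u·v, which is strictly positive unless u and v are antiparallel, which is why this simple choice works here.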

We use xx to denote xx.
