Python loss function implementation: a Bhattacharyya loss function as a Caffe Python layer

I'm trying to implement a custom loss layer using a Caffe Python layer. I've used this example as a guide and have written the forward function as follows:

import math
import numpy as np

def forward(self, bottom, top):
    # element-wise product of the two input blobs, p * q
    self.mult[...] = np.multiply(bottom[0].data, bottom[1].data)
    self.multAndsqrt[...] = np.sqrt(self.mult)
    # Bhattacharyya distance: -ln(sum_i sqrt(p_i * q_i))
    top[0].data[...] = -math.log(np.sum(self.multAndsqrt))
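For context, the forward pass above boils down to the following standalone NumPy computation (a minimal sketch; `p` and `q` are made-up example distributions):

```python
import numpy as np

# Two made-up discrete distributions (each sums to 1)
p = np.array([0.2, 0.5, 0.3])
q = np.array([0.1, 0.6, 0.3])

# Bhattacharyya coefficient and distance, matching the forward pass above
bc = np.sum(np.sqrt(p * q))  # close to 1 when p and q are similar
db = -np.log(bc)             # small non-negative distance
```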

However, the second task, implementing the backward function, is much more difficult for me, as I'm totally unfamiliar with Python. So please help me with coding the backward section.

Here is the cost function and its derivative for stochastic gradient descent to be implemented:
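(The image with the formulas did not survive extraction. Judging from the forward code, the cost is presumably the Bhattacharyya distance, whose standard form and per-element derivatives are:)

```latex
D_b(p, q) = -\ln\Big(\sum_i \sqrt{p_i\, q_i}\Big), \qquad
\frac{\partial D_b}{\partial p_i} = -\frac{1}{2\sum_j \sqrt{p_j q_j}}\sqrt{\frac{q_i}{p_i}}, \qquad
\frac{\partial D_b}{\partial q_i} = -\frac{1}{2\sum_j \sqrt{p_j q_j}}\sqrt{\frac{p_i}{q_i}}
```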

Thanks in advance.

Note that p[i] in the table indicates the ith output neuron value.

Solution

Let's say bottom[0].data is p, bottom[1].data is q, and Db(p,q) denotes the Bhattacharyya distance between p and q.

The only thing you need to do in your backward function is to compute the partial derivatives of Db with respect to its inputs (p and q), and store them in the respective bottom diff blobs:

So your backward function would look something like:

def backward(self, top, propagate_down, bottom):
    if propagate_down[0]:
        bottom[0].diff[...] = ...  # calculate dDb(p,q)/dp
    if propagate_down[1]:
        bottom[1].diff[...] = ...  # calculate dDb(p,q)/dq

Note that you normally use the average (instead of the total) error of your batch. Then you would end up with something like this:

def forward(self, bottom, top):
    self.mult[...] = np.multiply(bottom[0].data, bottom[1].data)
    self.multAndsqrt[...] = np.sqrt(self.mult)
    # average over the batch (bottom[0].num is the batch size)
    top[0].data[...] = -math.log(np.sum(self.multAndsqrt)) / bottom[0].num

def backward(self, top, propagate_down, bottom):
    if propagate_down[0]:
        bottom[0].diff[...] = ...  # calculate dDb(p,q)/dp / bottom[0].num
    if propagate_down[1]:
        bottom[1].diff[...] = ...  # calculate dDb(p,q)/dq / bottom[1].num

Once you have calculated the partial derivatives of Db, you can insert them into the templates above, just as you did for the forward pass.
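One way to fill in those placeholders (a sketch, not part of the original answer; `bhattacharyya_grads` is a hypothetical helper name, and it assumes both bottoms hold strictly positive values):

```python
import numpy as np

# With BC = sum_i sqrt(p_i * q_i) and Db(p, q) = -ln(BC), the derivatives are
#   dDb/dp_i = -sqrt(q_i / p_i) / (2 * BC)
#   dDb/dq_i = -sqrt(p_i / q_i) / (2 * BC)
def bhattacharyya_grads(p, q):
    bc = np.sum(np.sqrt(p * q))        # Bhattacharyya coefficient
    dp = -np.sqrt(q / p) / (2.0 * bc)  # dDb/dp, element-wise
    dq = -np.sqrt(p / q) / (2.0 * bc)  # dDb/dq, element-wise
    return dp, dq

# Made-up example distributions
p = np.array([0.2, 0.5, 0.3])
q = np.array([0.1, 0.6, 0.3])
dp, dq = bhattacharyya_grads(p, q)
```

Inside the layer you would then assign something like `bottom[0].diff[...] = dp / bottom[0].num` and `bottom[1].diff[...] = dq / bottom[1].num`.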
