Source: https://github.com/BVLC/caffe/blob/master/examples/pycaffe/layers/pyloss.py

The Euclidean Loss layer computes half the batch-averaged sum of squared differences between its two inputs: loss = sum((x0 - x1)^2) / n / 2, where n is the batch size.
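As a quick sanity check of that formula, the forward computation can be reproduced with plain NumPy (the arrays below are made-up stand-ins for the two bottom blobs):

```python
import numpy as np

# Two made-up inputs with batch size n = 2.
x0 = np.array([[1., 2.], [3., 4.]])
x1 = np.zeros_like(x0)
n = x0.shape[0]

# loss = sum((x0 - x1)^2) / n / 2, matching the layer's forward pass.
loss = np.sum((x0 - x1) ** 2) / n / 2.
print(loss)  # (1 + 4 + 9 + 16) / 2 / 2 = 7.5
```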
Notes on Caffe's data format and the Blob type:

- A blob is Caffe's basic data container, shaped (n, c, h, w): batch size, channels, height, width. It is a four-dimensional array stored in row-major order.
- Its main attributes are shape_ (the blob's shape), data_ (the raw data, exposed as a four-dimensional ndarray), diff_ (the gradients), and count_ (the total number of elements, equal to n*c*h*w).
- type(bottom): caffe._caffe.RawBlobVec
- type(bottom[0]): caffe._caffe.Blob
- type(bottom[0].data): numpy.ndarray
- bottom[0].data.shape: (n, c, h, w)
- bottom[0].num: the batch size n
- bottom[0].count: the total number of elements, n*c*h*w
- dim = bottom[0].count / bottom[0].num: the size of a single sample, c*h*w
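The count and dim arithmetic above can be illustrated with a plain NumPy array standing in for blob.data (no Caffe installation needed; the shape values here are arbitrary):

```python
import numpy as np

# A stand-in for bottom[0].data with n=2, c=3, h=4, w=5.
n, c, h, w = 2, 3, 4, 5
data = np.arange(n * c * h * w, dtype=np.float32).reshape(n, c, h, w)

count = data.size   # total number of elements: n*c*h*w = 120
dim = count // n    # per-sample size: c*h*w = 60

# NumPy arrays are row-major (C order) by default, the same storage order
# Caffe uses: element (i, j, k, l) sits at flat offset ((i*c + j)*h + k)*w + l.
i, j, k, l = 1, 2, 3, 4
assert data.ravel()[((i * c + j) * h + k) * w + l] == data[i, j, k, l]
print(count, dim)  # 120 60
```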
```python
import caffe
import numpy as np


class EuclideanLossLayer(caffe.Layer):
    """
    Compute the Euclidean Loss in the same manner as the C++ EuclideanLossLayer
    to demonstrate the class interface for developing layers in Python.
    """

    def setup(self, bottom, top):
        # Check that there are exactly two inputs.
        if len(bottom) != 2:
            raise Exception("Need two inputs to compute distance.")

    def reshape(self, bottom, top):
        # Check that the input dimensions match;
        # bottom[0].count is the total number of elements.
        if bottom[0].count != bottom[1].count:
            raise Exception("Inputs must have the same dimension.")
        # The difference has the same shape as the inputs; it is an auxiliary
        # variable holding intermediate results (further helper variables
        # could be defined here as well).
        self.diff = np.zeros_like(bottom[0].data, dtype=np.float32)
        # The loss output is a scalar.
        top[0].reshape(1)

    def forward(self, bottom, top):
        # Forward pass: compute the total loss.
        self.diff[...] = bottom[0].data - bottom[1].data
        top[0].data[...] = np.sum(self.diff ** 2) / bottom[0].num / 2.

    def backward(self, top, propagate_down, bottom):
        # Backward pass.
        # propagate_down corresponds to the propagate_down parameters of the
        # loss layer in train.prototxt. There is one entry per bottom blob,
        # indicating whether the gradient should be propagated back to that
        # input. For example, the Python layer for EuclideanLossLayer in
        # train.prototxt could specify:
        #   propagate_down: 1  # backpropagate to bottom[0]
        #   propagate_down: 0  # do not backpropagate to bottom[1]
        for i in range(2):
            if not propagate_down[i]:
                continue
            if i == 0:
                sign = 1
            else:
                sign = -1
            bottom[i].diff[...] = sign * self.diff / bottom[i].num
```
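To see that backward() really produces d(loss)/d(bottom[i]), the analytic gradients diff/n and -diff/n can be checked against a numerical finite difference with NumPy alone (the shapes and rng seed below are arbitrary choices, not anything from the layer):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
x0 = rng.normal(size=(n, 3))
x1 = rng.normal(size=(n, 3))

# Analytic gradients, as computed in backward(): sign * diff / num.
diff = x0 - x1
grad0 = diff / n    # gradient w.r.t. bottom[0] (sign = +1)
grad1 = -diff / n   # gradient w.r.t. bottom[1] (sign = -1)
assert np.allclose(grad1, -grad0)

def loss(a, b):
    # Same formula as the layer's forward pass.
    return np.sum((a - b) ** 2) / n / 2.

# Central finite difference for one entry of x0.
eps = 1e-6
xp, xm = x0.copy(), x0.copy()
xp[0, 0] += eps
xm[0, 0] -= eps
numeric = (loss(xp, x1) - loss(xm, x1)) / (2 * eps)
assert abs(numeric - grad0[0, 0]) < 1e-7
```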
References:
- https://blog.csdn.net/raby_gyl/article/details/77387402
- https://github.com/BVLC/caffe/blob/master/examples/pycaffe/layers/pyloss.py