
Faster RCNN minibatch.py Explained

Tags: faster-r-cnn

The purpose of minibatch.py is to compute minibatch blobs for training a Fast R-CNN network. Unlike roidb, a minibatch does not store complete images; it stores the four-dimensional blob obtained by transforming the images, the proposals cropped from them, and the corresponding labels.

minibatch.py is used in two places during Faster R-CNN training: once for the data input at the start of the RPN, and once (naturally) for the data input of Fast R-CNN. See the very beginning of stage1_rpn_train.pt and stage1_fast_rcnn_train.pt respectively:
stage1_rpn_train.pt:

layer {
  name: 'input-data'
  type: 'Python'
  top: 'data'
  top: 'im_info'
  top: 'gt_boxes'
  python_param {
    module: 'roi_data_layer.layer'
    layer: 'RoIDataLayer'
    param_str: "'num_classes': 21"
  }
}

stage1_fast_rcnn_train.pt:

name: "VGG_CNN_M_1024"
layer {
  name: 'data'
  type: 'Python'
  top: 'data'
  top: 'rois'
  top: 'labels'
  top: 'bbox_targets'
  top: 'bbox_inside_weights'
  top: 'bbox_outside_weights'
  python_param {
    module: 'roi_data_layer.layer'
    layer: 'RoIDataLayer'
    param_str: "'num_classes': 21"
  }
}

As shown above, both networks share the same data layer, roi_data_layer.layer. In layer.py, look at the forward pass:

def forward(self, bottom, top):
    """Get blobs and copy them into this layer's top blob vector."""
    blobs = self._get_next_minibatch()

    for blob_name, blob in blobs.iteritems():
        top_ind = self._name_to_top_map[blob_name]
        # Reshape net's input blobs
        top[top_ind].reshape(*(blob.shape))
        # Copy data into net's input blobs
        top[top_ind].data[...] = blob.astype(np.float32, copy=False)

def _get_next_minibatch(self):
    """Return the blobs to be used for the next minibatch.

    If cfg.TRAIN.USE_PREFETCH is True, then blobs will be computed in a
    separate process and made available through self._blob_queue.
    """
    if cfg.TRAIN.USE_PREFETCH:
        return self._blob_queue.get()
    else:
        db_inds = self._get_next_minibatch_inds()
        minibatch_db = [self._roidb[i] for i in db_inds]
        return get_minibatch(minibatch_db, self._num_classes)

Here we find get_minibatch, which is defined in minibatch.py.

When reading this code, it is best to start from get_minibatch. Its input roidb is a list in which every element is a dictionary holding one image's information; the main fields are:


[figure: the main fields of a roidb entry]

num_classes is 21 for PASCAL VOC.

def get_minibatch(roidb, num_classes):
    """Given a roidb, construct a minibatch sampled from it."""
    # The given roidb may contain a single image or several images.
    num_images = len(roidb)
    # Sample random scales to use for each image in this batch
    random_scale_inds = npr.randint(0, high=len(cfg.TRAIN.SCALES),
                                    size=num_images)
    assert(cfg.TRAIN.BATCH_SIZE % num_images == 0), \
        'num_images ({}) must divide BATCH_SIZE ({})'. \
        format(num_images, cfg.TRAIN.BATCH_SIZE)
    rois_per_image = cfg.TRAIN.BATCH_SIZE / num_images  # for Fast R-CNN: 128 / 2 = 64
    fg_rois_per_image = np.round(cfg.TRAIN.FG_FRACTION * rois_per_image)  # fraction is 0.25 = 1/4

    # Get the input image blob, formatted for caffe
    # Preprocess the images in the roidb (resize, recording the resize scale),
    # then use im_list_to_blob to convert them into the N * C * H * W
    # four-dimensional blob that caffe expects.
    im_blob, im_scales = _get_image_blob(roidb, random_scale_inds)

    blobs = {'data': im_blob}

    if cfg.TRAIN.HAS_RPN:  # used by the RPN
        assert len(im_scales) == 1, "Single batch only"
        assert len(roidb) == 1, "Single batch only"
        # gt boxes: (x1, y1, x2, y2, cls)
        gt_inds = np.where(roidb[0]['gt_classes'] != 0)[0]
        gt_boxes = np.empty((len(gt_inds), 5), dtype=np.float32)
        gt_boxes[:, 0:4] = roidb[0]['boxes'][gt_inds, :] * im_scales[0]
        gt_boxes[:, 4] = roidb[0]['gt_classes'][gt_inds]
        blobs['gt_boxes'] = gt_boxes
        # About im_info: an arbitrary P x Q image is first reshaped to a fixed
        # M x N before entering Faster R-CNN; im_info = [M, N, scale_factor]
        # records everything about that scaling.
        blobs['im_info'] = np.array(
            [[im_blob.shape[2], im_blob.shape[3], im_scales[0]]],
            dtype=np.float32)
    else:  # not using RPN; used by Fast R-CNN
        # Now, build the region of interest and label blobs
        rois_blob = np.zeros((0, 5), dtype=np.float32)
        labels_blob = np.zeros((0), dtype=np.float32)
        bbox_targets_blob = np.zeros((0, 4 * num_classes), dtype=np.float32)
        bbox_inside_blob = np.zeros(bbox_targets_blob.shape, dtype=np.float32)
        # all_overlaps = []
        for im_i in xrange(num_images):
            # For each image in the roidb, randomly sample RoIs to form
            # foreground and background examples. Returns, per image, the RoI
            # (proposal) coordinates, class labels, bbox regression targets,
            # and bbox inside weights.
            labels, overlaps, im_rois, bbox_targets, bbox_inside_weights \
                = _sample_rois(roidb[im_i], fg_rois_per_image, rois_per_image,
                               num_classes)

            # Add to RoIs blob
            # The im_rois returned by _sample_rois are not yet scaled, so scale them first.
            rois = _project_im_rois(im_rois, im_scales[im_i])
            batch_ind = im_i * np.ones((rois.shape[0], 1))
            rois_blob_this_image = np.hstack((batch_ind, rois))  # prepend the image index: 5 columns (index, x1, y1, x2, y2)
            rois_blob = np.vstack((rois_blob, rois_blob_this_image))
            # All boxes are stacked vertically, like so:
            # n  x1  y1  x2  y2
            # 0  ..  ..  ..  ..
            # 0  ..  ..  ..  ..
            # :   :   :   :   :
            # 1   ..  ..  ..  ..
            # 1   ..  ..  ..  ..

            # Add to labels, bbox targets, and bbox loss blobs
            labels_blob = np.hstack((labels_blob, labels))  # a one-dimensional row vector
            bbox_targets_blob = np.vstack((bbox_targets_blob, bbox_targets))
            bbox_inside_blob = np.vstack((bbox_inside_blob, bbox_inside_weights))
            # bbox_targets_blob is stacked vertically as an N * 4K array;
            # only the columns of the matching class are non-zero:
            #   tx1  ty1  tw1  th1   tx2  ty2  tw2  th2    tx3  ty3  tw3  th3
            #    0     0    0   0     0     0    0   0       0     0    0   0
            #    0     0    0   0     0.2   0.3  1.0 0.5     0     0    0   0
            #    0     0    0   0     0     0    0   0       0     0    0   0
            #    0     0    0   0     0     0    0   0       0.5   0.5  1.0  1.0
            #    0     0    0   0     0     0    0   0       0     0    0   0
            # bbox_inside_blob has the same shape as bbox_targets_blob, with
            # the non-zero entries above simply replaced by 1.
            # all_overlaps = np.hstack((all_overlaps, overlaps))

        # For debug visualizations
        # _vis_minibatch(im_blob, rois_blob, labels_blob, all_overlaps)

        blobs['rois'] = rois_blob
        blobs['labels'] = labels_blob

        if cfg.TRAIN.BBOX_REG:
            blobs['bbox_targets'] = bbox_targets_blob
            blobs['bbox_inside_weights'] = bbox_inside_blob
            blobs['bbox_outside_weights'] = \
                np.array(bbox_inside_blob > 0).astype(np.float32)
            # As written here, bbox_outside_weights comes out identical to bbox_inside_blob.
    return blobs
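The RoI-blob construction in the Fast R-CNN branch (prepend the image index, then stack vertically) can be sketched with toy data; the RoI coordinates below are made-up values standing in for already-scaled proposals:

```python
import numpy as np

rois_blob = np.zeros((0, 5), dtype=np.float32)

# Hypothetical per-image RoIs (x1, y1, x2, y2), already projected to blob scale.
per_image_rois = [
    np.array([[10, 10, 50, 50], [20, 30, 60, 80]], dtype=np.float32),  # image 0
    np.array([[5, 5, 40, 40]], dtype=np.float32),                      # image 1
]

for im_i, rois in enumerate(per_image_rois):
    batch_ind = im_i * np.ones((rois.shape[0], 1), dtype=np.float32)
    rois_blob_this_image = np.hstack((batch_ind, rois))  # (index, x1, y1, x2, y2)
    rois_blob = np.vstack((rois_blob, rois_blob_this_image))

print(rois_blob.shape)   # (3, 5)
print(rois_blob[:, 0])   # [0. 0. 1.]
```

The leading index column is what later lets the ROI pooling layer know which image in the batch each RoI belongs to.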

_sample_rois is called from get_minibatch with the roidb of a single image as its argument. Its main job is to randomly sample RoIs to form foreground and background examples. This step matters:
because the generated proposals are mostly background, the foreground-to-background ratio is set to 1:3, so
each image contributes 1/4 * 64 = 16 foreground and 3/4 * 64 = 48 background boxes.
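These counts follow directly from the config; a quick sanity check, assuming the default values cfg.TRAIN.BATCH_SIZE = 128, two images per minibatch, and cfg.TRAIN.FG_FRACTION = 0.25:

```python
import numpy as np

BATCH_SIZE = 128      # cfg.TRAIN.BATCH_SIZE
num_images = 2        # images per minibatch (cfg.TRAIN.IMS_PER_BATCH)
FG_FRACTION = 0.25    # cfg.TRAIN.FG_FRACTION

rois_per_image = BATCH_SIZE // num_images                        # 64
fg_rois_per_image = int(np.round(FG_FRACTION * rois_per_image))  # 16
bg_rois_per_image = rois_per_image - fg_rois_per_image           # 48

print(rois_per_image, fg_rois_per_image, bg_rois_per_image)  # 64 16 48
```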


Another point worth noting: under random sampling, the foreground boxes may include ground-truth boxes. These can take part in classification but not in regression, since their regression targets are 0. One might ask whether changing

fg_inds = np.where(overlaps >= cfg.TRAIN.FG_THRESH)[0]

to:

fg_inds = np.where((overlaps >= cfg.TRAIN.FG_THRESH) & (overlaps < 1.0))[0]

would be more appropriate, so that all selected foreground boxes are RPN proposals.
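Note that NumPy arrays have no `&&` operator; the element-wise form uses `&`, with parentheses, since `&` binds tighter than the comparisons. A toy run of both selections, where `overlaps` is made-up data and 0.5 stands in for cfg.TRAIN.FG_THRESH:

```python
import numpy as np

FG_THRESH = 0.5  # stand-in for cfg.TRAIN.FG_THRESH
overlaps = np.array([0.3, 0.6, 1.0, 0.8, 0.45])

# Original selection: includes the ground-truth box itself (overlap == 1.0)
fg_inds = np.where(overlaps >= FG_THRESH)[0]
print(fg_inds)        # [1 2 3]

# Proposed selection: excludes exact ground-truth matches
fg_inds_no_gt = np.where((overlaps >= FG_THRESH) & (overlaps < 1.0))[0]
print(fg_inds_no_gt)  # [1 3]
```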

def _sample_rois(roidb, fg_rois_per_image, rois_per_image, num_classes):
    """Generate a random sample of RoIs comprising foreground and background
    examples.
    """
    # label = class RoI has max overlap with
    labels = roidb['max_classes']
    overlaps = roidb['max_overlaps']
    rois = roidb['boxes']

    # Select foreground RoIs as those with >= FG_THRESH overlap
    fg_inds = np.where(overlaps >= cfg.TRAIN.FG_THRESH)[0]
    # Guard against the case when an image has fewer than fg_rois_per_image
    # foreground RoIs
    fg_rois_per_this_image = np.minimum(fg_rois_per_image, fg_inds.size)
    # Sample foreground regions without replacement
    if fg_inds.size > 0:
        fg_inds = npr.choice(
                fg_inds, size=fg_rois_per_this_image, replace=False)

    # Select background RoIs as those within [BG_THRESH_LO, BG_THRESH_HI)
    bg_inds = np.where((overlaps < cfg.TRAIN.BG_THRESH_HI) &
                       (overlaps >= cfg.TRAIN.BG_THRESH_LO))[0]
    # Compute number of background RoIs to take from this image (guarding
    # against there being fewer than desired)
    bg_rois_per_this_image = rois_per_image - fg_rois_per_this_image
    bg_rois_per_this_image = np.minimum(bg_rois_per_this_image,
                                        bg_inds.size)
    # Sample background regions without replacement
    if bg_inds.size > 0:
        bg_inds = npr.choice(
                bg_inds, size=bg_rois_per_this_image, replace=False)

    # The indices that we're selecting (both fg and bg)
    keep_inds = np.append(fg_inds, bg_inds)
    # Select sampled values from various arrays:
    labels = labels[keep_inds]
    # Clamp labels for the background RoIs to 0
    labels[fg_rois_per_this_image:] = 0
    overlaps = overlaps[keep_inds]
    rois = rois[keep_inds]
    # Call _get_bbox_regression_labels to build bbox_targets and
    # bbox_inside_weights; both are N * 4K ndarrays, where N is the size of
    # keep_inds, i.e. the number of samples in the minibatch.
    bbox_targets, bbox_inside_weights = _get_bbox_regression_labels(
            roidb['bbox_targets'][keep_inds, :], num_classes)

    return labels, overlaps, rois, bbox_targets, bbox_inside_weights

_get_bbox_regression_labels extracts the 4-coordinate encoding of each regression target from bbox_target_data as bbox_targets, and generates bbox_inside_weights along with it; both are N * 4K ndarrays, where N is the size of keep_inds, i.e. the number of samples in the minibatch.
bbox_target_data is N * 5, each row being (c, tx, ty, tw, th).

def _get_bbox_regression_labels(bbox_target_data, num_classes):
    """Bounding-box regression targets are stored in a compact form in the
    roidb.

    This function expands those targets into the 4-of-4*K representation used
    by the network (i.e. only one class has non-zero targets). The loss weights
    are similarly expanded.

    Returns:
        bbox_target_data (ndarray): N x 4K blob of regression targets
        bbox_inside_weights (ndarray): N x 4K blob of loss weights
    """
    clss = bbox_target_data[:, 0]
    bbox_targets = np.zeros((clss.size, 4 * num_classes), dtype=np.float32)
    bbox_inside_weights = np.zeros(bbox_targets.shape, dtype=np.float32)
    inds = np.where(clss > 0)[0]  # select the foreground boxes
    for ind in inds:
        cls = clss[ind]
        start = 4 * cls
        end = start + 4
        bbox_targets[ind, start:end] = bbox_target_data[ind, 1:]
        bbox_inside_weights[ind, start:end] = cfg.TRAIN.BBOX_INSIDE_WEIGHTS
    return bbox_targets, bbox_inside_weights
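To make the 4-of-4*K expansion concrete, here is a toy run with K = 3 classes and two samples; the weight vector (1, 1, 1, 1) stands in for cfg.TRAIN.BBOX_INSIDE_WEIGHTS:

```python
import numpy as np

num_classes = 3
BBOX_INSIDE_WEIGHTS = (1.0, 1.0, 1.0, 1.0)  # stand-in for cfg.TRAIN.BBOX_INSIDE_WEIGHTS

# Compact targets: each row is (class, tx, ty, tw, th)
bbox_target_data = np.array([
    [2, 0.5, 0.5, 1.0, 1.0],   # foreground sample, class 2
    [0, 0.0, 0.0, 0.0, 0.0],   # background sample
], dtype=np.float32)

clss = bbox_target_data[:, 0]
bbox_targets = np.zeros((clss.size, 4 * num_classes), dtype=np.float32)
bbox_inside_weights = np.zeros(bbox_targets.shape, dtype=np.float32)
for ind in np.where(clss > 0)[0]:
    start = int(4 * clss[ind])         # class 2 -> columns 8..11
    bbox_targets[ind, start:start + 4] = bbox_target_data[ind, 1:]
    bbox_inside_weights[ind, start:start + 4] = BBOX_INSIDE_WEIGHTS

print(bbox_targets.shape)  # (2, 12): only the class-2 slot of row 0 is non-zero
```

Row 0 becomes [0 0 0 0  0 0 0 0  0.5 0.5 1.0 1.0], and the background row stays all zeros, matching the stacked layout sketched in the comments above.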

The images in the roidb are scaled accordingly and returned as a uniform blob, i.e. an N * C * H * W (here 2 * 3 * 600 * 1000) four-dimensional array:

def _get_image_blob(roidb, scale_inds):
    """Builds an input blob from the images in the roidb at the specified
    scales.
    """
    num_images = len(roidb)
    processed_ims = []
    im_scales = []
    for i in xrange(num_images):
        im = cv2.imread(roidb[i]['image'])  #shape:h*w*c
        if roidb[i]['flipped']:
            im = im[:, ::-1, :]   # horizontal flip
        target_size = cfg.TRAIN.SCALES[scale_inds[i]]
        im, im_scale = prep_im_for_blob(im, cfg.PIXEL_MEANS, target_size,
                                        cfg.TRAIN.MAX_SIZE)
        im_scales.append(im_scale)
        processed_ims.append(im)

    # Create a blob to hold the input images
    blob = im_list_to_blob(processed_ims)

    return blob, im_scales

im_list_to_blob above converts the list of images into a standard 4-D array, zero-padding them so that all images end up the same size.
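A minimal sketch of that padding step (not the original im_list_to_blob, which lives in lib/utils; this is an illustrative reimplementation): each image is placed in the top-left corner of a zero array sized to the largest H and W, and the result is transposed to N * C * H * W:

```python
import numpy as np

def im_list_to_blob_sketch(ims):
    """Zero-pad a list of H*W*C images to a common size and stack
    them into an N*C*H*W blob."""
    max_shape = np.array([im.shape for im in ims]).max(axis=0)
    blob = np.zeros((len(ims), max_shape[0], max_shape[1], 3), dtype=np.float32)
    for i, im in enumerate(ims):
        # Each image sits in the top-left corner; the rest stays zero.
        blob[i, :im.shape[0], :im.shape[1], :] = im
    # Move channels first: N*H*W*C -> N*C*H*W
    return blob.transpose(0, 3, 1, 2)

ims = [np.ones((600, 800, 3), dtype=np.float32),
       np.ones((500, 1000, 3), dtype=np.float32)]
print(im_list_to_blob_sketch(ims).shape)  # (2, 3, 600, 1000)
```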

prep_im_for_blob resizes an image so that its shorter side equals target_size while the longer side does not exceed cfg.TRAIN.MAX_SIZE, and returns the scale factor used.

def prep_im_for_blob(im, pixel_means, target_size, max_size):
    """Mean subtract and scale an image for use in a blob."""
    im = im.astype(np.float32, copy=False)
    im -= pixel_means
    im_shape = im.shape
    im_size_min = np.min(im_shape[0:2])
    im_size_max = np.max(im_shape[0:2])
    im_scale = float(target_size) / float(im_size_min)
    # Prevent the biggest axis from being more than MAX_SIZE
    if np.round(im_scale * im_size_max) > max_size:
        im_scale = float(max_size) / float(im_size_max)
    im = cv2.resize(im, None, None, fx=im_scale, fy=im_scale,
                    interpolation=cv2.INTER_LINEAR)

    return im, im_scale

So to map coordinates from the original image into the standard roidb data format, one only needs to multiply by im_scale;
conversely, to get back to the original image, simply divide by im_scale.
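A worked example of that scale logic, using a typical 375 x 500 PASCAL VOC image with target_size = 600 and max_size = 1000 (the numbers are illustrative):

```python
import numpy as np

target_size, max_size = 600, 1000
h, w = 375, 500                              # original image size

im_scale = float(target_size) / min(h, w)    # 600 / 375 = 1.6
# Prevent the longer side from exceeding max_size
if np.round(im_scale * max(h, w)) > max_size:
    im_scale = float(max_size) / max(h, w)

print(im_scale)  # 1.6 -> the resized image is 600 x 800

# Map a box from original to network coordinates and back:
box = np.array([50.0, 60.0, 200.0, 220.0])
scaled = box * im_scale
print(np.allclose(scaled / im_scale, box))  # True
```

Had the image been 375 x 700, the longer side would round past 1000 at scale 1.6 (1120), and the scale would instead be clamped to 1000 / 700.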
