Implementing Mask RCNN with TensorFlow and Keras (2)

Environment: Ubuntu 16.04 + cuDNN 7.0 + CUDA 9.0.176
Git repository:
https://github.com/matterport/Mask_RCNN

Check that the required dependencies are installed:
numpy
scipy
Pillow
cython
matplotlib
scikit-image
tensorflow>=1.3.0
keras>=2.0.8
opencv-python
h5py
imgaug
IPython[all]

imgaug was not installed, so I installed it with pip install imgaug; note that this also upgraded numpy to 1.16.2 (which causes an ImportError later, see section VI).

I. Dataset preparation

1. To train on your own dataset: this TensorFlow version of Mask RCNN is trained here on the Abyssinian dataset. One point worth noting is that the label.png images inside the json folders generated by labelme are already 8-bit, which differs from what many blogs describe (those blogs get 24-bit label.png images that look completely black, and they provide MATLAB or C++ code to convert them to 8-bit, a very tedious process). Why the difference? Looking at the labelme source code, it comes down to the labelme version: the latest release as of this writing behaves differently from the one those blogs used.
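To confirm that a generated label.png really is an 8-bit, single-channel label image (rather than the 24-bit RGB file older labelme versions produce), a quick check with Pillow and numpy looks like this (a minimal sketch; the path is only an example):

from PIL import Image
import numpy as np

label = Image.open('json/Abyssinian_1_json/label.png')  # example path, adjust to your dataset
print(label.mode)            # 'P' or 'L' means an 8-bit single-channel image; 'RGB' means 24-bit
arr = np.array(label)
print(arr.dtype, arr.shape)  # expect uint8 and a 2-D (height, width) array
print(int(arr.max()))        # number of labelled instances (background is 0)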
2. This requires extracting every label.png from all of the generated json folders.
The code is as follows:


import os
import shutil

def get_samples(foldername, savePath):
    # Copy label.png out of every *_json folder and rename it after the original image
    print('savePath:', savePath)
    if not os.path.exists(savePath):
        os.makedirs(savePath)
    for filename in os.listdir(foldername):
        full_path = os.path.join(foldername, filename)
        # folder names look like "<image name>_json"; strip the "_json" suffix
        new_name = filename[:-5] + '.png'
        # copy the 8-bit label image, renaming it to match the original picture
        # (refer to label.png by name: os.listdir order is arbitrary, so indexing it is unreliable)
        shutil.copy(os.path.join(full_path, 'label.png'), os.path.join(savePath, new_name))

savePath = '/home/yuxin/Mask_RCNN-master/datasets/Abyssinian/Abyssinian_mask/'
get_samples('/home/yuxin/Mask_RCNN-master/datasets/Abyssinian/json/', savePath)

3. The dataset (under datasets/Abyssinian/) is organized as follows; a small sanity check over this layout is sketched after the list:

1. Abyssinian_pic (the original images)
2. annotations (all the json label files produced by labelme)
3. json (all the data folders converted from the json files, each containing img.png, info.yaml, label.png, label_names.txt and label_viz.png)
4. Abyssinian_mask (all the label.png images extracted from the json data folders)
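A minimal sketch (using the paths from this article; the folder names are taken from the layout above) to verify that every original image has a matching mask and json data folder:

import os

dataset_root_path = '/home/yuxin/Mask_RCNN-master/datasets/Abyssinian/'
img_floder = os.path.join(dataset_root_path, 'Abyssinian_pic')
mask_floder = os.path.join(dataset_root_path, 'Abyssinian_mask')

for img_name in os.listdir(img_floder):
    stem = os.path.splitext(img_name)[0]
    mask_path = os.path.join(mask_floder, stem + '.png')
    yaml_path = os.path.join(dataset_root_path, 'json', stem + '_json', 'info.yaml')
    for p in (mask_path, yaml_path):
        if not os.path.exists(p):
            print('missing:', p)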

II. Build

Run the following directly in the repository root:

$ python setup.py install
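A quick way to confirm the package was installed correctly is to import it from outside the source tree (a minimal sketch):

import mrcnn
from mrcnn.config import Config
from mrcnn import model as modellib

print(mrcnn.__file__)  # should point at the installed mrcnn package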

III. Download the mask_rcnn_coco.h5 weights into the repository root

IV. Training

Create a new folder named Abyssinian inside the samples folder.
Create a new file train_model.py in it.
1. The training code is as follows:

# -*- coding: utf-8 -*-
import os
import sys
import random
import math
import re
import time
import numpy as np
import cv2
import matplotlib
import matplotlib.pyplot as plt
import tensorflow as tf
from mrcnn.config import Config
#import utils
from mrcnn import model as modellib,utils
from mrcnn import visualize
import yaml
from mrcnn.model import log
from PIL import Image
#os.environ["CUDA_VISIBLE_DEVICES"] = "0"
# Root directory of the project
ROOT_DIR = os.getcwd()
#ROOT_DIR = os.path.abspath("../")
# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")
iter_num=0
# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
    utils.download_trained_weights(COCO_MODEL_PATH)
class ShapesConfig(Config):
    """Configuration for training on the toy shapes dataset.
    Derives from the base Config class and overrides values specific
    to the toy shapes dataset.
    """
    # Give the configuration a recognizable name
    NAME = "shapes"
    # Train on 1 GPU and 1 image per GPU. Batch size is 1 (GPUs * images/GPU).
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1
    # Number of classes (including background)
    NUM_CLASSES = 1 + 1  # background + 1 class (Abyssinian)
    # Use small images for faster training. Set the limits of the small side
    # the large side, and that determines the image shape.
    IMAGE_MIN_DIM = 480
    IMAGE_MAX_DIM = 640
    # Use smaller anchors because our image and objects are small
    RPN_ANCHOR_SCALES = (8 * 6, 16 * 6, 32 * 6, 64 * 6, 128 * 6)  # anchor side in pixels
    # Reduce training ROIs per image because the images are small and have
    # few objects. Aim to allow ROI sampling to pick 33% positive ROIs.
    TRAIN_ROIS_PER_IMAGE = 100
    # Use a small epoch since the data is simple
    STEPS_PER_EPOCH = 100
    # use small validation steps since the epoch is small
    VALIDATION_STEPS = 50
config = ShapesConfig()
config.display()
class DrugDataset(utils.Dataset):
    # Get the number of instances (objects) in the image
    def get_obj_index(self, image):
        n = np.max(image)
        return n
    # Parse the yaml file produced by labelme to get the instance label of each mask layer
    def from_yaml_get_class(self, image_id):
        info = self.image_info[image_id]
        with open(info['yaml_path']) as f:
            # note: PyYAML >= 5.1 expects yaml.load(f.read(), Loader=yaml.FullLoader)
            temp = yaml.load(f.read())
            labels = temp['label_names']
            del labels[0]
        return labels
    # Override draw_mask (fill one mask layer per instance, pixel by pixel)
    def draw_mask(self, num_obj, mask, image,image_id):
        #print("draw_mask-->",image_id)
        #print("self.image_info",self.image_info)
        info = self.image_info[image_id]
        #print("info-->",info)
        #print("info[width]----->",info['width'],"-info[height]--->",info['height'])
        for index in range(num_obj):
            for i in range(info['width']):
                for j in range(info['height']):
                    #print("image_id-->",image_id,"-i--->",i,"-j--->",j)
                    #print("info[width]----->",info['width'],"-info[height]--->",info['height'])
                    at_pixel = image.getpixel((i, j))
                    if at_pixel == index + 1:
                        mask[j, i, index] = 1
        return mask
    # Override load_shapes with our own classes, and add path, mask_path and yaml_path
    # to self.image_info. Example layout:
    #   dataset_root_path = "/tongue_dateset/"
    #   img_floder = dataset_root_path + "rgb"
    #   mask_floder = dataset_root_path + "mask"
    def load_shapes(self, count, img_floder, mask_floder, imglist, dataset_root_path):
        """Generate the requested number of synthetic images.
        count: number of images to generate.
        height, width: the size of the generated images.
        """
        # Add classes
        self.add_class("shapes", 1, "Abyssinian")
        #self.add_class("shapes", 2, "leg")
        #self.add_class("shapes", 3, "well")
        for i in range(count):
            # Get the width and height of the image
            filestr = imglist[i].split(".")[0]
            #print(imglist[i],"-->",cv_img.shape[1],"--->",cv_img.shape[0])
            #print("id-->", i, " imglist[", i, "]-->", imglist[i],"filestr-->",filestr)
            # filestr = filestr.split("_")[1]
            mask_path = mask_floder + "/" + filestr + ".png"
            yaml_path = dataset_root_path + "json/" + filestr + "_json/info.yaml"
            print(dataset_root_path + "json/" + filestr + "_json/img.png")
            cv_img = cv2.imread(dataset_root_path + "json/" + filestr + "_json/img.png")
            self.add_image("shapes", image_id=i, path=img_floder + "/" + imglist[i],
                           width=cv_img.shape[1], height=cv_img.shape[0], mask_path=mask_path, yaml_path=yaml_path)
    # Override load_mask
    def load_mask(self, image_id):
        """Generate instance masks for shapes of the given image ID.
        """
        global iter_num
        #print("image_id",image_id)
        info = self.image_info[image_id]
        img = Image.open(info['mask_path'])
        num_obj = self.get_obj_index(img)  # number of object instances in this image
        mask = np.zeros([info['height'], info['width'], num_obj], dtype=np.uint8)
        mask = self.draw_mask(num_obj, mask, img, image_id)
        # Handle occlusions between instances: each pixel belongs to at most one mask layer
        occlusion = np.logical_not(mask[:, :, -1]).astype(np.uint8)
        for i in range(num_obj - 2, -1, -1):
            mask[:, :, i] = mask[:, :, i] * occlusion
            occlusion = np.logical_and(occlusion, np.logical_not(mask[:, :, i]))
        labels = []
        labels = self.from_yaml_get_class(image_id)
        labels_form = []
        for i in range(len(labels)):
            if labels[i].find("Abyssinian") != -1:
                # print "car"
                labels_form.append("Abyssinian")
            #elif labels[i].find("leg") != -1:
                # print "leg"
                #labels_form.append("leg")
           # elif labels[i].find("well") != -1:
                # print "well"
                #labels_form.append("well")
        class_ids = np.array([self.class_names.index(s) for s in labels_form])
        return mask, class_ids.astype(np.int32)
def get_ax(rows=1, cols=1, size=8):
    """Return a Matplotlib Axes array to be used in
    all visualizations in the notebook. Provide a
    central point to control graph sizes.
    Change the default size attribute to control the size
    of rendered images
    """
    _, ax = plt.subplots(rows, cols, figsize=(size * cols, size * rows))
    return ax
# Basic settings
dataset_root_path="/home/yuxin/Mask_RCNN-master-Abyssinian/datasets/Abyssinian/"
img_floder = dataset_root_path + "Abyssinian_pic"
mask_floder = dataset_root_path + "Abyssinian_mask"
#yaml_floder = dataset_root_path
imglist = os.listdir(img_floder)
count = len(imglist)
# Prepare the train and val datasets (this example reuses the same image list for both)
dataset_train = DrugDataset()
dataset_train.load_shapes(count, img_floder, mask_floder, imglist,dataset_root_path)
dataset_train.prepare()
#print("dataset_train-->",dataset_train._image_ids)
dataset_val = DrugDataset()
dataset_val.load_shapes(count, img_floder, mask_floder, imglist,dataset_root_path)
dataset_val.prepare()
#print("dataset_val-->",dataset_val._image_ids)
# Load and display random samples
#image_ids = np.random.choice(dataset_train.image_ids, 4)
#for image_id in image_ids:
#    image = dataset_train.load_image(image_id)
#    mask, class_ids = dataset_train.load_mask(image_id)
#    visualize.display_top_masks(image, mask, class_ids, dataset_train.class_names)
# Create model in training mode
model = modellib.MaskRCNN(mode="training", config=config,
                          model_dir=MODEL_DIR)
# Which weights to start with?
init_with = "coco"  # imagenet, coco, or last
if init_with == "imagenet":
    model.load_weights(model.get_imagenet_weights(), by_name=True)
elif init_with == "coco":
    # Load weights trained on MS COCO, but skip layers that
    # are different due to the different number of classes
    # See README for instructions to download the COCO weights
    # print(COCO_MODEL_PATH)
    model.load_weights(COCO_MODEL_PATH, by_name=True,
                       exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
                                "mrcnn_bbox", "mrcnn_mask"])
elif init_with == "last":
    # Load the last model you trained and continue training
    model.load_weights(model.find_last()[1], by_name=True)
# Train the head branches
# Passing layers="heads" freezes all layers except the head
# layers. You can also pass a regular expression to select
# which layers to train by name pattern.
model.train(dataset_train, dataset_val,
            learning_rate=config.LEARNING_RATE,
            epochs=10,
            layers='heads')
# Fine tune all layers
# Passing layers="all" trains all layers. You can also
# pass a regular expression to select which layers to
# train by name pattern.
model.train(dataset_train, dataset_val,
            learning_rate=config.LEARNING_RATE / 10,
            epochs=30,
            layers="all")

2. Training results:
Training produces a series of data files (model checkpoints and logs) under the logs directory:
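The checkpoints are written to logs/shapesYYYYMMDDTHHMM/ as mask_rcnn_shapes_00XX.h5 files. A minimal sketch for resuming from the most recent checkpoint, reusing the objects from the training script above (the script indexes find_last() with [1], which assumes a version of the matterport code where it returns a (directory, checkpoint) tuple; newer versions return just the checkpoint path):

# Resume training from the most recent checkpoint in MODEL_DIR
model = modellib.MaskRCNN(mode="training", config=config, model_dir=MODEL_DIR)
last_checkpoint = model.find_last()[1]  # newer versions: model.find_last()
model.load_weights(last_checkpoint, by_name=True)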

V. Testing

Create a new file test_model.py under the samples/Abyssinian folder.
1. The test code:

import os
import sys
import random
import skimage.io
from mrcnn.config import Config
from datetime import datetime
# Root directory of the project
ROOT_DIR = os.getcwd()
# Import Mask RCNN
sys.path.append(ROOT_DIR)  # To find local version of the library
from mrcnn import utils
import mrcnn.model as modellib
from mrcnn import visualize
# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs/shapes20190416T1241")
# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(MODEL_DIR ,"mask_rcnn_shapes_0030.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
    utils.download_trained_weights(COCO_MODEL_PATH)
# Directory of images to run detection on
IMAGE_DIR = os.path.join(ROOT_DIR, "images1")
class ShapesConfig(Config):
    """Configuration for training on the toy shapes dataset.
    Derives from the base Config class and overrides values specific
    to the toy shapes dataset.
    """
    # Give the configuration a recognizable name
    NAME = "shapes"
    # Train on 1 GPU and 1 image per GPU. Batch size is 1 (GPUs * images/GPU).
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1
    # Number of classes (including background)
    NUM_CLASSES = 1 + 1  # background + 1 class (Abyssinian)
    # Use small images for faster training. Set the limits of the small side
    # the large side, and that determines the image shape.
    IMAGE_MIN_DIM = 480
    IMAGE_MAX_DIM = 640
    # Use smaller anchors because our image and objects are small
    RPN_ANCHOR_SCALES = (8 * 6, 16 * 6, 32 * 6, 64 * 6, 128 * 6)  # anchor side in pixels
    # Reduce training ROIs per image because the images are small and have
    # few objects. Aim to allow ROI sampling to pick 33% positive ROIs.
    TRAIN_ROIS_PER_IMAGE = 100
    # Use a small epoch since the data is simple
    STEPS_PER_EPOCH = 100
    # use small validation steps since the epoch is small
    VALIDATION_STEPS = 50
#import train_tongue
#class InferenceConfig(coco.CocoConfig):
class InferenceConfig(ShapesConfig):
    # Set batch size to 1 since we'll be running inference on
    # one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1
config = InferenceConfig()
# Create model object in inference mode.
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
# Load weights trained on MS-COCO
model.load_weights(COCO_MODEL_PATH, by_name=True)
# COCO Class names
# Index of the class in the list is its ID. For example, to get ID of
# the teddy bear class, use: class_names.index('teddy bear')
class_names = ['BG','Abyssinian']
# Load a random image from the images folder
file_names = next(os.walk(IMAGE_DIR))[2]
'''This block tests a single, randomly chosen image
image = skimage.io.imread(os.path.join(IMAGE_DIR, random.choice(file_names)))
a=datetime.now()
# Run detection
results = model.detect([image], verbose=1)
b=datetime.now()
# Visualize results
print("time",(b-a).seconds)
r = results[0]
visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
                            class_names, r['scores'])
'''
# This block batch-tests every image in IMAGE_DIR
for x in range(len(file_names)):
    image = skimage.io.imread(os.path.join(IMAGE_DIR, file_names[x]))
    # Run detection
    results = model.detect([image], verbose=1)
    # Visualize results
    r = results[0]
    visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
                                class_names, r['scores'])
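If you also want to save the detection results to disk instead of only displaying them, display_instances accepts an ax argument you can draw into; a minimal sketch (OUTPUT_DIR is a hypothetical folder, not part of the original script):

import matplotlib.pyplot as plt

OUTPUT_DIR = os.path.join(ROOT_DIR, "results")  # hypothetical output folder
os.makedirs(OUTPUT_DIR, exist_ok=True)

for name in file_names:
    image = skimage.io.imread(os.path.join(IMAGE_DIR, name))
    r = model.detect([image], verbose=0)[0]
    fig, ax = plt.subplots(1, figsize=(8, 8))
    visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
                                class_names, r['scores'], ax=ax)
    fig.savefig(os.path.join(OUTPUT_DIR, name))  # save the annotated image
    plt.close(fig)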

2. Test results:

VI. Errors encountered during training and testing

Before creating train_model.py, I had first created an Abyssinian.py script.

Running: python Abyssinian.py train --dataset=Mask_RCNN-master/datasets/Abyssinian --weights=coco
1. Error: ImportError

Traceback (most recent call last):
  File "Abyssinian.py", line 35, in <module>
    import skimage.draw
  File "/home/yuxin/anaconda3/lib/python3.6/site-packages/skimage/__init__.py", line 158, in <module>
    from .util.dtype import *
  File "/home/yuxin/anaconda3/lib/python3.6/site-packages/skimage/util/__init__.py", line 7, in <module>
    from .arraycrop import crop
  File "/home/yuxin/anaconda3/lib/python3.6/site-packages/skimage/util/arraycrop.py", line 8, in <module>
    from numpy.lib.arraypad import _validate_lengths
ImportError: cannot import name '_validate_lengths'

Solution:
The numpy version was too high (1.16.2); I downgraded it to 1.15.0.
2. Warning:

/home/yuxin/anaconda3/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  return f(*args, **kwds)
(the warning above is repeated several more times)
Using TensorFlow backend.

Solution:
The numpy version was now too low; after switching to 1.15.1 the warning disappeared.

3. IndexError:

boolean index did not match indexed array along dimension 0; dimension
is 1 but corresponding boolean dimension is 38

Solution:
This is the 8-bit label.png issue discussed above: the label.png obtained from the json conversion is already the final image to use, so there is no need to convert it to 8-bit again; running such a conversion on it is what causes this error.
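A hedged diagnostic for this error: check that the maximum value in a mask image matches the number of non-background labels in the corresponding info.yaml, which is what the training code assumes (the paths below are only examples):

import numpy as np
import yaml
from PIL import Image

mask_path = '/home/yuxin/Mask_RCNN-master/datasets/Abyssinian/Abyssinian_mask/Abyssinian_1.png'  # example
yaml_path = '/home/yuxin/Mask_RCNN-master/datasets/Abyssinian/json/Abyssinian_1_json/info.yaml'  # example

mask = np.array(Image.open(mask_path))
with open(yaml_path) as f:
    labels = yaml.safe_load(f.read())['label_names'][1:]  # drop the background entry

print('instances in mask  :', int(mask.max()))
print('labels in info.yaml:', len(labels))  # the two numbers should agree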
4. StopIteration

Re-starting from epoch 30
Traceback (most recent call last):
  File "test_model.py", line 85, in <module>
    file_names = next(os.walk(IMAGE_DIR))[2]
StopIteration

The path to the test images was wrong: when IMAGE_DIR does not exist, os.walk yields nothing and next() raises StopIteration. A small guard is sketched below.
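A minimal guard (reusing the IMAGE_DIR variable from the test script) that fails with a clearer message when the folder is missing or empty:

import os

assert os.path.isdir(IMAGE_DIR), "IMAGE_DIR does not exist: " + IMAGE_DIR
file_names = next(os.walk(IMAGE_DIR))[2]
assert file_names, "no files found in IMAGE_DIR: " + IMAGE_DIR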

Reference blogs:

http://keep.01ue.com/?pi=299139&_a=app&_c=index&_m=p
https://blog.csdn.net/u012746060/article/details/82143285
https://blog.csdn.net/heiheiya/article/details/81532914
https://blog.csdn.net/xiongchao99/article/details/79106588
