1. Understanding MobileNet
Recommended reading: a Zhihu article analyzing the lightweight networks ShuffleNet and MobileNet v1/v2.
The author explains them clearly and accessibly; if you are not yet familiar with these architectures, start there.
Next, let's test how well the pretrained model classifies images.
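In brief, MobileNet v1 factorizes each standard convolution into a 3x3 depthwise convolution followed by a 1x1 pointwise convolution. A quick back-of-the-envelope sketch (plain Python; the feature-map and channel sizes are picked just for illustration) shows the multiply-count saving:

```python
def standard_conv_mults(h, w, c_in, c_out, k):
    # each of the h*w output positions needs k*k*c_in multiplies per output channel
    return h * w * c_out * k * k * c_in

def depthwise_separable_mults(h, w, c_in, c_out, k):
    depthwise = h * w * c_in * k * k   # one k x k filter per input channel
    pointwise = h * w * c_out * c_in   # 1x1 conv that mixes channels
    return depthwise + pointwise

# Example: a 3x3 conv on a 56x56 feature map, 128 -> 128 channels
std = standard_conv_mults(56, 56, 128, 128, 3)
sep = depthwise_separable_mults(56, 56, 128, 128, 3)
print(std / sep)  # roughly 8.4x fewer multiplies
```

The general ratio is k*k*c_out / (k*k + c_out), so for 3x3 kernels the saving approaches 9x as the channel count grows, which is exactly why MobileNet is so much lighter than a plain CNN of the same depth.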
2. Testing the Classification Performance
Method 1: the pretrained weights are downloaded automatically (the same happens in method 2).
from mxnet.gluon.model_zoo import vision
# build the MobileNet v1 network definition
mobilenet_v1 = vision.MobileNet()
# inspect the weight parameter of the output (Dense) layer
print(mobilenet_v1.output.weight)
Parameter mobilenet4_dense0_weight (shape=(1000, 0), dtype=float32)
# print the full network architecture (note: summary() is a method that
# requires an input ndarray, so print the Block itself to list every layer)
print(mobilenet_v1)
Output (abridged; the features block has 83 entries, indices 0-82):
MobileNet(
  (features): HybridSequential(
    (0): Conv2D(None -> 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
    (1): BatchNorm(axis=1, eps=1e-05, momentum=0.9, fix_gamma=False, use_global_stats=False, in_channels=None)
    (2): Activation(relu)
    (3): Conv2D(None -> 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
    (4): BatchNorm(axis=1, eps=1e-05, momentum=0.9, fix_gamma=False, use_global_stats=False, in_channels=None)
    (5): Activation(relu)
    (6): Conv2D(None -> 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
    ...
    (81): GlobalAvgPool2D(size=(1, 1), stride=(1, 1), padding=(0, 0), ceil_mode=True, global_pool=True, pool_type=avg, layout=NCHW)
    (82): Flatten
  )
  (output): Dense(None -> 1000, linear)
)
Layers 3 through 80 repeat the MobileNet pattern: a 3x3 depthwise convolution (groups equal to the channel count), then a 1x1 pointwise convolution, each followed by BatchNorm and ReLU.
# inspect the second convolution layer (the 3x3 depthwise conv at features index 3)
print(mobilenet_v1.features[3].weight)
print(mobilenet_v1.features[3].name)
Parameter mobilenet4_conv1_weight (shape=(32, 0, 3, 3), dtype=<class 'numpy.float32'>)
mobilenet4_conv1
Inspecting the network parameters works exactly the same way in method 2 below.
Method 2: use gluoncv's get_model, which also downloads the pretrained model automatically and caches it in the model folder.
Test image (the file names are labels I assigned myself):
import cv2

# show the test image in a window (press any key to close it)
img = cv2.imread("test_image/lion.jpg")
cv2.imshow("input image", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
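For reference, the `transform_eval` preset used below applies the standard ImageNet evaluation pipeline: resize the shorter side (typically to 256), center-crop 224x224, scale to [0, 1], normalize with the ImageNet channel mean and standard deviation, and reorder to NCHW. A rough NumPy sketch of the normalization step (resize and crop omitted; the mean/std constants are the standard ImageNet values):

```python
import numpy as np

IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def normalize_for_imagenet(img_hwc_uint8):
    """img_hwc_uint8: (224, 224, 3) RGB uint8 crop -> (1, 3, 224, 224) float32."""
    x = img_hwc_uint8.astype(np.float32) / 255.0  # scale to [0, 1]
    x = (x - IMAGENET_MEAN) / IMAGENET_STD        # channel-wise normalize
    x = np.transpose(x, (2, 0, 1))                # HWC -> CHW
    return x[np.newaxis, ...]                     # add the batch dimension

dummy = np.zeros((224, 224, 3), dtype=np.uint8)
print(normalize_for_imagenet(dummy).shape)  # (1, 3, 224, 224)
```

One caveat when mixing libraries: cv2.imread returns BGR channel order, while mx.image.imread and the ImageNet presets expect RGB, so a cv2-loaded image would need cv2.cvtColor(img, cv2.COLOR_BGR2RGB) first.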
import mxnet as mx
import gluoncv

model_name = 'MobileNet1.0'
# download the pretrained model (cached under model/)
net = gluoncv.model_zoo.get_model(model_name, pretrained=True, root="model/")

# load the image with mxnet (cv2 also works, but it reads BGR instead of RGB)
img = mx.image.imread("test_image/lion.jpg")
# standard ImageNet eval transform: resize, center-crop 224x224, normalize
transformed_img = gluoncv.data.transforms.presets.imagenet.transform_eval(img)
pred = net(transformed_img)

# map the raw scores to probabilities with softmax
prob = mx.nd.softmax(pred)[0].asnumpy()
# find the indices of the 3 highest-scoring classes
ind = mx.nd.topk(pred, k=3)[0].astype('int').asnumpy().tolist()
print("Top-3 class indices:", ind)
print()

# print each class name and its predicted probability
print('The input picture is classified to be')
for i in range(3):
    print('- [%s], with probability %.3f.' % (net.classes[ind[i]], prob[ind[i]]))
Top-3 class indices: [291, 286, 293]

The input picture is classified to be
- [lion], with probability 0.968.
- [cougar], with probability 0.010.
- [cheetah], with probability 0.001.
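The softmax and top-k steps above are easy to replicate by hand. A plain-Python sketch (the logits here are made up for illustration, not the model's actual outputs):

```python
import math

def softmax(scores):
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def topk(scores, k):
    # indices of the k largest scores, highest first
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]

logits = [2.0, 0.5, 3.0, 1.0]  # hypothetical network outputs
probs = softmax(logits)
print(topk(logits, 3))         # [2, 0, 3]
print(round(probs[2], 3))      # 0.631, the largest probability
```

Note that topk can be taken on the raw logits (as the gluoncv code does) or on the probabilities; softmax is monotonic, so both give the same ordering.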
The prediction is both accurate and fast. I won't include the tests on the other images here.
3. How is MobileNet1.0 implemented?
Still testing... to be added tomorrow!