Comparing Caffe2 and PyTorch Inference Speed

Setup

Caffe2 runs the exported ONNX model, while PyTorch loads the original .pth model.

Results

Model: MobileNet-v2

device   caffe2   pytorch
cuda     90 ms    8 ms
cpu      24 ms    10 ms

Appendix: Caffe2 inference code

import onnx
import datetime

# Load the ONNX model
model = onnx.load("model/mobilenet-v2_100.onnx")

# Check that the IR is well formed
onnx.checker.check_model(model)

# Print a human-readable representation of the graph
print(onnx.helper.printable_graph(model.graph))

# ...continuing from above
import caffe2.python.onnx.backend as backend
import numpy as np

rep = backend.prepare(model, device="CPU")  # or "CUDA:0"
# For the Caffe2 backend:
#     rep.predict_net is the Caffe2 protobuf for the network
#     rep.workspace is the Caffe2 workspace for the network
#       (see the class caffe2.python.onnx.backend.Workspace)

for i in range(100):
    begin = datetime.datetime.now()
    outputs = rep.run(np.random.randn(1, 3, 128, 128).astype(np.float32))
    end = datetime.datetime.now()
    # total_seconds() avoids the wrap-around of .microseconds for runs over 1 s
    print((end - begin).total_seconds() * 1000)  # milliseconds
# To run networks with more than one input, pass a tuple
# rather than a single numpy ndarray.
print(outputs[0])
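Per-iteration wall-clock prints are noisy, and the first few iterations include one-time setup cost. A small helper for more robust numbers, using `time.perf_counter` with warm-up iterations (the helper name and parameters are my own, not from the original script):

```python
import time
import numpy as np

# Hypothetical timing helper: discard warm-up iterations, then report
# the mean and standard deviation over the remaining runs, in ms.
def time_fn(fn, n_warmup=5, n_runs=100):
    for _ in range(n_warmup):
        fn()
    times = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        fn()
        times.append((time.perf_counter() - t0) * 1000)  # ms
    return np.mean(times), np.std(times)
```

Usage: wrap the inference call, e.g. `time_fn(lambda: rep.run(x))`, and report the mean rather than eyeballing 100 printed values.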
©2019 CSDN

分享到微信朋友圈

×

扫一扫,手机浏览