Getting started with caffe questions answers (excerpts)

This article excerpts part of "Getting started with caffe questions answers"; for more details, download the PDF file getting-started-with-caffe-questions-answers.pdf.
The Caffe materials can be downloaded from Baidu Cloud:
Link: http://pan.baidu.com/s/1jIRJ6mU
Extraction code: xehi
Q: Is there a minimum dataset size needed in order to get a good speedup on a Titan X GPU? In the past I have seen that the GPU pipelines need to be filled in order to get the needed speedups.
A: Good question. Generally the nature of DL requires an extensive training data set, which often implies you have ample work to keep one or more GPUs fully busy. The model parameters also impact performance, but in most cases you will likely have plenty of work to keep a Titan X busy. The good news is that Caffe, cuDNN, and DIGITS all do a great job of making sure you are getting the maximum value out of whatever GPU resources you have available. In summary, use a framework that uses cuDNN and you should see very good speedups with a Titan X.

Q: Is there any reason why one would work with Theano over Caffe?
A: The approaches of the two frameworks are very different. Caffe is a DL framework, while Theano can be seen as a compiler. Which one to use will be application dependent.

Q: If there is a preexisting model that identifies giraffes and another that identifies horses, how do you know which ones to choose if you want to transfer knowledge to identify cats, for example?
A: You can train a model on one set of images, giraffes and horses in this example, then modify the final layers to include the new categories and retrain them on the new categories. This is called fine-tuning.
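For a concrete picture of what that retraining step can look like, here is a minimal pycaffe sketch. It assumes a solver definition whose network has had its final layer renamed and resized for the new categories, plus a pretrained weights file; the file names (solver.prototxt, pretrained.caffemodel) are placeholders for illustration, not files from the original Q&A.

```python
import caffe

# Minimal fine-tuning sketch (assumes GPU 0 and placeholder file names).
caffe.set_device(0)
caffe.set_mode_gpu()

# solver.prototxt points to a net whose final layer has been renamed
# (e.g. "fc8_new") and given num_output = number of new categories.
solver = caffe.SGDSolver('solver.prototxt')

# Copy weights from the pretrained model; layers whose names match are
# initialized from it, while the renamed final layer starts from scratch.
solver.net.copy_from('pretrained.caffemodel')

# Retrain on the new data referenced by the solver's training net.
solver.step(1000)
```

Because copy_from matches weights by layer name, renaming the last layer is what forces it to be re-initialized while the earlier feature layers keep their pretrained values.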

Q: What is the advantage of using batch size >1?
A: Using a larger batch size allows the GPU to train on multiple images at a time, greatly boosting performance.
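In Caffe the batch size is the batch_size field of the data layer in the network prototxt. As a quick sanity check, you can load the network from pycaffe and read the first dimension of the input blob; the file name and the blob name "data" below are assumptions for illustration.

```python
import caffe

# Placeholder network definition; the input blob is assumed to be named "data".
net = caffe.Net('train_val.prototxt', caffe.TRAIN)

# The first dimension of the input blob is the batch_size declared in the
# prototxt's data layer; the GPU processes this many images per pass.
batch, channels, height, width = net.blobs['data'].data.shape
print('batch size: %d, image shape: %d x %d x %d' % (batch, channels, height, width))
```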

Q: Does Caffe support unsupervised deep learning?
A: Not at this time. You need to label all of your input categories.

Q: Any references to improve skills in fine-tuning a network?
A: I used this Caffe example to help me get started when I was learning to use Caffe: http://caffe.berkeleyvision.org/gathered/examples/finetune_flickr_style.html

Q: How do we make sure that our batch size is appropriate relative to the capabilities of our GPU?
A: You can run nvidia-smi to check your GPU utilization. One thing you can do is increase or decrease the batch size to maximize GPU utilization, given the amount of memory on the board.
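A small sketch of that kind of check, polling nvidia-smi from Python once per second. It assumes nvidia-smi is on the PATH; the --query-gpu and --format flags are standard nvidia-smi options, but verify the field names against your driver version.

```python
import subprocess
import time

# Query per-GPU utilization and memory use via nvidia-smi.
QUERY = ['nvidia-smi',
         '--query-gpu=utilization.gpu,memory.used,memory.total',
         '--format=csv,noheader,nounits']

for _ in range(10):
    out = subprocess.check_output(QUERY).decode().strip()
    for i, line in enumerate(out.splitlines()):
        util, mem_used, mem_total = [field.strip() for field in line.split(',')]
        print('GPU %d: %s%% util, %s / %s MiB' % (i, util, mem_used, mem_total))
    time.sleep(1)
```

If memory usage sits well below the total while utilization is low, the batch size can usually be increased; if the training run fails with an out-of-memory error, decrease it.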
