This post excerpts the questions-and-answers section of "Getting Started with Caffe"; for more detail, download the PDF getting-started-with-caffe-questions-answers.pdf.
The Caffe materials can be downloaded from Baidu Cloud:
Link: http://pan.baidu.com/s/1jIRJ6mU
Password: xehi
Q: Is there a minimum dataset size needed to get a good speedup on a Titan X GPU? In the past I have seen that GPU pipelines need to be filled in order to get the expected speedups.
A: Good question. Deep learning generally requires an extensive training set, which usually means there is ample work to keep one or more GPUs fully busy. The model parameters also affect performance, but in most cases you will have plenty of work to keep a Titan X busy. The good news is that Caffe, cuDNN, and DIGITS all do a great job of extracting maximum value from whatever GPU resources you have available. In short, use a framework built on cuDNN and you should see very good speedups on a Titan X.
Q: Is there any reason why one would work with Theano over Caffe?
A: The two frameworks take very different approaches. Caffe is a deep learning framework, while Theano is better viewed as a compiler for mathematical expressions. The right choice is application dependent.
Q: If there is a preexisting model that identifies giraffes and another that identifies horses, how do you know which ones to choose if you want to transfer knowledge to identify cats, for example?
A: You can train a model on one set of images (giraffes and horses in this example), then modify the final layers to include the new categories and retrain the model on them. This is called fine-tuning.
Q: What is the advantage of using batch size >1?
A: A larger batch size lets the GPU train on many images in parallel, greatly improving throughput.
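In Caffe, the batch size is set in the data layer of the network prototxt. A minimal sketch (the LMDB path follows Caffe's bundled ImageNet example; adjust it to your own dataset):

```
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  data_param {
    source: "examples/imagenet/ilsvrc12_train_lmdb"  # path from Caffe's ImageNet example
    batch_size: 64   # increase until you run out of GPU memory
    backend: LMDB
  }
}
```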
Q: Does Caffe support unsupervised deep learning?
A: Not at this time. You need to label all of your input categories.
Q: Any references to improve skills in finetuning a network?
A: I used the Caffe fine-tuning example to get started when I was learning Caffe: http://caffe.berkeleyvision.org/gathered/examples/finetune_flickr_style.html .
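The core trick in that example is to rename the final fully connected layer and change its output count to match the new task, so Caffe reinitializes it instead of copying the old weights. A hedged sketch of the relevant prototxt fragment (the layer name `fc8_flickr` and `num_output: 20` follow the Flickr style example; substitute your own task's class count):

```
layer {
  name: "fc8_flickr"        # renamed from "fc8" so pretrained weights are NOT copied
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8_flickr"
  param { lr_mult: 10 decay_mult: 1 }   # learn the new layer faster than the rest
  param { lr_mult: 20 decay_mult: 0 }
  inner_product_param {
    num_output: 20           # number of categories in the new dataset
  }
}
```

Training is then started with the pretrained weights supplied via the `-weights` flag of the `caffe` binary.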
Q: How do we make sure that our batch size is appropriate relative to the capabilities of our GPU?
A: You can run nvidia-smi to check your GPU utilization. One approach is to increase or decrease the batch size to maximize GPU utilization within the memory available on the board.
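As a rough sanity check before tuning, you can estimate the memory the input blob alone will occupy at a given batch size. This is a hypothetical back-of-the-envelope helper, not part of Caffe: real usage is far higher (every layer's activations, gradients, and parameters also consume memory), so treat it strictly as a lower bound.

```python
def input_blob_bytes(batch_size, channels, height, width, dtype_bytes=4):
    """Lower-bound memory (bytes) for one float32 input batch N x C x H x W."""
    return batch_size * channels * height * width * dtype_bytes

# Example: a batch of 64 ImageNet-style 3x227x227 crops
mb = input_blob_bytes(64, 3, 227, 227) / (1024 ** 2)
print(f"input blob alone: {mb:.1f} MiB")  # ~37.7 MiB
```

If nvidia-smi shows low utilization and plenty of free memory, increasing the batch size is usually the first knob to turn.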