[Hands-on] Running deep learning models: laptop RTX 3060 6GB vs. Google Colab's free GPU, a speed comparison

A quick benchmark

Laptop: ThinkBook 16p with the standard RTX 3060 configuration

The model is an FCN, trained on a small dataset with TensorFlow.
Same data, same model parameters; let's see how the two GPUs compare:
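The per-epoch times below are read straight from the training output. If you want to log them programmatically instead, a minimal sketch with a custom Keras callback looks like this (the `EpochTimer` class and the commented `model.fit` call are illustrative, not my exact script):

```python
import time
import tensorflow as tf

class EpochTimer(tf.keras.callbacks.Callback):
    """Record the wall-clock time of every training epoch."""

    def on_train_begin(self, logs=None):
        self.epoch_times = []

    def on_epoch_begin(self, epoch, logs=None):
        self._start = time.perf_counter()

    def on_epoch_end(self, epoch, logs=None):
        elapsed = time.perf_counter() - self._start
        self.epoch_times.append(elapsed)
        print(f"epoch {epoch}: {elapsed:.1f} s")

# Illustrative usage (replace the model and data with your own FCN and dataset):
# timer = EpochTimer()
# model.fit(x_train, y_train, epochs=10, callbacks=[timer])
```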

1. Local RTX 3060 6GB (the power-limited version, 105 W?):


2. The free GPU assigned by Google Colab:

(screenshot of the training log)
[Results] Ignoring the first epoch, whose timing is unstable:
Local RTX 3060: 8 s/epoch
Colab free GPU: 6 s/epoch
Local CPU: 24 s/epoch

(The Intel i5-8265U in my previous laptop was even slower, roughly 50+ s/epoch.)
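For reference, a CPU-only run for comparison can be forced by hiding the GPU from TensorFlow before the model is built. A minimal sketch, assuming TF 2.x (not necessarily how I produced the number above):

```python
import tensorflow as tf

# Must run before any op touches the GPU: make TensorFlow see no GPUs,
# so the exact same model.fit() call gets placed on the CPU.
tf.config.set_visible_devices([], "GPU")

print(tf.config.get_visible_devices())  # should now list only the CPU
```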

Addendum: occasionally the local GPU reaches 2 s/epoch, but most of the time it stays at 8 s/epoch.
(screenshot of one of the occasional 2 s/epoch runs)
Conclusion: the local 3060 is slower than the GPU you get for free on Colab (free really is hard to beat!), though not by much. The downside of Colab is that it disconnects far too easily, which is a real headache.
With the local RTX 3060, I also found that one kernel can only drive a single notebook page: if I open a second page and start training, it crashes every time and the kernel has to be restarted. Maybe it's the small VRAM? Either way, the local 3060 feels fairly limited, so renting a GPU later on seems unavoidable.
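If the crashes really come from the first notebook pre-allocating almost all of the 6 GB (TensorFlow reserves most of the VRAM by default), one thing worth trying is enabling memory growth before the model is built. A minimal sketch, assuming TF 2.x; I have not verified that it fixes the two-notebook case:

```python
import tensorflow as tf

# Ask TensorFlow to allocate VRAM on demand instead of reserving
# nearly all of it at startup; must run before the GPU is initialized.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)
```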

For now, though, this card is still very useful to me: my dataset and model are not finalized yet and I'm still exploring, so a rented GPU is genuinely inconvenient, partly because the data keeps changing and has to be re-uploaded, partly because of environment setup. I have to say Colab is really convenient on the environment side; the preinstalled libraries are fairly complete and almost nothing extra needs to be installed.
Of course, the GPU wasn't the only reason I switched laptops. The 16-inch screen, for one, is genuinely great (I used a 14-inch for four years before this), and apart from the rather underwhelming keyboard, the machine is solid overall.

P.S. The laptop just had a price drop: it now goes for 7999 on JD.com and Lenovo's official site, versus 8499 in early February. I clearly bought it too early (ouch).

=====================================
[Update]
For free GPUs, Kaggle is my top recommendation: you get a P100 16GB for at least 30 hours a week, several times faster than the Tesla T4 that Colab assigns for free. And steer clear of Colab Pro (80-odd yuan a month, topped up through Taobao): the GPU it assigns (a Tesla V100 SXM2) is, even at the highest tier, not much better than the P100, and subjectively it even feels a bit weaker. On top of that there is a monthly time (compute) quota, and yes, every month (running the highest tier, I never measured it carefully, but I doubt you even get 12 hours; extremely poor value for money).
Apart from not being able to change the Python version and the somewhat awkward handling of output files, Kaggle is really generous.
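Whichever platform you end up on, it is worth checking which card and how much VRAM you were actually assigned before a long run. A quick check from a Colab or Kaggle notebook cell (standard nvidia-smi and TensorFlow calls):

```python
# In a notebook cell, the shell command shows the card name and total VRAM:
# !nvidia-smi --query-gpu=name,memory.total --format=csv

import tensorflow as tf
print(tf.config.list_physical_devices("GPU"))  # confirms TensorFlow can see it
```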
