Deep Learning: Keras & TensorFlow

Contents

 

Deep Learning Theory

Keras

TensorFlow 2

Key Notes

The Overfitting Problem

Conclusions

Notes on Practical TensorFlow Serving Usage

 

 


Goals:

  1. Understand the overall framework and workflow: DNN architectures, loss functions, evaluation metrics, etc.
  2. Build mainstream models such as RNN/LSTM/WDL/GAN
  3. Run the models end to end on real data

 

Deep Learning Theory

Deep Learning with Python

https://github.com/fchollet/deep-learning-with-python-notebooks

https://tanthiamhuat.files.wordpress.com/2018/03/deeplearningwithpython.pdf

Neural Networks and Deep Learning

https://nndl.github.io/

https://zhuanlan.zhihu.com/p/58144032

https://github.com/nndl/nndl.github.io

https://github.com/MichalDanielDobrzanski/DeepLearningPython

Dive Into Deep Learning

https://github.com/TrickyGo/Dive-into-DL-TensorFlow2.0

https://courses.d2l.ai/zh-v2/

https://tangshusen.me/Dive-into-DL-PyTorch/#/

AI Algorithm Engineer

http://huaxiaozhuan.com/

 


Keras

https://keras.io/zh/

https://keras-cn.readthedocs.io/en/latest/

# Keras documentation (Chinese)
https://link.zhihu.com/?target=https%3A//keras.io/zh/
https://www.tensorflow.org/guide/keras/sequential_model
https://keras.io/getting_started/

https://juejin.cn/post/6844903570358140935
    

 

TensorFlow 2

Official beginner tutorials

https://www.tensorflow.org/tutorials/quickstart/beginner

https://www.tensorflow.org/guide

 

50 real exercises: a one-article introduction to TensorFlow 2.x

https://zhuanlan.zhihu.com/p/111071013 

# TensorFlow 2 quick tutorial, a must-have for beginners

https://link.zhihu.com/?target=https%3A//www.cnblogs.com/shiyanlou/p/11752002.html

# TensorFlow Chinese community - homepage

https://link.zhihu.com/?target=http%3A//www.tensorfly.cn/

 

# GitHub resources

https://github.com/tensorflow/tensorflow/
    
Official TensorFlow tutorials (the final recommended resources from GitHub)

https://www.tensorflow.org/responsible_ai

https://www.tensorflow.org/resources/learn-ml/basics-of-machine-learning

https://learning.oreilly.com/p/register/

Coursera

https://www.coursera.org/learn/getting-started-with-tensor-flow2 (took 26 hours over the Chinese New Year holiday)


Key Notes

The Overfitting Problem

https://www.tensorflow.org/tutorials/keras/overfit_and_underfit

What matters most in deep learning is not a model's ability to fit the training data, but its ability to generalize.

In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity".

Intuitively, a model with more parameters has more "memorization capacity" and can therefore easily learn a perfect dictionary-like mapping between training samples and their targets. Such a mapping has no generalization power, so it is useless for making predictions on previously unseen data.

Always keep this in mind: deep learning models tend to be good at fitting the training data, but the real challenge is generalization, not fitting.

 

To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss.
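As a rough sketch of what "capacity" means in parameter terms, the hypothetical helper below counts the learnable parameters of a fully connected network; the layer sizes are illustrative, not from the text:

```python
def dense_param_count(layer_sizes):
    """Count learnable parameters in a stack of Dense layers.

    A Dense layer with n_in inputs and n_out units has
    n_in * n_out weights plus n_out biases.
    """
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out
    return total

# A small baseline versus a larger variant (hypothetical sizes):
small = dense_param_count([784, 16, 10])        # 12,730 parameters
large = dense_param_count([784, 512, 512, 10])  # 669,706 parameters
print(small, large)
```

Starting from the small variant and growing it while watching validation loss follows the advice above: capacity is added only until the returns diminish.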

 

Dropout prevents the model from relying too heavily on any individual node, keeping as many nodes as possible active during training.

It is probably the most effective and most widely used way to combat overfitting.

Combining several techniques that suppress overfitting is often more effective than any single one.
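A minimal numpy sketch of the idea behind (inverted) dropout, independent of Keras's own `Dropout` layer; the rate and input shape are illustrative:

```python
import numpy as np

def inverted_dropout(activations, rate, rng):
    """Zero out a fraction `rate` of units and scale the survivors by
    1/(1-rate) so the expected activation is unchanged.
    At inference time the layer is simply the identity."""
    keep_prob = 1.0 - rate
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

rng = np.random.default_rng(0)
x = np.ones((4, 8))
y = inverted_dropout(x, rate=0.5, rng=rng)
print(y)  # entries are either 0.0 (dropped) or 2.0 (kept and rescaled)
```

Because a different random subset of nodes is silenced on every batch, no single node can dominate, which is exactly the effect described above.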

Conclusions

To recap: here are the most common ways to prevent overfitting in neural networks:

  • Get more training data.
  • Reduce the capacity of the network.
  • Add weight regularization.
  • Add dropout.

Two important approaches not covered in this guide are:

  • data augmentation
  • batch normalization
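For instance, weight regularization can be sketched as an L2 penalty added to the loss; the coefficient `lam` and the toy data below are illustrative choices, not from the guide:

```python
import numpy as np

def mse_with_l2(y_true, y_pred, weights, lam=1e-3):
    """Mean squared error plus an L2 weight penalty.

    The penalty lam * sum(w**2) pushes weights toward small values,
    which limits the effective capacity of the network.
    """
    mse = np.mean((y_true - y_pred) ** 2)
    l2 = lam * sum(np.sum(w ** 2) for w in weights)
    return mse + l2

w = [np.array([[1.0, -2.0], [0.5, 0.0]])]
y_true = np.array([1.0, 0.0])
y_pred = np.array([0.5, 0.5])
print(mse_with_l2(y_true, y_pred, w, lam=0.01))  # 0.25 + 0.0525 = 0.3025
```

In Keras the same effect is obtained by passing a `kernel_regularizer` to a layer rather than computing the penalty by hand.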

 

 

Notes on Practical TensorFlow Serving Usage

TF Serving + Flask one-click automated deployment

https://cloud.tencent.com/developer/article/1606062
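As a sketch of the TensorFlow Serving REST interface that such a Flask proxy builds on, the snippet below only constructs the predict URL and JSON payload; the model name `my_model` and the input values are placeholder assumptions:

```python
import json

# TensorFlow Serving's REST API expects a POST to
#   http://<host>:8501/v1/models/<model_name>:predict
# with a JSON body of the form {"instances": [...]}.
MODEL_NAME = "my_model"  # placeholder; use your deployed model's name
url = f"http://localhost:8501/v1/models/{MODEL_NAME}:predict"
payload = json.dumps({"instances": [[1.0, 2.0, 5.0]]})

print(url)
print(payload)
# The request could then be sent with urllib.request, or wrapped in a
# Flask view that forwards client input to the serving container.
```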

 
