A Summary of Deep Learning Optimizers


Input code:

import tensorflow as tf
help(tf.train)  # help() prints directly and returns None; wrapping it in print() just adds a stray "None" at the end of the output

The help output lists 77 entries under tf.train. Not all of them are optimizers — roughly ten are (the *Optimizer classes at the top of the list); the rest are learning-rate schedules, gradient utilities, and session/checkpoint machinery. I'll sort out the specific differences later.




/usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.6 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.7
  return f(*args, **kwds)
Help on module tensorflow.python.training.training in tensorflow.python.training:

NAME
    tensorflow.python.training.training - Support for training models.

DESCRIPTION
    See the @{$python/train} guide.
    
    @@Optimizer
    @@GradientDescentOptimizer
    @@AdadeltaOptimizer
    @@AdagradOptimizer
    @@AdagradDAOptimizer
    @@MomentumOptimizer
    @@AdamOptimizer
    @@FtrlOptimizer
    @@ProximalGradientDescentOptimizer
    @@ProximalAdagradOptimizer
    @@RMSPropOptimizer
    @@gradients
    @@AggregationMethod
    @@stop_gradient
    @@hessians
    @@clip_by_value
    @@clip_by_norm
    @@clip_by_average_norm
    @@clip_by_global_norm
    @@global_norm
    @@cosine_decay
    @@cosine_decay_restarts
    @@linear_cosine_decay
    @@noisy_linear_cosine_decay
    @@exponential_decay
    @@inverse_time_decay
    @@natural_exp_decay
    @@piecewise_constant
    @@polynomial_decay
    @@ExponentialMovingAverage
    @@Coordinator
    @@QueueRunner
    @@LooperThread
    @@add_queue_runner
    @@start_queue_runners
    @@Server
    @@Supervisor
    @@SessionManager
    @@ClusterSpec
    @@replica_device_setter
    @@MonitoredTrainingSession
    @@MonitoredSession
    @@SingularMonitoredSession
    @@Scaffold
    @@SessionCreator
    @@ChiefSessionCreator
    @@WorkerSessionCreator
    @@summary_iterator
    @@SessionRunHook
    @@SessionRunArgs
    @@SessionRunContext
    @@SessionRunValues
    @@LoggingTensorHook
    @@StopAtStepHook
    @@CheckpointSaverHook
    @@CheckpointSaverListener
    @@NewCheckpointReader
    @@StepCounterHook
    @@NanLossDuringTrainingError
    @@NanTensorHook
    @@SummarySaverHook
    @@GlobalStepWaiterHook
    @@FinalOpsHook
    @@FeedFnHook
    @@ProfilerHook
    @@SecondOrStepTimer
    @@global_step
    @@basic_train_loop
    @@get_global_step
    @@get_or_create_global_step
    @@create_global_step
    @@assert_global_step
    @@write_graph
    @@load_checkpoint
    @@load_variable
    @@list_variables
    @@init_from_checkpoint

FILE
    /usr/local/lib/python3.7/site-packages/tensorflow/python/training/training.py


None
[Finished in 3.6s]
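Alongside the optimizers, the list above includes several learning-rate schedules (cosine_decay, exponential_decay, and so on). As one example, exponential_decay follows a simple documented formula; here is a plain-Python sketch of it (no TensorFlow needed — the function name mirrors tf.train.exponential_decay, but this is my own illustrative reimplementation):

```python
def exponential_decay(learning_rate, global_step, decay_steps, decay_rate,
                      staircase=False):
    # Documented formula: lr * decay_rate ** (global_step / decay_steps).
    # With staircase=True the exponent is truncated to an integer, so the
    # rate drops in discrete jumps instead of decaying smoothly.
    exponent = global_step / decay_steps
    if staircase:
        exponent = global_step // decay_steps
    return learning_rate * decay_rate ** exponent

# After one full decay period the rate has been multiplied by decay_rate once.
print(exponential_decay(0.1, global_step=100, decay_steps=100, decay_rate=0.96))
```

The other schedules in the list (inverse_time_decay, polynomial_decay, ...) follow the same pattern: a closed-form function of the global step that you feed into an optimizer's learning_rate argument.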

Commonly used optimizers:

GradientDescentOptimizer
AdamOptimizer
MomentumOptimizer
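Until that full comparison is written, the core difference between these three can already be previewed from their update rules. Below is a minimal plain-Python sketch (no TensorFlow required); the function names and hyperparameter defaults are my own illustrative choices, following the standard formulas:

```python
# Plain-Python sketches of the update rules behind the three optimizers.
# Hyperparameter defaults here are illustrative, not TF's.

def sgd_step(w, grad, lr=0.1):
    # GradientDescentOptimizer: step straight down the gradient.
    return w - lr * grad

def momentum_step(w, velocity, grad, lr=0.1, momentum=0.9):
    # MomentumOptimizer (TF convention): accumulate a velocity first, then
    # step along it, so past gradients keep contributing to the update.
    velocity = momentum * velocity + grad
    return w - lr * velocity, velocity

def adam_step(w, m, v, t, grad, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    # AdamOptimizer: a momentum-like first moment m plus a second moment v
    # that rescales the step per parameter; t is the 1-based step count.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    m_hat = m / (1 - beta1 ** t)  # bias correction for zero-initialized moments
    v_hat = v / (1 - beta2 ** t)
    return w - lr * m_hat / (v_hat ** 0.5 + eps), m, v

# Demo: minimize f(w) = (w - 3)**2, whose gradient is 2 * (w - 3).
w = 0.0
for _ in range(100):
    w = sgd_step(w, 2 * (w - 3))
print(w)  # converges toward 3.0
```

In short: plain gradient descent uses only the current gradient, momentum smooths over past gradients to power through shallow valleys, and Adam additionally adapts the step size per parameter from the gradient's running second moment.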

A detailed comparison of these optimizers will follow in a later post.

 

(Originally published on the WeChat public account 《湾区人工智能》.)
