tflearn.models.dnn.DNN vs. tflearn.DNN

By running the following commands, the relationship between tflearn.models.dnn.DNN and tflearn.DNN becomes clear:

help(tflearn.models.dnn.DNN)

help(tflearn.DNN)
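
Before wading through the full help text, a quicker check is to compare the two names directly. This is a minimal sketch, assuming a standard tflearn installation; both names should point at the same class object:

import tflearn
import tflearn.models.dnn

# Identity check: both names should refer to the very same class object.
print(tflearn.DNN is tflearn.models.dnn.DNN)   # expected: True
print(tflearn.DNN.__module__)                  # expected: tflearn.models.dnn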


From the returned help text, it can be seen that tflearn.models.dnn.DNN and tflearn.DNN are one and the same:

class DNN(builtins.object)

 |  Deep Neural Network Model.

 | 

 |  Arguments:

 |      network: `Tensor`. Neural network to be used.

 |      tensorboard_verbose: `int`. Summary verbose level, it accepts different levels of tensorboard logs

 |      tensorboard_dir: `str`. Directory to store tensorboard logs.

 |      checkpoint_path: `str`. Path to store model checkpoints. If None, no model checkpoint will be saved. Default: None.

 |      best_checkpoint_path: `str`. Path to store the model when the validation rate reaches its

 |          highest point of the current training session and also is above best_val_accuracy. Default: None.

 |      max_checkpoints: `int` or None. Maximum amount of checkpoints. If None, no limit. Default: None.

 |      session: `Session`. A session for running ops. If None, a new one will

 |          be created. Note: When providing a session, variables must have been

 |          initialized already, otherwise an error will be raised.

 |      best_val_accuracy: `float` The minimum validation accuracy that needs to be

 |          achieved before a model weight's are saved to the best_checkpoint_path. This

 |          allows the user to skip early saves and also set a minimum save point when continuing

 |          to train a reloaded model. Default: 0.0.

 | 

 |  Attributes:

 |      trainer: `Trainer`. Handle model training.

 |      predictor: `Predictor`. Handle model prediction.

 |      session: `Session`. The current model session.

 | 

 |  Methods defined here:

 | 

 |  __init__(self, network, clip_gradients=5.0, tensorboard_verbose=0, tensorboard_dir='/tmp/tflearn_logs/', checkpoint_path=None, best_checkpoint_path=None, max_checkpoints=None, session=None, best_val_accuracy=0.0)

 |      Initialize self.  See help(type(self)) for accurate signature.

 | 

 |  evaluate(self, X, Y, batch_size=128)

 |      Evaluate.

 |     

 |      Evaluate model metric(s) on given samples.

 |     

 |      Arguments:

 |          X: array, `list` of array (if multiple inputs) or `dict`

 |              (with inputs layer name as keys). Data to feed to train

 |              model.

 |          Y: array, `list` of array (if multiple inputs) or `dict`

 |              (with estimators layer name as keys). Targets (Labels) to

 |              feed to train model. Usually set as the next element of a

 |              sequence, i.e. for x[0] => y[0] = x[1].

 |          batch_size: `int`. The batch size. Default: 128.

 |     

 |      Returns:

 |          The metric(s) score.

 | 

 |  fit(self, X_inputs, Y_targets, n_epoch=10, validation_set=None, show_metric=False, batch_size=None, shuffle=None, snapshot_epoch=True, snapshot_step=None, excl_trainops=None, validation_batch_size=None, run_id=None, callbacks=[])

 |      Fit.

 |     

 |      Train model, feeding X_inputs and Y_targets to the network.

 |     

 |      NOTE: When not feeding dicts, data assignations is made by

 |          input/estimator layers creation order (For example, the second

 |          input layer created will be feeded by the second value of

 |          X_inputs list).

 |     

 |      Examples:

 |          ```python

 |          model.fit(X, Y) # Single input and output

 |          model.fit({'input1': X}, {'output1': Y}) # Single input and output

 |          model.fit([X1, X2], Y) # Mutliple inputs, Single output

 |     

 |          # validate with X_val and [Y1_val, Y2_val]

 |          model.fit(X, [Y1, Y2], validation_set=(X_val, [Y1_val, Y2_val]))

 |          # 10% of training data used for validation

 |          model.fit(X, Y, validation_set=0.1)

 |          ```

 |     

 |      Arguments:

 |          X_inputs: array, `list` of array (if multiple inputs) or `dict`

 |              (with inputs layer name as keys). Data to feed to train

 |              model.

 |          Y_targets: array, `list` of array (if multiple inputs) or `dict`

 |              (with estimators layer name as keys). Targets (Labels) to

 |              feed to train model.

 |          n_epoch: `int`. Number of epoch to run. Default: None.

 |          validation_set: `tuple`. Represents data used for validation.

 |              `tuple` holds data and targets (provided as same type as

 |              X_inputs and Y_targets). Additionally, it also accepts

 |              `float` (<1) to performs a data split over training data.

 |          show_metric: `bool`. Display or not accuracy at every step.

 |          batch_size: `int` or None. If `int`, overrides all network

 |              estimators 'batch_size' by this value.  Also overrides

 |              `validation_batch_size` if `int`, and if `validation_batch_size`

 |              is None.

 |          validation_batch_size: `int` or None. If `int`, overrides all network

 |              estimators 'validation_batch_size' by this value.

 |          shuffle: `bool` or None. If `bool`, overrides all network

 |              estimators 'shuffle' by this value.

 |          snapshot_epoch: `bool`. If True, it will snapshot model at the end

 |              of every epoch. (Snapshot a model will evaluate this model

 |              on validation set, as well as create a checkpoint if

 |              'checkpoint_path' specified).

 |          snapshot_step: `int` or None. If `int`, it will snapshot model

 |              every 'snapshot_step' steps.

 |          excl_trainops: `list` of `TrainOp`. A list of train ops to

 |              exclude from training process (TrainOps can be retrieve

 |              through `tf.get_collection_ref(tf.GraphKeys.TRAIN_OPS)`).

 |          run_id: `str`. Give a name for this run. (Useful for Tensorboard).

 |          callbacks: `Callback` or `list`. Custom callbacks to use in the

 |              training life cycle

 | 

 |  fit_batch(self, X_inputs, Y_targets)

 | 

 |  get_train_vars(self)

 | 

 |  get_weights(self, weight_tensor)

 |      Get Weights.

 |     

 |      Get a variable weights.

 |     

 |      Examples:

 |          ```

 |          dnn = DNNTrainer(...)

 |          w = dnn.get_weights(denselayer.W) # get a dense layer weights

 |          w = dnn.get_weights(convlayer.b) # get a conv layer biases

 |          ```

 |     

 |      Arguments:

 |          weight_tensor: `Tensor`. A Variable.

 |     

 |      Returns:

 |          `np.array`. The provided variable weights.

 | 

 |  load(self, model_file, weights_only=False, **optargs)

 |      Load.

 |     

 |      Restore model weights.

 |     

 |      Arguments:

 |          model_file: `str`. Model path.

 |          weights_only: `bool`. If True, only weights will be restored (

 |              and not intermediate variable, such as step counter, moving

 |              averages...). Note that if you are using batch normalization,

 |              averages will not be restored as well.

 |          optargs: optional extra arguments for trainer.restore (see helpers/trainer.py)

 |                   These optional arguments may be used to limit the scope of

 |                   variables restored, and to control whether a new session is

 |                   created for the restored variables.

 | 

 |  predict(self, X)

 |      Predict.

 |     

 |      Model prediction for given input data.

 |     

 |      Arguments:

 |          X: array, `list` of array (if multiple inputs) or `dict`

 |              (with inputs layer name as keys). Data to feed for prediction.

 |     

 |      Returns:

 |          array or `list` of array. The predicted probabilities.

 | 

 |  predict_label(self, X)

 |      Predict Label.

 |     

 |      Predict class labels for input X.

 |     

 |      Arguments:

 |          X: array, `list` of array (if multiple inputs) or `dict`

 |              (with inputs layer name as keys). Data to feed for prediction.

 |     

 |      Returns:

 |          array or `list` of array. The predicted classes index array, sorted

 |          by descendant probability value.

 | 

 |  save(self, model_file)

 |      Save.

 |     

 |      Save model weights.

 |     

 |      Arguments:

 |          model_file: `str`. Model path.

 | 

 |  set_weights(self, tensor, weights)

 |      Set Weights.

 |     

 |      Assign a tensor variable a given value.

 |     

 |      Arguments:

 |          tensor: `Tensor`. The tensor variable to assign value.

 |          weights: The value to be assigned.

 | 

 |  ----------------------------------------------------------------------

 |  Data descriptors defined here:

 | 

 |  __dict__

 |      dictionary for instance variables (if defined)

 | 

 |  __weakref__

 |      list of weak references to the object (if defined)

 

 

NAME

    tflearn.models

 

PACKAGE CONTENTS

    dnn

    generator

 

DATA

    absolute_import = _Feature((2, 5, 0, 'alpha', 1), (3, 0, 0, 'alpha', 0...
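
Since the two names refer to the same class, the methods documented above (fit, predict, predict_label, save, load) can be used through either one. Below is a minimal end-to-end sketch, assuming the usual tflearn layer helpers (input_data, fully_connected, regression) and using randomly generated dummy data; the model path 'my_model.tfl' is just a placeholder:

import numpy as np
import tflearn

# Dummy data for illustration only: 100 samples, 784 features, 10 one-hot classes.
X = np.random.rand(100, 784).astype('float32')
Y = np.eye(10)[np.random.randint(0, 10, 100)].astype('float32')

# Build a tiny network with tflearn's layer helpers.
net = tflearn.input_data(shape=[None, 784])
net = tflearn.fully_connected(net, 64, activation='relu')
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net)

# tflearn.DNN and tflearn.models.dnn.DNN are interchangeable here.
model = tflearn.DNN(net, tensorboard_verbose=0)

model.fit(X, Y, n_epoch=2, validation_set=0.1, show_metric=True, batch_size=32)

probs  = model.predict(X[:5])        # predicted probabilities
labels = model.predict_label(X[:5])  # class indices sorted by descending probability

model.save('my_model.tfl')           # placeholder path
model.load('my_model.tfl')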


This is a bit puzzling: since both expressions ultimately resolve to the same class, why are there two ways to refer to it?
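
One plausible explanation: tflearn.models.dnn is the module where the class is actually defined, while tflearn.DNN is a convenience re-export from the package's top-level __init__.py, a common Python pattern that shortens import paths without duplicating code. A rough sketch of how such a re-export works (illustrative only, not a verbatim copy of tflearn's source):

# tflearn/models/dnn.py -- the class is defined here
class DNN(object):
    """Deep Neural Network Model."""
    ...

# tflearn/__init__.py -- re-exports it at the package top level
from .models.dnn import DNN

# Users can then write either of the following; both resolve to the same class:
import tflearn
model = tflearn.DNN(network)             # short form
model = tflearn.models.dnn.DNN(network)  # fully qualified form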

 
