Building Neural Networks with TensorFlow

I. Steps for Building a Neural Network with tf.keras

The six-step method (sketched in code after the list):

  1. import — import the required modules
  2. train, test — prepare the training and test data
  3. model = tf.keras.models.Sequential — define the network layer by layer
  4. model.compile — choose the optimizer, loss, and metrics
  5. model.fit — run training
  6. model.summary — print the structure and parameter counts
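
Putting the six steps together, here is a minimal runnable sketch; the synthetic data and the 3-unit layer are illustrative assumptions, not part of the original recipe:

import tensorflow as tf                                   # step 1: import
import numpy as np

x_train = np.random.rand(100, 4).astype("float32")        # step 2: train, test
y_train = np.random.randint(0, 3, size=(100,))            # (synthetic data)

model = tf.keras.models.Sequential([                      # step 3: define the network
    tf.keras.layers.Dense(3, activation='softmax')
])

model.compile(optimizer='adam',                           # step 4: configure training
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['sparse_categorical_accuracy'])

model.fit(x_train, y_train, batch_size=32, epochs=5)      # step 5: train
model.summary()                                           # step 6: print the structure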

models.Sequential()

model = tf.keras.models.Sequential([<layer 1>, <layer 2>, ...])  # describes the network layer by layer
Example layer types (a combined usage sketch follows this list):

  1. Flatten layer

tf.keras.layers.Flatten()

  2. Fully connected (Dense) layer

tf.keras.layers.Dense(units, activation="<activation>", kernel_regularizer=<regularizer>)

activation (given as a string) can be: relu, softmax, sigmoid, tanh
kernel_regularizer can be: tf.keras.regularizers.l1(), tf.keras.regularizers.l2()

  3. Convolutional layer

tf.keras.layers.Conv2D(filters=<number of kernels>, kernel_size=<kernel size>, strides=<stride>, padding="valid" or "same")

  4. LSTM layer: tf.keras.layers.LSTM()
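
For concreteness, here is a hedged sketch wiring these layer types into two tiny models; the filter counts, kernel sizes, and input shapes are illustrative assumptions:

import tensorflow as tf

# A small CNN combining Conv2D, Flatten, and Dense; all sizes illustrative.
cnn = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(filters=6, kernel_size=5, strides=1, padding='same',
                           activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax',
                          kernel_regularizer=tf.keras.regularizers.l2())
])

# An LSTM layer expects 3-D input of shape (batch, time steps, features).
rnn = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(10, 8)),
    tf.keras.layers.Dense(1)
])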

model.compile()

model.compile(optimizer=<optimizer>,
              loss=<loss function>,
              metrics=["<metric>"])

(A combined example follows the three option lists below.)

optimizer can be (older TF versions used lr= instead of learning_rate=):
'sgd' or tf.keras.optimizers.SGD(learning_rate=<lr>, momentum=<momentum>)
'adagrad' or tf.keras.optimizers.Adagrad(learning_rate=<lr>)
'adadelta' or tf.keras.optimizers.Adadelta(learning_rate=<lr>)
'adam' or tf.keras.optimizers.Adam(learning_rate=<lr>, beta_1=0.9, beta_2=0.999)

loss can be:
'mse' or tf.keras.losses.MeanSquaredError()
'sparse_categorical_crossentropy' or tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)
(set from_logits=True when the last layer outputs raw logits, i.e. has no softmax)

metrics can be (y_ denotes the label, y the network's prediction):
'accuracy': y_ and y are both plain numbers, e.g. y_=[1], y=[1]
'categorical_accuracy': y_ and y are both one-hot / probability distributions, e.g. y_=[0,1,0], y=[0.256,0.695,0.048]
'sparse_categorical_accuracy': y_ is a plain number, y is a probability distribution, e.g. y_=[1], y=[0.256,0.695,0.048]
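
A sketch combining the three option groups; the model and hyperparameter values are illustrative. String and object forms are interchangeable, but the object form exposes hyperparameters:

import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(3, activation='softmax')
])

# Object-form optimizer and loss; learning-rate and momentum are illustrative.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['sparse_categorical_accuracy'])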

model.fit()

model.fit(<training features>, <training labels>,
          batch_size=<batch size>, epochs=<number of epochs>,
          validation_data=(<validation features>, <validation labels>),
          validation_split=<fraction of the training set to hold out for validation>,
          validation_freq=<validate once every this many epochs>)

Use either validation_data or validation_split; if both are given, validation_data takes precedence.
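
A hedged fit example on synthetic data, showing validation_split and validation_freq; all values are illustrative:

import tensorflow as tf
import numpy as np

x_train = np.random.rand(120, 4).astype("float32")   # illustrative data
y_train = np.random.randint(0, 3, size=(120,))

model = tf.keras.models.Sequential([tf.keras.layers.Dense(3, activation='softmax')])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['sparse_categorical_accuracy'])

# Hold out 20% of the training data for validation, evaluated every 5 epochs.
model.fit(x_train, y_train, batch_size=32, epochs=20,
          validation_split=0.2, validation_freq=5)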

model.summary()

Prints each layer, its output shape, and the trainable/non-trainable parameter counts (see the output in Section III).

II. Custom Models (subclassing Model)

from tensorflow.keras import Model

class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        # define the network's building blocks (layers) here
    def call(self, x):
        # invoke the building blocks to run the forward pass
        return y
model = MyModel()

Example:

import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense

class IrisModel(Model):
    def __init__(self):
        super(IrisModel, self).__init__()
        # single fully connected layer: 4 iris features -> 3 class probabilities
        self.d1 = Dense(3, activation='softmax', kernel_regularizer=tf.keras.regularizers.l2())

    def call(self, x):
        y = self.d1(x)
        return y

model = IrisModel()
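
To train the subclassed model, here is a hedged sketch using scikit-learn's iris dataset; the SGD learning rate, epoch count, and random seed are illustrative choices, not from the original:

import numpy as np
import tensorflow as tf
from sklearn import datasets
# IrisModel as defined above

x_data = datasets.load_iris().data      # 150 samples, 4 features
y_data = datasets.load_iris().target    # integer labels 0, 1, 2

# Shuffle so the class-sorted iris data splits evenly for validation.
np.random.seed(116)
idx = np.random.permutation(len(x_data))
x_data, y_data = x_data[idx], y_data[idx]

model = IrisModel()
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['sparse_categorical_accuracy'])
model.fit(x_data, y_data, batch_size=32, epochs=100,
          validation_split=0.2, validation_freq=20)
model.summary()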

III. Handwritten Digit Classification with tf.keras

import tensorflow as tf

# Load MNIST and scale pixel values from [0, 255] to [0, 1]
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Flatten each 28x28 image, then apply two fully connected layers
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['sparse_categorical_accuracy'])

# Use the test set for validation, evaluated after every epoch
model.fit(x_train, y_train, batch_size=32, epochs=5,
          validation_data=(x_test, y_test), validation_freq=1)
model.summary()

Running it produces output like this:

60000/60000 [==============================] - 5s 89us/sample - loss: 0.0455 - sparse_categorical_accuracy: 0.9861 - val_loss: 0.0806 - val_sparse_categorical_accuracy: 0.9752
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
flatten (Flatten)            multiple                  0         
_________________________________________________________________
dense (Dense)                multiple                  100480    
_________________________________________________________________
dense_1 (Dense)              multiple                  1290      
=================================================================
Total params: 101,770
Trainable params: 101,770
Non-trainable params: 0
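
The parameter counts follow from the Dense layer formula: parameters = inputs × units + units (one bias per unit). The first Dense layer sees the flattened 28 × 28 = 784 pixels, giving 784 × 128 + 128 = 100,480; the second gives 128 × 10 + 10 = 1,290; the total is 101,770, matching the summary. The Flatten layer only reshapes its input, so it has 0 parameters.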
