(16) TensorFlow: Implementing Fully Connected Layers

Fully Connected Layer Implementation

API Summary

  • Create a layer: layers.Dense(units, activation)
  • Weight matrix of the Dense layer: fc.kernel
  • Bias vector of the Dense layer: fc.bias
  • List of trainable parameters: fc.trainable_variables

Layer Implementation

  • layers.Dense(units, activation)
  • units sets the number of output nodes of the layer
  • activation sets the activation function
  • layers.Dense automatically infers the number of input nodes from the input data on the first call
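This deferred-build behavior can be observed directly — a minimal sketch, assuming TF 2.x with Keras: the layer reports built=False until its first call, at which point the input node count and kernel shape are fixed by the incoming data.

```python
import tensorflow as tf
from tensorflow.keras import layers

fc = layers.Dense(4, activation=tf.nn.relu)
print(fc.built)  # False: the input node count is still unknown
out = fc(tf.random.normal([2, 10]))  # first call infers 10 input nodes
print(fc.kernel.shape)  # (10, 4)
```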
import tensorflow as tf
from tensorflow.keras import layers
x = tf.random.normal([3,50])        # simulate 3 samples with 50 features each
fc = layers.Dense(2, activation=tf.nn.relu)  # Dense layer with 2 output nodes
h1 = fc(x)                          # forward pass; the first call builds the weights
print('h1', h1)
print('K', fc.kernel)               # weight matrix, shape (50, 2)
print('b', fc.bias)                 # bias vector, shape (2,)
print('v', fc.trainable_variables)  # [kernel, bias]
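Under the hood, this layer computes relu(X @ W + b). A small sanity check against the example above — a sketch, assuming TF 2.x — reproduces the layer output from fc.kernel and fc.bias by hand:

```python
import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal([3, 50])
fc = layers.Dense(2, activation=tf.nn.relu)
h1 = fc(x)
# Reproduce the layer's output manually: relu(x @ kernel + bias)
h_manual = tf.nn.relu(x @ fc.kernel + fc.bias)
print(float(tf.reduce_max(tf.abs(h1 - h_manual))))  # ~0.0
```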

Output:
h1 tf.Tensor(
[[0.95606685 1.1911137 ]
 [0.         0.        ]
 [0.14124475 0.04009145]], shape=(3, 2), dtype=float32)
K <tf.Variable 'dense/kernel:0' shape=(50, 2) dtype=float32, numpy=
array([[ 0.0455344 , -0.19482273],
       [-0.26869717,  0.02412117],
       [ 0.21953332, -0.03901473],
       ...,
       [ 0.31241155, -0.11074884],
       [ 0.02461901,  0.18401676]], dtype=float32)>
b <tf.Variable 'dense/bias:0' shape=(2,) dtype=float32, numpy=array([0., 0.], dtype=float32)>
v [<tf.Variable 'dense/kernel:0' shape=(50, 2) dtype=float32, numpy=
array([[ 0.0455344 , -0.19482273],
       [-0.26869717,  0.02412117],
       ...,
       [ 0.02461901,  0.18401676]], dtype=float32)>, <tf.Variable 'dense/bias:0' shape=(2,) dtype=float32, numpy=array([0., 0.], dtype=float32)>]
