Single-Layer Perceptron in Python (TensorFlow): Implementing a Single-Layer / Multi-Layer Perceptron with Your Own Data

I finally got it working: building and training a single-layer perceptron with TensorFlow, NumPy, and matplotlib, and minimizing the network's cost/loss. The data is supplied directly as NumPy arrays rather than loaded from MNIST.

Here is the code:

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

learning_rate = 0.0008
training_epochs = 2000
display_step = 50

# Training data as NumPy arrays; they are fed into the graph through the placeholders below
inputX = np.array([[2, 3],
                   [1, 3]])
inputY = np.array([[2, 3],
                   [1, 3]])

# Placeholders for the inputs and the targets
x = tf.placeholder(tf.float32, [None, 2])
y_ = tf.placeholder(tf.float32, [None, 2])

# Weights and bias of the single layer, initialized to zero
W = tf.Variable([[0.0, 0.0], [0.0, 0.0]])
b = tf.Variable([0.0, 0.0])

# Single layer: affine transformation followed by a softmax
layer1 = tf.add(tf.matmul(x, W), b)
y = tf.nn.softmax(layer1)

# Sum-of-squared-errors cost, minimized with plain gradient descent
cost = tf.reduce_sum(tf.pow(y_ - y, 2))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)

init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

avg_set = []
epoch_set = []

for i in range(training_epochs):
    sess.run(optimizer, feed_dict={x: inputX, y_: inputY})

    # Log training progress every display_step epochs
    if i % display_step == 0:
        cc = sess.run(cost, feed_dict={x: inputX, y_: inputY})
        # Check what the network predicts when given the input data
        print(sess.run(y, feed_dict={x: inputX}))
        print("Training step:", '%04d' % (i), "cost=", "{:.9f}".format(cc))
        avg_set.append(cc)
        epoch_set.append(i + 1)

print("Optimization Finished!")
training_cost = sess.run(cost, feed_dict={x: inputX, y_: inputY})
print("Training cost = ", training_cost, "\nW=", sess.run(W),
      "\nb=", sess.run(b))

# Plot the cost recorded during training
plt.plot(epoch_set, avg_set, 'o', label='SLP Training phase')
plt.ylabel('cost')
plt.xlabel('epochs')
plt.legend()
plt.show()
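As a quick sanity check (not part of the original listing), the same session can be reused after training to run the network on a new input; the point [[1, 2]] here is just a made-up example:

# Query the trained single-layer network on an arbitrary new input point
new_point = np.array([[1, 2]])
print("Prediction for", new_point, ":", sess.run(y, feed_dict={x: new_point}))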

After that, by adding a hidden layer, the same thing can also be implemented as a multi-layer perceptron, as sketched below.
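A minimal sketch of that multi-layer version, assuming one hidden layer of 4 sigmoid units (the hidden-layer size, the activation, and the names n_hidden, W1, b1, W2, b2, hidden are my own choices, not from the original post). The data, cost, optimizer, and training loop are built exactly as in the single-layer version above:

# Multi-layer perceptron: input (2) -> hidden (n_hidden, sigmoid) -> output (2, softmax)
n_hidden = 4  # assumed hidden-layer size, not specified in the original post

# Hidden-layer weights are initialized randomly: with all-zero weights every
# hidden unit would receive identical gradients and stay identical.
W1 = tf.Variable(tf.random_normal([2, n_hidden]))
b1 = tf.Variable(tf.zeros([n_hidden]))
hidden = tf.nn.sigmoid(tf.add(tf.matmul(x, W1), b1))

W2 = tf.Variable(tf.random_normal([n_hidden, 2]))
b2 = tf.Variable(tf.zeros([2]))
y = tf.nn.softmax(tf.add(tf.matmul(hidden, W2), b2))

# Cost and optimizer are defined as before, e.g.:
# cost = tf.reduce_sum(tf.pow(y_ - y, 2))
# optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)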
