Jupyter raises TypeError: 'method' object is not subscriptable (solved)

import tensorflow as tf
import numpy as np
from tensorflow import keras
model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
xs = np.array([-1.0,  0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)
model.fit(xs, ys, epochs=5)
model.predict[(10.0)]   # <- this is the line that raises the TypeError

The first 8 lines all run normally; running line 9 (the predict call) keeps throwing the error, and changing the brackets didn't fix it. What is the cause?

The square brackets after predict tell Python to index (subscript) the bound method itself instead of calling it, which is why it reports 'method' object is not subscriptable. Changing the last line to model.predict(np.array([10.0])) -- calling the method with parentheses and passing a NumPy array -- solves the problem.
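For illustration, here is a minimal sketch of the broken call versus the fixed one (assuming the model defined above):

# Wrong: square brackets try to subscript the bound method itself,
# raising "TypeError: 'method' object is not subscriptable"
# model.predict[(10.0)]

# Right: call the method with parentheses and pass a NumPy array
prediction = model.predict(np.array([10.0]))
print(prediction)   # roughly 2*10 - 1 = 19 once the model has been trained long enough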

The following is for reference:

# A neural network with only one layer; the layer has 1 neuron and the input shape is 1
model=tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
……

# Use the model
print(model.predict(([10.0])))
model.predict() expects its first argument to be a NumPy array. You supplied a plain Python list, which does not have the shape attribute that a NumPy array has.
Fix: print(model.predict(np.array([10.0])))
Copyright notice: the referenced post is the blogger's original article, licensed under the CC 4.0 BY-SA agreement; please include a link to the original source and this notice when reposting.

Original link: https://blog.csdn.net/weixin_43482279/article/details/105568594

Hello World: A Starter Program for Neural Network Deep Learning

Like every first app you should start with something super simple that shows the overall scaffolding for how your code works.

In the case of creating neural networks, the sample I like to use is one where it learns the relationship between two numbers. So, for example, if you were writing code for a function like this, you already know the 'rules' -- the mapping between x and y:


float hw_function(float x){
    float y = (2 * x) - 1;
    return y;
}
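For readers following along in Python (the notebook's language), the same rule can be sketched as a plain function; hw_function here simply mirrors the C-style snippet above:

def hw_function(x):
    # the known rule: y = 2x - 1
    return (2 * x) - 1

print(hw_function(10.0))   # 19.0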

So how would you train a neural network to do the equivalent task? Using data! By feeding it with a set of Xs, and a set of Ys, it should be able to figure out the relationship between them.

This is obviously a very different paradigm than what you might be used to, so let's step through it piece by piece.


Importing TensorFlow

Let's start with our imports. Here we are importing TensorFlow and calling it tf for ease of use.

We then import a library called numpy, which helps us to represent our data as lists easily and quickly.

The framework for defining a neural network as a set of Sequential layers is called keras, so we import that too.

[1]:

import tensorflow as tf
import numpy as np
from tensorflow import keras

Define and compile the neural network

Next we will create the simplest possible neural network. It has 1 layer, and that layer has 1 neuron, and the input shape to it is just 1 value.

[2]:

model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
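As a quick sanity check (not part of the original notebook), you can print a summary of the model; a Dense layer with one neuron and a single-value input has exactly two trainable parameters, one weight and one bias:

model.summary()   # should report 2 trainable parameters (1 weight + 1 bias)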

Now we compile our Neural Network. When we do so, we have to specify 2 functions, a loss and an optimizer.

If you've seen lots of math for machine learning, here's where it's usually used, but in this case it's nicely encapsulated in functions for you. But what happens here -- let's explain...

We know that in our function, the relationship between the numbers is y=2x-1.

When the computer is trying to 'learn' that, it makes a guess...maybe y=10x+10. The LOSS function measures the guessed answers against the known correct answers and measures how well or how badly it did.

It then uses the OPTIMIZER function to make another guess. Based on how the loss function went, it will try to minimize the loss. At that point maybe it will come up with something like y=5x+5, which, while still pretty bad, is closer to the correct result (i.e. the loss is lower).

It will repeat this for the number of EPOCHS which you will see shortly. But first, here's how we tell it to use 'MEAN SQUARED ERROR' for the loss and 'STOCHASTIC GRADIENT DESCENT' for the optimizer. You don't need to understand the math for these yet, but you can see that they work! :)

Over time you will learn the different and appropriate loss and optimizer functions for different scenarios.


[3]:

model.compile(optimizer='sgd', loss='mean_squared_error')
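To make the guess/measure/adjust loop concrete, here is a minimal hand-rolled sketch of what the loss and optimizer are doing; it is an illustrative approximation in plain NumPy, not what Keras actually runs internally:

import numpy as np

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0])

w, b = 0.0, 0.0          # the initial "guess": y = 0x + 0
lr = 0.01                # learning rate (an assumed value)
for step in range(500):
    guess = w * xs + b
    error = guess - ys
    # LOSS: mean squared error between the guess and the known answers
    loss = (error ** 2).mean()
    # OPTIMIZER step: move w and b in the direction that lowers the loss
    w -= lr * (2 * error * xs).mean()
    b -= lr * (2 * error).mean()

print(loss, w, b)        # loss near 0, w close to 2, b close to -1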

Providing the Data

Next up we'll feed in some data. In this case we are taking 6 Xs and 6 Ys. You can see that the relationship between these is that y=2x-1, so where X = -1, Y = -3, and so on.

A Python library called 'NumPy' provides lots of array-type data structures that are a de facto standard way of doing this. We declare that we want to use these by specifying the values as an np.array().


[4]:

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)

ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)
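As a quick check (not in the original notebook), the declared data really does follow y = 2x - 1:

print(np.allclose(ys, 2 * xs - 1))   # True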

Training the Neural Network

The process of training the neural network, where it 'learns' the relationship between the Xs and Ys, is in the model.fit call. This is where it will go through the loop we spoke about above: making a guess, measuring how good or bad it is (aka the loss), using the optimizer to make another guess, and so on. It will do this for the number of epochs you specify. When you run this code, you'll see the loss on the right hand side.


[5]:

model.fit(xs, ys, epochs=500)

Epoch 1/500
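If you want to see the loss falling across epochs rather than scrolling through the log, model.fit returns a History object whose history dictionary records the loss per epoch (the variable name below is illustrative):

history = model.fit(xs, ys, epochs=500, verbose=0)
print(history.history['loss'][0], history.history['loss'][-1])   # first vs. last epoch loss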

Ok, now you have a model that has been trained to learn the relationship between X and Y. You can use the model.predict method to have it figure out the Y for a previously unknown X. So, for example, if X = 10, what do you think Y will be? Take a guess before you run this code:


[6]:

print(model.predict([10.0]))

[[18.985321]]

You might have thought 19, right? But it ended up being a little under. Why do you think that is?

Remember that neural networks deal with probabilities, so given the data that we fed the NN with, it calculated that there is a very high probability that the relationship between X and Y is Y=2X-1, but with only 6 data points we can't know for sure. As a result, the result for 10 is very close to 19, but not necessarily 19.

As you work with neural networks, you'll see this pattern recurring. You will almost always deal with probabilities, not certainties, and will do a little bit of coding to figure out what the result is based on the probabilities, particularly when it comes to classification.
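You can also inspect what the single neuron actually learned. For a Dense layer, get_weights() returns the kernel and the bias, which after training should be close to 2 and -1; this check is not in the original notebook:

weight, bias = model.layers[0].get_weights()
print(weight, bias)   # roughly [[2.0]] and [-1.0]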

