Predicting a Sine Function with TensorFlow

《TensorFlow实战Google深度学习框架》, Chapter 8
The last sample program in that chapter predicts the sin function and plots the resulting curve.

However, the book's code kept failing, and Python 2 and Python 3 reported different errors. Repeated tweaking did not help; the problems were a dimension mismatch in the matrix multiplication and failed imports.
A runnable version was found on CSDN (copied here for future reference).

Original link: http://blog.csdn.net/u012416045/article/details/78223798
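One common source of the dimension errors is the input shape: tf.nn.dynamic_rnn expects a 3-D tensor of shape [batch, time, depth], and the generate_data function below therefore wraps each window of TIMESTEPS points as a length-1 time axis. The following shape check is a minimal sketch (assuming TensorFlow 1.x with tf.contrib available), not part of the book's code:

import numpy as np
import tensorflow as tf

TIMESTEPS = 10
seq = np.sin(np.linspace(0, 10, 1000, dtype=np.float32))
# Each sample is wrapped as [seq[i:i+TIMESTEPS]], giving shape (1, TIMESTEPS)
X = np.array([[seq[i:i + TIMESTEPS]] for i in range(len(seq) - TIMESTEPS - 1)],
             dtype=np.float32)
print(X.shape)  # (989, 1, 10): batch, time, depth

cell = tf.contrib.rnn.BasicLSTMCell(30)
outputs, _ = tf.nn.dynamic_rnn(cell, tf.constant(X), dtype=tf.float32)
print(outputs.shape)  # (989, 1, 30)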

The full code is as follows:

#coding:utf-8
import numpy as np
import tensorflow as tf
from tensorflow.contrib import rnn
# Load matplotlib so the predicted sin curve can be plotted
import matplotlib as mpl
from tensorflow.contrib.learn.python.learn.estimators.estimator import SKCompat
mpl.use('Agg')
from matplotlib import pyplot as plt
learn = tf.contrib.learn
HIDDEN_SIZE = 30  # number of hidden units in each LSTM cell
NUM_LAYERS = 2  # number of LSTM layers
TIMESTEPS = 10  # truncation length of the RNN (number of input time steps)
TRAINING_STEPS = 10000  # number of training steps
BATCH_SIZE = 32  # batch size
TRAINING_EXAMPLES = 10000  # number of training examples
TESTING_EXAMPLES = 1000  # number of test examples
SAMPLE_GAP = 0.01  # sampling interval
# Function that generates the sine-wave training/test data
def generate_data(seq):
    X = []
    Y = []
    # Item i of the sequence together with the following TIMESTEPS-1 items forms the input;
    # item i+TIMESTEPS is the label, i.e. the previous TIMESTEPS points of the sin function
    # are used to predict the value at point i+TIMESTEPS
    for i in range(len(seq) - TIMESTEPS - 1):
        X.append([seq[i:i + TIMESTEPS]])
        Y.append([seq[i + TIMESTEPS]])
    return np.array(X, dtype=np.float32), np.array(Y, dtype=np.float32)
def LstmCell():
    lstm_cell = rnn.BasicLSTMCell(HIDDEN_SIZE,state_is_tuple=True)
    return lstm_cell
# Define the LSTM model (used as the Estimator's model_fn)
def lstm_model(X, y):
    cell = rnn.MultiRNNCell([LstmCell() for _ in range(NUM_LAYERS)])
    output, _ = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
    output = tf.reshape(output, [-1, HIDDEN_SIZE])
    # Linear regression via a fully connected layer with no activation function
    predictions = tf.contrib.layers.fully_connected(output, 1, activation_fn=None)
   
    # Reshape predictions and labels into the same flattened shape
    labels = tf.reshape(y, [-1])
    predictions = tf.reshape(predictions, [-1])
   
    loss = tf.losses.mean_squared_error(labels, predictions)
    train_op = tf.contrib.layers.optimize_loss(loss, tf.contrib.framework.get_global_step(),
                                             optimizer="Adagrad",
                                             learning_rate=0.1)
    return predictions, loss, train_op
# Training
# Wrap the model_fn defined above in an Estimator
regressor = SKCompat(learn.Estimator(model_fn=lstm_model, model_dir="Models/model_2"))
# Generate training and test data
test_start = TRAINING_EXAMPLES * SAMPLE_GAP
test_end = (TRAINING_EXAMPLES + TESTING_EXAMPLES) * SAMPLE_GAP
train_X, train_y = generate_data(np.sin(np.linspace(0, test_start, TRAINING_EXAMPLES, dtype=np.float32)))
test_X, test_y = generate_data(np.sin(np.linspace(test_start, test_end, TESTING_EXAMPLES, dtype=np.float32)))
# Fit the model
regressor.fit(train_X, train_y, batch_size=BATCH_SIZE, steps=TRAINING_STEPS)
# Compute predictions on the test set
predicted = [[pred] for pred in regressor.predict(test_X)]
# Compute the root mean squared error (RMSE)
rmse = np.sqrt(((predicted - test_y) ** 2).mean(axis=0))
print("Mean Square Error is:%f" % rmse[0])
#plot_predicted, = plt.plot(predicted, label='predicted')
#plot_test, = plt.plot(test_y, label='real_sin')
#plt.legend([plot_predicted, plot_test],['predicted', 'real_sin'])
#plt.show()

Output:

2018-03-20 21:07:39.740355: I tensorflow/core/common_runtime/gpu/gpu_device.cc:993] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 105 MB memory) -> physical GPU (device: 1, name: Tesla K80, pci bus id: 0000:07:00.0, compute capability: 3.7)
2018-03-20 21:07:39.745409: I tensorflow/core/common_runtime/gpu/gpu_device.cc:993] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 258 MB memory) -> physical GPU (device: 2, name: Tesla K80, pci bus id: 0000:85:00.0, compute capability: 3.7)
2018-03-20 21:07:39.749280: I tensorflow/core/common_runtime/gpu/gpu_device.cc:993] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:3 with 201 MB memory) -> physical GPU (device: 3, name: Tesla K80, pci bus id: 0000:86:00.0, compute capability: 3.7)
2018-03-20 21:08:49.454755: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1312] Adding visible gpu devices: 0, 1, 2, 3
2018-03-20 21:08:49.455133: I tensorflow/core/common_runtime/gpu/gpu_device.cc:993] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 204 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:06:00.0, compute capability: 3.7)
2018-03-20 21:08:49.455372: I tensorflow/core/common_runtime/gpu/gpu_device.cc:993] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 0 MB memory) -> physical GPU (device: 1, name: Tesla K80, pci bus id: 0000:07:00.0, compute capability: 3.7)
2018-03-20 21:08:49.455596: I tensorflow/core/common_runtime/gpu/gpu_device.cc:993] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 224 MB memory) -> physical GPU (device: 2, name: Tesla K80, pci bus id: 0000:85:00.0, compute capability: 3.7)
2018-03-20 21:08:49.455833: I tensorflow/core/common_runtime/gpu/gpu_device.cc:993] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:3 with 224 MB memory) -> physical GPU (device: 3, name: Tesla K80, pci bus id: 0000:86:00.0, compute capability: 3.7)
Mean Square Error is:0.001389
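
Note: tf.contrib, tf.contrib.learn, and SKCompat were removed in TensorFlow 2.x, so the listing above only runs on TensorFlow 1.x. For reference, a minimal sketch of the same sine-prediction setup with tf.keras (the layer sizes and data ranges mirror the constants above, but the rest is illustrative, not the book's code) could look like this:

import numpy as np
import tensorflow as tf

TIMESTEPS = 10
HIDDEN_SIZE = 30

def generate_data(seq):
    # Each sample: TIMESTEPS consecutive points as input, the next point as label
    X = np.array([seq[i:i + TIMESTEPS] for i in range(len(seq) - TIMESTEPS)],
                 dtype=np.float32)[..., np.newaxis]  # shape [N, TIMESTEPS, 1]
    y = np.array([seq[i + TIMESTEPS] for i in range(len(seq) - TIMESTEPS)],
                 dtype=np.float32)
    return X, y

train_X, train_y = generate_data(np.sin(np.linspace(0, 100, 10000, dtype=np.float32)))
test_X, test_y = generate_data(np.sin(np.linspace(100, 110, 1000, dtype=np.float32)))

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(HIDDEN_SIZE, return_sequences=True, input_shape=(TIMESTEPS, 1)),
    tf.keras.layers.LSTM(HIDDEN_SIZE),
    tf.keras.layers.Dense(1),  # linear output layer, no activation
])
model.compile(optimizer="adagrad", loss="mse")
model.fit(train_X, train_y, batch_size=32, epochs=5)

predicted = model.predict(test_X).ravel()
rmse = np.sqrt(np.mean((predicted - test_y) ** 2))
print("RMSE is:%f" % rmse)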
