TensorFlow | A Simple Application: Fitting Stock Prices
Today I want to share a simple program that fits stock prices with TensorFlow. It is a very simple case, but it can help us understand the principles of TensorFlow more clearly.
Step 1. Import the packages.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import numpy as np
import matplotlib.pyplot as plt
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # suppress TensorFlow info/warning logs
Step 2. Import the stock price data.
Here I import the opening and closing prices of a stock for 15 trading days.
data = np.linspace(1, 15, 15)  # day indices 1..15
endPrice = np.array([111.6,108.9,108.6,108.8,113.0,109.9,113.5,112.5,123.8,114.4,118.4,119.8,116.6,117.0,119.1])
beginPrice = np.array([113.0,109.0,110.0,108.0,107.5,111.7,113.9,112.0,114.5,122.1,113.0,117.0,118.5,115.0,117.5])
Step 3. Draw a chart of the stock prices.
After importing the data, we draw a bar chart of it. Red means the closing price is above the opening price; green means it is below.
plt.figure()
for i in range(0, 15):
    dataOne = np.zeros([2])
    dataOne[0] = i
    dataOne[1] = i
    priceOne = np.zeros([2])
    priceOne[0] = beginPrice[i]
    priceOne[1] = endPrice[i]
    if endPrice[i] > beginPrice[i]:
        plt.plot(dataOne, priceOne, 'r', lw=8)  # draw a red bar
    else:
        plt.plot(dataOne, priceOne, 'g', lw=8)  # draw a green bar
plt.show()
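As a side note, the up/down classification done inside the loop above can also be computed in one vectorized step with NumPy; a minimal sketch (the price arrays simply repeat the data from Step 2):

```python
import numpy as np

beginPrice = np.array([113.0, 109.0, 110.0, 108.0, 107.5, 111.7, 113.9, 112.0,
                       114.5, 122.1, 113.0, 117.0, 118.5, 115.0, 117.5])
endPrice = np.array([111.6, 108.9, 108.6, 108.8, 113.0, 109.9, 113.5, 112.5,
                     123.8, 114.4, 118.4, 119.8, 116.6, 117.0, 119.1])

# 'r' where the stock closed above its open, 'g' otherwise
colors = np.where(endPrice > beginPrice, 'r', 'g')
```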
Step 4. Normalization.
We normalize the data to make training more stable and accurate.
Here the day index is divided by 14 and the price by 200; you can choose divisors that suit your own data.
dataNormal = np.zeros([15, 1])
priceNormal = np.zeros([15, 1])
for i in range(0, 15):
    dataNormal[i, 0] = i / 14.0
    priceNormal[i, 0] = endPrice[i] / 200.0
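The same normalization can be written without a loop; a small sketch assuming the same 15-day series and the /14 and /200 divisors used above:

```python
import numpy as np

endPrice = np.array([111.6, 108.9, 108.6, 108.8, 113.0, 109.9, 113.5, 112.5,
                     123.8, 114.4, 118.4, 119.8, 116.6, 117.0, 119.1])

dataNormal = (np.arange(15) / 14.0).reshape(15, 1)  # day index scaled into [0, 1]
priceNormal = (endPrice / 200.0).reshape(15, 1)     # prices scaled to below 1
```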
Step 5. Use placeholders to load our data.
x = tf.placeholder(tf.float32,[None,1])
y = tf.placeholder(tf.float32,[None,1])
Step 6. Define the hidden layer.
w1 = tf.Variable(tf.random_uniform([1, 10], 0, 1))  # initial weights
b1 = tf.Variable(tf.zeros([1, 10]))  # bias
wb1 = tf.matmul(x, w1) + b1  # wb1 = x*w1 + b1
layer1 = tf.nn.relu(wb1)  # activation function
Step 7. Define the output layer.
The principle is the same as for the hidden layer.
w2 = tf.Variable(tf.random_uniform([10, 1], 0, 1))  # initial weights
b2 = tf.Variable(tf.zeros([15, 1]))  # bias; note this shape fixes the batch size at 15
wb2 = tf.matmul(layer1, w2) + b2  # wb2 = layer1*w2 + b2
layer2 = tf.nn.relu(wb2)  # activation function
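To see how the shapes flow through the two layers (and why the [15, 1] bias on the output layer ties the network to a batch of exactly 15 samples), here is a NumPy sketch of the same forward pass with random weights standing in for the trained ones:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((15, 1))                          # 15 normalized day indices
w1, b1 = rng.random((1, 10)), np.zeros((1, 10))
w2, b2 = rng.random((10, 1)), np.zeros((15, 1))  # b2's shape fixes the batch at 15

relu = lambda z: np.maximum(z, 0.0)
layer1 = relu(x @ w1 + b1)       # shape (15, 10)
layer2 = relu(layer1 @ w2 + b2)  # shape (15, 1)
```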
Step 8. Define the loss and use gradient descent to reduce it.
loss = tf.reduce_mean(tf.square(y - layer2))
# mean squared error between the true and predicted prices
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
# gradient descent with learning rate 0.1
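Under the hood, GradientDescentOptimizer repeats the update w ← w − 0.1·∂loss/∂w. A tiny hand-rolled sketch of that same update on a one-parameter model (the data here is made up purely for illustration):

```python
import numpy as np

x = np.array([0.0, 0.5, 1.0])
y = 2.0 * x          # targets: the true slope is 2
w, lr = 0.0, 0.1     # same learning rate as in the text

for _ in range(200):
    pred = w * x
    grad = np.mean(2.0 * (pred - y) * x)  # d/dw of mean((pred - y)^2)
    w -= lr * grad                        # one gradient-descent step

# after enough steps, w has converged close to the true slope 2
```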
Step 9. Run the neural network.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(10000):
        sess.run(train_step, feed_dict={x: dataNormal, y: priceNormal})
        # optimize w1, b1, w2, b2
    preds = sess.run(layer2, feed_dict={x: dataNormal})
    # use the trained w and b to compute layer2
    predPrice = np.zeros([15, 1])
    for i in range(0, 15):
        predPrice[i, 0] = (preds * 200)[i, 0]
        # de-normalization
    plt.plot(data, predPrice, 'b', lw=1)
    # draw the predicted stock prices in blue
    plt.show()
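As an aside, the de-normalization loop above can be collapsed into a single array operation; a minimal sketch with made-up normalized outputs standing in for the network's predictions:

```python
import numpy as np

preds = np.array([[0.558], [0.560], [0.595]])  # hypothetical normalized outputs
predPrice = preds * 200.0                      # undo the /200 from Step 4 in one step
```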
Result:
Conclusion:
With only fifteen samples and a long training time, this method is not practical. However, when first learning TensorFlow, this simple program works well for understanding the principles of neural networks. You can also adjust some parameters yourself to try different settings and find the best training result.
Thank you for reading!
--credit by dora 2020.4.22