TensorFlow study notes 1 (code adapted from the official tutorial)

1. First, import tensorflow:

import tensorflow as tf

2. Tensors (a tensor's rank is its number of dimensions)

3 # a rank 0 tensor; this is a scalar with shape []
[1. ,2., 3.] # a rank 1 tensor; this is a vector with shape [3]
[[1., 2., 3.], [4., 5., 6.]] # a rank 2 tensor; a matrix with shape [2, 3]
[[[1., 2., 3.]], [[7., 8., 9.]]] # a rank 3 tensor with shape [2, 1, 3]
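
A quick way to check these shapes (a minimal sketch added for illustration, assuming the TF 1.x API) is to wrap the values in tf.constant and print their static shapes:

t0 = tf.constant(3.)                            # rank 0
t1 = tf.constant([1., 2., 3.])                  # rank 1
t2 = tf.constant([[1., 2., 3.], [4., 5., 6.]])  # rank 2
print(t0.shape, t1.shape, t2.shape)             # (), (3,), (2, 3)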

Computational graph: a graph of TensorFlow operations (nodes); to get concrete values out of it, the graph has to be run in a session, as shown below.

3. Creating constant nodes: a constant gets its value when it is created and cannot be changed afterwards.
1) For example, create two floating-point nodes:

node1 = tf.constant(3.0, tf.float32)
node2 = tf.constant(4.0) # also tf.float32 implicitly
print(node1, node2)

Running this statement prints

Tensor("Const:0", shape=(), dtype=float32) Tensor("Const_1:0", shape=(), dtype=float32)

To see the actual values 3.0 and 4.0, the computational graph has to be run in a session:

sess = tf.Session()
print(sess.run([node1, node2]))

The result is

[3.0, 4.0]

2) Adding the two constants:

node3 = tf.add(node1, node2)
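
Running node3 in the session (a small addition for illustration) prints the sum:

print("node3:", node3)                      # still just a Tensor object
print("sess.run(node3):", sess.run(node3))  # 7.0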

4. Placeholders: their values are supplied later, when the graph is run

a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b  # + provides a shortcut for tf.add(a, b)

Addition, subtraction, multiplication and division are all supported (see the sketch after the output below).

print(sess.run(adder_node, {a: 3, b:4.5}))
print(sess.run(adder_node, {a: [1,3], b: [2, 4]}))

To see the results, feed values for a and b as above and call sess.run() on the node. The output is

7.5
[ 3.  7.]
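
For example, the other arithmetic operators work the same way (a minimal sketch; the node names below are just for illustration):

sub_node = a - b   # shortcut for tf.subtract(a, b)
mul_node = a * b   # shortcut for tf.multiply(a, b)
div_node = a / b   # shortcut for tf.divide(a, b)
print(sess.run([sub_node, mul_node, div_node], {a: 3, b: 4.5}))
# roughly [-1.5, 13.5, 0.6666667]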

You can also build further operations on top:

add_and_triple = adder_node * 3.
print(sess.run(add_and_triple, {a: 3, b:4.5}))
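
Here the result is (3 + 4.5) * 3, so this should print

22.5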

5. Variables
1) Defining variables:

W = tf.Variable([.3], tf.float32)
b = tf.Variable([-.3], tf.float32)

2) Initializing variables (be sure to do this; otherwise running any node that uses the variables will fail)

init = tf.global_variables_initializer()
sess.run(init)

3) Variables can be combined with other kinds of values in computations:

x = tf.placeholder(tf.float32)
linear_model = W * x + b
print(sess.run(linear_model, {x:[1,2,3,4]}))
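
With W = 0.3 and b = -0.3 this should print approximately

[ 0.          0.30000001  0.60000002  0.90000004]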

4) Assigning new values to variables:

fixW = tf.assign(W, [-1.])
fixb = tf.assign(b, [1.])
sess.run([fixW, fixb])
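
Here loss is the squared-error loss built from the linear model; it is defined the same way in the complete program in section 7 below. A minimal sketch of the missing definitions:

y = tf.placeholder(tf.float32)
loss = tf.reduce_sum(tf.square(linear_model - y))  # sum of squared errors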

Now running

print(sess.run(loss, {x:[1,2,3,4], y:[0,-1,-2,-3]}))  # loss is the combination of ops defined in the sketch above

prints the new value of loss (0 for this data, since W = -1 and b = 1 fit it exactly).
6. tf.train API
TensorFlow is a platform for building machine-learning functionality. The goal of machine learning is, given some inputs, to predict the corresponding outputs, and to tune the model parameters so that the predictions become more accurate. There are many concrete models, such as linear regression. TensorFlow provides functions that automatically adjust a model's parameters so that the difference between the predicted outputs and the real outputs becomes as small as possible. The sum of squared differences between the predicted and real outputs is one basic loss function; in other words, training means adjusting the parameters until the loss is minimal. The code below adjusts the parameters with gradient descent.

optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
sess.run(init) # reset values to incorrect defaults.
for i in range(1000):
  sess.run(train, {x:[1,2,3,4], y:[0,-1,-2,-3]})

print(sess.run([W, b]))
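
After 1000 gradient-descent steps the parameters should converge very close to the exact solution W = -1, b = 1, so the print should show something like

[array([-0.9999969], dtype=float32), array([ 0.99999082], dtype=float32)]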

7. The complete linear-regression training program:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

' a test module '

__author__ = 'Google and Emma Guo'

import tensorflow as tf
import numpy as np
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

# Model parameters
W = tf.Variable([.3], tf.float32)
b = tf.Variable([-.3], tf.float32)
# Model input and output
x = tf.placeholder(tf.float32)
linear_model = W * x + b
y = tf.placeholder(tf.float32)
# loss
loss = tf.reduce_sum(tf.square(linear_model - y)) # sum of the squares
# optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
# training data
x_train = [1,2,3,4]
y_train = [0,-1,-2,-3]
# training loop
init = tf.global_variables_initializer()  # initialize the variables
sess = tf.Session()
sess.run(init)  # reset values to the (incorrect) initial defaults
for i in range(1000):
  sess.run(train, {x:x_train, y:y_train})  # actually run train to optimize the parameters

# evaluate training accuracy
curr_W, curr_b, curr_loss  = sess.run([W, b, loss], {x:x_train, y:y_train})
print("W: %s b: %s loss: %s"%(curr_W, curr_b, curr_loss))
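
When run, this should report parameters close to the exact solution, roughly

W: [-0.9999969] b: [ 0.99999082] loss: 5.69997e-11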

8. tf.contrib.learn is a higher-level TensorFlow API that simplifies the mechanics of machine learning

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

' a test module '

__author__ = 'Google and Emma Guo'

import tensorflow as tf
# NumPy is often used to load, manipulate and preprocess data.
import numpy as np
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

# Declare list of features. We only have one real-valued feature. There are many
# other types of columns that are more complicated and useful.
features = [tf.contrib.layers.real_valued_column("x", dimension=1)]

# An estimator is the front end to invoke training (fitting) and evaluation
# (inference). There are many predefined types like linear regression,
# logistic regression, linear classification, logistic classification, and
# many neural network classifiers and regressors. The following code
# provides an estimator that does linear regression.
estimator = tf.contrib.learn.LinearRegressor(feature_columns=features)

# TensorFlow provides many helper methods to read and set up data sets.
# Here we use `numpy_input_fn`. We have to tell the function how many batches
# of data (num_epochs) we want and how big each batch should be.
x = np.array([1., 2., 3., 4.])
y = np.array([0., -1., -2., -3.])
input_fn = tf.contrib.learn.io.numpy_input_fn({"x":x}, y, batch_size=4,
                                              num_epochs=1000)

# We can invoke 1000 training steps by invoking the `fit` method and passing the
# training data set.
estimator.fit(input_fn=input_fn, steps=1000)

# Here we evaluate how well our model did. In a real example, we would want
# to use a separate validation and testing data set to avoid overfitting.
print(estimator.evaluate(input_fn=input_fn))
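
evaluate returns a dict of metrics; on this toy data it should report a loss very close to zero together with the global training step, e.g. something like {'global_step': 1000, 'loss': 2e-11} (the exact loss value varies from run to run).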

9. The same model can also be built with the low-level API described at the beginning; it is more verbose, so the code is given here without further commentary:

import numpy as np
import tensorflow as tf
# Declare list of features, we only have one real-valued feature
def model(features, labels, mode):
  # Build a linear model and predict values
  W = tf.get_variable("W", [1], dtype=tf.float64)
  b = tf.get_variable("b", [1], dtype=tf.float64)
  y = W*features['x'] + b
  # Loss sub-graph
  loss = tf.reduce_sum(tf.square(y - labels))
  # Training sub-graph
  global_step = tf.train.get_global_step()
  optimizer = tf.train.GradientDescentOptimizer(0.01)
  train = tf.group(optimizer.minimize(loss),
                   tf.assign_add(global_step, 1))
  # ModelFnOps connects subgraphs we built to the
  # appropriate functionality.
  return tf.contrib.learn.ModelFnOps(
      mode=mode, predictions=y,
      loss=loss,
      train_op=train)

estimator = tf.contrib.learn.Estimator(model_fn=model)
# define our data set
x = np.array([1., 2., 3., 4.])
y = np.array([0., -1., -2., -3.])
input_fn = tf.contrib.learn.io.numpy_input_fn({"x": x}, y, 4, num_epochs=1000)

# train
estimator.fit(input_fn=input_fn, steps=1000)
# evaluate our model
print(estimator.evaluate(input_fn=input_fn, steps=10))
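
As with the canned LinearRegressor above, evaluate here returns a dict containing the loss and the global step; on this toy data the loss should again come out close to zero.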