Taking averages in TensorFlow: functions for computing the mean

axis=0 means the mean is taken per column:

>>> import tensorflow as tf
>>> sess = tf.Session()
>>> xxx = tf.constant([[1., 10.], [3., 30.]])

>>> sess.run(xxx)

array([[  1.,  10.],
       [  3.,  30.]], dtype=float32)

>>> mymean=tf.reduce_mean(xxx,0)

>>> sess.run(mymean)

array([  2.,  20.], dtype=float32)

>>> mymean=tf.reduce_mean(xxx,1)

>>> sess.run(mymean)

array([  5.5,  16.5], dtype=float32)


keep_dims controls whether the reduced dimensions are kept (with length 1).

>>> mymean=tf.reduce_mean(xxx,axis=0,keep_dims=True)

>>> sess.run(mymean)

array([[  2.,  20.]], dtype=float32)

>>> mymean=tf.reduce_mean(xxx,axis=0,keep_dims=False)

>>> sess.run(mymean)

array([  2.,  20.], dtype=float32)

>>> mymean=tf.reduce_mean(xxx,keep_dims=False)

>>> sess.run(mymean)

11.0

>>> mymean=tf.reduce_mean(xxx,keep_dims=True)

>>> sess.run(mymean)

array([[ 11.]], dtype=float32)

>>> mymean=tf.reduce_mean(xxx)

>>> sess.run(mymean)

11.0

tf.reduce_mean


reduce_mean(
    input_tensor,
    axis=None,
    keep_dims=False,
    name=None,
    reduction_indices=None
)

Defined in tensorflow/python/ops/math_ops.py.

See the guide: Math > Reduction

Computes the mean of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.

If axis has no entries, all dimensions are reduced, and a tensor with a single element is returned.

For example:


# 'x' is [[1., 1.]
#         [2., 2.]]
tf.reduce_mean(x) ==> 1.5
tf.reduce_mean(x, 0) ==> [1.5, 1.5]
tf.reduce_mean(x, 1) ==> [1.,  2.]

Args:

input_tensor: The tensor to reduce. Should have numeric type.

axis: The dimensions to reduce. If None (the default), reduces all dimensions.

keep_dims: If true, retains reduced dimensions with length 1.

name: A name for the operation (optional).

reduction_indices: The old (deprecated) name for axis.
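As a quick sanity check of the rank-reduction behaviour, here is a minimal runnable sketch (assuming the TF 1.x session style used in the rest of this post); passing a list of axes removes one dimension per entry unless keep_dims=True:

import tensorflow as tf

t = tf.constant([[[1., 2.], [3., 4.]],
                 [[5., 6.], [7., 8.]]])  # shape (2, 2, 2)

with tf.Session() as sess:
    # one axis entry removes one dimension: (2, 2, 2) -> (2, 2)
    print(sess.run(tf.reduce_mean(t, axis=0)))
    # two axis entries remove two dimensions: (2, 2, 2) -> (2,)
    print(sess.run(tf.reduce_mean(t, axis=[0, 1])))
    # keep_dims=True retains the reduced dimensions with length 1: (1, 1, 2)
    print(sess.run(tf.reduce_mean(t, axis=[0, 1], keep_dims=True)))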

tf.pow


pow(
    x,
    y,
    name=None
)

Defined in tensorflow/python/ops/math_ops.py.

See the guide: Math > Basic Math Functions

Computes the power of one value to another.

Given a tensor x and a tensor y, this operation computes \(x^y\) for corresponding elements in x and y. For example:


# tensor 'x' is [[2, 2], [3, 3]]
# tensor 'y' is [[8, 16], [2, 3]]
tf.pow(x, y) ==> [[256, 65536], [9, 27]]
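The doc example above uses ==> notation; here is a runnable version (a minimal sketch, again assuming a TF 1.x session):

import tensorflow as tf

x = tf.constant([[2, 2], [3, 3]])
y = tf.constant([[8, 16], [2, 3]])

with tf.Session() as sess:
    # element-wise x raised to the power y
    print(sess.run(tf.pow(x, y)))  # [[  256 65536]
                                   #  [    9    27]]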

class tf.train.AdamOptimizer

__init__(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, use_locking=False, name='Adam')
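To see where these constructor arguments go before reading the full classifier below, here is a minimal sketch (assuming TF 1.x) that spells out the defaults while minimizing a one-variable quadratic; the variable w and the target 3.0 are just for illustration:

import tensorflow as tf

w = tf.Variable(5.0)
loss = tf.square(w - 3.0)  # minimized at w = 3
# the default hyperparameters from the signature above, written out explicitly
opt = tf.train.AdamOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08)
train_step = opt.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):
        sess.run(train_step)
    print(sess.run(w))  # has moved from 5.0 toward 3.0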

Source code for a linear classifier:

#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Created on Mon Jul 10 09:35:04 2017

@author: myhaspl@myhaspl.com, http://blog.csdn.net/myhaspl
"""
import tensorflow as tf
import numpy as np

batch_size = 10

w1 = tf.Variable(tf.random_normal([2, 3], stddev=1, seed=1))
w2 = tf.Variable(tf.random_normal([3, 1], stddev=1, seed=1))

x = tf.placeholder(tf.float32, shape=(None, 2), name="x")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")

h = tf.matmul(x, w1)
yo = tf.matmul(h, w2)

# loss: the mean absolute difference between labels and predictions
cross_entropy = tf.reduce_mean(tf.abs(y - yo))
# backpropagation
train_step = tf.train.AdamOptimizer().minimize(cross_entropy)

# generate 200 random samples
DATASIZE = 200
x_ = np.random.rand(DATASIZE, 2)
y_ = [[int((x1 + x2) > 2.5)] for (x1, x2) in x_]

with tf.Session() as sess:
    # initialize variables
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    print(sess.run(w1))
    print(sess.run(w2))
    # number of training iterations
    TRAINCOUNT = 10000
    for i in range(TRAINCOUNT):
        # step through the data one batch at a time
        start = (i * batch_size) % DATASIZE
        end = min(start + batch_size, DATASIZE)
        # run one training step
        sess.run(train_step, feed_dict={x: x_[start:end], y: y_[start:end]})
        if i % 1000 == 0:
            total_cross_entropy = sess.run(cross_entropy, feed_dict={x: x_[start:end], y: y_[start:end]})
            print("After %d training steps, loss: %g" % (i + 1, total_cross_entropy))
    print(sess.run(w1))
    print(sess.run(w2))

[[-0.81131822  1.48459876  0.06532937 -2.4427042   0.0992484   0.59122431]
 [ 0.59282297 -2.12292957 -0.72289723 -0.05627038  0.64354479 -0.26432407]]
[[-0.81131822]
 [ 1.48459876]
 [ 0.06532937]
 [-2.4427042 ]
 [ 0.0992484 ]
 [ 0.59122431]]

After 1 training steps, loss: 2.37311
After 1001 training steps, loss: 0.587702
After 2001 training steps, loss: 0.00187977
After 3001 training steps, loss: 0.000224713
After 4001 training steps, loss: 0.000245593
After 5001 training steps, loss: 0.000837345
After 6001 training steps, loss: 0.000561878
After 7001 training steps, loss: 0.000521504
After 8001 training steps, loss: 0.000369141
After 9001 training steps, loss: 2.88023e-05

[[-0.40749896  0.74481744 -1.35231423 -1.57555723  1.5161525   0.38725093]
 [ 0.84865922 -2.07912779 -0.41053897 -0.21082011 -0.0567192  -0.69210052]]
[[ 0.36143586]
 [ 0.34388798]
 [ 0.79891819]
 [-1.57640576]
 [-0.86542428]
 [-0.51558757]]

tf.nn.relu


relu(
    features,
    name=None
)

Defined in tensorflow/python/ops/gen_nn_ops.py.

See the guides: Layers (contrib) > Higher level ops for building neural network layers, Neural Network > Activation Functions

Computes rectified linear: max(features, 0).
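A quick demonstration of the clipping behaviour (a minimal sketch, assuming TF 1.x as elsewhere in this post):

import tensorflow as tf

features = tf.constant([-3., -1., 0., 2., 5.])

with tf.Session() as sess:
    # negative entries are clipped to 0, non-negative entries pass through
    print(sess.run(tf.nn.relu(features)))  # [0. 0. 0. 2. 5.]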
