Course 1: Overview of TensorFlow

A note before we begin: these notes are mainly a record of my own learning, and criticism from experts is welcome. The course link has the slides and official notes.


Overview of TensorFlow

What’s TensorFlow™?

“Open source software library for numerical computation using data flow graphs”

Why TensorFlow?

  • Flexibility + Scalability
    Originally developed by Google as a single infrastructure for machine learning in both production and research
  • Popularity

Goals

  • Understand TF’s computation graph approach
  • Explore TF’s built-in functions and classes
  • Learn how to build and structure models best suited for a deep learning project

Resources

Graphs and Sessions

What’s a tensor?

  • An n-dimensional array
    • 0-d tensor: scalar (number)
    • 1-d tensor: vector
    • 2-d tensor: matrix
    • and so on
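As a quick illustration (a minimal sketch using the TF 1.x tf.constant API, consistent with the rest of these notes):

import tensorflow as tf

scalar = tf.constant(3)                 # 0-d tensor, shape ()
vector = tf.constant([1, 2, 3])         # 1-d tensor, shape (3,)
matrix = tf.constant([[1, 2], [3, 4]])  # 2-d tensor, shape (2, 2)
print(scalar.shape, vector.shape, matrix.shape)
>>> () (3,) (2, 2)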

Data Flow Graphs


TensorFlow separates definition of computations from their execution


  • Phase 1: assemble a graph
  • Phase 2: use a session to execute operations in the graph.
import tensorflow as tf
a = tf.add(3, 5)   # only adds a node to the graph; nothing is computed yet
print(a)
>>> Tensor("Add:0", shape=(), dtype=int32)
(Not 8)

Visualized by TensorBoard

  • Nodes: operators, variables, and constants
  • Edges: tensors
    TensorFlow = tensor + flow = data + flow
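To actually inspect the graph in TensorBoard, the standard TF 1.x pattern is to write the graph definition with tf.summary.FileWriter (the ./graphs directory here is just an arbitrary choice):

import tensorflow as tf

a = tf.add(3, 5)
# write the graph definition to an event file TensorBoard can read
writer = tf.summary.FileWriter('./graphs', tf.get_default_graph())
with tf.Session() as sess:
    print(sess.run(a))
writer.close()

Then launch TensorBoard with: tensorboard --logdir=./graphs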

How to get the value of a? (tf.Session())

  • A Session object encapsulates the environment in which Operation objects are executed, and Tensor objects are evaluated.
  • A session will also allocate memory to store the current values of variables.
import tensorflow as tf
a = tf.add(3, 5)
sess = tf.Session()
print(sess.run(a))
sess.close()

or

import tensorflow as tf
a = tf.add(3, 5)
with tf.Session() as sess:
    print(sess.run(a))

The session will look at the graph, asking: how can I get the value of a?
It then computes all the nodes that lead to a.

More graphs

import tensorflow as tf

x = 2
y = 3
op1 = tf.add(x, y)
op2 = tf.multiply(x, y)
op3 = tf.pow(op2, op1)
with tf.Session() as sess:
    op3 = sess.run(op3)   # (2*3)^(2+3) = 7776


Subgraphs

import tensorflow as tf

x = 2
y = 3
add_op = tf.add(x, y)
mul_op = tf.multiply(x, y)
useless = tf.multiply(x, add_op)
pow_op = tf.pow(add_op, mul_op)
with tf.Session() as sess:
    z = sess.run(pow_op)

Because we only want the value of pow_op, and pow_op doesn’t depend on useless, the session won’t compute the value of useless.

import tensorflow as tf

x = 2
y = 3
add_op = tf.add(x, y)
mul_op = tf.multiply(x, y)
useless = tf.multiply(x, add_op)
pow_op = tf.pow(add_op, mul_op)
with tf.Session() as sess:
    # pass a list as fetches to evaluate both tensors in one run
    z, not_useless = sess.run([pow_op, useless])

tf.Session.run(fetches, feed_dict=None, options=None, run_metadata=None)

fetches can be a single graph element or a list of graph elements whose values you want
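The feed_dict argument isn’t used in the snippets above; as a minimal sketch of how it works with tf.placeholder (standard TF 1.x API), it injects values into the graph at run time:

import tensorflow as tf

# placeholders are nodes whose values are supplied at run time via feed_dict
x = tf.placeholder(tf.int32)
y = tf.placeholder(tf.int32)
out = tf.pow(tf.add(x, y), tf.multiply(x, y))
with tf.Session() as sess:
    print(sess.run(out, feed_dict={x: 2, y: 3}))  # (2+3)^(2*3) = 15625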

  • Possible to break graphs into several chunks and run them in parallel across multiple CPUs, GPUs, TPUs, or other devices


Distributed Computation

# To put part of a graph on a specific CPU or GPU
import tensorflow as tf

# Creates a graph
with tf.device('/gpu:2'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], name='b')
    c = tf.multiply(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))
sess.close()
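If /gpu:2 doesn’t exist on your machine, the run above raises an error; a common safeguard (still the TF 1.x ConfigProto API) is soft placement, which falls back to an available device:

import tensorflow as tf

with tf.device('/gpu:2'):
    c = tf.multiply(tf.constant([1.0, 2.0]), tf.constant([3.0, 4.0]))
sess = tf.Session(config=tf.ConfigProto(
    allow_soft_placement=True,   # fall back to CPU/another GPU if /gpu:2 is missing
    log_device_placement=True))  # still log where each op actually runs
print(sess.run(c))
sess.close()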

Should you build more than one graph?

  • BUG ALERT!
  • Multiple graphs require multiple sessions, and each session will try to use all available resources by default
  • You can’t pass data between graphs without routing it through python/numpy, which doesn’t work in a distributed setting
  • It’s better to have disconnected subgraphs within one graph (see the sketch after this list)
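For completeness, here is what explicitly building two graphs looks like (a sketch with the tf.Graph / tf.Session(graph=...) API), which also shows why each graph needs its own session:

import tensorflow as tf

g1 = tf.Graph()
g2 = tf.Graph()
with g1.as_default():
    a = tf.constant(3)   # a lives in g1
with g2.as_default():
    b = tf.constant(5)   # b lives in g2

# a session is bound to exactly one graph, so two graphs need two sessions
sess1 = tf.Session(graph=g1)
sess2 = tf.Session(graph=g2)
print(sess1.run(a), sess2.run(b))
# sess1.run(b) would fail: b is not an element of g1
sess1.close()
sess2.close()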

Conclusion (Why graphs?)

  • Save computation: only run subgraphs that lead to the values you want to fetch
  • Break computation into small, differentiable pieces to facilitate auto-differentiation
  • Facilitate distributed computation: spread the work across multiple CPUs, GPUs, TPUs, or other devices
  • Many common machine learning models are taught and visualized as directed graphs