The Computational Graph

1. TensorFlow Core: the lowest-level API, aimed primarily at researchers who need fine-grained control.

2. Higher-level library: tf.contrib.learn, built on top of TensorFlow Core to simplify common workflows.

3. Computational graph: i. Each node takes zero or more tensors as inputs and produces a tensor as an output. ii. One type of node is a constant, which takes no inputs and outputs a value it stores internally: `node = tf.constant(value, tf.float32)`. The dtype argument is optional; if omitted, it is inferred from the value.

4. A Session object evaluates nodes via its run method: `sess = tf.Session()`, then `sess.run(node)`.
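Points 3 and 4 can be sketched together. This is a minimal example in the TF 1.x graph style these notes use; the tf.compat.v1 import is an assumption added so the snippet also runs on TensorFlow 2.x:

```python
# TF 1.x graph API; on TF 2.x the same calls live under tf.compat.v1
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# Constant nodes take no inputs and output the value stored internally.
node1 = tf.constant(3.0, tf.float32)
node2 = tf.constant(4.0)  # dtype tf.float32 is inferred from the value

# Printing the nodes shows tensor objects, not values...
print(node1, node2)

# ...values only appear when the nodes are run inside a session.
sess = tf.Session()
print(sess.run([node1, node2]))  # [3.0, 4.0]
```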

5. We can build more complicated computations by combining Tensor nodes with operations; operations are themselves nodes in the graph.
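For instance, tf.add is itself a node whose inputs are two constant nodes (a sketch in the same TF 1.x style, using tf.compat.v1 for portability):

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

node1 = tf.constant(3.0)
node2 = tf.constant(4.0)
# tf.add is an operation node; node1 and node2 are its input nodes.
node3 = tf.add(node1, node2)

sess = tf.Session()
print(sess.run(node3))  # 7.0
```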

6. A placeholder is a promise to supply a value later; it can be thought of as a "lambda" or function parameter: `a = tf.placeholder(tf.float32)`.

    Evaluating the graph then works like calling a lambda: `sess.run(operation, feed_dict={...})`, where the feed_dict parameter binds concrete values to the placeholders.

    In particular, operations compose: adder_node = a + b, and add_and_triple = adder_node * 3.
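A minimal sketch of placeholders acting as "lambda" parameters, fed through feed_dict (TF 1.x API, via tf.compat.v1 so it also runs on TF 2.x):

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b          # shortcut for tf.add(a, b)
add_and_triple = adder_node * 3.

sess = tf.Session()
# Like calling a lambda: feed_dict binds values to the parameters.
print(sess.run(adder_node, feed_dict={a: 3, b: 4.5}))          # 7.5
print(sess.run(adder_node, feed_dict={a: [1, 3], b: [2, 4]}))  # [3. 7.]
print(sess.run(add_and_triple, feed_dict={a: 3, b: 4.5}))      # 22.5
```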


7. Variables allow us to add trainable parameters to a graph: `W = tf.Variable([.3], tf.float32)`, constructed with an initial value and a type. HOWEVER, calling tf.Variable does not initialize the variable immediately.

8. init = tf.global_variables_initializer()

   sess.run(init)

   init is a handle to the initialization op; until we call sess.run(init), the variables remain uninitialized.
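Points 7 and 8 together, as a hedged sketch; the linear model and the names W, b, and linear_model are illustrative and follow the classic getting-started example rather than anything defined in these notes:

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

W = tf.Variable([.3], tf.float32)
b = tf.Variable([-.3], tf.float32)
x = tf.placeholder(tf.float32)
linear_model = W * x + b

# tf.Variable does NOT initialize anything by itself;
# init is just a handle to the op that initializes all variables.
init = tf.global_variables_initializer()

sess = tf.Session()
sess.run(init)  # only now do W and b hold their initial values
print(sess.run(linear_model, {x: [1, 2, 3, 4]}))  # ≈ [0.  0.3 0.6 0.9]
```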

9. A loss function is the same thing as a cost function.

    reduce_sum and the other reduce_* operations are listed under "Reduction" in the official documentation.

    Reduction: TensorFlow provides several operations that you can use to perform common math computations that "reduce various dimensions" of a tensor.
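A sketch of reduce_sum used to build a sum-of-squared-errors loss for the same illustrative linear model (names are assumptions carried over from the getting-started example, not from these notes):

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

W = tf.Variable([.3], tf.float32)
b = tf.Variable([-.3], tf.float32)
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
linear_model = W * x + b

# reduce_sum "reduces" a dimension of a tensor by summing over it;
# here it turns the vector of squared errors into a scalar loss.
squared_deltas = tf.square(linear_model - y)
loss = tf.reduce_sum(squared_deltas)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))  # ≈ 23.66
```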
10. tf.assign assigns a new value to a Variable.
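Like everything else in the graph, tf.assign only takes effect when the op it returns is run in a session; a small sketch (the name fixW is illustrative):

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

W = tf.Variable([.3], tf.float32)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

# tf.assign builds an op; the variable only changes when the op is run.
fixW = tf.assign(W, [-1.])
sess.run(fixW)
print(sess.run(W))  # [-1.]
```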




This error usually appears when using TensorFlow because TensorFlow first builds a computational graph and then runs it; if the graph being run is empty, this error occurs. To fix it, make sure the graph is fully built before calling `run()`. The graph can be built as follows:

1. Define all operations in the graph (including the input data and the model) and add them to the default graph.
2. Start a Session.
3. Run the graph inside the session.

The following simple TensorFlow example shows how to build a graph and run a session:

```python
import tensorflow as tf

# Define the operations in the graph
x = tf.placeholder(tf.float32, shape=[None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)

# Start a session
with tf.Session() as sess:
    # Initialize all variables
    sess.run(tf.global_variables_initializer())
    # Run the graph
    result = sess.run(y, feed_dict={x: some_input_data})
```

In this example, `x` is a placeholder describing the shape of the input data. `W` and `b` are trainable variables initialized to zeros. `y` is a softmax operation: it multiplies the input by the weight matrix, adds the bias vector, and applies softmax to produce the output. Inside the session we first initialize all variables, then run the graph, feeding the input data to the placeholder; the final result is stored in `result`.