Personal notes; feel free to skip this article.
0. Convolutional Neural Networks
This section aims to build a rough overall picture of neural networks. The most important part is understanding what each layer means: why does each layer become more abstract than the last, and how are larger features extracted?
How are issues such as the position and size of objects in an image handled?
With these questions in mind, explore how to optimize the weight and bias of each neuron.
Then dig into the math behind gradient descent, which leads to calculus. It turns out that single-variable calculus is not enough, so multivariable calculus is needed as well.
Convolutional Neural Networks–CNN 1
Tensors in Neural Networks–CNN 2
Numbers and Linear Functions–CNN 3
1. Differential, Derivative and Calculus
Differential, Derivative and Calculus–CNN 4
2. Linear Regression
3. Gradient Descent, Cost Function
Gradient Descent(cost function)–CNN 5
4. Gradient descent, how neural networks learn
Gradient descent, how neural networks learn
Function Notation
Recall the notation that R stands for the real numbers. Similarly, R^{2} is the set of two-dimensional vectors, and R^{3} is the set of three-dimensional vectors.
Scalar-valued functions
In one-variable calculus, you worked a lot with one-variable functions, i.e., functions from R onto R. If f(x) is such a one-variable function, we can write f: R → R as a shorthand way of expressing that f is a function from R onto R.
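As a quick, hypothetical illustration (the function f(x) = x^{2} is my own example, not from the text), such a map from R to R translates directly into a Python function that takes one real number and returns one real number:

```python
# A one-variable function f: R -> R; here f(x) = x^2 (an arbitrary example).
def f(x: float) -> float:
    return x ** 2

print(f(3.0))  # 9.0
```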
A function like f(x, y) = x + y is a function of two variables. It takes an element of R^{2}, like (2, 1), and gives a value that is a real number (i.e., an element of R), like f(2, 1) = 3. Since f maps R^{2} to R, we write f: R^{2} → R.
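A minimal Python sketch of the same two-variable example (the function and the value f(2, 1) = 3 are taken directly from the text above):

```python
# f: R^2 -> R, the two-variable function f(x, y) = x + y.
def f(x: float, y: float) -> float:
    return x + y

# It takes an element of R^2, like (2, 1), and returns a real number.
print(f(2, 1))  # 3
```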
Vector-valued functions
A vector-valued function in three dimensions can be written f: R → R^{3}. For example, if f(x) = (1 − x, x^{3}, cos x), then f(0) = (1, 0, 1). We sometimes write vector-valued functions using the standard unit vectors i, j, and k, as in f(x) = (1 − x)i + x^{3}j + (cos x)k.
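The vector-valued example can be sketched the same way; here a NumPy array stands in for an element of R^{3} (NumPy is just one convenient representation, not something the original text prescribes):

```python
import numpy as np

# f: R -> R^3, the vector-valued function f(x) = (1 - x, x^3, cos x).
def f(x: float) -> np.ndarray:
    return np.array([1 - x, x ** 3, np.cos(x)])

print(f(0.0))  # [1. 0. 1.]  i.e. f(0) = (1, 0, 1)
```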
Find More Information:
https://mathinsight.org/function_notation