Coursera - Andrew Ng - Deep Learning - Course 1: Neural Networks and Deep Learning - Week 2

Topics covered this week

  • Logistic Regression as a Neural Network
    • Binary Classification
    • Logistic Regression
    • Logistic Regression Cost Function
    • Gradient Descent
    • Derivatives and Examples
    • Computation Graph
    • Logistic Regression Gradient Descent
  • Python and Vectorization
    • Vectorization (a short demo follows this list)
    • Vectorizing Logistic Regression
    • Vectorizing Logistic Regression's Gradient
    • Broadcasting in Python
    • A note on python/numpy vectors
    • Explanation of the logistic regression cost function
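
Since vectorization and broadcasting are the recurring themes of this week, here is a minimal demo (my own illustrative values) contrasting an explicit Python loop with the equivalent vectorized call, plus a simple broadcast:

import numpy as np

a = np.random.rand(1000000)
b = np.random.rand(1000000)

# explicit loop: element by element, slow in pure Python
dot_loop = 0.0
for i in range(len(a)):
	dot_loop += a[i] * b[i]

# vectorized: one optimized NumPy call computing the same dot product
dot_vec = np.dot(a, b)

# broadcasting: a scalar and a (2,) row stretch to match a (2, 2) matrix
x = np.array([[1.0, 2.0], [3.0, 4.0]])
print(x + 1)                         # scalar broadcast over every element
print(x * np.array([10.0, 100.0]))   # row broadcast over each row of x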

Exercises

  1. Python Basics with numpy
    • Goals:
      • Learn how to use numpy
      • Implement some basic core deep learning functions such as softmax, sigmoid, dsigmoid, etc.
      • Learn how to handle data by normalizing inputs and reshaping images.
      • Recognize the importance of vectorization.
      • Understand how python broadcasting works.
    • Building basic functions with numpy (sigmoid and its gradient are sketched just below; the remaining helpers follow in code):
      • sigmoid: s = 1 / (1 + np.exp(-x))
      • sigmoid gradient: ds = s * (1 - s)
      • reshaping arrays: convert an image into a vector (image2vector)
      • normalizing rows (normalizingRows)
      • softmax (softmax)
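The two sigmoid bullets above give only the formulas; here is a minimal NumPy sketch of both (sigmoid_grad is an illustrative name, not necessarily the assignment's):

import numpy as np

def sigmoid(x):
	# element-wise sigmoid: s = 1 / (1 + e^(-x))
	return 1 / (1 + np.exp(-x))

def sigmoid_grad(x):
	# derivative of sigmoid expressed through its own output: ds = s * (1 - s)
	s = sigmoid(x)
	return s * (1 - s)

print(sigmoid(np.array([0.0, 2.0])))       # ~[0.5, 0.881]
print(sigmoid_grad(np.array([0.0, 2.0])))  # ~[0.25, 0.105]
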
# image2vector is commonly used in deep learning
def image2vector(image):
	# Argument:
	# image - a numpy array of shape (length, height, depth)
	# Returns:
	# v - a vector of shape (length*height*depth, 1)
	v = image.reshape((image.shape[0] * image.shape[1] * image.shape[2], 1))

	return v
# normalizingRows
def normalizingRows(x):
	# normalize each row of x to unit length (L2 norm, axis=1)
	x_norm = np.linalg.norm(x, ord=2, axis=1, keepdims=True)
	x = x / x_norm

	return x
# softmax function
def softmax(x):
	# x - matrix of shape (m, n)
	# s - row-wise softmax of x
	x_exp = np.exp(x)
	x_sum = np.sum(x_exp, axis=1, keepdims=True)
	s = x_exp / x_sum
	return s
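
A quick usage pass over the three helpers above (shapes and values are illustrative):

image = np.random.rand(3, 3, 2)   # a toy "image": 3x3 pixels, 2 channels
print(image2vector(image).shape)  # (18, 1)

x = np.array([[0.0, 3.0, 4.0],
              [1.0, 6.0, 4.0]])
print(normalizingRows(x))         # each row now has unit L2 norm
print(softmax(x))                 # each row now sums to 1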

  2. Logistic Regression with a Neural Network mindset
    1. Goals
      1. Build an image-recognition algorithm that classifies pictures as cat / non-cat with 70% accuracy.
      2. Work with logistic regression in a way that builds intuition relevant to neural networks.
      3. Learn how to minimize the cost function.
      4. Understand how derivatives of the cost are used to update parameters.
    2. Overview of the Problem Set
      1. Common steps for pre-processing a new dataset are:
        1. Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)
        2. Reshape the datasets such that each example is now a vector of size (num_px * num_px * 3, 1)
        3. Standardize the data
        4. (see the linked article for details)
    3. General Architecture of the Learning Algorithm
      1. Steps:
        1. Initialize the parameters of the model
        2. Learn the parameters for the model by minimizing the cost
        3. Use the learned parameters to make predictions
        4. Analyse the results and conclude
      2. Algorithm modules (the formulas these modules implement are recapped right after this list):
        1. Initialize parameters with zeros - initialize_with_zeros()
        2. Loop:
          1. Calculate current loss (forward propagation) - propagate()
          2. Calculate current gradient (backward propagation) - propagate()
          3. Update parameters (gradient descent) - optimize()
        3. Integrate them into one function - model()
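For reference, the cost that propagate() computes, its gradients, and the gradient-descent update applied in optimize() (notation as in the course; \alpha is the learning rate):

a^{(i)} = \sigma(w^T x^{(i)} + b), \qquad
J(w, b) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log a^{(i)} + (1 - y^{(i)}) \log(1 - a^{(i)}) \right]

dw = \frac{\partial J}{\partial w} = \frac{1}{m} X (A - Y)^T, \qquad
db = \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^{m} \left( a^{(i)} - y^{(i)} \right)

w := w - \alpha \, dw, \qquad b := b - \alpha \, db
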
# find the number of training examples
m_train = train_set_y.shape[1]
# flatten a matrix X of shape (a, b, c, d) into a matrix X_flatten of shape (b*c*d, a): X_flatten = X.reshape(X.shape[0], -1).T
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
# standardize the images: pixel values lie in [0, 255], so divide by 255
train_set_x = train_set_x_flatten / 255.
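# sanity check (my addition, illustrative): after flattening and standardizing,
# each column of train_set_x should be one example of length num_px*num_px*3
assert train_set_x.shape == (train_set_x_orig.shape[1] * train_set_x_orig.shape[2] * train_set_x_orig.shape[3], m_train)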
def initialize_with_zeros(dim):
	w = np.zeros(shape = (dim,1))
	b = 0
	return w, b
dim = 2
w, b = initialize_with_zeros(dim)
def propagate(w, b, X, Y):
	m = X.shape[1]
	# forward propagation: activations and cost
	A = sigmoid(np.dot(w.T, X) + b)
	cost = (-1 / m) * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))
	# backward propagation: gradients of the cost w.r.t. w and b
	dw = (1 / m) * np.dot(X, (A - Y).T)
	db = (1 / m) * np.sum(A - Y)
	grads = {"dw": dw, "db": db}

	return grads, cost
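
A quick smoke test of propagate() with toy values (the numbers are illustrative):

w = np.array([[1.], [2.]])
b = 2.
X = np.array([[1., 2., -1.], [3., 4., -3.2]])
Y = np.array([[1, 0, 1]])
grads, cost = propagate(w, b, X, Y)
print(grads["dw"], grads["db"], cost)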

def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost=False):
	costs = []
	for i in range(num_iterations):
		# forward and backward propagation for the current parameters
		grads, cost = propagate(w, b, X, Y)
		dw = grads["dw"]
		db = grads["db"]
		# gradient-descent update
		w = w - learning_rate * dw
		b = b - learning_rate * db
		# Record the cost every 100 iterations
		if i % 100 == 0:
			costs.append(cost)
		# Print the cost every 100 training iterations
		if print_cost and i % 100 == 0:
			print("Cost after iteration %i: %f" % (i, cost))

	params = {"w": w, "b": b}
	grads = {"dw": dw, "db": db}

	return params, grads, costs
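
The notes stop at optimize(), but the module list above also names model(). A minimal sketch of model() and the predict() helper it needs, assuming the functions defined above (the 0.5 threshold and the default hyperparameters are conventional choices, not taken from these notes):

def predict(w, b, X):
	# label an example 1 if its predicted probability exceeds 0.5, else 0
	A = sigmoid(np.dot(w.T, X) + b)
	return (A > 0.5).astype(float)

def model(X_train, Y_train, X_test, Y_test, num_iterations=2000, learning_rate=0.5, print_cost=False):
	# tie the modules together: initialize -> optimize -> predict
	w, b = initialize_with_zeros(X_train.shape[0])
	params, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)
	w, b = params["w"], params["b"]
	Y_prediction_train = predict(w, b, X_train)
	Y_prediction_test = predict(w, b, X_test)
	print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
	print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))
	return {"costs": costs, "w": w, "b": b,
	        "Y_prediction_train": Y_prediction_train,
	        "Y_prediction_test": Y_prediction_test}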