Andrew Ng's Machine Learning: Submitting the Python Programming Assignments, Plus Week 2 Code

This post covers two things:

1. How to submit the Python version of the assignments and get your score

2. Python code for the week 2 programming assignment: linear regression

Part One: How to submit the Python assignments and get your score

Andrew Ng's machine learning course is a bit dated now; back then Professor Ng chose Octave/MATLAB for the programming assignments. I did try installing Octave, but the syntax was unfamiliar and I never figured out the submission process either. Fortunately, Gerges on GitHub has built a Python version of the assignments that lets you submit for grading by running code directly in a Jupyter notebook.

Link:

GitHub - dibgerge/ml-coursera-python-assignments: Python assignments for the machine learning class by Andrew Ng on Coursera, with complete submission-for-grading capability and rewritten instructions.
https://github.com/dibgerge/ml-coursera-python-assignments

Take week 2 as an example:

The week 2 exercises come in two parts: the required exercises, which count toward your score, and optional extras, which do not.

Gerges has already set everything up for us; all we have to do is write our code in the YOUR CODE sections, run grader.grade(), and enter an email and token. Take the warm-up exercise: there we just change A = [] to A = np.eye(5).

Then run the grading cell (no need to write it yourself; it is already in the Jupyter notebook).

After that cell runs, it prompts for the email address bound to your Coursera account and a token.

The token lives on the Coursera assignment page: click "Generate new token" and one is created automatically, but each token is only valid for 30 minutes. (You can write all the solutions first, finish every problem, and then run each part's grader.grade() at the end.)

Once it finishes, you see your score immediately:

Each part you submit updates the running total. I had already finished the whole assignment when writing this tutorial, which is why the score shows 100.
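To make the flow concrete, here is a minimal sketch of a complete submission, modeled on the week 2 notebook in the dibgerge repository (the grader[1] part indexing and the utils.Grader class are taken from that repo; treat the exact API as an assumption if your copy differs):

import numpy as np
import utils  # ships with each exercise folder in the repository

def warmUpExercise():
    # solution to part 1: return the 5x5 identity matrix
    return np.eye(5)

grader = utils.Grader()       # grader object for this exercise
grader[1] = warmUpExercise    # register the solution as part 1
grader.grade()                # prompts for your Coursera email and token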

Part Two: Python code for the week 2 programming assignment (linear regression)

First, the imports:

# used for manipulating directory paths
import os

# Scientific and vector computation for python
import numpy as np

# Plotting library
from matplotlib import pyplot
from mpl_toolkits.mplot3d import Axes3D  # needed to plot 3-D surfaces

# library written for this exercise providing additional functions for assignment submission, and others
import utils 

# define the submission/grader object for this exercise
grader = utils.Grader()

# tells matplotlib to embed plots within the notebook
%matplotlib inline

1. Exercise 1: Warm-up exercise

Task: return the 5x5 identity matrix.

Solution: NumPy's eye() function does exactly this.

def warmUpExercise():
    """
    Example function in Python which computes the identity matrix.
    
    Returns
    -------
    A : array_like
        The 5x5 identity matrix.
    
    Instructions
    ------------
    Return the 5x5 identity matrix.
    """    
    # ======== YOUR CODE HERE ======
    A = np.eye(5)   # modify this line
    
    # ==============================
    return A

2. Exercise 2: Computing the cost

Task: complete the computeCost function.
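For reference, this is the standard linear regression cost from the lectures, where $h_\theta(x) = \theta^T x$ is the hypothesis:

$$J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\left(h_\theta\big(x^{(i)}\big) - y^{(i)}\right)^2$$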

 

# load the data
data = np.loadtxt(os.path.join('Data', 'ex1data1.txt'), delimiter=',')
X, y = data[:, 0], data[:, 1]  # slice into features and targets
m = y.size   # number of training examples

# visualize the data
def plotData(x, y):
    """
    Plots the data points x and y into a new figure. Plots the data 
    points and gives the figure axes labels of population and profit.
    
    Parameters
    ----------
    x : array_like
        Data point values for x-axis.

    y : array_like
        Data point values for y-axis. Note x and y should have the same size.
    
    Instructions
    ------------
    Plot the training data into a figure using the "figure" and "plot"
    functions. Set the axes labels using the "xlabel" and "ylabel" functions.
    Assume the population and revenue data have been passed in as the x
    and y arguments of this function.    
    
    Hint
    ----
    You can use the 'ro' option with plot to have the markers
    appear as red circles. Furthermore, you can make the markers larger by
    using plot(..., 'ro', ms=10), where `ms` refers to marker size. You 
    can also set the marker edge color using the `mec` property.
    """
    fig = pyplot.figure()  # open a new figure
    
    # ====================== YOUR CODE HERE ======================= 
    pyplot.plot(x, y, 'ro', ms=10, mec='k')
    pyplot.ylabel('Profit in $10,000')
    pyplot.xlabel('Population of City in 10,000s')

    # =============================================================

plotData(X, y)

# build the computeCost function
# add a column of ones to X (as Professor Ng explains in the course, this
# acts as the bias/intercept term: y = w0*1 + w1*x)
X = np.stack([np.ones(m), X], axis=1)
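Just to see what that stacking produced (a quick sanity check of my own, not part of the graded code):

print(X.shape)   # (m, 2): column 0 is all ones (the bias feature), column 1 is the population data
print(X[:3, :])  # first three rows, each beginning with 1.0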

def computeCost(X, y, theta):
    """
    Compute cost for linear regression. Computes the cost of using theta as the
    parameter for linear regression to fit the data points in X and y.
    
    Parameters
    ----------
    X : array_like
        The input dataset of shape (m x n+1), where m is the number of examples,
        and n is the number of features. We assume a vector of one's already 
        appended to the features so we have n+1 columns.
    
    y : array_like
        The values of the function at each data point. This is a vector of
        shape (m, ).
    
    theta : array_like
        The parameters for the regression function. This is a vector of 
        shape (n+1, ).
    
    Returns
    -------
    J : float
        The value of the regression cost function.
    
    Instructions
    ------------
    Compute the cost of a particular choice of theta. 
    You should set J to the cost.
    """
    
    # initialize some useful values
    m = y.size  # number of training examples
    
    # You need to return the following variables correctly
    J = 0
    
    # ====================== YOUR CODE HERE =====================
    h = np.dot(X, theta)  # hypothesis: predictions for all m examples
    
    J = (1 / (2 * m)) * np.sum(np.square(h - y))
    
    # ===========================================================
    return J
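A quick sanity check before moving on; the expected values below are the ones quoted in the course notebook:

J = computeCost(X, y, theta=np.array([0.0, 0.0]))
print('With theta = [0, 0], cost computed = {:.2f}'.format(J))   # expected: about 32.07

J = computeCost(X, y, theta=np.array([-1.0, 2.0]))
print('With theta = [-1, 2], cost computed = {:.2f}'.format(J))  # expected: about 54.24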

3. Exercise 3: Gradient descent
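The function applies the batch gradient descent update from the lectures, simultaneously for every parameter $\theta_j$:

$$\theta_j := \theta_j - \alpha\,\frac{1}{m}\sum_{i=1}^{m}\left(h_\theta\big(x^{(i)}\big) - y^{(i)}\right)x_j^{(i)}$$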

def gradientDescent(X, y, theta, alpha, num_iters):
    """
    Performs gradient descent to learn `theta`. Updates theta by taking `num_iters`
    gradient steps with learning rate `alpha`.
    
    Parameters
    ----------
    X : array_like
        The input dataset of shape (m x n+1).
    
    y : array_like
        Value at given features. A vector of shape (m, ).
    
    theta : array_like
        Initial values for the linear regression parameters. 
        A vector of shape (n+1, ).
    
    alpha : float
        The learning rate.
    
    num_iters : int
        The number of iterations for gradient descent. 
    
    Returns
    -------
    theta : array_like
        The learned linear regression parameters. A vector of shape (n+1, ).
    
    J_history : list
        A python list for the values of the cost function after each iteration.
    
    Instructions
    ------------
    Perform a single gradient step on the parameter vector theta.

    While debugging, it can be useful to print out the values of 
    the cost function (computeCost) and gradient here.
    """
    # Initialize some useful values
    m = y.shape[0]  # number of training examples
    
    # make a copy of theta, to avoid changing the original array, since numpy arrays
    # are passed by reference to functions
    theta = theta.copy()
    
    J_history = [] # Use a python list to save cost in every iteration
    
    for i in range(num_iters):
        # ==================== YOUR CODE HERE =================================
        theta = theta - (alpha / m) * (np.dot(X, theta) - y).dot(X)

        # =====================================================================
        
        # save the cost J in every iteration
        J_history.append(computeCost(X, y, theta))
    
    return theta, J_history


# initialize theta
theta = np.zeros(2)

# initialize the number of iterations and the learning rate
iterations = 1500
alpha = 0.01

theta, J_history = gradientDescent(X, y, theta, alpha, iterations)
print('Theta found by gradient descent: {:.4f}, {:.4f}'.format(*theta))
print('Expected theta values (approximately): [-3.6303, 1.1664]')
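Before plotting the fit, it is worth confirming that the cost actually decreased over the iterations (my own addition, not part of the graded exercise):

pyplot.figure()
pyplot.plot(np.arange(1, len(J_history) + 1), J_history, '-')
pyplot.xlabel('Iteration')
pyplot.ylabel('Cost J')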

# visualize the fit
plotData(X[:, 1], y)
pyplot.plot(X[:, 1], np.dot(X, theta), '-')
pyplot.legend(['Training data', 'Linear regression']);
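The notebook also asks you to predict profits for populations of 35,000 and 70,000; the expected outputs below are quoted from the course materials:

predict1 = np.dot([1, 3.5], theta)
print('For population = 35,000, we predict a profit of {:.2f}'.format(predict1 * 10000))  # about 4519.77

predict2 = np.dot([1, 7], theta)
print('For population = 70,000, we predict a profit of {:.2f}'.format(predict2 * 10000))  # about 45342.45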
