Stanford ML Course, Rewritten in Python (Week 2: Assignment ex1, Part 1)

This post completes the first part of assignment ex1 in Python. The requirements for the first part are as follows:

In this part of this exercise, you will implement linear regression with one variable to predict profits for a food truck. Suppose you are the CEO of a restaurant franchise and are considering different cities for opening a new outlet. The chain already has trucks in various cities and you have data for profits and populations from the cities.
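For reference, the quantities implemented below are the course's single-variable hypothesis, squared-error cost, and batch gradient descent update, restated here in the course's notation:

$$h_\theta(x) = \theta_0 + \theta_1 x$$

$$J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$$

$$\theta_j := \theta_j - \alpha \, \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)} \qquad (\text{simultaneously for } j = 0, 1)$$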

The code in ex1.py is divided into four parts:

  • Part 1: Basic Function
  • Part 2: Plotting
  • Part 3: Cost and Gradient descent
  • Part 4: Visualizing J(theta_0, theta_1)

The rewritten Python code is as follows:

# -*- coding: utf-8 -*-
"""
Created on Sat Nov 16 10:40:51 2019

@author: Lonely_hanhan
"""
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

''' ==================== Part 1: Basic Function ===================='''
print('Running warmUpExercise ... \n')
print('5x5 Identity Matrix: \n')

def warmUpExercise():
    # Return a 5x5 identity matrix; np.eye(5) is the NumPy equivalent of
    # MATLAB's eye(5) in the original warmUpExercise.m
    A = np.eye(5)
    return A

print(warmUpExercise())

''' ==================== Part 2: Plotting ===================='''
 
print('Plotting Data ...\n')
# Use a raw string so the backslashes in the Windows path are not read as escapes.
# Each row of ex1data1.txt is one training example: city population (in 10,000s)
# and food-truck profit in that city (in $10,000s), comma-separated.
Data = np.loadtxt(r'D:\exercise\machine-learning-ex1\machine-learning-ex1\ex1\ex1data1.txt', delimiter=',')
X = Data[:, 0]
y = Data[:, 1]
m = len(y)  # number of training examples
X = X.reshape((m, 1))
y = y.reshape((m, 1))

# Plot Data
def PlotData(x, y):
    # linewidth and markersize take numbers, not strings
    plt.plot(x, y, color='red', linewidth=2, marker='x', markersize=10, linestyle='None')
    plt.xlabel('Population of City in 10,000s')
    plt.ylabel('Profit in $10,000s')


PlotData(X, y)

'''=================== Part 3: Cost and Gradient descent ==================='''

x_0 = np.ones((m, 1))
X = np.hstack((x_0, X))  # Add a column of ones to X (the intercept term)
theta = np.zeros((1, 2))  # initialize fitting parameters

# Some gradient descent settings
iterations = 1500
alpha = 0.01

def h_func(theta, x):
    # Vectorized hypothesis: h_theta(X) = X . theta^T, returned as an (m, 1) column
    return np.dot(x, theta.T).reshape((x.shape[0], 1))

def computeCost(x, y, theta):
    # Squared-error cost: J(theta) = (1 / (2m)) * sum((h_theta(x) - y)^2)
    m = len(y)
    J = np.sum(np.square(h_func(theta, x) - y)) / (2 * m)
    return J

print('\nTesting the cost function ...\n')

J = computeCost(X, y, theta)

print('\nWith theta = [0, 0]\nCost computed = \n', J)

theta1 = np.array([[-1, 2]])  # second test point, shaped (1, 2) like theta

J = computeCost(X, y, theta1)

print('\nWith theta = [-1, 2]\nCost computed = \n', J)
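# For reference, the original ex1.m prints the expected values alongside:
# approximately 32.07 for theta = [0, 0] and approximately 54.24 for theta = [-1, 2].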

print('\nRunning Gradient Descent ...\n')
#run gradient descent
def gradientDescent(x, y, theta, alpha, iterations):
    m = len(y)
    J_history = np.zeros((iterations, 1))
    for it in range(iterations):
        # Vectorized batch update: theta := theta - (alpha / m) * ((h_theta(X) - y)^T . X)
        theta = theta - alpha * np.dot((h_func(theta, x) - y).T, x) / m
        J_history[it] = computeCost(x, y, theta)  # record the cost at each iteration
    return theta, J_history
        
theta2, J_history = gradientDescent(X, y, theta, alpha, iterations)

# print theta to screen
print('Theta found by gradient descent:\n')
print(theta2)  # print() does not support MATLAB-style '%f' format strings
print('Expected theta values (approx)\n')
print(' -3.6303\n  1.1664\n\n')
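# The original ex1.m goes on to predict profits for populations of 35,000 and
# 70,000 using the learned theta; a sketch of the same step for this port
# (the names predict1/predict2 follow ex1.m):
predict1 = np.dot(np.array([[1, 3.5]]), theta2.T)
print('For population = 35,000, we predict a profit of', predict1[0, 0] * 10000)
predict2 = np.dot(np.array([[1, 7]]), theta2.T)
print('For population = 70,000, we predict a profit of', predict2[0, 0] * 10000)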

#PlotData(X, y)
plt.plot(X[:, 1], np.dot(X, (theta2.T)), color='green', linestyle='-')
plt.show()
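# Optional check, not in the original ex1.m script: plot the recorded cost
# history to confirm that gradient descent converged.
plt.figure()
plt.plot(np.arange(iterations), J_history)
plt.xlabel('Iteration')
plt.ylabel('Cost J')
plt.show()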

'''=================== Part 4: Visualizing J(theta_0, theta_1) ==================='''

print('Visualizing J(theta_0, theta_1) ...\n')
# Grid over which we will calculate J
theta0_vals = np.linspace(-10, 10, 100)
theta1_vals = np.linspace(-1, 4, 100)

# initialize J_vals to a matrix of 0's
J_vals = np.zeros((len(theta0_vals), len(theta1_vals)))

# Fill out J_vals
for i in range(len(theta0_vals)):
    for j in range(len(theta1_vals)):
        t = np.array([theta0_vals[i], theta1_vals[j]])
        J_vals[i, j] = computeCost(X, y, t)

# J_vals[i, j] holds J(theta0_vals[i], theta1_vals[j]), but matplotlib expects
# Z[row, col] to correspond to (x[col], y[row]), so J_vals must be transposed
# before plotting (ex1.m transposes J_vals before calling surf for the same reason).
plt.figure()
lvls = np.logspace(-2, 3, 20)
plt.contour(theta0_vals, theta1_vals, J_vals.T, levels=lvls)
plt.xlabel('theta_0')
plt.ylabel('theta_1')
plt.show()

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
T0, T1 = np.meshgrid(theta0_vals, theta1_vals)  # plot_surface needs 2-D coordinate grids
ax.plot_surface(T0, T1, J_vals.T, rstride=1, cstride=1, cmap='rainbow')
ax.set_xlabel('theta_0')
ax.set_ylabel('theta_1')
plt.show()
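
As an extra sanity check (not part of this section of ex1; the assignment introduces the normal equation later on), the closed-form least-squares solution can be compared against the gradient-descent result. A minimal sketch:

theta_ne = np.linalg.solve(np.dot(X.T, X), np.dot(X.T, y))  # normal equation: (X^T X) theta = X^T y
print(theta_ne.ravel())  # should be close to the expected [-3.6303, 1.1664]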

Results of running the code: (screenshots omitted)

Question:

For Part 4, the 2-D contour I first produced did not match the corresponding figure in the course PDF: the axes appeared flipped. The cause is the indexing convention: J_vals[i, j] stores J(theta0_vals[i], theta1_vals[j]), while plt.contour expects Z[row, col] to correspond to (x[col], y[row]). The original ex1.m transposes J_vals before calling surf for exactly this reason, and the .T in the code above applies the same fix.

The 3-D surface, on the other hand, roughly matched the figure in the course PDF (only a rough comparison was possible, since the figure cannot be rotated in Anaconda).
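To check the contour convention independently of the assignment data, here is a minimal self-contained sketch; the function x + 10*y and the grid sizes are arbitrary illustrative choices:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 1, 50)
y = np.linspace(0, 2, 60)
Z = np.zeros((len(x), len(y)))  # Z[i, j] = f(x[i], y[j]), same layout as J_vals above
for i in range(len(x)):
    for j in range(len(y)):
        Z[i, j] = x[i] + 10 * y[j]

# With 1-D x and y, plt.contour expects Z of shape (len(y), len(x)),
# i.e. Z[row, col] pairs with (x[col], y[row]) -- hence the transpose.
plt.contour(x, y, Z.T)
plt.xlabel('x')
plt.ylabel('y')
plt.show()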

I have just started writing blog posts to record my learning progress and deepen my understanding of the material. If anything here is wrong or could be improved, please point it out.
