Machine Learning - The Gradient Descent Algorithm

# In[1]:


import numpy as np
import matplotlib.pyplot as plt


# In[2]:


# 141 evenly spaced points on [-1, 6], covering the region around the minimum
plot_x = np.linspace(-1, 6, 141)
plot_x


# In[3]:


# a parabola with its minimum of -1 at x = 2.5
plot_y = (plot_x - 2.5) ** 2 - 1
plot_y


# In[4]:


plt.plot(plot_x, plot_y)
plt.show()


# In[5]:


def dJ(theta):
    """Derivative of the cost function J."""
    return 2 * (theta - 2.5)


# In[6]:


def J(theta):
    """Cost function: minimum of -1 at theta = 2.5."""
    return (theta - 2.5) ** 2 - 1


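As a sanity check (my addition, not part of the original notebook), the analytic derivative can be compared against a central finite difference; `numerical_dJ` is a hypothetical helper introduced here for the comparison:

```python
def J(theta):
    return (theta - 2.5) ** 2 - 1

def dJ(theta):
    return 2 * (theta - 2.5)

def numerical_dJ(theta, h=1e-5):
    # central difference: (J(theta+h) - J(theta-h)) / (2h) approximates J'(theta)
    return (J(theta + h) - J(theta - h)) / (2 * h)

for t in [0.0, 1.0, 2.5, 4.0]:
    assert abs(dJ(t) - numerical_dJ(t)) < 1e-6
```

For a quadratic J the central difference is exact up to floating-point rounding, so the two agree to many digits.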
# In[7]:


eta = 0.1        # learning rate
epsilon = 1e-8   # stop once the change in J falls below this threshold
theta = 0.0
while True:
    gradient = dJ(theta)
    last_theta = theta
    theta = theta - eta * gradient
    if abs(J(theta) - J(last_theta)) < epsilon:
        break
print(theta, J(theta))
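Because J is quadratic, this particular iteration even has a closed form (a side note I'm adding, not from the original): each update multiplies the error theta - 2.5 by (1 - 2*eta), so after k steps theta_k = 2.5 + (1 - 2*eta)**k * (theta_0 - 2.5). The loop reproduces this exactly:

```python
eta = 0.1
theta = 0.0
for k in range(1, 51):
    theta = theta - eta * 2 * (theta - 2.5)          # one gradient-descent step
    closed_form = 2.5 + (1 - 2 * eta) ** k * (0.0 - 2.5)
    assert abs(theta - closed_form) < 1e-9           # loop matches the formula
print(theta)  # close to the minimizer 2.5 after 50 steps
```

Since |1 - 2*eta| = 0.8 < 1, the error shrinks geometrically toward zero.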

# In[8]:


eta = 0.1
epsilon = 1e-8
theta = 0.0
theta_history = [theta]   # record every value of theta for plotting
while True:
    gradient = dJ(theta)
    last_theta = theta
    theta = theta - eta * gradient
    theta_history.append(theta)
    if abs(J(theta) - J(last_theta)) < epsilon:
        break
plt.plot(plot_x, plot_y, color='b')
plt.plot(np.array(theta_history), J(np.array(theta_history)), color='r', marker='+')
plt.show()
print(theta, J(theta))


# In[9]:


eta = 0.01   # a smaller learning rate: many more, smaller steps to converge
epsilon = 1e-8
theta = 0.0
theta_history = [theta]
while True:
    gradient = dJ(theta)
    last_theta = theta
    theta = theta - eta * gradient
    theta_history.append(theta)
    if abs(J(theta) - J(last_theta)) < epsilon:
        break
plt.plot(plot_x, plot_y, color='b')
plt.plot(np.array(theta_history), J(np.array(theta_history)), color='r', marker='+')
plt.show()
print(theta, J(theta))
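As a natural next step (my addition; the name `gradient_descent` and the `n_iters` cap are assumptions of this sketch), the loop can be wrapped in a reusable function. An iteration cap also guards against a learning rate that is too large: on this J, each step multiplies the error theta - 2.5 by (1 - 2*eta), so any eta >= 1 makes the iteration diverge instead of settling.

```python
def dJ(theta):
    return 2 * (theta - 2.5)

def J(theta):
    return (theta - 2.5) ** 2 - 1

def gradient_descent(initial_theta, eta, epsilon=1e-8, n_iters=10000):
    # n_iters caps the loop so a too-large eta cannot run forever
    theta = initial_theta
    for _ in range(n_iters):
        last_theta = theta
        theta = theta - eta * dJ(theta)
        if abs(J(theta) - J(last_theta)) < epsilon:
            break
    return theta

print(gradient_descent(0.0, eta=0.1))   # converges near the minimum at 2.5

# eta = 1.1 multiplies the error by 1 - 2*1.1 = -1.2 each step, so ten
# capped iterations push theta further from 2.5 instead of closer
print(gradient_descent(0.0, eta=1.1, n_iters=10))
```

The capped run with eta = 1.1 ends far from 2.5, which is exactly the overshooting behavior the small-eta plots above avoid.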


