Introduction to the Python Language
Python is an interpreted language with the following features:
- No compilation step, unlike C/C++. Because C/C++ source code is compiled down to machine code, it has the highest execution efficiency, but modifying a program is relatively cumbersome (for example, a CMakeLists.txt file has to be maintained). Python, by contrast, needs no compilation and can be run directly, which makes modifying and debugging programs convenient, although execution is comparatively slower.
- Pseudo-multithreading: because of the Global Interpreter Lock (GIL), only one thread executes Python bytecode at a time, so threads do not provide true CPU parallelism.
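Even so, the `threading` module is perfectly usable for coordination. A minimal sketch (names are my own) of two threads sharing a counter, with a lock making each increment atomic:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # the lock makes read-modify-write atomic
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000
```

Both threads run, but the GIL means they take turns executing bytecode rather than running in parallel on two cores.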
- Python, like C++, supports object-oriented programming.
  Object-oriented programming: variables and functions that share the same properties are bundled together (encapsulated), which matches how people naturally think about problems; in large projects it also helps with division of labour, collaboration, and code management.
  Procedural programming: focuses on the step-by-step implementation of the code. It is currently most common in low-level driver development, mainly in C. The syntax is small, which makes it suitable for beginners.
- A rich package ecosystem. Over the years, mature packages have appeared for almost every field, which greatly lowers the difficulty of development. As an interpreted language, Python also makes it easy to package your own code for management and reuse.
- A wide range of application areas: machine vision, machine learning, scientific computing, web crawling, text processing, UI development, UAV control, and more.
Python Programming Environments
Python offers a good programming experience on both Windows and Ubuntu. PyCharm is recommended as the IDE.
Python Basics and Usage
- Comments
  Use # for single-line comments and triple quotes (''' This is a comment ''') for multi-line comments. (Note: in PyCharm, Ctrl+/ comments out the selected lines, and pressing Ctrl+/ again removes the comment.)
- Naming rules
  Python is case-sensitive.
  Variable names: camel case is used here — first letter lowercase, each subsequent word capitalized, e.g. "printFunction".
  Class names should start with a capital letter.
  Python reserves a set of keywords (33 in Python 3.6; the exact count varies by version) that cannot be reused as names: def, class, if, for, while, not, and, pass, etc.
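The standard-library `keyword` module lists the reserved words for whatever interpreter you are running, so the exact set can always be checked directly:

```python
import keyword

# kwlist holds every reserved word for this interpreter version
print(len(keyword.kwlist))
print(keyword.kwlist)
print(keyword.iskeyword("pass"))   # True
print(keyword.iskeyword("spam"))   # False
```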
- Variable types in Python
  Variables can be assigned directly without declaring a type (str, int, float, bool); type(valueName) reports the type. The boolean literals are True and False (C++ uses lowercase true and false).

```python
a_float = 1.2
print(type(a_float))   # <class 'float'>
b_str = "hello"
print(type(b_str))     # <class 'str'>
a_int = 1
print(type(a_int))     # <class 'int'>
```

  Some conversions between types happen implicitly (e.g. int + float); otherwise use explicit casts: str(value), int(a_str), float(a_str).

```python
print(a_int + a_float)               # 2.2
print(str(a_int + a_float) + b_str)  # 2.2hello
```

  Python has a bool type with the values True and False. In a condition such as `if a:`, any non-zero (non-empty) value counts as True, and zero or empty values count as False.
- Standard input and output in Python

```python
print("hello")            # print appends a newline by default
print("hello", end="")    # pass end="" to suppress the newline
a, b = 1.0, 2.0
print("a = %f \t b=%f" % (a, b))   # C-style formatting
print(f"a = {a}, b={b}")           # f-string formatting
name = input()
print("hello" + "\t" + name)
```

  Note: input() always returns a string.
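Since input() always returns a string, numeric input must be converted explicitly. A small sketch (the helper name `read_number` is my own; it takes the string that input() would have returned):

```python
def read_number(text):
    # input() yields a string such as "3.14"; convert it explicitly
    return float(text)

value = read_number("3.14")
print(value, type(value))   # 3.14 <class 'float'>
```

In a real program you would write `value = float(input())`.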
Writing Python Programs
Programs are built from three control structures: sequence, branching, and loops.
- In C, the program entry point is

```c
int main(int argc, char** argv)
{
    /* main program to be programmed */
    return 1;
}
```

  Python uses script-style programming: statements execute from top to bottom, and function and class definitions are skipped until they are called. The main entry point is conventionally declared with an `if __name__ == '__main__':` guard.
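A minimal sketch of that convention (the function name `main` is my own choice; only the guard itself is the convention):

```python
def main():
    # code here runs only when the file is executed directly,
    # not when the file is imported as a module
    print("running as a script")
    return 0

if __name__ == '__main__':
    main()
```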
- Execution flow comes in three forms — sequence, branching, and loops — introduced in turn below.
- Branching uses if / elif / else. Note the logical operators not, and, or.

```python
if True:
    pass   # program
else:
    pass   # program

if A and B:    # similarly: if A or B
    pass
if not A:
    pass   # program
else:
    pass   # program
```
- Programming exercise: read two numbers from the keyboard, compare them, and print the larger one.
  Read the first number
  Read the second number
  If the first number is greater than the second:
      print the first number
  Else if the first number equals the second:
      report invalid input
  Else:
      print the second number
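The pseudocode above can be sketched as follows (the helper name `largerOf` is my own; equal inputs are treated as invalid, per the exercise):

```python
def largerOf(first, second):
    """Return the larger number, or None when the inputs are equal."""
    if first > second:
        return first
    elif first == second:
        return None   # equal inputs count as invalid input here
    else:
        return second

# In the exercise, the two values would come from input(), e.g.
#   a = float(input("Enter the first number: "))
print(largerOf(3, 5))   # 5
print(largerOf(4, 4))   # None
```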
- Loop structures: while and for. Use break to stop a loop, continue to jump to the next iteration, and pass as a do-nothing placeholder.
  In C/C++:

```c
for (int i = 0; i < 6; i++)
{
    // program...
}

int i = 0;
while (i < 10)
{
    i++;
    // program;
}
```

  The Python equivalent of the for loop:
```python
for i in range(10):
print(i)
```
and the corresponding while loop (an infinite loop in this example):
```python
import time

while True:
    print("hello")
    time.sleep(1)
```
Common range()-based loop patterns are summarized below; the first four rows are the ones to master:
Sequence | Python | C/C++/Java |
---|---|---|
[0,1,2,…,9] | range(10) | for (i=0; i<=9; i++) |
[0,1,2,…,n-1] | range(n) | for (i=0; i<n; i++) |
[n-1,…,1,0] | range(n-1, -1, -1) | for (i=n-1; i>=0; i--) |
[1,2,…,n] | range(1, n+1) | for (i=1; i<=n; i++) |
[1,2,3,4,… | itertools.count(1) (needs import itertools) | for (i=1; ; i++) |
Odd numbers below n | range(1, n, 2) | for (i=1; i<n; i=i+2) |
All odd numbers | itertools.count(1, 2) (needs import itertools) | for (i=1; ; i=i+2) |
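The table rows can be checked directly; `itertools.islice` is used here only to take a finite prefix of the infinite counters:

```python
import itertools

n = 6
print(list(range(10)))              # 0 through 9
print(list(range(n)))               # 0 through n-1
print(list(range(n - 1, -1, -1)))   # n-1 down to 0
print(list(range(1, n + 1)))        # 1 through n
print(list(range(1, n, 2)))         # odd numbers below n
print(list(itertools.islice(itertools.count(1), 5)))     # first 5 of 1,2,3,...
print(list(itertools.islice(itertools.count(1, 2), 5)))  # first 5 odd numbers
```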
Guess the number: the program picks a random integer between 1 and 100; the user repeatedly enters a number and is told whether the guess is too large or too small, until the number is found.

```python
import random

randNum = random.randint(1, 100)
while True:
    number = float(input("Enter a number between 1 and 100: "))
    if number > randNum:
        print("Too large")
    elif number < randNum:
        print("Too small")
    else:
        print("Correct!")
        break
```
Print the integers between 1 and 100 that are divisible by 3:

```python
for num in range(1, 101):
    if num % 3 == 0:
        print(num, end='\t')
```
Defining Python Functions
- Syntax: def functionName(parameters): followed by the function body and an optional return.

```python
def fool(a, b):
    return a + b

print(fool(1, 2))   # 3
```

- A function can return several values; the result is a tuple that can be indexed like a sequence, or unpacked directly into multiple variables.

```python
def fool(a, b):
    return a + b, a

c, d = fool(1, 2)   # c = 3, d = 1
```

- Parameters may have default values, and arguments can be arbitrary objects, passed without any type annotation.
- Function names here follow camel case, like variable names.
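A short sketch of default parameter values (the function name `greet` is my own):

```python
def greet(name, greeting="hello"):
    # greeting falls back to "hello" unless the caller overrides it
    return f"{greeting}, {name}"

print(greet("world"))             # hello, world
print(greet("world", "goodbye"))  # goodbye, world
```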
Classes in Python
- Grouping different pieces of data into a single whole forms a class. Classes make data management easier, help structure programs, and are the essence of object-oriented programming. A class has its own attributes, externally exposed methods, and internal functions. C++ divides class members into public, protected, and private: public members form the external interface, while protected and private members generally cannot be accessed from outside the class. The same discipline is recommended when programming in Python.
- Syntax: class ClassName(ParentClass): followed by the class body.

```python
class Person():
    def __init__(self, name="liming", age=18, classname="classthree"):
        self.name = name
        self.age = age
        self.classname = classname

    def ShowName(self):
        return self.name
```
- Creating objects: person = Person() builds an object with the default arguments, since none are passed. Arguments can also be supplied explicitly, e.g. person1 = Person("xiaohau", 20, "classone").
- Functionality is usually exposed through the methods a class provides (a method is simply a function the class offers to the outside).
- Encapsulation: expose only the intended interface and keep the class's internal attributes hidden. Note, however, that Python still allows direct access to an object's attributes.
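Python marks an attribute as private by prefixing its name with two underscores; the interpreter then name-mangles it, which discourages (but does not fully prevent) outside access. A small sketch (the class name `Account` is my own):

```python
class Account:
    def __init__(self, balance):
        self.__balance = balance   # "private": stored as _Account__balance

    def getBalance(self):
        return self.__balance      # the intended access path

acct = Account(100)
print(acct.getBalance())          # 100
# print(acct.__balance)           # would raise AttributeError
print(acct._Account__balance)     # the mangled name is still reachable
```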
- Magic methods: functions inside a class whose names begin and end with double underscores. Python calls them automatically at specific moments. They are not obtained through inheritance; once a class is defined, Python's machinery attaches these special hooks. A user cannot invent new magic methods of their own: merely naming a function with leading and trailing double underscores does not make Python call it automatically — only the predefined hooks (such as __init__ and __str__) are special. Magic methods enable many convenient, customized behaviours.
For example:

```python
class Person():
    def __init__(self):
        self.name = 'lihua'
        self.age = 10

    def __str__(self):
        return f"name = {self.name}, age = {self.age}"

person = Person()
print(person)   # print() calls __str__ automatically
```
Related hooks include __getitem__, which implements indexing, and __len__, whose return value is what len(obj) reports.
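A minimal sketch of these two hooks (the class name `Sentence` is my own):

```python
class Sentence:
    def __init__(self, text):
        self.words = text.split()

    def __getitem__(self, index):
        return self.words[index]   # enables obj[index]

    def __len__(self):
        return len(self.words)     # enables len(obj)

s = Sentence("python magic methods are handy")
print(s[0])     # python
print(len(s))   # 5
```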
Exercise: create a Car class containing the car's name, length, and weight, and a method that displays these attributes.
Exercise: to add interactivity, games often define a character with attributes such as gender, name, and an initial HP. Assume each hit received reduces HP by 10, and the character dies when HP drops to 0 or below; each hit dealt adds 10 experience, and the character levels up once experience exceeds 50. Simulate a character {"Little Boy", male, 100} that receives 2 attacks and deals 5 hits.
Read two integers from the keyboard and call four functions to print their sum, difference, product, and quotient:

```python
def functionAdd(num1, num2):
    return num1 + num2

def functionSub(num1, num2):
    return num1 - num2

def functionMul(num1, num2):
    return num1 * num2

def functionDiv(num1, num2):
    if not num2:
        return 99999        # sentinel value flagging division by zero
    return num1 / num2

if __name__ == '__main__':
    num1 = int(input('Please input a number:'))
    num2 = int(input('Please input a number:'))
    print(functionAdd(num1, num2))
    print(functionSub(num1, num2))
    print(functionMul(num1, num2))
    divResult = functionDiv(num1, num2)
    if divResult == 99999:
        print('the second is zero...wrong input')
    else:
        print(divResult)
```
```python
# Car class
class Car():
    def __init__(self, kind='BMW', length=4.5, weight=1.6):
        self.kind = kind
        self.length = length
        self.__weight = weight   # private attribute, exposed via a method

    def showWeight(self):
        return self.__weight

BMW = Car()
print(BMW.kind)
print(BMW.showWeight())
print(BMW.length)
```
Exception Handling in Python
Python uses the keywords try, except, and finally: try wraps code that may fail; except says what to do when it fails; finally runs whether or not an error occurred.

```python
a = 1.0
b = float(input())
try:
    c = a / b
    print(f"c={c}")
except ZeroDivisionError:
    print("b is zero...")
finally:
    print("program end...")
```
Question: given c = list([1, 2, "h", "hello", 1.23]), do max(c), min(c), and len(c) still work?
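This can be checked directly: len() works on any list, but in Python 3 max() and min() need the elements to be mutually comparable, so mixing numbers and strings raises a TypeError:

```python
c = list([1, 2, "h", "hello", 1.23])
print(len(c))   # 5 -- length never needs to compare elements

try:
    print(max(c))
except TypeError as err:
    # numbers and strings cannot be ordered against each other
    print("max failed:", err)
```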
```python
a = list()
a.append(1)
a.append(2.3)
a.append("hello")
a.pop()          # removes the last element ("hello")
a.remove(2.3)    # removes the first matching value

a = list()
for i in range(10):
    a.append(i)
print(a[1])      # a single element
print(a[3:6])    # a slice: elements 3, 4, 5
```
Game design:

```python
class Game():
    def __init__(self, name, sex, hp, exp):
        self.name = name
        self.sex = sex
        self.hp = hp
        self.exp = exp

    def underAttack(self):          # one hit received: HP -10
        self.hp -= 10
        if self.hp <= 0:
            print('your hero is dead...')
        return self.hp

    def hitOther(self):             # one hit dealt: experience +10
        self.exp += 10
        if self.exp > 50:
            print('You leveled up!')
        return self.exp

if __name__ == '__main__':
    button = str(input('Start the game? y/n '))
    if button == 'y':
        player1 = Game(name='Little Boy', sex='male', hp=100, exp=0)
        for i in range(2):          # receives 2 attacks
            player1.underAttack()
        for i in range(5):          # deals 5 hits
            player1.hitOther()
        print('Remaining HP:')
        print(player1.hp)
```
A matplotlib plotting example:

```python
from matplotlib import pyplot as plt
import numpy as np

x = np.linspace(0, np.pi * 2, 50)
plt.plot(x, np.sin(x))
plt.show()
```
Basic pandas usage:

```python
import pandas as pd
import numpy as np

c = pd.Series([1, 2, 3, 4, 6, 6, 7])
print(c)
c = pd.Series([1, 2, 3, 4, np.nan, 6, 7])   # np.nan marks missing data
print(c)

d = pd.DataFrame(np.arange(12).reshape((3, 4)))
print(d)

df = pd.Series(['a', 1, 3, 5.0])   # mixed element types give dtype=object
print(df)
print(df.index)
print(df.values)

s = pd.Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e'])
print(s)

s3 = {'h': 1, 'b': 2, 'm': 3}
s4 = pd.Series(s3)    # a dict becomes a Series keyed by its keys
print(s4)

d = pd.read_csv('ab.csv', delimiter=',')
print(d)
d.loc[0, 'chinese'] = 60    # label-based assignment
print(d)
# print(d.loc[0, 'chinese'])
# print(d.iloc[1, :])
```
6 Plot y=|x|, y=log(x), y=eˣ, and y=sin(x), placing all four in one figure as a 2×2 grid of subplots.
```python
from matplotlib import pyplot as plt
import numpy as np

x = np.linspace(-5, 5, 50)

plt.subplot(221)             # 2x2 grid, first panel
plt.xlabel('x')
plt.ylabel('sinx')
plt.plot(x, np.sin(x), 'g')

plt.subplot(222)
plt.xlabel('x')
plt.ylabel('|x|')
plt.plot(x, np.abs(x), 'g')

plt.subplot(223)
plt.xlabel('x')
plt.ylabel('$e^x$')
plt.plot(x, np.e**x, 'g')

plt.subplot(224)
plt.xlabel('x')
plt.ylabel('logx')
x1 = np.linspace(0.1, 10, 100)   # log is undefined at 0, so start at 0.1
plt.plot(x1, np.log(x1), 'g')

plt.show()
```
7
Countdown game: count down from 60, decreasing by one each second. Pressing 'b' starts (and displays) the countdown; pressing 't' stops it.
```python
import time
import threading

begin = False

def numCounter():
    num = 60
    global begin
    while True:
        if begin:
            num = num - 1
            print(f'num={num}, begin={begin}')
            time.sleep(1)
            if num <= 0:
                num = 60      # restart from 60 after reaching zero

def keyDetect():
    # TODO: should be revised to use a threading lock
    global begin
    while True:
        key = input()
        if key == 'b':
            begin = True
        elif key == 't':
            begin = False
        else:
            print('wrong input..')

if __name__ == '__main__':
    t1 = threading.Thread(target=numCounter)
    t2 = threading.Thread(target=keyDetect)
    t1.daemon = False     # setDaemon() is deprecated; assign the attribute
    t2.daemon = False
    t1.start()
    t2.start()
```
9
Deep Learning
Linear model: the model is $\hat{y} = a x$ with $loss = (\hat{y} - y)^2$; the goal is to find a suitable parameter $a$ that minimizes the $loss$.
```python
import numpy as np
from matplotlib import pyplot as plt

data_x = [1, 2, 3]
data_y = [2, 4, 6]
loss_list = list()
a_list = list()
alpha = 0.01          # learning rate

def forward(x):
    return a * x      # uses the global parameter a

def lossFunction(x, y):
    y_pred = forward(x)
    loss = (y_pred - y) ** 2
    return loss

def predict(x, a_):
    return a_ * x

def gradient(a, x, y):
    # gradient-descent update: d(loss)/da = 2 * (a*x - y) * x
    a = a - alpha * 2 * (a * x - y) * x
    return a

if __name__ == '__main__':
    a = 0
    for epoch in range(1000):
        # alternative: grid search with `for a in np.arange(0, 4, 0.1):`
        sum_loss = 0
        for i in range(3):
            sum_loss += lossFunction(data_x[i], data_y[i])
            a = gradient(a, data_x[i], data_y[i])
        loss_list.append(sum_loss / 3)
        a_list.append(a)

    plt.subplot(211)
    plt.plot(a_list)
    plt.subplot(212)
    plt.plot(loss_list)
    plt.show()

    plt.figure()
    plt.plot(a_list, loss_list)
    plt.xlabel('a')
    plt.ylabel('loss')
    plt.show()

    min_value = min(loss_list)
    index_lossMin = loss_list.index(min_value)
    print(index_lossMin)
    proper_a = a_list[index_lossMin]
    print(proper_a)
    print("Please input the desired x:")
    desired_x = input()
    print(f"The predicted output of the linear model is {predict(float(desired_x), proper_a)}")
```
The same linear regression can be implemented with sklearn:

```python
import numpy as np
from sklearn import linear_model

lrm = linear_model.LinearRegression()
x_data = np.array([1, 2, 3])
y_data = np.array([2, 4, 6])
z_data = np.zeros([3, 2])
m_data = np.zeros([3, 2])
z_data[:, 0] = x_data      # targets: the pair (x, y)
z_data[:, 1] = y_data
m_data[:, 0] = x_data      # features: the pair (x, x)
m_data[:, 1] = x_data
lrm.fit(m_data, z_data)
print(lrm.predict([[4, 4]]))
```
Exercise:

```python
'''
Training set: x=1, y=6.8
              x=2, y=9.8
              x=3, y=13.2
              x=4, y=16.2
Test point:   x=5, y=?
'''
import numpy as np
from matplotlib import pyplot as plt

x_data = [1, 2, 3, 4]
y_data = [6.8, 9.8, 13.2, 16.2]
loss_list = list()
a_list = list()
b_list = list()

def forward(a, x, b):
    return a * x + b

def lossFunction(a, x, y, b):
    y_pred = forward(a, x, b)
    loss = (y_pred - y) ** 2
    return loss

if __name__ == '__main__':
    # brute-force grid search over the two parameters a and b
    for a in np.arange(0, 6, 0.1):
        for b in np.arange(0, 6, 0.1):
            sum_loss = 0
            for i in range(4):
                sum_loss += lossFunction(a, x_data[i], y_data[i], b)
            loss_list.append(sum_loss / 4)
            a_list.append(a)
            b_list.append(b)

    plt.plot(a_list, loss_list)
    plt.xlabel('a')
    plt.ylabel('loss')
    print(min(loss_list))
    loss_min_index = loss_list.index(min(loss_list))
    print(loss_min_index)
    a_wanted = a_list[loss_min_index]
    b_wanted = b_list[loss_min_index]
    print(f'a_wanted = {a_wanted}, b_wanted = {b_wanted}')
    # plt.show()
    print(forward(a_wanted, 5, b_wanted))
```
The code below visualizes how the fitted line changes for different values of $b$:

```python
import numpy as np
from matplotlib import pyplot as plt

def LinearFunction(x, a=3.2, b=3.4):
    return a * x + b

def LinearFunction2(x, a=3.2, b=3.5):
    return a * x + b

x_data = [1, 2, 3, 4]
y_data = [6.8, 9.8, 13.2, 16.2]
n_data = np.arange(5)
m_data = np.zeros([5, 1])
l_data = np.zeros([5, 1])
for i in range(5):
    m_data[i] = LinearFunction(n_data[i])
    l_data[i] = LinearFunction2(n_data[i])

plt.scatter(x_data, y_data)      # the training points
plt.plot(n_data, m_data, 'r')    # line with b=3.4
plt.plot(n_data, l_data, 'g')    # line with b=3.5
plt.show()
```
Rewritten with the gradient method:

```python
import time
import random
from matplotlib import pyplot as plt

x_data = [1, 2, 3]
y_data = [2, 4, 6]
loss_list = list()
alpha = 0.1    # learning rate

def forward(a, x):
    return a * x

def lossFunction(a, x, y):
    y_pred = forward(a, x)
    loss = (y - y_pred) ** 2
    return loss

def gradient(x, a, y):
    a = a - alpha * 2 * x * (x * a - y)
    return a

a_list = list()
epoch_list = list()

if __name__ == '__main__':
    a = random.randint(0, 10)   # random starting point
    for epoch in range(100):
        sum_loss = 0
        for i in range(3):
            sum_loss += lossFunction(a, x_data[i], y_data[i])
            a = gradient(x_data[i], a, y_data[i])
        loss_list.append(sum_loss / 3)
        a_list.append(a)
        epoch_list.append(epoch)
        print(f'epoch = {epoch}, a = {a}, loss = {sum_loss/3}')
        time.sleep(0.5)

    plt.plot(epoch_list, loss_list)
    plt.show()
```
10
Backpropagation
Review: the chain rule.
If $y=f(g(x))$, then $\frac{dy}{dx}=\frac{\partial{f}}{\partial{g}} \frac{\partial{g}}{\partial{x}}$.
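The chain rule can be sanity-checked numerically. Here $f(u)=u^2$ and $g(x)=\sin x$, so the analytic derivative of $f(g(x)) = \sin^2 x$ is $2\sin x \cos x$; a central finite difference should match it closely:

```python
import math

def composed(x):
    return math.sin(x) ** 2          # f(g(x)) with f(u)=u^2, g(x)=sin(x)

def analytic_grad(x):
    return 2 * math.sin(x) * math.cos(x)   # (df/dg) * (dg/dx)

x = 0.7
h = 1e-6
numeric_grad = (composed(x + h) - composed(x - h)) / (2 * h)
print(numeric_grad, analytic_grad(x))
```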
In PyTorch, a tensor carries not only its numerical value but also, alongside it, the gradient of that value. For example:
```python
import torch

a = torch.tensor([2, 3], requires_grad=True, dtype=torch.float)
b = torch.tensor([6, 4], requires_grad=True, dtype=torch.float)
Q = 3 * a ** 3 - b ** 2
extern_gradient = torch.tensor([1., 1.])   # weight for each output element
Q.backward(gradient=extern_gradient)
print(a.grad)   # 9 * a**2
print(b.grad)   # -2 * b
```
We defined $Q=3a^3 - b^2$, so $\frac{\partial{Q}}{\partial{a}} = 9a^2$ and $\frac{\partial{Q}}{\partial{b}}=-2b$; substituting the values of $a, b$ gives the corresponding gradient values.
```python
import torch

x = torch.tensor(3, dtype=torch.float32, requires_grad=True)
y = torch.tensor(4, dtype=torch.float32, requires_grad=True)
b = torch.tensor(5, dtype=torch.float32, requires_grad=True)
z = x * y + b          # z = xy + b
print(z)
z.backward()
print(z.requires_grad, x.grad, y.grad, b.grad)
```
In PyTorch, backward() computes gradients automatically by backpropagation; the data involved must be torch tensors. In the earlier example we want the gradient with respect to the parameter $a$, so we modify that example: first import torch, then define the parameter as a = torch.Tensor([7.0]) and set a.requires_grad = True.
```python
from matplotlib import pyplot as plt
import torch

data_x = [1, 2, 3]
data_y = [2, 4, 6]
loss_list = list()
a_list = list()
alpha = 0.01

def forward(x):
    return a * x

def lossFunction(x, y):
    y_pred = forward(x)
    loss = (y_pred - y) ** 2
    return loss

if __name__ == '__main__':
    a = torch.Tensor([7.0])
    a.requires_grad = True
    for epoch in range(1000):
        sum_loss = 0
        for i in range(3):
            l = lossFunction(data_x[i], data_y[i])
            sum_loss += l.item()      # .item() detaches the value for logging
            l.backward()              # accumulate d(loss)/da into a.grad
            a.data = a.data - alpha * a.grad
            a.grad = None             # reset the gradient for the next step
            a_list.append(a.item())
        loss_list.append(sum_loss / 3)
    print(a_list[-1])

    plt.subplot(211)
    plt.plot(a_list)
    plt.subplot(212)
    plt.plot(loss_list)
    plt.show()
```
11 The previous example can likewise be rewritten using torch.nn:
```python
import torch
from matplotlib import pyplot as plt

x_data = torch.tensor([[1], [2], [3]], dtype=torch.float)
y_data = torch.tensor([[2], [4], [6]], dtype=torch.float)

class LinearExample(torch.nn.Module):
    def __init__(self):
        super(LinearExample, self).__init__()
        self.linear = torch.nn.Linear(1, 1)   # one input feature, one output

    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred

model = LinearExample()
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
epoch_list = list()
a_list = list()

if __name__ == '__main__':
    for epoch in range(100):
        y_hat = model(x_data)
        loss = criterion(y_hat, y_data)
        a_list.append(model.linear.weight.item())
        epoch_list.append(epoch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    plt.plot(epoch_list, a_list)
    plt.show()
```
12
Logistic regression: used for binary classification, where the output is either 0 or 1 and nothing else.
The loss function is the cross entropy.
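For a single sample with label $y \in \{0, 1\}$ and predicted probability $\hat{y}$, the binary cross-entropy loss is:

$$\ell(y, \hat{y}) = -\left[\, y \log \hat{y} + (1 - y)\log(1 - \hat{y}) \,\right]$$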
```python
import numpy as np
import matplotlib.pyplot as plt
import torch

x_data = torch.tensor([[1], [2], [3]], dtype=torch.float32)
y_data = torch.tensor([[0], [0], [1]], dtype=torch.float32)

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        y_pred = self.linear(x)
        return torch.sigmoid(y_pred)   # squash the output into (0, 1)

model = Model()
criterion = torch.nn.BCELoss(reduction='sum')   # size_average is deprecated
optim = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(1000):
    y_pred = model(x_data)
    loss = criterion(y_pred, y_data)
    print(epoch, loss.item())
    optim.zero_grad()
    loss.backward()
    optim.step()

# visualize the learned probability curve
x = np.linspace(0, 10, 200)
x_t = torch.tensor(x, dtype=torch.float32).view((200, 1))
y_t = model(x_t)
y = y_t.data.numpy()
plt.plot(x, y)
plt.show()
```
Diabetes prediction:

```python
import numpy as np
from matplotlib import pyplot as plt
import torch

data_xy = np.loadtxt('/home/chasing/Documents/pytorchbooklit/diabetes.csv.gz', delimiter=',', dtype=np.float32)
x_data = torch.from_numpy(data_xy[:, :-1])
y_data = torch.from_numpy(data_xy[:, -1]).reshape(-1, 1)

class LinearExample(torch.nn.Module):
    def __init__(self):
        super(LinearExample, self).__init__()
        self.linear1 = torch.nn.Linear(8, 6)   # 8 input features
        self.linear2 = torch.nn.Linear(6, 4)
        self.linear3 = torch.nn.Linear(4, 1)
        self.sigmoid = torch.nn.Sigmoid()
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        x = self.relu(self.linear1(x))
        x = self.relu(self.linear2(x))
        x = self.linear3(x)
        # the final activation must be sigmoid, not relu:
        # BCELoss expects probabilities in [0, 1]
        return self.sigmoid(x)

model = LinearExample()
criterion = torch.nn.BCELoss(reduction='mean')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_list = list()

if __name__ == '__main__':
    for epoch in range(300):
        y_pred = model(x_data)
        loss = criterion(y_pred, y_data)
        loss_list.append(loss.item())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    plt.plot(loss_list)
    plt.show()
```
Using mini-batches improves parallelism and speeds up computation, while working on groups of samples helps avoid getting stuck at saddle points (local optima). In this example we build the dataset with the Dataset class.
```python
from torch.utils.data import DataLoader
from torch.utils.data import Dataset
import numpy as np
import torch
from matplotlib import pyplot as plt

class LinearExample(torch.nn.Module):
    def __init__(self):
        super(LinearExample, self).__init__()
        self.linear1 = torch.nn.Linear(8, 6)
        self.linear2 = torch.nn.Linear(6, 4)
        self.linear3 = torch.nn.Linear(4, 1)
        self.sigmoid = torch.nn.Sigmoid()
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        x = self.relu(self.linear1(x))
        x = self.relu(self.linear2(x))
        x = self.linear3(x)
        return self.sigmoid(x)   # BCELoss needs outputs in [0, 1]

class DiabetesDatset(Dataset):
    def __init__(self):
        data_xy = np.loadtxt('/home/chasing/Documents/pytorchbooklit/diabetes.csv.gz', delimiter=',', dtype=np.float32)
        self.len = data_xy.shape[0]
        self.data_x = torch.from_numpy(data_xy[:, :-1])
        self.data_y = torch.from_numpy(data_xy[:, -1]).reshape(-1, 1)

    def __getitem__(self, index):
        return self.data_x[index], self.data_y[index]

    def __len__(self):
        return self.len

model = LinearExample()
dataset = DiabetesDatset()
train_loader = DataLoader(dataset=dataset, batch_size=32, shuffle=True, num_workers=2)
criterion = torch.nn.BCELoss(reduction='mean')   # size_average is deprecated
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_list = list()

if __name__ == '__main__':
    for epoch in range(100):
        for i, data in enumerate(train_loader, 0):
            inputs, labels = data
            y_pred = model(inputs)
            loss = criterion(y_pred, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            loss_list.append(loss.item())   # store a float, not the tensor
    plt.plot(loss_list)
    plt.show()
```
13
Softmax for multi-class classification:
$$P(y=i) = \frac{e^{z_i}}{\sum_{j=0}^{k}{e^{z_j}}} \qquad i=0,1,\dots,k$$
This guarantees that the outputs sum to 1 and that every output is positive, since $e^x$ is a positive, monotonically increasing function.
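Those two properties (positive outputs summing to 1) can be checked numerically with a hand-rolled softmax:

```python
import math

z = [0.1, 1.2, -0.7]                      # arbitrary logits
exp_z = [math.exp(v) for v in z]
softmax = [v / sum(exp_z) for v in exp_z]
print(softmax)
print(sum(softmax))   # 1.0 up to floating-point error
```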
When computing the loss, the BCE used earlier no longer applies (it targets binary problems), so a new loss is introduced, CrossEntropyLoss(), whose expression is
$$-\sum_i y_i \log\hat{y}_i$$
```python
import torch

criterion = torch.nn.CrossEntropyLoss()
y = torch.LongTensor([2, 0, 1])        # class indices for three samples
y1 = torch.tensor([[0.1, 0.2, 0.9],
                   [1.1, 0.1, 0.2],
                   [0.2, 2.1, 0.1]])   # logits that match the labels well
y2 = torch.tensor([[0.8, 0.2, 0.3],
                   [0.2, 0.3, 0.5],
                   [0.2, 0.2, 0.5]])   # logits that match poorly
loss1 = criterion(y1, y)
loss2 = criterion(y2, y)
print(f'loss1= {loss1}, loss2={loss2}')
```
Worked example: handwritten digit recognition (MNIST):
```python
import torch
from matplotlib import pyplot as plt
from torchvision import datasets
from torch.utils.data import DataLoader
from torchvision import transforms
import torch.optim as optim

batch_size = 64
batch_size_test = 100
data_transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
mnist_train = datasets.MNIST(root='./', train=True, download=True, transform=data_transform)
mnist_test = datasets.MNIST(root='./', train=False, download=True, transform=data_transform)
trainloader = DataLoader(mnist_train, batch_size=batch_size, shuffle=True)
testloader = DataLoader(mnist_test, batch_size=batch_size_test, shuffle=False)

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear1 = torch.nn.Linear(784, 512)   # 28*28 = 784 input pixels
        self.linear2 = torch.nn.Linear(512, 256)
        self.linear3 = torch.nn.Linear(256, 128)
        self.linear4 = torch.nn.Linear(128, 64)
        self.linear5 = torch.nn.Linear(64, 10)     # 10 digit classes
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        x = x.view(-1, 784)   # flatten each image into a vector
        x = self.relu(self.linear1(x))
        x = self.relu(self.linear2(x))
        x = self.relu(self.linear3(x))
        x = self.relu(self.linear4(x))
        return self.linear5(x)   # raw logits; CrossEntropyLoss applies softmax

model = Model()
criterion = torch.nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=1e-2, momentum=0.5)
loss_list = list()

def test_accuracy():
    with torch.no_grad():
        total_num = 0
        correct = 0
        for data in testloader:
            images, labels = data
            pred = model(images)
            labels_np = labels.numpy().tolist()
            pred_np = pred.numpy().tolist()
            for i in range(len(labels_np)):
                total_num += 1
                if labels_np[i] == pred_np[i].index(max(pred_np[i])):
                    correct += 1
        print(f'Accuracy = {correct/total_num}')

if __name__ == '__main__':
    for epoch in range(10):
        for i, data in enumerate(trainloader, 0):
            inputs, label = data
            outputs = model(inputs)
            optimizer.zero_grad()
            loss = criterion(outputs, label)
            loss_list.append(loss.item())
            loss.backward()
            optimizer.step()
        print(f'[{epoch}]: loss = {loss}')
    plt.plot(loss_list)
    plt.show()
    test_accuracy()
```
Reading images with PIL:

```python
import numpy as np
from PIL import Image

a = Image.open('test.jpg')
c = a.convert('L')       # convert to grayscale
c.show()
im = np.array(a)         # H x W x 3 for an RGB image
im_gray = np.array(c)    # H x W for grayscale
print(im_gray.shape)
print(im_gray)
print(im.shape)

# a 3-D array example with shape (2, 3, 3)
b = np.array([[[1, 2, 3], [2, 3, 3], [3, 4, 5]],
              [[2, 1, 2], [3, 4, 5], [4, 5, 6]]])
print(b.shape)
```
14
Convolutional Neural Networks
The first step is working out how the tensor dimensions change through the network.
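The spatial size after a convolution or pooling layer follows out = (in - kernel + 2*padding) // stride + 1. A small helper (my own) reproduces the shape chain of the example below, 28 → 24 → 12 → 8 → 4:

```python
def convOutSize(size, kernel, stride=1, padding=0):
    # standard output-size formula for convolution and pooling layers
    return (size - kernel + 2 * padding) // stride + 1

s = 28
s = convOutSize(s, kernel=5)             # 5x5 conv    -> 24
s = convOutSize(s, kernel=2, stride=2)   # 2x2 maxpool -> 12
s = convOutSize(s, kernel=5)             # 5x5 conv    -> 8
s = convOutSize(s, kernel=2, stride=2)   # 2x2 maxpool -> 4
print(s)   # 4, so the flattened feature size is 20 * 4 * 4 = 320
```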
```python
import torch

width, height = 28, 28
in_channel = 1
batch_size = 1
inputs = torch.randn(batch_size, in_channel, width, height)
print(inputs.shape)    # [1, 1, 28, 28]

conv_lay1 = torch.nn.Conv2d(in_channels=1, out_channels=10, kernel_size=5)
output1 = conv_lay1(inputs)
print(output1.shape)   # [1, 10, 24, 24]

maxpool_lay = torch.nn.MaxPool2d(kernel_size=2)
output2 = maxpool_lay(output1)
print(output2.shape)   # [1, 10, 12, 12]

conv_lay2 = torch.nn.Conv2d(in_channels=10, out_channels=20, kernel_size=5)
output3 = conv_lay2(output2)
print(output3.shape)   # [1, 20, 8, 8]

output4 = maxpool_lay(output3)
print(output4.shape)   # [1, 20, 4, 4]

output5 = output4.view(1, -1)        # flatten: 20 * 4 * 4 = 320
linear_lay = torch.nn.Linear(320, 10)
output6 = linear_lay(output5)
print(output6.shape)   # [1, 10]
```
Next, the handwritten-digit program is rewritten as a deep network with convolution layers:
```python
import torch
from matplotlib import pyplot as plt
from torchvision import datasets
from torch.utils.data import DataLoader
from torchvision import transforms
import torch.optim as optim

batch_size = 64
data_transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
mnist_train = datasets.MNIST(root='./', train=True, download=True, transform=data_transform)
mnist_test = datasets.MNIST(root='./', train=False, download=True, transform=data_transform)
trainloader = DataLoader(mnist_train, batch_size=batch_size, shuffle=True)
testloader = DataLoader(mnist_test, batch_size=batch_size, shuffle=False)

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = torch.nn.Conv2d(10, 20, kernel_size=5)
        self.pooling = torch.nn.MaxPool2d(kernel_size=2)
        self.linear = torch.nn.Linear(320, 10)   # 20 * 4 * 4 = 320
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        batch_size = x.size(0)
        x = self.relu(self.pooling(self.conv1(x)))
        x = self.relu(self.pooling(self.conv2(x)))
        x = x.view(batch_size, -1)
        x = self.linear(x)
        return x

model = Model()
criterion = torch.nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=1e-2, momentum=0.5)
loss_list = list()

def test_accuracy():
    with torch.no_grad():
        total_num = 0
        correct = 0
        for data in testloader:
            images, labels = data
            pred = model(images)
            labels_np = labels.numpy().tolist()
            pred_np = pred.numpy().tolist()
            # iterate over the actual batch length: the final batch
            # may be smaller than batch_size
            for i in range(len(labels_np)):
                total_num += 1
                if labels_np[i] == pred_np[i].index(max(pred_np[i])):
                    correct += 1
        print(f'Accuracy = {correct / total_num}')

if __name__ == '__main__':
    for epoch in range(3):
        for i, data in enumerate(trainloader, 0):
            inputs, label = data
            outputs = model(inputs)
            optimizer.zero_grad()
            loss = criterion(outputs, label)
            loss_list.append(loss.item())
            loss.backward()
            optimizer.step()
        print(f'[{epoch}]: loss = {loss}')
    plt.plot(loss_list)
    plt.show()
    test_accuracy()
```
15
RNN: Recurrent Neural Networks
RNNs are mainly used for sequence problems, where successive inputs are correlated with one another.
A learning example follows, mapping the input "hello" to the target "ohlol":
```python
import torch

batch_size = 1
seq_len = 5
input_size = 4
hidden_size = 4
num_layer = 1

idx2char = ['e', 'h', 'l', 'o']
x_data = [1, 0, 2, 2, 3]     # "hello" as character indices
y_data = [3, 1, 2, 3, 2]     # "ohlol" as character indices
one_hot_lookup = [[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]]
x_one_hot = [one_hot_lookup[x] for x in x_data]
inputs = torch.Tensor(x_one_hot).view(seq_len, batch_size, input_size)
labels = torch.LongTensor(y_data)

class NLPModel(torch.nn.Module):
    def __init__(self):
        super(NLPModel, self).__init__()
        self.rnn = torch.nn.RNN(input_size=input_size, hidden_size=hidden_size,
                                num_layers=num_layer)

    def forward(self, x):
        hidden = torch.zeros(num_layer, batch_size, hidden_size)
        out, _ = self.rnn(x, hidden)
        return out.view(-1, hidden_size)

model = NLPModel()
criterion = torch.nn.CrossEntropyLoss()
optim = torch.optim.Adam(model.parameters(), lr=0.05)

for epoch in range(35):
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    optim.zero_grad()
    loss.backward()
    optim.step()
    _, idex = outputs.max(dim=1)
    idx = idex.data.numpy()
    print('Predicted:', ''.join([idx2char[x] for x in idx]), end='')
    print(f'\t epoch={epoch}, loss={loss.item()}')
```