1. What is PyTorch
PyTorch is a Python-based scientific computing library that is widely used for deep learning. Its main features:
- Similar to NumPy, but able to run computations on the GPU
- Lets you define deep learning models and train and use them flexibly
- Unlike TensorFlow, PyTorch builds dynamic computation graphs (see the sketch after this list)
- PyTorch code reads almost like plain Python
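Because the graph is rebuilt on every forward pass, ordinary Python control flow can appear inside a model. A minimal illustrative sketch (not from the original text):
import torch
def forward(x):
    # data-dependent branching just works: the graph is rebuilt on each call
    if x.sum() > 0:
        return x * 2
    return x - 1
x = torch.randn(3, requires_grad=True)
y = forward(x).sum()
y.backward()   # gradients flow through whichever branch actually ran
print(x.grad)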
2. Tensors
A Tensor is similar to a NumPy ndarray; the key difference is that a Tensor can run accelerated computations on a GPU.
Tensors have five basic data types (demonstrated in the sketch after this list):
- 32-bit float: torch.FloatTensor. This is the default type of torch.Tensor().
- 64-bit integer: torch.LongTensor.
- 32-bit integer: torch.IntTensor.
- 16-bit integer: torch.ShortTensor.
- 64-bit float: torch.DoubleTensor.
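A quick, illustrative check of these types and their dtype attributes:
import torch
print(torch.FloatTensor([1, 2]).dtype)   # torch.float32
print(torch.LongTensor([1, 2]).dtype)    # torch.int64
print(torch.IntTensor([1, 2]).dtype)     # torch.int32
print(torch.ShortTensor([1, 2]).dtype)   # torch.int16
print(torch.DoubleTensor([1, 2]).dtype)  # torch.float64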
3. Basic operations
import torch
print(torch.__version__)          # check the version
print(torch.cuda.is_available())  # check whether CUDA is available
>>> 1.2.0
True
3.1 Creating tensors
Create an uninitialized 5x3 matrix (the tensor holds whatever garbage values happen to be in memory):
torch.empty(5,3)
>>> tensor([[9.2755e-39, 1.0561e-38, 1.0929e-38],
[1.1112e-38, 4.2246e-39, 1.0286e-38],
[1.0653e-38, 1.0194e-38, 8.4490e-39],
[1.0469e-38, 9.3674e-39, 9.9184e-39],
[8.7245e-39, 9.2755e-39, 8.9082e-39]])
Create a randomly initialized matrix:
torch.rand(5,3)
>>> tensor([[0.8201, 0.4734, 0.6552],
[0.4519, 0.9172, 0.6124],
[0.7901, 0.4706, 0.5679],
[0.9822, 0.7702, 0.5624],
[0.5378, 0.3124, 0.3104]])
Create an all-zero matrix of type long (torch.zeros defaults to float32, so the dtype must be given explicitly):
torch.zeros((5,3), dtype=torch.long), torch.zeros((5,3), dtype=torch.long).dtype
>>> (tensor([[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]]), torch.int64)
torch.zeros((5,3)).long().dtype
>>> torch.int64
Build a tensor directly from data:
torch.tensor([1,2,3])
>>>tensor([1, 2, 3])
You can also build a tensor from an existing tensor. These methods reuse the original tensor's properties, such as its data type, unless new values are provided.
x = torch.tensor([1,2,3]).double()
x.new_ones(5,3)
>>> tensor([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]], dtype=torch.float64)
torch.randn_like(x)
>>> tensor([ 0.2463, -1.6457, -0.3800], dtype=torch.float64)
# get the shape of a tensor
x = torch.randn(5, 3)
x.shape, x.size()
>>>(torch.Size([5, 3]), torch.Size([5, 3]))
Note: torch.Size is a tuple, so it supports all tuple operations.
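Since it is a tuple, the shape can be unpacked directly:
rows, cols = x.shape
print(rows, cols)
>>> 5 3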
3.2 Operations
Addition, subtraction, multiplication, and division (addition shown as the example):
x = torch.randn(5, 3)
y = torch.randn(5, 3)
x, y
>>> (tensor([[ 1.2412, 1.0667, 0.5601],
[-0.3544, -0.5696, -0.2675],
[ 1.1691, 0.4461, -1.4666],
[-0.8285, 0.2627, -1.6137],
[-1.1441, -0.0798, -0.5284]]),
tensor([[-1.1433, 0.2899, -0.0323],
[-0.8388, 0.6572, -0.3184],
[ 0.2969, -0.0813, 0.4874],
[ 0.1227, -0.2958, 0.4423],
[-1.7285, 1.0617, 0.0103]]))
x + y
>>> tensor([[ 0.0979, 1.3566, 0.5278],
[-1.1932, 0.0876, -0.5859],
[ 1.4661, 0.3648, -0.9792],
[-0.7058, -0.0330, -1.1714],
[-2.8725, 0.9819, -0.5181]])
torch.add(x, y)
>>> tensor([[ 0.0979, 1.3566, 0.5278],
[-1.1932, 0.0876, -0.5859],
[ 1.4661, 0.3648, -0.9792],
[-0.7058, -0.0330, -1.1714],
[-2.8725, 0.9819, -0.5181]])
# write the result of the addition into a given output tensor
result = torch.empty(5,3)
torch.add(x, y, out=result)
result
>>> tensor([[ 0.0979, 1.3566, 0.5278],
[-1.1932, 0.0876, -0.5859],
[ 1.4661, 0.3648, -0.9792],
[-0.7058, -0.0330, -1.1714],
[-2.8725, 0.9819, -0.5181]])
y
>>> tensor([[-1.1433, 0.2899, -0.0323],
[-0.8388, 0.6572, -0.3184],
[ 0.2969, -0.0813, 0.4874],
[ 0.1227, -0.2958, 0.4423],
[-1.7285, 1.0617, 0.0103]])
# in-place version: stores the result of the addition back into y
y.add_(x)
>>> tensor([[ 0.0979, 1.3566, 0.5278],
[-1.1932, 0.0876, -0.5859],
[ 1.4661, 0.3648, -0.9792],
[-0.7058, -0.0330, -1.1714],
[-2.8725, 0.9819, -0.5181]])
y
>>> tensor([[ 0.0979, 1.3566, 0.5278],
[-1.1932, 0.0876, -0.5859],
[ 1.4661, 0.3648, -0.9792],
[-0.7058, -0.0330, -1.1714],
[-2.8725, 0.9819, -0.5181]])
Note: every in-place operation ends with an underscore. For example, x.copy_(y) and x.t_() both modify x.
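A small illustration:
x = torch.zeros(2, 3)
x.copy_(torch.ones(2, 3))  # x is now all ones
x.t_()                     # transposed in place
x.shape
>>> torch.Size([3, 2])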
All the familiar NumPy-style indexing works on PyTorch tensors.
x = torch.randn(5, 3)
x
>>> tensor([[ 0.0564, 0.7476, 0.7645],
[-0.2150, -1.3739, 0.6493],
[ 0.4380, 1.0037, -0.1457],
[-0.1147, -0.7476, 0.1697],
[-0.0382, 0.0023, -0.2586]])
x[1:, :2]
>>> tensor([[-0.2150, -1.3739],
[ 0.4380, 1.0037],
[-0.1147, -0.7476],
[-0.0382, 0.0023]])
If you want to resize/reshape a tensor, use torch.view:
x = torch.randn(4, 4, dtype=torch.double)
x
>>> tensor([[ 0.1464, 0.9768, -0.7375, -0.1667],
[-0.5939, 1.5819, 0.0243, 0.5670],
[-0.8493, -2.2761, -0.5836, -0.1339],
[ 0.2758, -0.5617, -0.3112, 0.3896]], dtype=torch.float64)
x.view(16)
>>> tensor([ 0.1464, 0.9768, -0.7375, -0.1667, -0.5939, 1.5819, 0.0243, 0.5670,
-0.8493, -2.2761, -0.5836, -0.1339, 0.2758, -0.5617, -0.3112, 0.3896],
dtype=torch.float64)
x.view(-1, 8)
>>> tensor([[ 0.1464, 0.9768, -0.7375, -0.1667, -0.5939, 1.5819, 0.0243, 0.5670],
[-0.8493, -2.2761, -0.5836, -0.1339, 0.2758, -0.5617, -0.3112, 0.3896]],
dtype=torch.float64)
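Note that view does not copy data: the returned tensor shares storage with the original, so writing through one is visible in the other. A brief sketch:
v = x.view(16)
v[0] = 100.
x[0, 0]
>>> tensor(100., dtype=torch.float64)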
If you have a tensor with a single element, the .item() method extracts that value as a plain Python number.
x = torch.tensor([-1])
x
>>>tensor([-1])
x.item()
>>>-1
Transpose: call the .transpose() method:
x = torch.randn(5, 3)
print(x)
print(x.transpose(1, 0))
>>>tensor([[-0.3185, 0.0586, 1.1811],
[-1.4644, -0.9203, 1.5137],
[ 0.4608, 0.7533, 0.0783],
[-0.0866, 1.1774, 0.7667],
[-1.3153, 1.0434, 0.5277]])
tensor([[-0.3185, -1.4644, 0.4608, -0.0866, -1.3153],
[ 0.0586, -0.9203, 0.7533, 1.1774, 1.0434],
[ 1.1811, 1.5137, 0.0783, 0.7667, 0.5277]])
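For tensors with more than two dimensions, .transpose() swaps any two dimensions, and .permute() reorders all of them at once (illustrative):
t = torch.randn(2, 3, 4)
t.transpose(0, 2).shape, t.permute(2, 0, 1).shape
>>> (torch.Size([4, 3, 2]), torch.Size([4, 2, 3]))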
Further reading
Tensor operations of all kinds, including transposing, indexing, slicing, mathematical operations, linear algebra, and random numbers, are documented at https://pytorch.org/docs/torch.
3.3 Converting between NumPy and Tensor
Converting between a Torch Tensor and a NumPy array is very easy.
The Torch Tensor and the NumPy array share the underlying memory, so changing one also changes the other.
3.3.1 Torch Tensor to NumPy array (via .numpy())
x = torch.ones(5, 3)
x
>>> tensor([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]])
x.numpy()
>>> array([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]], dtype=float32)
# change values through the NumPy array:
y = x.numpy()
y[1] = 2
y
>>> array([[1., 1., 1.],
[2., 2., 2.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]], dtype=float32)
# the corresponding values in x change as well
x
>>> tensor([[1., 1., 1.],
[2., 2., 2.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]])
3.3.2 NumPy ndarray to Torch Tensor (via torch.from_numpy())
Since NumPy only runs on the CPU, every Tensor on the CPU can be converted to NumPy and back.
import numpy as np
a = np.random.randn(5, 3)
a
>>> array([[ 1.07166976, 0.09258995, -0.11899883],
[ 0.69644084, 2.30319718, -0.95737744],
[ 1.80703737, -0.41834899, 0.94498918],
[-0.55230308, 0.02433892, -0.32977086],
[ 0.90323666, -1.69466371, 1.43269787]])
b = torch.from_numpy(a)
b
>>> tensor([[ 1.0717, 0.0926, -0.1190],
[ 0.6964, 2.3032, -0.9574],
[ 1.8070, -0.4183, 0.9450],
[-0.5523, 0.0243, -0.3298],
[ 0.9032, -1.6947, 1.4327]], dtype=torch.float64)
# change values through the tensor:
b.add_(torch.randn_like(b))
b
>>> tensor([[ 0.4839, -0.8937, -0.4930],
[ 1.5765, 1.9617, 0.4366],
[ 1.5556, 0.0993, 0.1628],
[-1.6921, -1.2559, -1.9170],
[ 0.3976, -1.6242, -0.5577]], dtype=torch.float64)
# the original NumPy array changes along with it
a
>>> array([[ 0.48390751, -0.89372859, -0.49298913],
[ 1.57650821, 1.96165178, 0.43663263],
[ 1.55556077, 0.09933571, 0.16284555],
[-1.69205696, -1.25586832, -1.91698192],
[ 0.39757983, -1.62419721, -0.55765159]])
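If you do not want the memory to be shared, make an explicit copy; torch.tensor(a) copies the data instead of wrapping it (a small sketch):
c = torch.tensor(a)  # copies, unlike torch.from_numpy(a)
c.add_(1.)
# a is unchanged, because c owns its own memory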
3.4 CUDA tensors (.to(), .cuda(), .cpu())
With the .to method, a tensor can be moved to another device.
if torch.cuda.is_available():
    device = torch.device('cuda')
    x = torch.randn(5, 3)
    x = x.to(device)
    y = torch.ones_like(x, device=device)
    z = x + y
    print(z)
    print(z.to('cpu', dtype=torch.double))
>>>tensor([[ 1.1218, 0.8690, 1.0999],
[ 2.0425, 1.2600, 1.2663],
[-0.4514, 1.8500, 0.9164],
[-0.4902, 1.8271, 0.8581],
[-0.7363, -0.0029, 2.1672]], device='cuda:0')
tensor([[ 1.1218, 0.8690, 1.0999],
[ 2.0425, 1.2600, 1.2663],
[-0.4514, 1.8500, 0.9164],
[-0.4902, 1.8271, 0.8581],
[-0.7363, -0.0029, 2.1672]], dtype=torch.float64)
y.to('cpu').numpy()  # conversely, .to('cuda') moves a CPU tensor to the GPU
>>> array([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]], dtype=float32)
y.cpu().numpy()  # conversely, .cuda() moves a CPU tensor to the GPU
>>> array([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]], dtype=float32)
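A common pattern (not shown in the original text) is to pick the device once and keep the rest of the code device-agnostic:
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
x = torch.randn(5, 3, device=device)  # created directly on the chosen device
y = torch.ones(5, 3).to(device)       # or moved there after creation
print((x + y).device)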
4. Implementing a two-layer neural network
4.1 Warm-up: a NumPy implementation
A fully connected ReLU network with one hidden layer and no bias, trained to predict y from x with an L2 loss:
- $h = W_1 x$
- $a = \max(0, h)$, i.e. the ReLU function
- $\hat{y} = W_2 a$
This implementation uses NumPy throughout to compute the:
- forward pass
- loss
- backward pass
A NumPy ndarray is just a plain n-dimensional array. It knows nothing about deep learning, gradients, or computation graphs; it is simply a data structure for numerical computation.
N, D_in, H, D_out = 64, 1000, 100, 10  # batch size, input dim, hidden dim, output dim
# randomly generate training data
x = np.random.randn(N, D_in)
y = np.random.randn(N, D_out)
w1 = np.random.randn(D_in, H)
w2 = np.random.randn(H, D_out)
learning_rate = 1e-6
for t in range(500):
    # forward pass
    h = x.dot(w1)              # N x H
    h_relu = np.maximum(h, 0)  # N x H
    y_pred = h_relu.dot(w2)    # N x D_out
    # compute loss
    loss = np.square(y_pred - y).sum()
    print(t, loss)
    # backward pass
    # 1. compute the gradients
    grad_y_pred = 2.0 * (y_pred - y)      # dloss/dy_pred, N x D_out
    grad_w2 = h_relu.T.dot(grad_y_pred)   # dloss/dw2 (chain rule), H x D_out
    grad_h_relu = grad_y_pred.dot(w2.T)   # dloss/da (chain rule), N x H
    grad_h = grad_h_relu.copy()
    grad_h[h < 0] = 0                     # da/dh is 1 where h > 0 and 0 where h < 0
    grad_w1 = x.T.dot(grad_h)             # dloss/dw1 (chain rule), D_in x H
    # 2. update w1 and w2
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2
>>> 0 28000226.370355956
1 26689745.413719796
2 30560354.453349
3 34621315.64681725
4 34230168.99347992
5 26554973.310202293
6 16197144.240957253
7 8099649.521414114
8 3901519.5227177767
9 2057086.6984820203
10 1283265.5333381372
...
490 1.164860672510714e-05
491 1.1165957616187579e-05
492 1.0703350617336565e-05
493 1.0260012811698124e-05
494 9.835032020813145e-06
495 9.427798068725098e-06
496 9.037616823044668e-06
497 8.663350568082964e-06
498 8.30466166238387e-06
499 7.96088008355248e-06
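As a sanity check (not part of the original tutorial), the hand-derived gradients can be verified against a finite-difference approximation on a tiny network; the two values should agree to several decimal places:
def l2_loss(w1, w2, x, y):
    h_relu = np.maximum(x.dot(w1), 0)
    return np.square(h_relu.dot(w2) - y).sum()
x = np.random.randn(4, 5)
y = np.random.randn(4, 2)
w1 = np.random.randn(5, 3)
w2 = np.random.randn(3, 2)
# analytic gradient for w2, using the same formulas as above
h_relu = np.maximum(x.dot(w1), 0)
grad_w2 = h_relu.T.dot(2.0 * (h_relu.dot(w2) - y))
# numeric gradient for one entry of w2
eps = 1e-6
w2_plus, w2_minus = w2.copy(), w2.copy()
w2_plus[0, 0] += eps
w2_minus[0, 0] -= eps
numeric = (l2_loss(w1, w2_plus, x, y) - l2_loss(w1, w2_minus, x, y)) / (2 * eps)
print(grad_w2[0, 0], numeric)  # should match closely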
4.2 PyTorch: Tensors
This time we build the forward pass, the loss, and the backward pass with PyTorch tensors. The NumPy-to-PyTorch translations used below:
- np.random.randn() --> torch.randn()
- ndarray1.dot(ndarray2) --> tensor1.mm(tensor2)
- np.maximum(0, h) --> h.clamp(min=0)
- ndarray.copy() --> tensor.clone()
- ndarray.T --> tensor.t()
N, D_in, H, D_out = 64, 1000, 100, 10  # batch size, input dim, hidden dim, output dim
# randomly generate training data
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
w1 = torch.randn(D_in, H)
w2 = torch.randn(H, D_out)
learning_rate = 1e-6
for t in range(500):
    # forward pass
    h = x.mm(w1)             # N x H
    h_relu = h.clamp(min=0)  # N x H
    y_pred = h_relu.mm(w2)   # N x D_out
    # compute loss
    loss = (y_pred - y).pow(2).sum().item()
    print(t, loss)
    # backward pass
    # 1. compute the gradients
    grad_y_pred = 2.0 * (y_pred - y)      # dloss/dy_pred, N x D_out
    grad_w2 = h_relu.t().mm(grad_y_pred)  # dloss/dw2 (chain rule), H x D_out
    grad_h_relu = grad_y_pred.mm(w2.t())  # dloss/da (chain rule), N x H
    grad_h = grad_h_relu.clone()
    grad_h[h < 0] = 0                     # da/dh is 1 where h > 0 and 0 where h < 0
    grad_w1 = x.t().mm(grad_h)            # dloss/dw1 (chain rule), D_in x H
    # 2. update w1 and w2
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2
>>> 0 29921732.0
1 25164024.0
2 26784450.0
3 30625412.0
4 32586802.0
5 28929820.0
6 20427622.0
7 11603735.0
8 5850786.0
9 2961650.5
10 1672187.5
...
490 5.811177470604889e-05
491 5.75326121179387e-05
492 5.680594767909497e-05
493 5.5811851780163124e-05
494 5.520442937267944e-05
495 5.430544842965901e-05
496 5.3734031098429114e-05
497 5.2921870519639924e-05
498 5.2350642363308e-05
499 5.17198204761371e-05
4.3 PyTorch: Tensors and autograd
A key feature of PyTorch is autograd: once the forward pass is defined and the loss has been computed, PyTorch can automatically compute the gradients of all model parameters.
A PyTorch tensor represents a node in a computation graph. If x is a tensor with x.requires_grad=True, then x.grad is another tensor holding the current gradient of x with respect to some scalar (usually the loss).
# a small autograd example:
x = torch.tensor(1., requires_grad=True)
w = torch.tensor(2., requires_grad=True)
b = torch.tensor(3., requires_grad=True)
y = w * x + b
y.backward()
print(w.grad, x.grad, b.grad)
>>> tensor(1.) tensor(2.) tensor(1.)
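One detail that matters for the training loop below: backward() accumulates into .grad rather than overwriting it, which is why the gradients must be zeroed between iterations. A minimal demonstration:
w = torch.tensor(2., requires_grad=True)
for _ in range(2):
    (w * 3.).backward()
print(w.grad)   # tensor(6.) -- the two backward calls accumulated 3. + 3.
w.grad.zero_()  # reset before reusing w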
N, D_in, H, D_out = 64, 1000, 100, 10  # batch size, input dim, hidden dim, output dim
# randomly generate training data
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
w1 = torch.randn(D_in, H, requires_grad=True)
w2 = torch.randn(H, D_out, requires_grad=True)
learning_rate = 1e-6
for t in range(500):
    # forward pass
    y_pred = x.mm(w1).clamp(min=0).mm(w2)
    # compute loss
    loss = (y_pred - y).pow(2).sum()
    print(t, loss.item())
    # backward pass
    # 1. compute the gradients
    loss.backward()
    # 2. update w1 and w2
    with torch.no_grad():  # keep the weight updates out of the computation graph
        w1 -= learning_rate * w1.grad
        w2 -= learning_rate * w2.grad
        w1.grad.zero_()  # zero the grads, otherwise backward() keeps accumulating
        w2.grad.zero_()
>>> 0 44859680.0
1 43798720.0
2 42116264.0
3 33594256.0
4 20670000.0
5 10525615.0
6 5207585.0
7 2937573.25
8 1960476.0
9 1474883.5
10 1182829.875
...
490 0.00016607408178970218
491 0.00016301937284879386
492 0.00016003237396944314
493 0.0001572523615323007
494 0.00015416915994137526
495 0.000151164349517785
496 0.00014841070515103638
497 0.00014586362522095442
498 0.00014343412476591766
499 0.00014144206943456084
4.4 PyTorch: nn
This time we build the network with PyTorch's nn package.
We still rely on autograd to build the computation graph, and PyTorch computes the gradients for us automatically.
import torch.nn as nn
N, D_in, H, D_out = 64, 1000, 100, 10  # batch size, input dim, hidden dim, output dim
# randomly generate training data
x = torch.randn(N, D_in).cuda()
y = torch.randn(N, D_out).cuda()
model = nn.Sequential(
    nn.Linear(D_in, H, bias=False),
    nn.ReLU(),
    nn.Linear(H, D_out))
model = model.cuda()
nn.init.normal_(model[0].weight)
nn.init.normal_(model[2].weight)
loss_fn = nn.MSELoss(reduction='sum')
learning_rate = 1e-6
for t in range(500):
    # forward pass
    y_pred = model(x)
    # compute loss
    loss = loss_fn(y_pred, y)
    print(t, loss.item())
    # backward pass
    # 1. compute the gradients
    loss.backward()
    # 2. update the parameters
    with torch.no_grad():  # keep the weight updates out of the computation graph
        for param in model.parameters():
            param -= learning_rate * param.grad
    model.zero_grad()  # zero all grads, otherwise backward() keeps accumulating
>>> 0 34835004.0
1 35050880.0
2 39201828.0
3 40013600.0
4 32634348.0
5 20220532.0
6 10047959.0
7 4703330.0
8 2446285.25
9 1520974.75
10 1095113.5
...
490 0.00017202863818965852
491 0.00016858639719430357
492 0.0001655699743423611
493 0.0001626911835046485
494 0.0001595482463017106
495 0.00015681094373576343
496 0.00015449726197402924
497 0.00015196521417237818
498 0.00014958885731175542
499 0.0001468765112804249
model
# model[0].weight gives the first layer's weights, model[2].weight the second layer's, and so on
>>> Sequential(
(0): Linear(in_features=1000, out_features=100, bias=False)
(1): ReLU()
(2): Linear(in_features=100, out_features=10, bias=True)
)
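A quick way to list every learnable parameter (an illustrative sketch):
for name, param in model.named_parameters():
    print(name, param.shape)
>>> 0.weight torch.Size([100, 1000])
2.weight torch.Size([10, 100])
2.bias torch.Size([10])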
4.5 PyTorch: optim
This time, instead of updating the model's weights by hand, we use the optim package to update the parameters for us.
The optim package provides many different optimization methods, including SGD+momentum, RMSProp, Adam, and so on; a few constructors are sketched below.
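The constructors all follow the same shape: pass in the model's parameters plus optimizer-specific hyperparameters (an illustrative sketch, assuming a model as defined in the previous section):
opt_sgd = torch.optim.SGD(model.parameters(), lr=1e-6, momentum=0.9)
opt_rmsprop = torch.optim.RMSprop(model.parameters(), lr=1e-4)
opt_adam = torch.optim.Adam(model.parameters(), lr=1e-4)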
# the usual training template
import torch.nn as nn
N, D_in, H, D_out = 64, 1000, 100, 10  # batch size, input dim, hidden dim, output dim
# randomly generate training data
x = torch.randn(N, D_in).cuda()
y = torch.randn(N, D_out).cuda()
model = nn.Sequential(
    nn.Linear(D_in, H, bias=False),
    nn.ReLU(),
    nn.Linear(H, D_out)
)
nn.init.normal_(model[0].weight)
nn.init.normal_(model[2].weight)
model = model.cuda()
loss_fn = nn.MSELoss(reduction='sum')
# learning_rate = 1e-4
# optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
learning_rate = 1e-6
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
for t in range(500):
    # forward pass
    y_pred = model(x)
    # compute loss
    loss = loss_fn(y_pred, y)
    print(t, loss.item())
    optimizer.zero_grad()  # zero the grads, otherwise backward() keeps accumulating
    # backward pass
    loss.backward()
    # update model parameters
    optimizer.step()
>>> 0 34333472.0
1 33498260.0
2 33678632.0
3 29963248.0
4 21715316.0
5 12911738.0
6 6800274.5
7 3608358.25
8 2116238.25
9 1414878.75
10 1051604.75
...
490 7.164599082898349e-05
491 7.026739331195131e-05
492 6.93075853632763e-05
493 6.831727660028264e-05
494 6.73521135468036e-05
495 6.660958752036095e-05
496 6.552773993462324e-05
497 6.443727761507034e-05
498 6.32639930699952e-05
499 6.229060818441212e-05
4.6 PyTorch: custom nn Modules
We can define a model as a class that inherits from nn.Module. Whenever you need something more complex than a Sequential model can express, define your own nn.Module subclass.
# use this template to define a custom network
import torch.nn as nn
N, D_in, H, D_out = 64, 1000, 100, 10  # batch size, input dim, hidden dim, output dim
# randomly generate training data
x = torch.randn(N, D_in).cuda()
y = torch.randn(N, D_out).cuda()
class TwoLayerNet(nn.Module):
    def __init__(self, D_in, H, D_out):
        super(TwoLayerNet, self).__init__()
        self.linear1 = nn.Linear(D_in, H, bias=False)
        self.linear2 = nn.Linear(H, D_out, bias=False)
    def forward(self, x):
        y_pred = self.linear2(self.linear1(x).clamp(min=0))
        return y_pred
model = TwoLayerNet(D_in, H, D_out)  # uses nn.Linear's default initialization
model = model.cuda()
loss_fn = nn.MSELoss(reduction='sum')
learning_rate = 1e-4
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
for t in range(500):
    # forward pass
    y_pred = model(x)
    # compute loss
    loss = loss_fn(y_pred, y)
    print(t, loss.item())
    optimizer.zero_grad()  # zero the grads, otherwise backward() keeps accumulating
    # backward pass
    loss.backward()
    # update model parameters
    optimizer.step()
>>> 0 721.03759765625
1 703.1661987304688
2 685.8058471679688
3 668.9493408203125
4 652.6142578125
5 636.874755859375
6 621.7089233398438
7 607.0018920898438
8 592.7435913085938
9 578.9154052734375
10 565.52880859375
...
490 3.6840455663877947e-07
491 3.4657691116990463e-07
492 3.2610486755402235e-07
493 3.067314366944629e-07
494 2.884615071252483e-07
495 2.71268362439514e-07
496 2.551720683641179e-07
497 2.3983261598914396e-07
498 2.2540255883995997e-07
499 2.1196282773416897e-07