PyTorch

What is PyTorch?

PyTorch is a Python-based scientific computing library with the following characteristics:

  • It is similar to NumPy, but it can make use of GPUs
  • It lets you define deep learning models and train and use them flexibly

Tensors

A Tensor is similar to NumPy's ndarray; the difference is that a Tensor can run on the GPU for accelerated computation.

In [2]:

import torch

Construct an uninitialized 5x3 matrix:

In [4]:

x = torch.empty(5,3)
x

Out[4]:

tensor([[ 0.0000e+00, -8.5899e+09,  0.0000e+00],
        [-8.5899e+09,         nan,  0.0000e+00],
        [ 2.7002e-06,  1.8119e+02,  1.2141e+01],
        [ 7.8503e+02,  6.7504e-07,  6.5200e-10],
        [ 2.9537e-06,  1.7186e-04,         nan]])

Construct a randomly initialized matrix:

In [5]:

x = torch.rand(5,3)
x

Out[5]:

tensor([[0.4628, 0.7432, 0.9785],
        [0.2068, 0.4441, 0.9176],
        [0.1027, 0.5275, 0.3884],
        [0.9380, 0.2113, 0.2839],
        [0.0094, 0.4001, 0.6483]])

Construct a matrix of all zeros with dtype long:

In [8]:

x = torch.zeros(5,3,dtype=torch.long)
x

Out[8]:

tensor([[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]])

In [11]:

x = torch.zeros(5,3).long()
x.dtype

Out[11]:

torch.int64

Construct a tensor directly from data:

In [12]:

x = torch.tensor([5.5,3])
x

Out[12]:

tensor([5.5000, 3.0000])

You can also create a tensor from an existing tensor. These methods reuse properties of the input tensor, such as its data type, unless new values are provided.

In [16]:

x = x.new_ones(5,3, dtype=torch.double)
x

Out[16]:

tensor([[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]], dtype=torch.float64)

In [17]:

x = torch.randn_like(x, dtype=torch.float)
x

Out[17]:

tensor([[ 0.2411, -0.3961, -0.9206],
        [-0.0508,  0.2653,  0.4685],
        [ 0.5368, -0.3606, -0.0073],
        [ 0.3383,  0.6826,  1.7368],
        [-0.0811, -0.6957, -0.4566]])

Get the shape of a tensor:

In [20]:

x.shape

Out[20]:

torch.Size([5, 3])

Note

torch.Size is a tuple, so it supports all tuple operations.
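
Because it behaves like a tuple, indexing and unpacking work as expected, for example:

rows, cols = x.shape   # tuple unpacking; here rows is 5 and cols is 3
x.shape[0]             # indexing works too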

Operations

There are many kinds of tensor operations. Let's start with addition.

In [21]:

y = torch.rand(5,3)
y

Out[21]:

tensor([[0.9456, 0.3996, 0.1981],
        [0.8728, 0.7097, 0.3721],
        [0.7489, 0.9502, 0.6241],
        [0.5176, 0.0200, 0.5130],
        [0.3552, 0.2710, 0.7392]])

In [23]:

x + y

Out[23]:

tensor([[ 1.1866,  0.0035, -0.7225],
        [ 0.8220,  0.9750,  0.8406],
        [ 1.2857,  0.5896,  0.6168],
        [ 0.8559,  0.7026,  2.2498],
        [ 0.2741, -0.4248,  0.2826]])

Another way to write addition:

In [24]:

torch.add(x, y)

Out[24]:

tensor([[ 1.1866,  0.0035, -0.7225],
        [ 0.8220,  0.9750,  0.8406],
        [ 1.2857,  0.5896,  0.6168],
        [ 0.8559,  0.7026,  2.2498],
        [ 0.2741, -0.4248,  0.2826]])

Addition: providing an output tensor to store the result

In [26]:

result = torch.empty(5,3)
torch.add(x, y, out=result)
# result = x + y
result

Out[26]:

tensor([[ 1.1866,  0.0035, -0.7225],
        [ 0.8220,  0.9750,  0.8406],
        [ 1.2857,  0.5896,  0.6168],
        [ 0.8559,  0.7026,  2.2498],
        [ 0.2741, -0.4248,  0.2826]])

In-place addition

In [28]:

y.add_(x)
y

Out[28]:

tensor([[ 1.1866,  0.0035, -0.7225],
        [ 0.8220,  0.9750,  0.8406],
        [ 1.2857,  0.5896,  0.6168],
        [ 0.8559,  0.7026,  2.2498],
        [ 0.2741, -0.4248,  0.2826]])

Note

Any operation that mutates a tensor in-place has a trailing _. For example, x.copy_(y) and x.t_() both change x.
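
For example (a small illustration using fresh tensors, so the ones above are left untouched):

a = torch.zeros(2, 3)
b = torch.rand(2, 3)
a.copy_(b)   # copies the values of b into a; a itself is modified
a.t_()       # transposes a in place; a now has shape (3, 2)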

All the familiar NumPy-style indexing works on PyTorch tensors.

In [31]:

x[1:, 1:]

Out[31]:

tensor([[ 0.2653,  0.4685],
        [-0.3606, -0.0073],
        [ 0.6826,  1.7368],
        [-0.6957, -0.4566]])

Resizing: if you want to resize/reshape a tensor, use torch.view:

In [39]:

x = torch.randn(4,4)
y = x.view(16)
z = x.view(-1,8)
z

Out[39]:

tensor([[-0.5683,  1.3885, -2.0829, -0.7613, -1.9115,  0.3732, -0.2055, -1.2300],
        [-0.2612, -0.4682, -1.0596,  0.7447,  0.7603, -0.4281,  0.5495,  0.1025]])

If you have a tensor with a single element, use .item() to get its value as a Python number.

In [40]:

x = torch.randn(1)
x

Out[40]:

tensor([-1.1493])

In [44]:

x.item()

Out[44]:

-1.1493233442306519

transpose(1, 0) swaps the first two dimensions; here it turns the 2x8 tensor z into an 8x2 tensor:

In [48]:

z.transpose(1,0)

Out[48]:

tensor([[-0.5683, -0.2612],
        [ 1.3885, -0.4682],
        [-2.0829, -1.0596],
        [-0.7613,  0.7447],
        [-1.9115,  0.7603],
        [ 0.3732, -0.4281],
        [-0.2055,  0.5495],
        [-1.2300,  0.1025]])

Further reading

All tensor operations, including transposing, indexing, slicing, mathematical operations, linear algebra, and random number generation, are documented at <https://pytorch.org/docs/torch>.

Converting between NumPy and Tensors

Converting between a Torch Tensor and a NumPy array is very easy.

The Torch Tensor and the NumPy array share the same underlying memory, so changing one also changes the other.

Converting a Torch Tensor to a NumPy array

In [49]:

a = torch.ones(5)
a

Out[49]:

tensor([1., 1., 1., 1., 1.])

In [50]:

b = a.numpy()
b

Out[50]:

array([1., 1., 1., 1., 1.], dtype=float32)

Change a value in the numpy array:

In [51]:

b[1] = 2
b

Out[51]:

array([1., 2., 1., 1., 1.], dtype=float32)

In [52]:

a

Out[52]:

tensor([1., 2., 1., 1., 1.])

Converting a NumPy ndarray to a Torch Tensor

In [54]:

import numpy as np

In [55]:

a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
[2. 2. 2. 2. 2.]

In [56]:

b

Out[56]:

tensor([2., 2., 2., 2., 2.], dtype=torch.float64)

All Tensors on the CPU support converting to NumPy arrays and back.

CUDA Tensors

Tensors can be moved to other devices with the .to method.

In [60]:

if torch.cuda.is_available():
    device = torch.device("cuda")
    y = torch.ones_like(x, device=device)
    x = x.to(device)
    z = x + y
    print(z)
    print(z.to("cpu", torch.double))

Out[60]:

False

In [ ]:

# two equivalent ways to move a tensor back to the CPU and convert it to numpy
y.to("cpu").data.numpy()
y.cpu().data.numpy()

In [ ]:

# move all model parameters to the GPU
model = model.cuda()

Warm-up: implementing a two-layer neural network in numpy

A fully connected ReLU network with a single hidden layer and no bias, trained to predict y from x with an L2 loss.

  • h = W1 x
  • a = max(0, h)
  • y_hat = W2 a

This implementation uses numpy alone to compute the forward pass, the loss, and the backward pass.

  • forward pass
  • loss
  • backward pass

A numpy ndarray is just a generic n-dimensional array. It knows nothing about deep learning, gradients, or computation graphs; it is simply a data structure for numerical computation.
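
For reference, with loss = sum((y_pred - y)^2), the gradients computed in the backward pass below follow from the chain rule:

  • grad_y_pred = dloss/dy_pred = 2 * (y_pred - y)
  • grad_w2 = h_relu^T · grad_y_pred
  • grad_h_relu = grad_y_pred · w2^T
  • grad_h = grad_h_relu with entries zeroed wherever h < 0 (the derivative of ReLU)
  • grad_w1 = x^T · grad_h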

In [ ]:

N, D_in, H, D_out = 64, 1000, 100, 10

# randomly create some training data
x = np.random.randn(N, D_in)
y = np.random.randn(N, D_out)

w1 = np.random.randn(D_in, H)
w2 = np.random.randn(H, D_out)

learning_rate = 1e-6
for it in range(500):
    # Forward pass
    h = x.dot(w1)               # N * H
    h_relu = np.maximum(h, 0)   # N * H
    y_pred = h_relu.dot(w2)     # N * D_out

    # compute loss
    loss = np.square(y_pred - y).sum()
    print(it, loss)

    # Backward pass
    # compute the gradient
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.T.dot(grad_y_pred)
    grad_h_relu = grad_y_pred.dot(w2.T)
    grad_h = grad_h_relu.copy()
    grad_h[h < 0] = 0
    grad_w1 = x.T.dot(grad_h)

    # update weights of w1 and w2
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2

PyTorch: Tensors

This time we use PyTorch tensors for the forward pass, the loss, and the backward pass.

A PyTorch Tensor is very similar to a numpy ndarray. The biggest difference is that a PyTorch Tensor can run on either the CPU or the GPU; to compute on the GPU, you need to convert the Tensor to the cuda type.
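
As a rough sketch of what that conversion looks like (assuming a CUDA device is available; the training code below stays on the CPU):

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(64, 1000, device=device)    # the data is created directly on the chosen device
w1 = torch.randn(1000, 100, device=device)  # so are the weights
h = x.mm(w1)                                # this matrix multiply then runs on that device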

In [ ]:

N, D_in, H, D_out = 64, 1000, 100, 10

# randomly create some training data
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

w1 = torch.randn(D_in, H)
w2 = torch.randn(H, D_out)

learning_rate = 1e-6
for it in range(500):
    # Forward pass
    h = x.mm(w1)              # N * H
    h_relu = h.clamp(min=0)   # N * H
    y_pred = h_relu.mm(w2)    # N * D_out

    # compute loss
    loss = (y_pred - y).pow(2).sum().item()
    print(it, loss)

    # Backward pass
    # compute the gradient
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.t().mm(grad_y_pred)
    grad_h_relu = grad_y_pred.mm(w2.t())
    grad_h = grad_h_relu.clone()
    grad_h[h < 0] = 0
    grad_w1 = x.t().mm(grad_h)

    # update weights of w1 and w2
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2

A simple autograd example

In [72]:

x = torch.tensor(1., requires_grad=True)
w = torch.tensor(2., requires_grad=True)
b = torch.tensor(3., requires_grad=True)

y = w*x + b # y = 2*1+3

y.backward()

# dy/dw = x = 1, dy/dx = w = 2, dy/db = 1
print(w.grad)
print(x.grad)
print(b.grad)
tensor(1.)
tensor(2.)
tensor(1.)

PyTorch: Tensors and autograd

An important feature of PyTorch is autograd: once you define the forward pass and compute the loss, PyTorch can automatically compute the gradients of all model parameters.

A PyTorch Tensor represents a node in a computation graph. If x is a Tensor with x.requires_grad=True, then x.grad is another Tensor holding the gradient of x with respect to some scalar value (usually the loss).

In [ ]:

N, D_in, H, D_out = 64, 1000, 100, 10

# randomly create some training data
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

w1 = torch.randn(D_in, H, requires_grad=True)
w2 = torch.randn(H, D_out, requires_grad=True)

learning_rate = 1e-6
for it in range(500):
    # Forward pass
    y_pred = x.mm(w1).clamp(min=0).mm(w2)

    # compute loss
    loss = (y_pred - y).pow(2).sum()  # computation graph
    print(it, loss.item())

    # Backward pass
    loss.backward()

    # update weights of w1 and w2
    with torch.no_grad():
        w1 -= learning_rate * w1.grad
        w2 -= learning_rate * w2.grad
        w1.grad.zero_()
        w2.grad.zero_()

PyTorch: nn

This time we build the network with PyTorch's nn package. PyTorch autograd builds the computation graph and computes the gradients for us automatically.

In [ ]:

import torch.nn as nn

N, D_in, H, D_out = 64, 1000, 100, 10

# randomly create some training data
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H, bias=False),  # w_1 * x (no bias term)
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out, bias=False),
)

torch.nn.init.normal_(model[0].weight)
torch.nn.init.normal_(model[2].weight)

# model = model.cuda()

loss_fn = nn.MSELoss(reduction='sum')

learning_rate = 1e-6
for it in range(500):
    # Forward pass
    y_pred = model(x)  # model.forward()

    # compute loss
    loss = loss_fn(y_pred, y)  # computation graph
    print(it, loss.item())

    # Backward pass
    loss.backward()

    # update model weights
    with torch.no_grad():
        for param in model.parameters():  # param (tensor, grad)
            param -= learning_rate * param.grad

    model.zero_grad()

In [113]:

model[0].weight

Out[113]:

Parameter containing:
tensor([[-0.0218,  0.0212,  0.0243,  ...,  0.0230,  0.0247,  0.0168],
        [-0.0144,  0.0177, -0.0221,  ...,  0.0161,  0.0098, -0.0172],
        [ 0.0086, -0.0122, -0.0298,  ..., -0.0236, -0.0187,  0.0295],
        ...,
        [ 0.0266, -0.0008, -0.0141,  ...,  0.0018,  0.0319, -0.0129],
        [ 0.0296, -0.0005,  0.0115,  ...,  0.0141, -0.0088, -0.0106],
        [ 0.0289, -0.0077,  0.0239,  ..., -0.0166, -0.0156, -0.0235]],
       requires_grad=True)

PyTorch: optim

This time, instead of updating the model's weights by hand, we use the optim package. optim provides a variety of optimization methods, including SGD+momentum, RMSprop, Adam, and more.
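
For example, the optimizers mentioned above are all constructed the same way; only the hyper-parameters differ (the learning rates here are purely illustrative):

optimizer = torch.optim.SGD(model.parameters(), lr=1e-6, momentum=0.9)  # SGD with momentum
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-4)            # RMSprop
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)               # Adam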

In [ ]:

import torch.nn as nn

N, D_in, H, D_out = 64, 1000, 100, 10

# randomly create some training data
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H, bias=False),  # w_1 * x (no bias term)
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out, bias=False),
)

torch.nn.init.normal_(model[0].weight)
torch.nn.init.normal_(model[2].weight)

# model = model.cuda()

loss_fn = nn.MSELoss(reduction='sum')
# learning_rate = 1e-4
# optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

learning_rate = 1e-6
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

for it in range(500):
    # Forward pass
    y_pred = model(x)  # model.forward()

    # compute loss
    loss = loss_fn(y_pred, y)  # computation graph
    print(it, loss.item())

    optimizer.zero_grad()
    # Backward pass
    loss.backward()

    # update model parameters
    optimizer.step()

PyTorch: custom nn Modules

We can define a model as a class that inherits from nn.Module. Whenever you need a model more complex than what Sequential can express, define it as an nn.Module (a sketch of such a model follows the example below).

In [ ]:

import torch.nn as nn

N, D_in, H, D_out = 64, 1000, 100, 10

# randomly create some training data
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

class TwoLayerNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        super(TwoLayerNet, self).__init__()
        # define the model architecture
        self.linear1 = torch.nn.Linear(D_in, H, bias=False)
        self.linear2 = torch.nn.Linear(H, D_out, bias=False)

    def forward(self, x):
        y_pred = self.linear2(self.linear1(x).clamp(min=0))
        return y_pred

model = TwoLayerNet(D_in, H, D_out)
loss_fn = nn.MSELoss(reduction='sum')
learning_rate = 1e-4
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

for it in range(500):
    # Forward pass
    y_pred = model(x)  # model.forward()

    # compute loss
    loss = loss_fn(y_pred, y)  # computation graph
    print(it, loss.item())

    optimizer.zero_grad()
    # Backward pass
    loss.backward()

    # update model parameters
    optimizer.step()
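
As promised above, a sketch of the kind of model that is awkward to express with Sequential but straightforward as a custom nn.Module: a hypothetical two-layer net whose output adds a skip (residual) connection from its input. The class name and its reuse of D_in and H are purely illustrative.

class SkipNet(torch.nn.Module):
    def __init__(self, D, H):
        super(SkipNet, self).__init__()
        self.linear1 = torch.nn.Linear(D, H)
        self.linear2 = torch.nn.Linear(H, D)

    def forward(self, x):
        # the skip connection (x + ...) is what Sequential cannot express directly
        return x + self.linear2(self.linear1(x).clamp(min=0))

skip_model = SkipNet(D_in, H)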
