This post is my study notes for Li Mu's "Dive into Deep Learning" course; this part covers the chapter on convolutional neural networks.

From Fully Connected Layers to Convolutions

Summary:

  • Translation invariance of images means we can process local patches the same way no matter where they are located
  • Locality means that computing the corresponding hidden representation only requires a small local patch of image pixels
  • In image processing, convolutional layers usually need far fewer parameters than fully connected layers while still delivering strong performance
  • A convolutional neural network (CNN) is a special class of neural networks that can contain multiple convolutional layers
  • Multiple input and output channels let the model capture multiple aspects of the image's features at every spatial position

Image Convolutions

import torch
from torch import nn
from d2l import torch as d2l
def corr2d(X, K):  # @save
    h, w = K.shape  # kernel height and width
    Y = torch.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = (X[i:i + h, j:j + w] * K).sum()
    return Y
class Conv2D(nn.Module):
    def __init__(self, kernel_size):
        super().__init__()
        self.weight = nn.Parameter(torch.rand(kernel_size))
        self.bias = nn.Parameter(torch.zeros(1))
    def forward(self, x):
        return corr2d(x, self.weight) + self.bias
X = torch.ones((6,8))
X[:,2:6] = 0
K = torch.tensor([[1.0,-1.0]])
Y = corr2d(X,K)
print(Y)
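As a sketch of how the Conv2D module above could actually be used, the kernel can also be learned from input–output pairs by plain gradient descent (the learning rate of 0.03 and the 50 iterations here are illustrative choices, not the course's exact settings):

```python
import torch

def corr2d(X, K):
    # 2-D cross-correlation, same as above
    h, w = K.shape
    Y = torch.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = (X[i:i + h, j:j + w] * K).sum()
    return Y

X = torch.ones((6, 8))
X[:, 2:6] = 0
Y = corr2d(X, torch.tensor([[1.0, -1.0]]))  # targets produced by the true kernel

K_hat = torch.zeros((1, 2), requires_grad=True)  # the kernel to learn
for _ in range(50):
    loss = ((corr2d(X, K_hat) - Y) ** 2).sum()
    loss.backward()
    with torch.no_grad():
        K_hat -= 0.03 * K_hat.grad  # gradient descent step
        K_hat.grad.zero_()
print(K_hat.detach())  # close to [[1., -1.]]
```

Because the loss is a simple quadratic in the two kernel entries, this converges quickly to the edge-detection kernel that generated Y.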

Padding and Stride

When the input image has shape $n_h \times n_w$ and the kernel has shape $k_h \times k_w$, the output shape is $(n_h - k_h + 1)\times(n_w - k_w + 1)$.

If we pad with $p_h$ rows and $p_w$ columns (split evenly between top/bottom and left/right), the final output shape becomes:

$(n_h - k_h + p_h + 1)\times(n_w - k_w + p_w + 1)$

If we additionally set the vertical stride to $s_h$ and the horizontal stride to $s_w$, the output shape is:

$\lfloor (n_h - k_h + p_h + s_h)/s_h \rfloor \times \lfloor (n_w - k_w + p_w + s_w)/s_w \rfloor$
import torch
from torch import nn
def comp_conv2d(conv2d, x):
    x = x.reshape((1, 1) + x.shape)  # add batch and channel dims to make the input 4-D
    y = conv2d(x)
    return y.reshape(y.shape[2:])  # drop the batch and channel dims again
conv2d = nn.Conv2d(1,1,kernel_size=(3,5),padding=(0,1),stride=(3,4))
X = torch.rand(size=(8,8))
print(comp_conv2d(conv2d,X).shape)
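To sanity-check the floor formula above against PyTorch, we can plug in the same hyperparameters as the snippet (kernel (3,5), padding (0,1), stride (3,4)); note that $p_h$, $p_w$ in the formula are the *total* padding, i.e. twice the per-side padding passed to nn.Conv2d:

```python
import math
import torch
from torch import nn

n_h, n_w = 8, 8
k_h, k_w = 3, 5
p_h, p_w = 0 * 2, 1 * 2  # total padding is twice the per-side padding
s_h, s_w = 3, 4

out_h = math.floor((n_h - k_h + p_h + s_h) / s_h)
out_w = math.floor((n_w - k_w + p_w + s_w) / s_w)

conv2d = nn.Conv2d(1, 1, kernel_size=(3, 5), padding=(0, 1), stride=(3, 4))
y = conv2d(torch.rand(1, 1, n_h, n_w))
print((out_h, out_w), tuple(y.shape[2:]))  # (2, 2) (2, 2)
```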

Summary

  • Padding can increase the output height and width; it is often used to make the output the same height and width as the input
  • Stride can reduce the output height and width, e.g. to only $\frac{1}{n}$ of the input's height and width
  • Padding and stride are effective ways to adjust the dimensions of the data

Multiple Input and Output Channels

With multiple input channels, the kernel generally has the same number of channels as the input. The computation cross-correlates each input channel's 2-D tensor with the corresponding channel of the kernel, producing one result per channel; these per-channel results are then summed to give the value at that position of the single-channel output, as in the figure below:

(figure)
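The per-channel computation just described can be sketched as follows (corr2d is repeated so the snippet is self-contained; the small input and kernel values are illustrative):

```python
import torch

def corr2d(X, K):
    # 2-D cross-correlation for a single channel
    h, w = K.shape
    Y = torch.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = (X[i:i + h, j:j + w] * K).sum()
    return Y

def corr2d_multi_in(X, K):
    # cross-correlate each channel pair, then sum over channels
    return sum(corr2d(x, k) for x, k in zip(X, K))

X = torch.tensor([[[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]],
                  [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]])
K = torch.tensor([[[0.0, 1.0], [2.0, 3.0]], [[1.0, 2.0], [3.0, 4.0]]])
print(corr2d_multi_in(X, K))  # tensor([[ 56.,  72.], [104., 120.]])
```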

With multiple output channels, each output channel can be seen as a response to a different feature. If $c_i$ and $c_o$ are the numbers of input and output channels, then to produce those multiple output channels we need one kernel tensor of shape $c_i \times k_h \times k_w$ per output channel, so the full kernel has shape $c_o \times c_i \times k_h \times k_w$.

There is also a special convolutional layer: the $1\times 1$ convolution. Since its height and width are both 1, it cannot recognize interactions between adjacent elements along the height and width dimensions; its only computation happens across channels. See the figure below:

(figure)

This layer keeps the input and output height and width the same while changing the number of channels. Every element of the output is a linear combination of the elements at the same position across the input channels, so the layer acts like a fully connected layer: each input channel is an input node, and each channel of the kernel is the corresponding weight.

The $1\times 1$ convolution is therefore typically used to adjust the number of channels between layers and to control model complexity.
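A minimal sketch of that equivalence: a $1\times 1$ convolution gives the same result as a fully connected layer applied at every pixel across channels (the shapes below are arbitrary illustrative choices):

```python
import torch
from torch import nn

c_in, c_out, h, w = 3, 2, 4, 4
x = torch.rand(1, c_in, h, w)
conv = nn.Conv2d(c_in, c_out, kernel_size=1, bias=False)

# the same weights used as a per-pixel fully connected layer:
# flatten the pixels, multiply by the (c_out, c_in) weight matrix, restore the shape
weight = conv.weight.reshape(c_out, c_in)
y_fc = (weight @ x.reshape(c_in, h * w)).reshape(1, c_out, h, w)

y_conv = conv(x)
assert torch.allclose(y_conv, y_fc, atol=1e-5)
```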

Pooling Layers

Pooling layers help with convolution's excessive sensitivity to pixel positions, for example:

(figure)

Pooling comes in two variants: max pooling and average pooling.

A concrete implementation:

pool2d = nn.MaxPool2d((2,3),padding=(1,1),stride=(2,3))

For multi-channel inputs, pooling keeps the number of output channels equal to the number of input channels.
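A quick shape check of that pooling layer (the 4×4 input size is chosen arbitrarily); with per-side padding 1, PyTorch gives height floor((4+2-2)/2)+1 = 3 and width floor((4+2-3)/3)+1 = 2:

```python
import torch
from torch import nn

pool2d = nn.MaxPool2d((2, 3), padding=(1, 1), stride=(2, 3))
x = torch.arange(16, dtype=torch.float32).reshape(1, 1, 4, 4)
y = pool2d(x)
print(y.shape)  # torch.Size([1, 1, 3, 2])
```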

Summary

  • For a given window, a max pooling layer outputs the maximum value inside the window; an average pooling layer outputs the mean
  • One of the main benefits of pooling is to reduce the convolutional layers' over-sensitivity to position
  • Padding and stride can be specified for pooling layers
  • Max pooling combined with a stride greater than 1 reduces the spatial dimensions
  • The number of output channels of a pooling layer equals the number of input channels

Convolutional Neural Networks (LeNet)

import torch
from matplotlib import pyplot as plt
from torch import nn
from d2l import torch as d2l
class Reshape(torch.nn.Module):
    def forward(self, x):
        return x.view(-1, 1, 28, 28)
net = nn.Sequential(
    Reshape(),
    nn.Conv2d(1, 6, kernel_size=5, padding=2),
    nn.Sigmoid(),
    nn.AvgPool2d(kernel_size=2, stride=2),
    nn.Conv2d(6, 16, kernel_size=5),
    nn.Sigmoid(),
    nn.AvgPool2d(kernel_size=2, stride=2),
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120),
    nn.Sigmoid(),
    nn.Linear(120, 84),
    nn.Sigmoid(),
    nn.Linear(84, 10)
)
# load the dataset
batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size=batch_size)
# Modified evaluation function that computes on the GPU
def evaluate_accuracy_gpu(net, data_iter, device=None):  # @save
    if isinstance(net, torch.nn.Module):
        net.eval()  # switch to evaluation mode
        if not device:  # if no device is given, use the device of net's parameters
            device = next(iter(net.parameters())).device
    metric = d2l.Accumulator(2)
    for X, y in data_iter:
        if isinstance(X, list):
            X = [x.to(device) for x in X]
        else:
            X = X.to(device)
        y = y.to(device)
        metric.add(d2l.accuracy(net(X), y), y.numel())
    return metric[0] / metric[1]
# Modified training function so that it can run on a GPU
def train_ch6(net, train_iter, test_iter, num_epochs, lr, device):  #@save
    def init_weights(m):
        if type(m) == nn.Linear or type(m) == nn.Conv2d:
            nn.init.xavier_uniform_(m.weight)
    net.apply(init_weights)
    print("training on:", device)
    net.to(device)
    optimizer = torch.optim.SGD(net.parameters(), lr=lr)
    loss = nn.CrossEntropyLoss()
    animator = d2l.Animator(xlabel='epoch', xlim=[1, num_epochs],
                            legend=['train loss', 'train acc', 'test acc'])
    timer, num_batches = d2l.Timer(), len(train_iter)
    for epoch in range(num_epochs):
        metric = d2l.Accumulator(3)
        net.train()  # switch to training mode
        for i, (X, y) in enumerate(train_iter):
            timer.start()  # start timing
            optimizer.zero_grad()  # clear the gradients
            X, y = X.to(device), y.to(device)
            y_hat = net(X)
            l = loss(y_hat, y)
            l.backward()
            optimizer.step()
            with torch.no_grad():
                metric.add(l * X.shape[0], d2l.accuracy(y_hat, y), X.shape[0])
            timer.stop()  # stop timing
            train_l = metric[0] / metric[2]
            train_acc = metric[1] / metric[2]
            if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1:
                animator.add(epoch + (i + 1) / num_batches,
                             (train_l, train_acc, None))
        test_acc = evaluate_accuracy_gpu(net, test_iter)
        animator.add(epoch + 1, (None, None, test_acc))
    print(f'loss {train_l:.3f}, train acc {train_acc:.3f}, '
          f'test acc {test_acc:.3f}')
    print(f'{metric[2] * num_epochs / timer.sum():.1f} examples/sec '
          f'on {str(device)}')
lr, num_epoch = 0.5,20
train_ch6(net, train_iter, test_iter, num_epoch, lr ,d2l.try_gpu())
plt.show()

(figure)

loss 0.417, train acc 0.847, test acc 0.836
36144.960085 examples/sec on cuda:0

Summary

  • Convolutional neural networks are a class of networks that use convolutional layers
  • In a CNN we combine convolutional layers, nonlinear activation functions, and pooling layers
  • To build high-performance CNNs, we usually stack the convolutional layers so that the spatial resolution of the representations gradually decreases while the number of channels increases
  • In traditional CNNs, the representation encoded by the convolutional blocks is processed by one or more fully connected layers before the output
  • LeNet was one of the first published convolutional neural networks

Deep Convolutional Neural Networks (AlexNet)

import torch
from matplotlib import pyplot as plt
from torch import nn
from d2l import torch as d2l
net = nn.Sequential(
    nn.Conv2d(1, 96, kernel_size=11, stride=4, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.Linear(6400, 4096),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(4096, 4096),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(4096, 10)
)
batch_size = 128
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size, resize=224)
# load the data and stretch both height and width to 224
lr, num_epochs = 0.01, 10
d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr, d2l.try_gpu())
plt.show()

This took quite a while to run:

(figure)

loss 0.328, train acc 0.881, test acc 0.881
666.9 examples/sec on cuda:0

Networks Using Blocks (VGG)

VGG carries on AlexNet's ideas: it groups several convolutional layers plus one pooling layer into a block, lets you specify the number of convolutional layers per block and the number of blocks, extracts image features through multiple blocks, and finishes with fully connected layers.

A VGG block contains:

  • Several padded convolutional layers that keep the resolution unchanged
  • A nonlinear activation function after each convolutional layer
  • A final pooling layer

The code:

import torch
from matplotlib import pyplot as plt
from torch import nn
from d2l import torch as d2l
def vgg_block(num_convs, in_channels, out_channels):
    # builds a single VGG block
    layers = []
    for _ in range(num_convs):
        layers.append(nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1))
        layers.append(nn.ReLU())
        in_channels = out_channels
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)
def vgg(conv_arch):
    conv_blks = []
    in_channels = 1
    # build the convolutional blocks
    for (num_convs, out_channels) in conv_arch:
        conv_blks.append(vgg_block(num_convs, in_channels, out_channels))
        in_channels = out_channels
    return nn.Sequential(
        *conv_blks,
        nn.Flatten(),
        nn.Linear(out_channels * 7 * 7, 4096),
        nn.ReLU(),
        nn.Dropout(p=0.5),
        nn.Linear(4096, 4096),
        nn.ReLU(),
        nn.Dropout(p=0.5),
        nn.Linear(4096, 10)
    )
conv_arch = ((1, 64), (1, 128), (2, 256), (2, 512), (2, 512))
# first element: number of conv layers in the block; second: output channels
ratio = 4
small_conv_arch = [(pair[0], pair[1] // ratio) for pair in conv_arch]
# divide by ratio to shrink the number of channels
net = vgg(small_conv_arch)
lr, num_epochs, batch_size = 0.05, 10, 128
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size, resize=224)
d2l.train_ch6(net,train_iter, test_iter, num_epochs, lr, d2l.try_gpu())
plt.show()

(figure)

loss 0.170, train acc 0.936, test acc 0.912
378.0 examples/sec on cuda:0

Summary

  • VGG-11 builds the network from reusable convolutional blocks; different VGG models are defined by varying the number of convolutional layers and output channels per block
  • Blocks make the network definition very concise and let us design complex networks effectively
  • Research found that deep and narrow convolutions (more layers of $3\times 3$) work better than shallow and wide ones (e.g. fewer layers of $5\times 5$)

Network in Network (NiN)

All the previous networks share one trait: at the end, fully connected layers process the feature representation, which leads to a huge number of parameters. NiN's idea is to replace the fully connected layers with another module: the **$1\times 1$ convolutional layer**. A NiN block is one ordinary convolutional layer followed by two $1\times 1$ convolutional layers. After several NiN blocks, the channel count is expanded to the desired number of output classes, and a global average pooling layer with that many channels processes the result, i.e. each channel is averaged down to a single scalar, giving one value per output channel (out_channels values in total), which after softmax can serve as the output.
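A small sketch of that global average pooling step: each channel is averaged down to one scalar, so a (batch, channels, h, w) tensor becomes (batch, channels) after flattening (the tensor values here are just for illustration):

```python
import torch
from torch import nn

x = torch.arange(24, dtype=torch.float32).reshape(2, 3, 2, 2)  # (batch, channels, h, w)
gap = nn.Sequential(nn.AdaptiveAvgPool2d((1, 1)), nn.Flatten())
y = gap(x)
print(y.shape)          # torch.Size([2, 3])
print(y[0, 0].item())   # 1.5, the mean of x[0, 0]
```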

import torch
from matplotlib import pyplot as plt
from torch import nn
from d2l import torch as d2l
def nin_block(in_channels, out_channels, kernel_size, strides, padding):
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size, strides, padding),
        # the first conv layer maps to the target channel count and size
        nn.ReLU(),
        nn.Conv2d(out_channels, out_channels, kernel_size=1),
        nn.ReLU(),
        nn.Conv2d(out_channels, out_channels, kernel_size=1),
        nn.ReLU()  # the two 1*1 conv layers change neither size nor channels
    )
net = nn.Sequential(
    nin_block(1, 96, kernel_size=11, strides=4, padding=0),
    nn.MaxPool2d(3, stride=2),  # halves height and width
    nin_block(96, 256, kernel_size=5, strides=1, padding=2),
    nn.MaxPool2d(3, stride=2),
    nin_block(256, 384, kernel_size=3, strides=1, padding=1),
    nn.MaxPool2d(3, stride=2),
    nn.Dropout(p=0.5),
    # there are 10 label classes, so the last block outputs 10 channels
    nin_block(384, 10, kernel_size=3, strides=1, padding=1),
    nn.AdaptiveAvgPool2d((1, 1)),
    nn.Flatten()  # collapse the 4-D tensor to (batch size, output channels)
)
lr, num_epochs, batch_size = 0.1, 10, 128
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size, resize=224)
d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr, d2l.try_gpu())
plt.show()

(figure)

loss 0.383, train acc 0.857, test acc 0.847
513.3 examples/sec on cuda:0

Summary

  • NiN uses blocks composed of one convolutional layer and several $1\times 1$ convolutional layers; such blocks can be used inside a CNN to allow more per-pixel nonlinearity
  • NiN removes the fully connected layers, which overfit easily, replacing them with a global average pooling layer whose channel count equals the desired number of outputs
  • Removing the fully connected layers reduces overfitting and dramatically cuts the parameter count

Networks with Parallel Connections (GoogLeNet)

A problem with the earlier networks is that each convolutional layer's hyperparameters can differ, and since DNNs are so hard to interpret, it is difficult to say which hyperparameter setting is the one we need, or the best one. GoogLeNet therefore introduces the Inception block, which brings in the idea of parallel computation: it places several convolutional layers with commonly used but different hyperparameters side by side, hoping that combining multiple feature-extraction approaches yields the most useful features, as shown below:

(figure)

Its concrete structure is:

(figure)

import torch
from matplotlib import pyplot as plt
from torch import nn
from torch.nn import functional as F
from d2l import torch as d2l
class Inception(nn.Module):
    def __init__(self, in_channels, c1, c2, c3, c4, **kwargs):
        super(Inception, self).__init__(**kwargs)
        # path 1: a single 1*1 conv layer
        self.p1_1 = nn.Conv2d(in_channels, c1, kernel_size=1)
        # path 2: a 1*1 conv layer followed by a 3*3 conv layer
        self.p2_1 = nn.Conv2d(in_channels, c2[0], kernel_size=1)
        self.p2_2 = nn.Conv2d(c2[0], c2[1], kernel_size=3, padding=1)
        # path 3: a 1*1 conv layer followed by a 5*5 conv layer
        self.p3_1 = nn.Conv2d(in_channels, c3[0], kernel_size=1)
        self.p3_2 = nn.Conv2d(c3[0], c3[1], kernel_size=5, padding=2)
        # path 4: a 3*3 max pooling layer followed by a 1*1 conv layer
        self.p4_1 = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)
        self.p4_2 = nn.Conv2d(in_channels, c4, kernel_size=1)
    def forward(self, x):
        p1 = F.relu(self.p1_1(x))
        p2 = F.relu(self.p2_2(F.relu(self.p2_1(x))))
        p3 = F.relu(self.p3_2(F.relu(self.p3_1(x))))
        p4 = F.relu(self.p4_2(self.p4_1(x)))
        # concatenate along the channel dimension
        return torch.cat((p1, p2, p3, p4), dim=1)
b1 = nn.Sequential(
    nn.Conv2d(1,64, kernel_size=7, stride=2, padding=3),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
)
b2 = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=1),
    nn.ReLU(),
    nn.Conv2d(64, 192, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
)
b3 = nn.Sequential(
    Inception(192,64,(96,128),(16,32),32),
    Inception(256,128,(128,192),(32,96),64),
    nn.MaxPool2d(kernel_size=3,stride=2,padding=1)
)
b4 = nn.Sequential(
    Inception(480, 192, (96,208),(16,48), 64),
    Inception(512, 160, (112,224),(24,64), 64),
    Inception(512,128,(128,256),(24,64),64),
    Inception(512,112, (144,288),(32,64), 64),
    Inception(528, 256, (160,320),(32,128),128),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
)
b5 = nn.Sequential(
    Inception(832,256, (160,320),(32,128),128),
    Inception(832, 384, (192,384), (48,128),128),
    nn.AdaptiveAvgPool2d((1, 1)), nn.Flatten()
)
net = nn.Sequential(
    b1,b2,b3,b4,b5,nn.Linear(1024,10)
)
lr, num_epochs, batch_size = 0.05, 10, 128
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size, resize=96)
d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr, d2l.try_gpu())
plt.show()
"""
x = torch.rand(size=(1,1,96,96))
for layer in net:
    x = layer(x)
    print(layer.__class__.__name__, 'output shape \t', x.shape)
"""

(figure)

loss 0.284, train acc 0.891, test acc 0.884
731.9 examples/sec on cuda:0

Summary

  • An Inception block is a subnetwork with four paths; it extracts information in parallel through convolutional layers with different window shapes and a max pooling layer, and uses $1\times 1$ convolutions to reduce the per-pixel channel dimensionality and thus the model complexity
  • GoogLeNet chains many carefully designed Inception blocks with other layers (convolutional and fully connected); the channel allocation ratios within the Inception blocks were found through extensive experiments on ImageNet
  • GoogLeNet and its successors were for a time among the most efficient models on ImageNet: similar test accuracy at lower computational complexity

Batch Normalization

During training, the later layers typically have relatively large gradients, while the earlier layers' gradients shrink as they are multiplied through many layers of backpropagation. With a fixed learning rate, the early layers then update slowly while the later layers update quickly; by the time the later layers have nearly converged, the earlier layers have shifted, forcing the later layers to adapt all over again.

Batch normalization's idea: after each convolutional or linear layer, normalize its output toward some distribution (the target distribution differs per layer and is learned separately). Constraining the outputs to a desired distribution makes convergence faster.

Suppose the current minibatch $B$ contains the samples $\mathbf{x}=(x_1,x_2,\ldots,x_n)$; then:

$$\hat{\mu}_B=\frac{1}{\vert B\vert}\sum_{i\in B}x_i$$

$$\hat{\sigma}^2_B=\frac{1}{\vert B \vert}\sum_{i\in B}(x_i -\hat{\mu}_B)^2+\epsilon \quad (\epsilon \text{ keeps the variance from being } 0)$$

$$BN(x_i)=\gamma \frac{x_i - \hat{\mu}_B}{\hat{\sigma}_B}+\beta$$

Here $\gamma$ and $\beta$ can be viewed as the standard deviation and mean of the target distribution; they are two learnable parameters.
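A numeric sketch of the formulas above, checking manual normalization against nn.BatchNorm1d in training mode ($\gamma$ and $\beta$ are at their initial values 1 and 0, so only the normalization itself is visible):

```python
import torch
from torch import nn

torch.manual_seed(0)
x = torch.randn(8, 3)  # a minibatch of 8 samples with 3 features

bn = nn.BatchNorm1d(3, eps=1e-5)
bn.train()  # use minibatch statistics
y = bn(x)

mu = x.mean(dim=0)
var = x.var(dim=0, unbiased=False)  # biased variance, as in the formula
y_manual = (x - mu) / torch.sqrt(var + 1e-5)
assert torch.allclose(y, y_manual, atol=1e-5)
```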

Research suggests its effect may come from adding noise in each minibatch to control model complexity: since minibatches are drawn randomly, their means and variances differ, which amounts to applying a random shift $\hat{\mu}_B$ and a random scaling $\hat{\sigma}_B$ to that batch. Note that it does not need to be combined with Dropout.

It can be applied to the outputs of fully connected and convolutional layers, before the activation function, or to their inputs:

  • For fully connected layers, it acts on the feature dimension
  • For convolutional layers, it acts on the channel dimension

When we use batch normalization during training, we also need to record, for every place it is applied, the mean and variance over the whole dataset, so that samples can be normalized the same way at prediction time.

import torch
from matplotlib import pyplot as plt
from torch import nn
from d2l import torch as d2l
def batch_norm(X, gamma, beta, moving_mean, moving_var, eps, momentum):
    if not torch.is_grad_enabled():  # gradients disabled: we are predicting
        X_hat = (X - moving_mean) / torch.sqrt(moving_var + eps)  # eps avoids dividing by 0
        # these two running statistics estimate the whole dataset's mean and variance
    else:
        assert len(X.shape) in (2, 4)  # 2 dims: fully connected layer; 4 dims: convolutional layer
        if len(X.shape) == 2:
            mean = X.mean(dim=0)
            var = ((X - mean) ** 2).mean(dim=0)
        else:
            mean = X.mean(dim=(0, 2, 3), keepdim=True)
            # each channel is a distinct feature extracted from the image,
            # so mean and variance are computed per channel
            var = ((X - mean) ** 2).mean(dim=(0, 2, 3), keepdim=True)
        # training mode
        X_hat = (X - mean) / torch.sqrt(var + eps)
        moving_mean = momentum * moving_mean + (1.0 - momentum) * mean
        moving_var = momentum * moving_var + (1.0 - momentum) * var
    Y = gamma * X_hat + beta
    return Y, moving_mean.data, moving_var.data
class BatchNorm(nn.Module):
    def __init__(self,num_features, num_dims):
        super().__init__()
        if num_dims == 2:
            shape = (1, num_features)
        else:
            shape = (1, num_features, 1, 1)
        self.gamma = nn.Parameter(torch.ones(shape))
        self.beta = nn.Parameter(torch.zeros(shape))
        self.moving_mean = torch.zeros(shape)
        self.moving_var = torch.ones(shape)
    def forward(self, X):
        if self.moving_mean.device != X.device:
            self.moving_mean = self.moving_mean.to(X.device)
            self.moving_var = self.moving_var.to(X.device)
        Y,self.moving_mean, self.moving_var = batch_norm(X,self.gamma, self.beta, self.moving_mean,
                                                         self.moving_var, eps=1e-5, momentum=0.9)
        return Y
net = nn.Sequential(nn.Conv2d(1, 6, kernel_size=5),
                    BatchNorm(6, num_dims=4),
                    nn.Sigmoid(),
                    nn.MaxPool2d(kernel_size=2, stride=2),
                    nn.Conv2d(6, 16,kernel_size=5),
                    BatchNorm(16, num_dims=4),
                    nn.Sigmoid(),
                    nn.MaxPool2d(kernel_size=2, stride=2),
                    nn.Flatten(),
                    nn.Linear(16 * 4 * 4, 120),
                    BatchNorm(120, num_dims=2),
                    nn.Sigmoid(),
                    nn.Linear(120, 84),
                    BatchNorm(84, num_dims=2),
                    nn.Sigmoid(),
                    nn.Linear(84, 10))
lr, num_epochs, batch_size = 1.0, 10 ,256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
d2l.train_ch6(net,train_iter, test_iter, num_epochs, lr, d2l.try_gpu())
plt.show()

(figure)

loss 0.251, train acc 0.908, test acc 0.883
17375.8 examples/sec on cuda:0

torch.nn also provides a simple built-in implementation:

net = nn.Sequential(nn.Conv2d(1, 6, kernel_size=5),
                    nn.BatchNorm2d(6),
                    nn.Sigmoid(),
                    nn.MaxPool2d(kernel_size=2, stride=2),
                    nn.Conv2d(6, 16,kernel_size=5),
                    nn.BatchNorm2d(16),
                    nn.Sigmoid(),
                    nn.MaxPool2d(kernel_size=2, stride=2),
                    nn.Flatten(),
                    nn.Linear(16 * 4 * 4, 120),
                    nn.BatchNorm1d(120),
                    nn.Sigmoid(),
                    nn.Linear(120, 84),
                    nn.BatchNorm1d(84),
                    nn.Sigmoid(),
                    nn.Linear(84, 10))

Summary

  • During training, batch normalization uses the minibatch mean and standard deviation to continually adjust the network's intermediate outputs, making them more stable across all layers
  • Its use differs slightly between fully connected and convolutional layers; pay attention to which dimension it acts on
  • Like Dropout, batch normalization computes differently in training mode and prediction mode
  • Batch normalization has many beneficial side effects, chiefly regularization

Residual Networks (ResNet)

A question worth discussing first: does adding more layers always improve accuracy?

(figure)

ResNet is built around this idea; its most concrete embodiment is:

(figure)

Connecting a block's input to its output requires the two to have the same dimensions so they can be added directly; if the block changes the dimensions internally, the input must also be transformed to matching dimensions before the addition:

(figure)

Typically the input first passes through several ResNet blocks that halve the height and width, followed by several blocks that keep them unchanged, which reduces the computation needed for later feature extraction:

(figure)

The overall architecture is:

(figure)

The code:

import torch
from matplotlib import pyplot as plt
from torch import nn
from torch.nn import functional as F
from d2l import torch as d2l
class Residual(nn.Module):  #@save
    def __init__(self, input_channels, num_channels,
                 use_1x1conv=False, strides=1):
        super().__init__()
        self.conv1 = nn.Conv2d(input_channels, num_channels,
                               kernel_size=3, padding=1, stride=strides)
        self.conv2 = nn.Conv2d(num_channels, num_channels,
                               kernel_size=3, padding=1)
        if use_1x1conv:
            self.conv3 = nn.Conv2d(input_channels, num_channels,
                                   kernel_size=1, stride=strides)
        else:
            self.conv3 = None
        self.bn1 = nn.BatchNorm2d(num_channels)
        self.bn2 = nn.BatchNorm2d(num_channels)
    def forward(self, X):
        Y = F.relu(self.bn1(self.conv1(X)))
        Y = self.bn2(self.conv2(Y))
        if self.conv3:
            X = self.conv3(X)
        Y += X
        return F.relu(Y)
# the first module is essentially the same across these convolutional networks
b1 = nn.Sequential(nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),
                   nn.BatchNorm2d(64), nn.ReLU(),
                   nn.MaxPool2d(kernel_size=3, stride=2, padding=1))
def resnet_block(input_channels, num_channels, num_residuals,first_block=False):
    blk = []
    for i in range(num_residuals):
        if i == 0 and not first_block:
            blk.append(Residual(input_channels, num_channels,use_1x1conv=True, strides=2))
        else:
            blk.append(Residual(num_channels, num_channels))
    return blk
b2 = nn.Sequential(*resnet_block(64,64,2,first_block=True))
b3 = nn.Sequential(*resnet_block(64,128,2))
b4 = nn.Sequential(*resnet_block(128,256,2))
b5 = nn.Sequential(*resnet_block(256,512,2))
# the * unpacks the list returned by resnet_block: its elements are passed, not the list itself
net = nn.Sequential(
    b1,b2,b3,b4,b5,
    nn.AdaptiveAvgPool2d((1,1)),
    nn.Flatten(),
    nn.Linear(512,10)
)
"""
X = torch.rand(size=(1, 1, 224, 224))
for layer in net:
    X = layer(X)
    print(layer.__class__.__name__,'output shape:\t', X.shape)
"""
lr, num_epochs, batch_size = 0.05, 10, 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size, resize=96)
d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr, d2l.try_gpu())
plt.show()

(figure)

loss 0.014, train acc 0.996, test acc 0.914
883.9 examples/sec on cuda:0

Li Mu later added a section on gradient computation in ResNet; the gist is as follows:

Suppose $y=f(x)$; the update is $w = w - \lambda \frac{\partial y}{\partial w}$. Now stack another module on top, $y^{\prime}=g(y)=g(f(x))$; the derivative of the output with respect to the parameters becomes $\frac{\partial y^{\prime}}{\partial w}=\frac{\partial g(y)}{\partial y}\frac{\partial y}{\partial w}$. If $g$ is a layer with strong fitting capacity (e.g. a fully connected layer), its output gets close to the true target, so $\frac{\partial g(y)}{\partial y}$ is small; that makes $\frac{\partial y^{\prime}}{\partial w}$ small, and the $f(x)$ layer updates very slowly. The core problem is the product: one small factor causes vanishing gradients. ResNet instead uses a residual connection, $y^{\prime}=f(x)+g(f(x))$, so $\frac{\partial y^{\prime}}{\partial w}=\frac{\partial y}{\partial w}+\frac{\partial g(y)}{\partial y}\frac{\partial y}{\partial w}$. Even if the second term is small, the first term still provides a sizable gradient. This mitigates vanishing gradients and lets the layers close to the data keep updating.
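That argument can be made concrete with a tiny autograd sketch, using a deliberately small-gradient g (the functions and constants here are made up purely for illustration):

```python
import torch

f = lambda t: 3.0 * t    # earlier layer
g = lambda t: 0.01 * t   # later layer whose gradient is small

# plain stacking: y = g(f(x)) -> the gradient shrinks through the product
x = torch.tensor(2.0, requires_grad=True)
g(f(x)).backward()
print(x.grad.item())  # 0.03

# residual connection: y = f(x) + g(f(x)) -> the skip path keeps the gradient large
x = torch.tensor(2.0, requires_grad=True)
(f(x) + g(f(x))).backward()
print(x.grad.item())  # 3.03
```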

Image Classification Competition

I first used the ResNet11 that Li Mu covered in class; it scored a bit over 0.8. The code:

# imports
import torch
import torch.nn as nn
import pandas as pd
import numpy as np
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from PIL import Image
import os
from d2l import torch as d2l
import matplotlib.pyplot as plt
from LeavesDataset import LeavesDataset  # the dataset class (defined below)

First we process the label data, converting the label strings to class indices while building mappings in both directions for later use:

label_dataorgin = pd.read_csv("dataset/classify-leaves/train.csv")  # read the csv file
leaves_labels = sorted(list(set(label_dataorgin['label'])))  # take the label column, deduplicate with set, sort
num_class = len(leaves_labels)  # total number of classes
class_to_num = dict(zip(leaves_labels, range(num_class)))  # dict mapping class name to index
num_to_class = {i: j for j, i in class_to_num.items()}  # index back to class name
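A toy version of that label-to-index mapping (the label strings below are made up):

```python
labels = sorted(set(["maple_silver", "oak_red", "maple_silver", "pine_white"]))
class_to_num = dict(zip(labels, range(len(labels))))
num_to_class = {v: k for k, v in class_to_num.items()}

print(class_to_num)     # {'maple_silver': 0, 'oak_red': 1, 'pine_white': 2}
print(num_to_class[2])  # 'pine_white'
```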

Next comes the dataset class. I found that if the dataset class and the main code live in the same file, it errors: when d2l's training function is later called, the dataset class cannot be found. So the dataset class has to be defined in a separate file and imported; I define it in LeavesDataset.py:

class LeavesDataset(Dataset):
    def __init__(self, csv_path, file_path, mode='train', valid_ratio=0.2,
                 resize_height=256, resize_width=256):
        self.resize_height = resize_height  # target height
        self.resize_width = resize_width  # target width
        self.file_path = file_path  # image directory
        self.mode = mode  # 'train', 'valid' or 'test'
        self.data_csv = pd.read_csv(csv_path, header=None)  # read the csv, keeping the header row as data
        self.dataLength = len(self.data_csv.index) - 1  # number of samples (minus the header row)
        self.trainLength = int(self.dataLength * (1 - valid_ratio))  # size of the training split
        if mode == 'train':
            self.train_images = np.asarray(self.data_csv.iloc[1:self.trainLength, 0])  # column 0: image names
            self.train_labels = np.asarray(self.data_csv.iloc[1:self.trainLength, 1])  # column 1: labels
            self.image_arr = self.train_images
            self.label_arr = self.train_labels
        elif mode == 'valid':
            self.valid_images = np.asarray(self.data_csv.iloc[self.trainLength:, 0])
            self.valid_labels = np.asarray(self.data_csv.iloc[self.trainLength:, 1])
            self.image_arr = self.valid_images
            self.label_arr = self.valid_labels
        elif mode == 'test':
            self.test_images = np.asarray(self.data_csv.iloc[1:, 0])  # the test set has no label column
            self.image_arr = self.test_images
        self.realLen_now = len(self.image_arr)
        print("Finished loading data in {} mode; got {} samples".format(mode, self.realLen_now))
    def __getitem__(self, index):
        image_name = self.image_arr[index]  # file name
        img = Image.open(os.path.join(self.file_path, image_name))  # full path of the current image
        transform = transforms.Compose([
            transforms.Resize((224, 224)),  # resize to 224*224
            transforms.ToTensor()
        ])
        img = transform(img)
        if self.mode == 'test':
            return img
        else:
            label = self.label_arr[index]
            number_label = class_to_num[label]
            return img, number_label
    def __len__(self):
        return self.realLen_now

Next, load the datasets:

train_path = "dataset/classify-leaves/train.csv"  # adjust to your setup
test_path = "dataset/classify-leaves/test.csv"
img_path = "dataset/classify-leaves/"
train_dataset = LeavesDataset(train_path, img_path, mode = 'train')
valid_dataset = LeavesDataset(train_path, img_path, mode = 'valid')
test_dataset = LeavesDataset(test_path, img_path, mode = 'test')
batch_size = 64  # reduce this if you run out of GPU memory
train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=False, num_workers=5)  # no shuffling, 5 worker processes
valid_loader = DataLoader(dataset=valid_dataset,batch_size=batch_size, shuffle=False,num_workers=5)
test_loader = DataLoader(dataset=test_dataset,batch_size=batch_size, shuffle=False,num_workers=5)

With the data ready, the next step is to define the model; I started with ResNet11:

b1 = nn.Sequential(nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),
                   nn.BatchNorm2d(64), nn.ReLU(),
                   nn.MaxPool2d(kernel_size=3, stride=2, padding=1))
def resnet_block(input_channels, num_channels, num_residuals,first_block=False):
    blk = []
    for i in range(num_residuals):
        if i == 0 and not first_block:
            blk.append(d2l.Residual(input_channels, num_channels,use_1x1conv=True, strides=2))
        else:
            blk.append(d2l.Residual(num_channels, num_channels))
    return blk
b2 = nn.Sequential(*resnet_block(64,64,2,first_block=True))
b3 = nn.Sequential(*resnet_block(64,128,2))
b4 = nn.Sequential(*resnet_block(128,256,2))
b5 = nn.Sequential(*resnet_block(256,512,2))
net = nn.Sequential(
    b1,b2,b3,b4,b5,
    nn.AdaptiveAvgPool2d((1,1)),
    nn.Flatten(),
    nn.Linear(512,176)
)

Since I wanted to save the model whenever it reached the required accuracy, I modified the training function:

def train_ch6_save(net, train_iter, test_iter, num_epochs, lr, device, best_acc):  #@save
    """Train a model with a GPU (defined in Chapter 6).
    Defined in :numref:`sec_lenet`"""
    def init_weights(m):
        if type(m) == nn.Linear or type(m) == nn.Conv2d:
            nn.init.xavier_uniform_(m.weight)
    net.apply(init_weights)
    print('training on', device)
    net.to(device)
    optimizer = torch.optim.SGD(net.parameters(), lr=lr)
    loss = nn.CrossEntropyLoss()
    animator = d2l.Animator(xlabel='epoch', xlim=[1, num_epochs],
                            legend=['train loss', 'train acc', 'test acc'])
    timer, num_batches = d2l.Timer(), len(train_iter)
    for epoch in range(num_epochs):
        # Sum of training loss, sum of training accuracy, no. of examples
        metric = d2l.Accumulator(3)
        net.train()
        for i, (X, y) in enumerate(train_iter):
            timer.start()
            optimizer.zero_grad()
            X, y = X.to(device), y.to(device)
            y_hat = net(X)
            l = loss(y_hat, y)
            l.backward()
            optimizer.step()
            with torch.no_grad():
                metric.add(l * X.shape[0], d2l.accuracy(y_hat, y), X.shape[0])
            timer.stop()
            train_l = metric[0] / metric[2]
            train_acc = metric[1] / metric[2]
            if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1:
                animator.add(epoch + (i + 1) / num_batches,
                             (train_l, train_acc, None))
        test_acc = d2l.evaluate_accuracy_gpu(net, test_iter)
        animator.add(epoch + 1, (None, None, test_acc))
    print(f'loss {train_l:.3f}, train acc {train_acc:.3f}, '
          f'test acc {test_acc:.3f}')
    print(f'{metric[2] * num_epochs / timer.sum():.1f} examples/sec '
          f'on {str(device)}')
    if test_acc > best_acc:
        print("Model accuracy is good enough; saving it!")
        torch.save(net.state_dict(), "Now_Best_Module.pth")
    else:
        print("Model accuracy is not good enough; not saving")
lr, num_epochs, best_acc = 0.05, 25, 0.8  # too few epochs would undertrain
train_ch6_save(net, train_loader, valid_loader, num_epochs, lr, device=d2l.try_gpu(), best_acc=best_acc)
plt.show()

The result:

(figure)

Next I wanted to deepen the ResNet to raise model capacity. I tried a ResNet50 from the internet, but it was too big: after loading the model and then the data, GPU memory overflowed even with a small batch_size, so I had to shrink the model:

b2 = nn.Sequential(*resnet_block(64,64,2,first_block=True))
b3 = nn.Sequential(*resnet_block(64,256,2))
b4 = nn.Sequential(*resnet_block(256,512,2))
b5 = nn.Sequential(*resnet_block(512,2048,3))
net = nn.Sequential(
    b1,b2,b3,b4,b5,
    nn.AdaptiveAvgPool2d((1,1)),
    nn.Flatten(),
    nn.Linear(2048,176)
)

After five hours of training, it overfit…

loss 0.014, train acc 0.996, test acc 0.764
31.6 examples/sec on cuda:0

After a whole day spent tuning several more models, none beat the original ResNet11, so I settled on that.

The complete code:

# imports
import torch
import torch.nn as nn
import pandas as pd
import numpy as np
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from PIL import Image
import os
from d2l import torch as d2l
import matplotlib.pyplot as plt
from tqdm import tqdm
from LeavesDataset import LeavesDataset
def resnet_block(input_channels, num_channels, num_residuals, first_block=False):  # helper used to define the ResNet
    blk = []
    for i in range(num_residuals):
        if i == 0 and not first_block:
            blk.append(d2l.Residual(input_channels, num_channels, use_1x1conv=True, strides=2))
        else:
            blk.append(d2l.Residual(num_channels, num_channels))
    return blk
def train_ch6_save(net, train_iter, test_iter, num_epochs, lr, device, best_acc):  # @save
    """Train a model with a GPU (defined in Chapter 6).
    Modified from the course's training function so the model can be saved afterwards.
    Defined in :numref:`sec_lenet`"""
    def init_weights(m):
        if type(m) == nn.Linear or type(m) == nn.Conv2d:
            nn.init.xavier_uniform_(m.weight)
    net.apply(init_weights)
    print('training on', device)
    net.to(device)
    optimizer = torch.optim.SGD(net.parameters(), lr=lr)
    loss = nn.CrossEntropyLoss()
    animator = d2l.Animator(xlabel='epoch', xlim=[1, num_epochs],
                            legend=['train loss', 'train acc', 'test acc'])
    timer, num_batches = d2l.Timer(), len(train_iter)
    for epoch in range(num_epochs):
        # Sum of training loss, sum of training accuracy, no. of examples
        metric = d2l.Accumulator(3)
        net.train()
        for i, (X, y) in enumerate(train_iter):
            timer.start()
            optimizer.zero_grad()
            X, y = X.to(device), y.to(device)
            y_hat = net(X)
            l = loss(y_hat, y)
            l.backward()
            optimizer.step()
            with torch.no_grad():
                metric.add(l * X.shape[0], d2l.accuracy(y_hat, y), X.shape[0])
            timer.stop()
            train_l = metric[0] / metric[2]
            train_acc = metric[1] / metric[2]
            if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1:
                animator.add(epoch + (i + 1) / num_batches,
                             (train_l, train_acc, None))
        test_acc = d2l.evaluate_accuracy_gpu(net, test_iter)
        animator.add(epoch + 1, (None, None, test_acc))
    print(f'loss {train_l:.3f}, train acc {train_acc:.3f}, '
          f'test acc {test_acc:.3f}')
    print(f'{metric[2] * num_epochs / timer.sum():.1f} examples/sec '
          f'on {str(device)}')
    if test_acc > best_acc:
        print("Model accuracy is good enough; saving it!")
        torch.save(net.state_dict(), "Now_Best_Module.pth")  # save the model
    else:
        print("Model accuracy is not good enough; not saving")
if __name__ == "__main__":  # the driver code must go inside this guard, otherwise it errors; I don't yet know why
    label_dataorgin = pd.read_csv("dataset/classify-leaves/train.csv")  # read the training csv
    leaves_labels = sorted(list(set(label_dataorgin['label'])))  # label column, deduplicated and sorted
    num_class = len(leaves_labels)  # number of classes
    class_to_num = dict(zip(leaves_labels, range(num_class)))  # class name -> index
    num_to_class = {i: j for j, i in class_to_num.items()}
    train_path = "dataset/classify-leaves/train.csv"
    test_path = "dataset/classify-leaves/test.csv"
    img_path = "dataset/classify-leaves/"
    submission_path = "dataset/classify-leaves/submission.csv"  # path of the submission file
    train_dataset = LeavesDataset(train_path, img_path, mode='train')
    valid_dataset = LeavesDataset(train_path, img_path, mode='valid')
    test_dataset = LeavesDataset(test_path, img_path, mode='test')
    #print("data loaded")
    batch_size = 64
    train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=False, num_workers=5)
    valid_loader = DataLoader(dataset=valid_dataset, batch_size=batch_size, shuffle=False, num_workers=5)
    test_loader = DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False, num_workers=5)
    #print("data wrapped in loaders")
    # define the model
    # the first module is essentially the same across these convolutional networks
    b1 = nn.Sequential(nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
                       nn.BatchNorm2d(64), nn.ReLU(),
                       nn.MaxPool2d(kernel_size=3, stride=2, padding=1))
    b2 = nn.Sequential(*resnet_block(64, 64, 2, first_block=True))
    b3 = nn.Sequential(*resnet_block(64, 128, 2))
    b4 = nn.Sequential(*resnet_block(128, 256, 2))
    b5 = nn.Sequential(*resnet_block(256, 512, 2))
    net = nn.Sequential(
        b1, b2, b3, b4, b5,
        nn.AdaptiveAvgPool2d((1, 1)),
        nn.Flatten(),
        nn.Linear(512, 176)
    )
    lr, num_epochs, best_acc = 0.02, 15, 0.85
    device = d2l.try_gpu()
    train_ch6_save(net, train_loader, valid_loader, num_epochs, lr, device=device, best_acc=best_acc)
    plt.show()
    # make predictions
    net.load_state_dict(torch.load("Now_Best_Module.pth"))  # load the saved model
    # print("model loaded")
    net.to(device)
    net.eval()  # switch to evaluation mode
    predictions = []  # holds the predicted class indices
    for i, data in enumerate(test_loader):
        imgs = data.to(device)
        with torch.no_grad():
            logits = net(imgs)  # one vector of 176 scores per image
        predictions.extend(logits.argmax(dim=-1).cpu().numpy().tolist())
        # take the argmax as the prediction, move it back to the cpu,
        # and convert to a list so it can be appended to predictions
    preds = []
    for i in predictions:
        preds.append(num_to_class[i])  # back to label strings
    test_csv = pd.read_csv(test_path)
    test_csv['label'] = pd.Series(preds)  # add the predictions as a new column
    submission = pd.concat([test_csv['image'], test_csv['label']], axis=1)  # combine
    submission.to_csv(submission_path, index=False)  # write the file

The submission score:

(figure)

The result isn't great, but I'm still very happy! It's the first time I've carried a project through from start to finish, and I genuinely learned a lot. Only by building something from scratch yourself do you really see where your gaps are, and that's how you improve!

Keep at it!