
Differences Between torch.nn and torch.nn.functional in PyTorch, Explained with Examples


0. The differences between the two

What are the differences between nn and nn.functional in PyTorch?

Similarities:

  • The nn.x layer classes inherit from nn.Module and internally call the corresponding nn.functional.x functions, so the two share the same underlying implementation
  • nn.x and nn.functional.x perform the same actual operation; for example, nn.Conv3d and nn.functional.conv3d both compute a 3D convolution (see the snippet after this list)
  • Their runtime efficiency is nearly identical
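
A minimal sketch of this equivalence (the shapes here are arbitrary and chosen only for illustration): calling F.conv2d with the weight and bias of an nn.Conv2d module gives the same result as calling the module itself.

import torch
import torch.nn as nn
import torch.nn.functional as F

conv = nn.Conv2d(in_channels = 3, out_channels = 8, kernel_size = 3, padding = 1)
x = torch.rand(1, 3, 32, 32)
# The module call and the functional call share the same underlying computation
y_module = conv(x)
y_functional = F.conv2d(x, conv.weight, conv.bias, padding = 1)
print(torch.allclose(y_module, y_functional))  # True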

Differences:

  • nn.x is a class wrapper around nn.functional.x, while nn.functional.x is the plain function interface
  • Besides the functionality of nn.functional.x, nn.x also carries the attributes and methods of nn.Module, such as train() and eval()
  • nn.functional.x is called directly with its arguments, whereas nn.x must be instantiated first and then called (see the snippet after this list)
  • nn.x works well with nn.Sequential, while nn.functional.x cannot be placed inside an nn.Sequential
  • nn.x defines and manages its weights for you, whereas nn.functional.x requires you to define the weights yourself and pass them in as arguments
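
A minimal sketch of the different call patterns and of nn.Sequential usage (the shapes are arbitrary and only for illustration):

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.rand(4, 10)

# nn.x: instantiate the class first, then call the instance
relu = nn.ReLU()
out1 = relu(x)

# nn.functional.x: call the function directly
out2 = F.relu(x)

# nn.x layers compose naturally inside nn.Sequential
model = nn.Sequential(
    nn.Linear(10, 20),
    nn.ReLU(),
    nn.Linear(20, 2),
)
out3 = model(x)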

1. Defining a CNN with both interfaces

import torch
import torch.nn as nn
import torch.nn.functional as F
# Define a CNN using nn.x
class nn_cnn(nn.Module):
    
    def __init__(self):
        super(nn_cnn, self).__init__()
        # First conv layer, its ReLU, and its max-pooling layer
        self.cnn1 = nn.Conv2d(in_channels = 1, out_channels = 16, kernel_size = 5, padding = 0)
        self.relu1 = nn.ReLU()
        self.maxpool1 = nn.MaxPool2d(kernel_size = 2)
        # Second conv layer, its ReLU, and its max-pooling layer
        self.cnn2 = nn.Conv2d(in_channels = 16, out_channels = 32, kernel_size = 5, padding = 0)
        self.relu2 = nn.ReLU()
        self.maxpool2 = nn.MaxPool2d(kernel_size = 2)
        # Fully connected layer (4 * 4 * 32 assumes a 1 x 28 x 28 input, e.g. MNIST)
        self.linear1 = nn.Linear(4 * 4 * 32, 10)
    
    # Forward pass
    def forward(self, x):
        # First conv block
        out = self.maxpool1(self.relu1(self.cnn1(x)))
        # Second conv block
        out = self.maxpool2(self.relu2(self.cnn2(out)))
        # Flatten, then the fully connected layer
        out = self.linear1(out.view(x.size(0), -1))
        return out
# Define the same CNN using nn.functional.x
class F_cnn(nn.Module):
    
    def __init__(self):
        super(F_cnn, self).__init__()
        # With nn.functional.x you must define the weights and biases yourself
        # First conv layer's weight and bias
        self.cnn1_weight = nn.Parameter(torch.rand(16, 1, 5, 5))
        self.cnn1_bias = nn.Parameter(torch.rand(16))
        # Second conv layer's weight and bias
        self.cnn2_weight = nn.Parameter(torch.rand(32, 16, 5, 5))
        self.cnn2_bias = nn.Parameter(torch.rand(32))
        # Fully connected layer's weight and bias
        # (F.linear expects a weight of shape (out_features, in_features))
        self.linear1_weight = nn.Parameter(torch.rand(10, 4 * 4 * 32))
        self.linear1_bias = nn.Parameter(torch.rand(10))
    
    # Forward pass
    def forward(self, x):
        # First conv block
        out = F.conv2d(x, self.cnn1_weight, self.cnn1_bias)
        out = F.relu(out)
        out = F.max_pool2d(out, kernel_size = 2)
        # Second conv block
        out = F.conv2d(out, self.cnn2_weight, self.cnn2_bias)
        out = F.relu(out)
        out = F.max_pool2d(out, kernel_size = 2)
        # Flatten, then the fully connected layer
        out = F.linear(out.view(x.size(0), -1), self.linear1_weight, self.linear1_bias)
        return out

It is clear that the CNNs defined with either interface implement the same network; they differ only in the implementation details, since one is built from classes and the other from functions.
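
As a quick sanity check, assuming a 1 x 28 x 28 input (e.g. an MNIST-sized image, which is what the 4 * 4 * 32 flattened size implies), both definitions produce the same output shape:

x = torch.rand(8, 1, 28, 28)
model_nn = nn_cnn()
model_f = F_cnn()
print(model_nn(x).shape)  # torch.Size([8, 10])
print(model_f(x).shape)   # torch.Size([8, 10])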

2. Differences in dropout behavior

import torch
import torch.nn as nn
import torch.nn.functional as F
# Dropout implemented with nn.x
class Model1(nn.Module):
    
    def __init__(self, p = 0.0):
        super(Model1, self).__init__()
        self.dropout = nn.Dropout(p = p)
        
    def forward(self, inputs):
        return self.dropout(inputs)
    
# Dropout implemented with nn.functional.x
class Model2(nn.Module):

    def __init__(self, p = 0.0):
        super(Model2, self).__init__()
        self.p = p

    # F.dropout is only disabled when training = False is passed; its default
    # is training = True, and here it is hardcoded, so eval() has no effect
    def forward(self, inputs):
        return F.dropout(inputs, p = self.p, training = True)
# Test both models
print(10 * '-' + "load inputs and train:" + 10 * '-' + '\r\n')
# Instantiate both models
m1 = Model1(p = 0.5)
m2 = Model2(p = 0.5)
# Create random inputs
inputs = torch.rand(10)
print('Model1', m1(inputs))
print('Model2', m2(inputs))
# Switch to eval mode
print(10 * '-' + "after eval model:" + 10 * '-' + '\r\n')
m1.eval()
m2.eval()
print('Model1', m1(inputs))
print('Model2', m2(inputs))
----------load inputs and train:----------

Model1 tensor([0.5136, 0.0000, 1.8053, 1.5378, 1.4294, 0.5862, 0.0000, 0.6605, 1.1898,
        0.0000])
Model2 tensor([0.5136, 1.1499, 1.8053, 1.5378, 1.4294, 0.0000, 0.0000, 0.6605, 1.1898,
        0.0000])
----------after eval model:----------

Model1 tensor([0.2568, 0.5749, 0.9026, 0.7689, 0.7147, 0.2931, 0.6212, 0.3303, 0.5949,
        0.8018])
Model2 tensor([0.5136, 0.0000, 0.0000, 1.5378, 0.0000, 0.5862, 1.2424, 0.0000, 0.0000,
        0.0000])

Here we can see how the two differ with respect to dropout: nn.Dropout is active during training by default and is switched off once the model is put into eval mode.

nn.functional.dropout, by contrast, defaults to training = True, and because the flag is hardcoded here it stays active in both train and eval mode (after eval(), Model2's output still contains zeroed-out elements).

In a CNN, dropout is normally used during training and disabled during evaluation, so, in line with the official PyTorch recommendation, nn.x is the better choice here. If the functional form is needed, the module's own training flag should be passed through, as in the sketch below.
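
A minimal sketch of how the functional form can be made to respect eval(): pass the module's own self.training flag instead of hardcoding training = True.

class Model3(nn.Module):

    def __init__(self, p = 0.0):
        super(Model3, self).__init__()
        self.p = p

    def forward(self, inputs):
        # self.training is toggled by train() / eval(), so dropout is
        # disabled in eval mode, matching the behaviour of nn.Dropout
        return F.dropout(inputs, p = self.p, training = self.training)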

3. The advantage of nn.functional.x

As described above, nn.functional.x is less convenient than nn.x for defining and managing parameters, but everything has two sides:

if the CNN we are designing needs to share a single set of weights, nn.functional.x makes this easier to implement:

import torch
import torch.nn as nn
import torch.nn.functional as F
class Model(nn.Module):
    
    def __init__(self):
        super(Model, self).__init__()
        # A single weight/bias pair shared by both convolutions below
        self.weight = nn.Parameter(torch.rand(32, 16, 5, 5))
        self.bias = nn.Parameter(torch.rand(32))
        
    def forward(self, x):
        # Both convolutions use the same parameters, so the weights are shared
        x1 = F.conv2d(x, self.weight, self.bias)
        x2 = F.conv2d(x, self.weight, self.bias)
        return x1 + x2
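
A quick usage sketch, assuming an input with 16 channels to match the shared weight above; since both branches use the same parameters, their gradients accumulate into a single weight and bias:

m = Model()
x = torch.rand(2, 16, 28, 28)
out = m(x)
print(out.shape)  # torch.Size([2, 32, 24, 24])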

In summary, prefer nn.x; only when you really need to manipulate intermediate quantities such as weight, bias, and stride by hand, or when you need to share parameters, should you reach for nn.functional.x.
