ResNet Code Explained

kiliwalk · 2023-05-18 · 173 reads


The most important piece of the ResNet code is the residual block. Its forward pass looks like this:

def forward(self, x):
    identity = x

    out = self.conv1(x)
    out = self.bn1(out)
    out = self.relu(out)

    out = self.conv2(out)
    out = self.bn2(out)

    if self.downsample is not None:
        identity = self.downsample(x)

    out += identity
    out = self.relu(out)

    return out
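The element-wise `out += identity` only works when the two tensors have the same shape. A quick sketch of the standard conv output-size formula shows why the 3 × 3 convolutions here (padding 1) preserve the spatial size at stride 1 and halve it at stride 2, which is exactly when `downsample` is needed:

```python
def conv_out(n, k, s, p):
    # Output spatial size of a conv/pool layer: floor((n + 2p - k) / s) + 1
    return (n + 2 * p - k) // s + 1

print(conv_out(56, k=3, s=1, p=1))  # 56: size preserved, identity adds directly
print(conv_out(56, k=3, s=2, p=1))  # 28: size halved, identity must be downsampled
```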

Residual block structure:

The structure on the left is the BasicBlock; the one on the right is the Bottleneck.

(figure: BasicBlock and Bottleneck residual structures)

BasicBlock

class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(BasicBlock, self).__init__()
        self.conv1 = conv3x3(inplanes, planes, stride)
        self.bn1 = nn.BatchNorm2d(planes)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = conv3x3(planes, planes)
        self.bn2 = nn.BatchNorm2d(planes)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        identity = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        if self.downsample is not None:
            identity = self.downsample(x)

        out += identity
        out = self.relu(out)

        return out
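`downsample` is passed in by `_make_layer` (shown later), and its condition reduces to: the block strides (spatial mismatch) or the incoming channel count differs from the block's output. A minimal sketch of that check for the four stages of resnet18 (BasicBlock, so expansion is 1):

```python
# For each stage of resnet18, decide whether the stage's first block needs a
# downsample branch: stride != 1, or incoming channels != planes * expansion
# (the same test _make_layer performs).
expansion = 1
inplanes = 64
for planes, stride in [(64, 1), (128, 2), (256, 2), (512, 2)]:
    needs_downsample = stride != 1 or inplanes != planes * expansion
    print(planes, needs_downsample)
    inplanes = planes * expansion
# layer1 needs no downsample; layer2-4 all do (stride 2 plus a channel change).
```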

Bottleneck

Note that ResNet-18 and ResNet-34 use BasicBlock; the deeper variants use Bottleneck:

resnet18:  ResNet(BasicBlock, [2, 2, 2, 2])
resnet34:  ResNet(BasicBlock, [3, 4, 6, 3])
resnet50:  ResNet(Bottleneck, [3, 4, 6, 3])
resnet101: ResNet(Bottleneck, [3, 4, 23, 3])
resnet152: ResNet(Bottleneck, [3, 8, 36, 3])
expansion = 4, because every Bottleneck residual block expands its output to 4 times `planes` channels.
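A quick arithmetic sketch shows where each model's name comes from: the depth is the stem conv, plus 2 convs per BasicBlock or 3 per Bottleneck across all blocks, plus the final fc layer; and the per-stage output channels are `planes * expansion`:

```python
# Depth = 1 stem conv + (convs per block) * (total blocks) + 1 fc layer.
configs = {
    "resnet18":  (2, [2, 2, 2, 2]),   # BasicBlock: 2 convs per block
    "resnet34":  (2, [3, 4, 6, 3]),
    "resnet50":  (3, [3, 4, 6, 3]),   # Bottleneck: 3 convs per block
    "resnet101": (3, [3, 4, 23, 3]),
    "resnet152": (3, [3, 8, 36, 3]),
}
for name, (convs_per_block, layers) in configs.items():
    depth = 1 + convs_per_block * sum(layers) + 1
    print(name, depth)  # matches the number in each model's name

# Stage output channels for Bottleneck: planes * expansion.
expansion = 4
print([planes * expansion for planes in (64, 128, 256, 512)])
# [256, 512, 1024, 2048]
```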
class Bottleneck(nn.Module):
    expansion = 4

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(Bottleneck, self).__init__()
        self.conv1 = conv1x1(inplanes, planes)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = conv3x3(planes, planes, stride)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv3 = conv1x1(planes, planes * self.expansion)
        self.bn3 = nn.BatchNorm2d(planes * self.expansion)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        identity = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        if self.downsample is not None:
            identity = self.downsample(x)

        out += identity
        out = self.relu(out)

        return out

 

The ResNet class

Key points:

1. Before the residual stages, the original 224 × 224 image goes through a large 7 × 7 convolution, BN, ReLU, and max pooling, producing a 56 × 56 × 64 feature map.
2. From the definitions of layer1-layer4: the first stage does not shrink the feature map; every later stage halves its height and width in its first block with a stride-2 3 × 3 convolution.
3. In _make_layer, downsample raises the channel count of the residual input with just a 1 × 1 convolution followed by BN; the BasicBlock and Bottleneck classes use it later.
4. The final pooling layer is adaptive average pooling, not the global average pooling from the paper.
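The shapes in point 1 can be checked with the conv output-size formula, floor((n + 2p − k) / s) + 1, tracing a 224 × 224 input through the stem and the four stages:

```python
def out_size(n, k, s, p):
    # floor((n + 2p - k) / s) + 1
    return (n + 2 * p - k) // s + 1

n = 224
n = out_size(n, k=7, s=2, p=3)   # stem 7x7 conv, stride 2 -> 112
n = out_size(n, k=3, s=2, p=1)   # 3x3 max pool, stride 2  -> 56
print(n)  # 56; with 64 channels this is the 56 x 56 x 64 feature map
for stride in (1, 2, 2, 2):      # layer1 keeps the size; layer2-4 halve it
    n = out_size(n, k=3, s=stride, p=1)
print(n)  # 7, which AdaptiveAvgPool2d((1, 1)) reduces to 1 x 1
```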

class ResNet(nn.Module):
    def __init__(self, block, layers, num_classes=1000, zero_init_residual=False):
        super(ResNet, self).__init__()
        self.inplanes = 64
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
                               bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512 * block.expansion, num_classes)

    def _make_layer(self, block, planes, blocks, stride=1):
        downsample = None
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                conv1x1(self.inplanes, planes * block.expansion, stride),
                nn.BatchNorm2d(planes * block.expansion),
            )
        layers = []
        layers.append(block(self.inplanes, planes, stride, downsample))
        self.inplanes = planes * block.expansion
        for _ in range(1, blocks):
            layers.append(block(self.inplanes, planes))
        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        x = self.avgpool(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x
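One subtle point is how `_make_layer` updates `self.inplanes`: only a stage's first block sees the previous stage's channel count (and carries the `downsample` branch); the rest run at the stage's full width. A sketch of that bookkeeping for resnet50 (Bottleneck, expansion 4), tracking only channel counts:

```python
# Mirror _make_layer's channel bookkeeping (no torch needed): for each stage,
# record every block's (in_channels, out_channels, has_downsample).
expansion = 4  # Bottleneck
inplanes = 64
for planes, blocks, stride in [(64, 3, 1), (128, 4, 2), (256, 6, 2), (512, 3, 2)]:
    has_downsample = stride != 1 or inplanes != planes * expansion
    stage = [(inplanes, planes * expansion, has_downsample)]
    inplanes = planes * expansion
    for _ in range(1, blocks):
        stage.append((inplanes, planes * expansion, False))
    print(stage)
# Note layer1's first block still needs downsample (64 != 256) even at stride 1.
print(512 * expansion)  # 2048: the in_features of the final fc layer
```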
