torchvision
- class trojanvision.models.AlexNet(name='alexnet', model=_AlexNet, **kwargs)
AlexNet proposed by Alex Krizhevsky from Google in 2014 (the torchvision variant, which follows the 2014 "One weird trick for parallelizing convolutional neural networks" paper rather than the original 2012 architecture).
- Available model names:
{'alexnet'}
See also
torchvision:
torchvision.models.alexnet
Model structure:

```python
self.features = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(64, 192, kernel_size=5, padding=2),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(192, 384, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(384, 256, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(256, 256, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
)
self.pool = nn.AdaptiveAvgPool2d((6, 6))
self.flatten = nn.Flatten(1)
self.classifier = nn.Sequential(
    nn.Dropout(),
    nn.Linear(256 * 6 * 6, 4096),
    nn.ReLU(inplace=True),
    nn.Dropout(),
    nn.Linear(4096, 4096),
    nn.ReLU(inplace=True),
    nn.Linear(4096, num_classes),
)
```
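The strings in each "Available model names" set are the values accepted as the model name. As a minimal usage sketch, assuming the trojanvision.datasets.create / trojanvision.models.create factory helpers documented elsewhere in this package (the exact call signature below is illustrative):

```python
# Minimal usage sketch (assumes the trojanvision factory helpers;
# argument names are illustrative, not authoritative).
import trojanvision

dataset = trojanvision.datasets.create(dataset_name='cifar10')
# 'model_name' takes any entry from an "Available model names" set,
# e.g. 'alexnet', 'densenet121_comp' or 'resnet18_comp'.
model = trojanvision.models.create(model_name='alexnet', dataset=dataset)
```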
- class trojanvision.models.DenseNet(name='densenet', layer=121, model=_DenseNet, **kwargs)
DenseNet proposed by Gao Huang from Cornell University in CVPR 2017.
- Available model names:
{'densenet', 'densenet_comp', 'densenet121', 'densenet169', 'densenet201', 'densenet161', 'densenet121_comp', 'densenet169_comp', 'densenet201_comp', 'densenet161_comp'}
See also
torchvision:
torchvision.models.densenet121
Note
_comp reduces the first convolutional layer from kernel_size=7, stride=2, padding=3 to kernel_size=3, stride=1, padding=1, and removes the following norm0, relu0, pool0 layers (pool0 is torch.nn.MaxPool2d) before the block layers.
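For intuition, a rough sketch of the same kind of stem change applied to torchvision's densenet121 (illustrative only, not trojanvision's actual implementation):

```python
# Rough sketch of the '_comp' stem change described above, applied to
# torchvision's DenseNet (illustrative; trojanvision's code may differ).
import torch.nn as nn
import torchvision

model = torchvision.models.densenet121(num_classes=10)
# replace the 7x7 / stride-2 stem conv with a 3x3 / stride-1 conv
model.features.conv0 = nn.Conv2d(3, 64, kernel_size=3, stride=1,
                                 padding=1, bias=False)
# effectively remove norm0 / relu0 / pool0 before the dense blocks
model.features.norm0 = nn.Identity()
model.features.relu0 = nn.Identity()
model.features.pool0 = nn.Identity()
```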
- class trojanvision.models.EfficientNet(name='efficientnet', layer='_b0', model=_EfficientNet, **kwargs)
EfficientNet proposed by Mingxing Tan from Google in ICML 2019.
- Available model names:
{'efficientnet', 'efficientnet_comp', 'efficientnet_b0', 'efficientnet_b1', 'efficientnet_b2', 'efficientnet_b3', 'efficientnet_b4', 'efficientnet_b5', 'efficientnet_b6', 'efficientnet_b7', 'efficientnet_b0_comp', 'efficientnet_b1_comp', 'efficientnet_b2_comp', 'efficientnet_b3_comp', 'efficientnet_b4_comp', 'efficientnet_b5_comp', 'efficientnet_b6_comp', 'efficientnet_b7_comp'}
See also
torchvision:
torchvision.models.efficientnet_b0
Note
_comp reduces the first convolutional layer from kernel_size=7, stride=2, padding=3 to kernel_size=3, stride=1, padding=1.
- class trojanvision.models.MNASNet(name='mnasnet', mnas_alpha=1.0, model=_MNASNet, **kwargs)
MNASNet proposed by Mingxing Tan from Google in CVPR 2019.
- Available model names:
{'mnasnet', 'mnasnet0_5', 'mnasnet0_75', 'mnasnet1_0', 'mnasnet1_3'}
See also
torchvision:
torchvision.models.mnasnet1_0
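For illustration, the numeric suffix in the model name corresponds to the width multiplier (mnas_alpha); a minimal sketch using torchvision's constructors directly (not trojanvision's wrapper):

```python
# Sketch: model-name suffix <-> width multiplier (mnas_alpha).
from torchvision.models import mnasnet0_5, mnasnet1_0

tiny = mnasnet0_5(num_classes=10)   # 'mnasnet0_5'  -> alpha = 0.5
base = mnasnet1_0(num_classes=10)   # 'mnasnet1_0'  -> alpha = 1.0
```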
- class trojanvision.models.MobileNet(name='mobilenet_v2', model=_MobileNet, **kwargs)
MobileNets proposed by Andrew Howard and Liang-Chieh Chen from Google, with MobileNetV2 in CVPR 2018 and MobileNetV3 in ICCV 2019.
- Available model names:
{'mobilenet_v2', 'mobilenet_v3_large', 'mobilenet_v3_small', 'mobilenet_v2_comp', 'mobilenet_v3_large_comp', 'mobilenet_v3_small_comp'}
See also
MobileNet v2:
torchvision:
torchvision.models.mobilenet_v2
paper: MobileNetV2: Inverted Residuals and Linear Bottlenecks
MobileNet v3:
torchvision:
torchvision.models.mobilenet_v3_small
paper: Searching for MobileNetV3
Note
_comp uses a smaller inverted_residual_setting and sets stride=1 in the first conv layer.
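As a sketch of what such a reduction can look like, torchvision's MobileNetV2 constructor accepts a custom inverted_residual_setting; the setting below is illustrative only, not the one trojanvision uses:

```python
# Illustrative reduced MobileNetV2 for small inputs
# (the concrete setting trojanvision uses may differ).
from torchvision.models import MobileNetV2

# each row is [expand_ratio t, output channels c, num blocks n, stride s]
small_setting = [
    [1, 16, 1, 1],
    [6, 24, 2, 1],   # stride 1 instead of 2 to keep resolution on 32x32 inputs
    [6, 32, 3, 2],
    [6, 64, 4, 2],
    [6, 96, 3, 1],
]
model = MobileNetV2(num_classes=10, inverted_residual_setting=small_setting)
model.features[0][0].stride = (1, 1)  # first conv layer: stride 2 -> 1
```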
- class trojanvision.models.ResNet(name='resnet', layer=18, model=_ResNet, **kwargs)
ResNet model series including ResNet, ResNext and WideResNet.
- Available model names:
{'resnet', 'resnet_comp', 'resnet_s', 'resnet18', 'resnet34', 'resnet50', 'resnet101', 'resnet152', 'resnet18_comp', 'resnet34_comp', 'resnet50_comp', 'resnet101_comp', 'resnet152_comp', 'resnext50_32x4d', 'resnext101_32x8d', 'resnext50_32x4d_comp', 'resnext101_32x8d_comp', 'wide_resnet50_2', 'wide_resnet101_2', 'wide_resnet50_2_comp', 'wide_resnet101_2_comp', 'resnet18_s', 'resnet34_s', 'resnet50_s', 'resnet101_s', 'resnet152_s', 'resnet18_ap_comp'}
See also
ResNet:
torchvision:
torchvision.models.resnet18
paper: Deep Residual Learning for Image Recognition
ResNext:
torchvision:
torchvision.models.resnext50_32x4d
paper: Aggregated Residual Transformations for Deep Neural Networks
WideResNet:
torchvision:
torchvision.models.wide_resnet50_2
paper: Wide Residual Networks
Note
_comp reduces the first convolutional layer from kernel_size=7, stride=2, padding=3 to kernel_size=3, stride=1, padding=1, and removes the maxpool layer before the block layers.
_s further reduces the conv channels and the number of blocks based on _comp. ResNetS is used in the NIPS 2017 paper on continual learning by Facebook Research.
_ap conducts average pooling in all blocks rather than using a global pooling layer after the feature extractor. ResNetAP is used in the ICLR 2021 paper on dataset condensation.
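As a rough sketch (not the library's own code), the _comp stem change corresponds to the common CIFAR-style adaptation of torchvision's resnet18:

```python
# Rough sketch of the '_comp' change described above on torchvision's
# resnet18 (trojanvision's own implementation may differ in details).
import torch.nn as nn
import torchvision

model = torchvision.models.resnet18(num_classes=10)
# 7x7 / stride-2 stem conv -> 3x3 / stride-1 conv
model.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
# remove the maxpool layer before the residual blocks
model.maxpool = nn.Identity()
```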
- class trojanvision.models.ShuffleNetV2(name='shufflenet_v2', layer='_x0_5', model=_ShuffleNetV2, **kwargs)
ShuffleNet v2 proposed by Ningning Ma from Megvii in ECCV 2018.
- Available model names:
{'shufflenet_v2', 'shufflenet_v2_comp', 'shufflenet_v2_x0_5', 'shufflenet_v2_x1_0', 'shufflenet_v2_x1_5', 'shufflenet_v2_x2_0', 'shufflenet_v2_x0_5_comp', 'shufflenet_v2_x1_0_comp', 'shufflenet_v2_x1_5_comp', 'shufflenet_v2_x2_0_comp'}
See also
torchvision:
torchvision.models.shufflenet_v2_x0_5
Note
_comp reduces the first convolutional layer from kernel_size=7, stride=2, padding=3 to kernel_size=3, stride=1, padding=1, and removes the maxpool layer before the block layers.
- class trojanvision.models.VGG(name='vgg', layer=13, model=_VGG, **kwargs)
VGG model proposed by Karen Simonyan from University of Oxford in ICLR 2015.
- Available model names:
{'vgg', 'vgg_bn', 'vgg_comp', 'vgg_bn_comp', 'vgg_s', 'vgg_bn_s', 'vgg11', 'vgg13', 'vgg16', 'vgg19', 'vgg11_bn', 'vgg13_bn', 'vgg16_bn', 'vgg19_bn', 'vgg11_comp', 'vgg13_comp', 'vgg16_comp', 'vgg19_comp', 'vgg11_bn_comp', 'vgg13_bn_comp', 'vgg16_bn_comp', 'vgg19_bn_comp', 'vgg11_s', 'vgg13_s', 'vgg16_s', 'vgg19_s', 'vgg11_bn_s', 'vgg13_bn_s', 'vgg16_bn_s', 'vgg19_bn_s'}
See also
torchvision:
torchvision.models.vgg13
Note
_comp sets torch.nn.AdaptiveAvgPool2d from (7, 7) to (1, 1) and reduces the intermediate feature dimension in self.classifier from 4096 to 512.
_s further reduces self.classifier to a single linear layer based on _comp.
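As a sketch of the kind of classifier change described, applied to torchvision's vgg13 (illustrative only, not trojanvision's own code):

```python
# Sketch of the '_comp' / '_s' classifier changes described above
# on torchvision's vgg13 (illustrative; details may differ).
import torch.nn as nn
import torchvision

model = torchvision.models.vgg13(num_classes=10)

# '_comp'-style: pool to (1, 1) and shrink the hidden width 4096 -> 512
model.avgpool = nn.AdaptiveAvgPool2d((1, 1))
model.classifier = nn.Sequential(
    nn.Linear(512, 512), nn.ReLU(inplace=True), nn.Dropout(),
    nn.Linear(512, 512), nn.ReLU(inplace=True), nn.Dropout(),
    nn.Linear(512, 10),
)

# '_s'-style: a single linear layer on top of the pooled features
model.classifier = nn.Sequential(nn.Linear(512, 10))
```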