
models

trojanzoo.models.add_argument(parser, model_name=None, model=None, config=config, class_dict={})[source]
Add model arguments to argument parser.
For specific arguments implementation, see Model.add_argument().
Parameters:
  • parser (argparse.ArgumentParser) – The parser to add arguments.

  • model_name (str) – The model name.

  • model (str | Model) – The model instance or model name (as the alias of model_name).

  • config (Config) – The default parameter config, which contains the default dataset and model name if not provided.

  • class_dict (dict[str, type[Model]]) – Map from model name to model class. Defaults to {}.

Returns:

argparse._ArgumentGroup – The argument group.
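
A sketch of the typical command-line wiring (class_dict is normally supplied by a concrete downstream package; here it is left at its default for illustration):

>>> import argparse
>>> import trojanzoo.models
>>>
>>> parser = argparse.ArgumentParser()
>>> group = trojanzoo.models.add_argument(parser)   # add model-related arguments to the parser
>>> kwargs = parser.parse_args().__dict__           # later passed to create(**kwargs)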

trojanzoo.models.create(model_name=None, model=None, dataset_name=None, dataset=None, config=config, class_dict={}, **kwargs)[source]
Create a model instance.
For arguments not included in kwargs, use the default values in config.
The default value of folder_path is '{model_dir}/{dataset.data_type}/{dataset.name}'.
For model implementation, see Model.
Parameters:
  • model_name (str) – The model name.

  • model (str | Model) – The model instance or model name (as the alias of model_name).

  • dataset_name (str) – The dataset name.

  • dataset (str | trojanzoo.datasets.Dataset) – Dataset instance or dataset name (as the alias of dataset_name).

  • config (Config) – The default parameter config.

  • class_dict (dict[str, type[Model]]) – Map from model name to model class. Defaults to {}.

  • **kwargs – The keyword arguments passed to model init method.

Returns:

Model – The model instance.
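
A sketch of the typical creation flow (the names 'cifar10' and 'resnet18' are placeholders; which names are actually available depends on class_dict, normally supplied by a downstream package):

>>> import trojanzoo.datasets
>>> import trojanzoo.models
>>>
>>> dataset = trojanzoo.datasets.create(dataset_name='cifar10')     # placeholder dataset name
>>> model = trojanzoo.models.create(model_name='resnet18',          # placeholder model name
...                                 dataset=dataset, pretrained=True)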

trojanzoo.models.output_available_models(class_dict={}, indent=0)[source]

Output all available model names.

Parameters:
  • class_dict (dict[str, type[Model]]) – Map from model name to model class. Defaults to {}.

  • indent (int) – The space indent for the entire string. Defaults to 0.

class trojanzoo.models.Model(name='model', suffix=None, model=_Model, dataset=None, num_classes=None, folder_path=None, official=False, pretrained=False, randomized_smooth=False, rs_sigma=0.01, rs_n=100, **kwargs)[source]
A general model wrapper class, which should be the most common interface for users.
Parameters:
  • name (str) – Name of model.

  • suffix (str) –

    Suffix of local model weights file (e.g., '_adv_train'). Defaults to empty string ''.
    The location of local pretrained weights is '{folder_path}/{self.name}{self.suffix}.pth'.

  • model (type[_Model] | _Model) – Type of model or a specific model instance.

  • dataset (trojanzoo.datasets.Dataset | None) – Corresponding dataset (optional). Defaults to None.

  • num_classes (int | None) – Number of classes. If it’s None, fetch the value from dataset. Defaults to None.

  • folder_path (str) –

    Folder path to save model weights. Defaults to None.

    Note

    folder_path is usually '{model_dir}/{dataset.data_type}/{dataset.name}', which is used as the default value in create().

  • official (bool) – Whether to use official pretrained weights. Defaults to False.

  • pretrained (bool) – Whether to use local pretrained weights from '{folder_path}/{self.name}{self.suffix}.pth'. Defaults to False.

  • randomized_smooth (bool) – Whether to use randomized smoothing. Defaults to False.

  • rs_sigma (float) – Randomized smoothing sampling std σ. Defaults to 0.01.

  • rs_n (int) – Randomized smoothing sampling number. Defaults to 100.

Variables:
  • available_models (set[str]) – The set of available model names.

  • weights (WeightsEnum) – The pretrained weights to use.

  • name (str) – Name of model.

  • suffix (str) –

    Suffix of local model weights file (e.g., '_adv_train'). Defaults to empty string ''.
    The location of local pretrained weights is '{folder_path}/{self.name}{self.suffix}.pth'.

  • _model (_Model) – torch.nn.Module model instance.

  • model (torch.nn.DataParallel | _Model) – Parallel version of _model if there is more than 1 GPU available. Generated by get_parallel_model().

  • dataset (trojanzoo.datasets.Dataset | None) – Corresponding dataset (optional). Defaults to None.

  • num_classes (int | None) – Number of classes. If it’s None, fetch the value from dataset. Defaults to None.

  • folder_path (str) – Folder path to save model weights. Defaults to None.

  • randomized_smooth (bool) – Whether to use randomized smoothing. Defaults to False.

  • rs_sigma (float) – Randomized smoothing sampling std σ.

  • rs_n (int) – Randomized smoothing sampling number. Defaults to 100.

  • criterion (Callable) – The criterion used to calculate loss().

  • criterion_noreduction (Callable) – The criterion used to calculate loss() when reduction='none'.

  • softmax (torch.nn.Module) – torch.nn.Softmax(dim=1). Used in get_prob().
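
Once created (e.g., via create() above), the wrapper is meant to be used as a callable for inference; a rough sketch with illustrative shapes:

>>> import torch
>>>
>>> _input = torch.randn(8, 3, 32, 32)   # a batch of 8 image-like tensors (illustrative shape)
>>> _logits = model(_input)              # __call__ -> get_logits(), shape (8, num_classes)
>>> _prob = model.get_prob(_input)       # softmax probabilities, shape (8, num_classes)
>>> _class = model.get_class(_input)     # argmax predictions, shape (8,)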

_train(epochs, optimizer, module=None, num_classes=None, lr_scheduler=None, lr_warmup_epochs=0, model_ema=None, model_ema_steps=32, grad_clip=None, pre_conditioner=None, print_prefix='Train', start_epoch=0, resume=0, validate_interval=10, save=False, amp=False, loader_train=None, loader_valid=None, epoch_fn=None, get_data_fn=None, loss_fn=None, after_loss_fn=None, validate_fn=None, save_fn=None, file_path=None, folder_path=None, suffix=None, writer=None, main_tag='train', tag='', metric_fn=None, verbose=True, indent=0, **kwargs)[source]

Train the model.
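
A hedged sketch of a typical training call, pairing _train() with define_optimizer() (hyperparameter values are illustrative; when loaders are omitted they are assumed to come from the bound dataset, which is an assumption not documented here):

>>> optimizer, lr_scheduler = model.define_optimizer(
...     parameters='full', OptimType='SGD', lr=0.1, momentum=0.9,
...     weight_decay=5e-4, lr_scheduler=True, epochs=10)
>>> model._train(epochs=10, optimizer=optimizer, lr_scheduler=lr_scheduler,
...              validate_interval=1, save=True)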

_validate(module=None, num_classes=None, loader=None, print_prefix='Validate', indent=0, verbose=True, get_data_fn=None, loss_fn=None, writer=None, main_tag='valid', tag='', _epoch=None, metric_fn=None, **kwargs)[source]

Evaluate the model.

Returns:

(float, float) – Accuracy and loss.

accuracy(_output, _label, topk=(1, 5), **kwargs)[source]

Computes the accuracy over the k top predictions for the specified values of k.

Parameters:
  • _output (torch.Tensor) – The batched logit tensor with shape (N, C).

  • _label (torch.Tensor) – The batched label tensor with shape (N).

  • topk (tuple[int]) – Which top-k accuracies to calculate. Defaults to (1, 5).

Returns:

dict[str, float] – Top-k accuracies.

Note

The implementation is in trojanzoo.utils.model.accuracy().

activate_params(params=[])[source]

Set requires_grad=True for selected params of module. All other params are frozen.

Parameters:

params (Iterator[torch.nn.parameter.Parameter]) – The parameters to set requires_grad=True for. Defaults to [].
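
For example, to fine-tune only the classifier head while keeping the feature extractor frozen (a sketch; it relies on self._model.classifier existing, as described under define_optimizer()):

>>> head_params = list(model._model.classifier.parameters())
>>> model.activate_params(head_params)   # classifier weights get requires_grad=True, rest frozen
>>> model.activate_params([])            # freeze everything again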

static add_argument(group)[source]

Add model arguments to argument parser group. View source to see specific arguments.

Note

This is the implementation of adding arguments. Concrete model classes may override this method to add more arguments. Users should call the module-level add_argument() instead, which is more user-friendly.

apply(fn)[source]

Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model.

children()[source]

Returns an iterator over immediate children modules.

cpu()[source]

Moves all model parameters and buffers to the CPU.

cuda(device=None)[source]

Moves all model parameters and buffers to the GPU.

define_criterion(**kwargs)[source]

Define criterion to calculate loss. Defaults to torch.nn.CrossEntropyLoss.

Parameters:

define_optimizer(parameters='full', OptimType='SGD', lr=0.1, momentum=0.0, weight_decay=0.0, lr_scheduler=False, lr_scheduler_type='CosineAnnealingLR', lr_step_size=30, lr_gamma=0.1, epochs=None, lr_min=0.0, lr_warmup_epochs=0, lr_warmup_method='constant', lr_warmup_decay=0.01, **kwargs)[source]

Define optimizer and lr_scheduler.

Parameters:
  • parameters (str | Iterable[torch.nn.parameter.Parameter]) –

    The parameters to optimize while other model parameters are frozen. If str, set parameters as:

    • 'features': self._model.features

    • 'classifier' | 'partial': self._model.classifier

    • 'full': self._model

    Defaults to 'full'.

  • OptimType (str | type[Optimizer]) – The optimizer type. If str, load from module torch.optim. Defaults to 'SGD'.

  • lr (float) – The learning rate of optimizer. Defaults to 0.1.

  • momentum (float) – The momentum of optimizer. Defaults to 0.0.

  • weight_decay (float) – The weight decay of optimizer. Defaults to 0.0.

  • lr_scheduler (bool) – Whether to enable lr_scheduler. Defaults to False.

  • lr_scheduler_type (str) –

    The type of lr_scheduler. Defaults to 'CosineAnnealingLR'.

    Available lr_scheduler types (use the string name rather than the type): 'StepLR', 'CosineAnnealingLR', 'ExponentialLR'.

  • lr_step_size (int) – step_size for torch.optim.lr_scheduler.StepLR. Defaults to 30.

  • lr_gamma (float) – gamma for torch.optim.lr_scheduler.StepLR or torch.optim.lr_scheduler.ExponentialLR. Defaults to 0.1.

  • epochs (int) – Total training epochs. epochs - lr_warmup_epochs is passed as T_max to torch.optim.lr_scheduler.CosineAnnealingLR. Defaults to None.

  • lr_min (float) – The minimum learning rate. It’s passed as eta_min to torch.optim.lr_scheduler.CosineAnnealingLR. Defaults to 0.0.

  • lr_warmup_epochs (int) – Learning rate warmup epochs. Passed as total_iters to lr_scheduler. Defaults to 0.

  • lr_warmup_method (str) – Learning rate warmup methods. Choose from ['constant', 'linear']. Defaults to 'constant'.

  • lr_warmup_decay (float) – Learning rate warmup decay factor. Passed as factor (start_factor) to lr_scheduler. Defaults to 0.01.

  • **kwargs – Keyword arguments passed to optimizer init method.

Returns:

(torch.optim.Optimizer, torch.optim.lr_scheduler._LRScheduler) – The tuple of optimizer and lr_scheduler.
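
A sketch of a partial fine-tuning setup with a step-decay schedule (values are illustrative; 'StepLR' is assumed to be an accepted lr_scheduler_type string, consistent with the lr_step_size / lr_gamma parameters above):

>>> optimizer, lr_scheduler = model.define_optimizer(
...     parameters='classifier', OptimType='Adam', lr=1e-3, weight_decay=5e-4,
...     lr_scheduler=True, lr_scheduler_type='StepLR',
...     lr_step_size=30, lr_gamma=0.1, epochs=90)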

eval()[source]

Sets the module in evaluation mode.

generate_target(_input, idx=1, same=False)[source]
Generate target labels of a batched input based on the classification confidence ranking index.

Parameters:
  • _input (torch.Tensor) – The input tensor.

  • idx (int) – The classification confidence rank of target class. Defaults to 1.

  • same (bool) – Generate the same label for all samples using mod. Defaults to False.

Returns:

torch.Tensor – The generated target label with shape (N).

See also

The implementation is in trojanzoo.utils.model.generate_target().
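
For example, choosing the second-most-confident class of each sample as an attack target (a sketch with an illustrative batch):

>>> _input = torch.randn(8, 3, 32, 32)               # illustrative batch
>>> target = model.generate_target(_input, idx=1)    # 2nd-ranked class per sample
>>> target.shape
torch.Size([8])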

get_all_layer(_input, layer_input='input', depth=-1, prefix='', use_filter=True, non_leaf=False, seq_only=True, verbose=0)[source]

Get all intermediate layer outputs of _input from any intermediate layer.

Parameters:
  • _input (torch.Tensor) – The batched input tensor from layer_input.

  • layer_input (str) – The intermediate layer name of _input. Defaults to 'input'.

  • depth (int) – The traverse depth. Defaults to -1 (∞).

  • prefix (str) – The prefix string to all elements. Defaults to empty string ''.

  • use_filter (bool) – Whether to filter out certain layer types. Defaults to True.

  • non_leaf (bool) – Whether to include non-leaf nodes. Defaults to False.

  • seq_only (bool) – Whether to only traverse children of torch.nn.Sequential. If False, will traverse children of all torch.nn.Module. Defaults to True.

  • verbose (int) –

    The output level showing information including layer name, output shape and module information. Setting it larger than 0 will enable the output. Different integer values stand for different module information. Defaults to 0.

    • 0: No output

    • 1: Show layer class name.

    • 2: Show layer string (first line).

    • 3: Show layer string (full).

Returns:

dict[str, torch.Tensor] – The dict of all layer outputs.

See also

The implementation is in trojanzoo.utils.model.get_all_layer().
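
A sketch of inspecting intermediate activations (continuing the sketch above, with model and _input as before; the printed key names are model-dependent and only illustrative — use get_layer_name() to see the real ones):

>>> feature_maps = model.get_all_layer(_input, depth=2, verbose=1)
>>> for name, fm in feature_maps.items():
...     print(name, tuple(fm.shape))   # layer name and output shape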

get_class(_input, **kwargs)[source]

Get the predicted class of _input (using torch.argmax).

Parameters:
  • _input (torch.Tensor) – The batched input tensor passed to _Model.get_logits().

  • **kwargs – Keyword arguments passed to get_logits().

Returns:

torch.Tensor – The classes tensor with shape (N).

get_data(data, **kwargs)[source]

Process data. Defaults to self.dataset.get_data(). If self.dataset is None, return data directly.

Parameters:
  • data (Any) – Unprocessed data.

  • **kwargs – Keyword arguments passed to self.dataset.get_data().

Returns:

Any – Processed data.

get_final_fm(_input, **kwargs)[source]

Get the final feature map of _input, which is the output of self.flatten and input of self.classifier. Call _Model.get_final_fm().

Parameters:
Returns:

torch.Tensor – The feature tensor with shape (N, dim).

get_fm(_input, **kwargs)[source]

Get the feature map of _input, which is the output of self.features and input of self.pool. Call _Model.get_fm().

Parameters:
Returns:

torch.Tensor – The feature tensor with shape (N, C', H', W').

get_layer(_input, layer_output='classifier', layer_input='input', seq_only=True)[source]

Get one certain intermediate layer output of _input from any intermediate layer.

Parameters:
  • _input (torch.Tensor) – The batched input tensor from layer_input.

  • layer_output (str) – The intermediate output layer name. Defaults to 'classifier'.

  • layer_input (str) – The intermediate layer name of _input. Defaults to 'input'.

  • seq_only (bool) – Whether to only traverse children of torch.nn.Sequential. If False, will traverse children of all torch.nn.Module. Defaults to True.

Returns:

torch.Tensor – The output of layer layer_output.

See also

The implementation is in trojanzoo.utils.model.get_layer().
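
A sketch of extracting one intermediate activation (continuing the sketch above; the layer name 'features' is an assumption about the wrapped module's naming — check get_layer_name() for the actual names):

>>> fm = model.get_layer(_input, layer_output='features')         # output of the feature extractor
>>> logits = model.get_layer(_input, layer_output='classifier')   # same as the final logits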

get_layer_name(depth=-1, prefix='', use_filter=True, non_leaf=False, seq_only=False)[source]

Get layer names of model instance.

Parameters:
Returns:

list[str] – The list of all layer names.

See also

The implementation is in trojanzoo.utils.model.get_layer_name().

get_logits(_input, parallel=False, randomized_smooth=None, rs_sigma=None, rs_n=None, **kwargs)[source]

Get logits of _input.

Note

Users should call the model instance directly (as a callable) rather than call this method, because __call__ supports torch.cuda.amp.

Parameters:
  • _input (torch.Tensor) – The batched input tensor.

  • parallel (bool) – Whether to use the parallel model self.model rather than self._model. Defaults to False.

  • randomized_smooth (bool | None) – Whether to use randomized smoothing. If it’s None, use self.randomized_smooth instead. Defaults to None.

  • rs_sigma (float | None) – Randomized smoothing sampling std σ. If it’s None, use self.rs_sigma instead. Defaults to None.

  • rs_n (int | None) – Randomized smoothing sampling number. If it’s None, use self.rs_n instead. Defaults to None.

  • **kwargs – Keyword arguments passed to forward().

Returns:

torch.Tensor – The logit tensor with shape (N, C).
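
A sketch contrasting the recommended callable usage with an explicit randomized-smoothing override (continuing the sketch above; the σ and sampling-number values are illustrative):

>>> _logits = model(_input)   # preferred: __call__ supports torch.cuda.amp
>>> _smoothed = model.get_logits(_input, randomized_smooth=True, rs_sigma=0.1, rs_n=64)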

get_official_weights(weights=None, progress=True, map_location='cpu', **kwargs)[source]

Get official model weights from weights.

Parameters:
Returns:

OrderedDict[str, torch.Tensor] – The model weights OrderedDict.

static get_parallel_model(_model)[source]

Get the parallel model if there is more than 1 GPU available.

Warning

torch.nn.DataParallel is expected to be deprecated (see https://github.com/pytorch/pytorch/issues/65936). Consider using torch.nn.parallel.DistributedDataParallel instead.

Parameters:

_model (_Model) – The non-parallel model.

Returns:

_Model | nn.DataParallel – The parallel model if there is more than 1 GPU available; otherwise the original model.

get_prob(_input, **kwargs)[source]

Get the probability classification vector of _input.

Parameters:
  • _input (torch.Tensor) – The batched input tensor passed to _Model.get_logits().

  • **kwargs – Keyword arguments passed to get_logits().

Returns:

torch.Tensor – The probability tensor with shape (N, C).

get_target_prob(_input, target, **kwargs)[source]

Get the probability w.r.t. target class of _input (using torch.gather).

Parameters:
Returns:

torch.Tensor – The probability tensor with shape (N).

load(file_path=None, folder_path=None, suffix=None, inplace=True, map_location='cpu', component='full', strict=True, verbose=False, indent=0, **kwargs)[source]

Load pretrained model weights.

Parameters:
  • file_path (str | None) – The file path to load pretrained weights. If 'official', call get_official_weights(). Defaults to '{folder_path}/{self.name}{suffix}.pth'.

  • folder_path (str | None) – The folder path containing model checkpoint. It is used when file_path is not provided. Defaults to self.folder_path.

  • suffix (str | None) – The suffix string to model weights file. Defaults to self.suffix.

  • inplace (bool) – Whether to change model parameters. If False, will only return the dict but not change model parameters. Defaults to True.

  • map_location (str | device | dict) –

    Passed to torch.load. Defaults to 'cpu'.

    Note

    The device of model parameters will still be 'cuda' if there is any cuda available. This argument only affects the intermediate loading operation.

  • component (str) – Specify which part of the weights to load. Choose from ['full', 'features', 'classifier']. Defaults to 'full'.

  • strict (bool) – Passed to torch.nn.Module.load_state_dict. Defaults to True.

  • verbose (bool) – Whether to output auxiliary information. Defaults to False.

  • indent (int) – The indent of output auxiliary information. Defaults to 0.

  • **kwargs – Keyword arguments passed to torch.load.

Returns:

OrderedDict[str, torch.Tensor] – The model weights OrderedDict.
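
A sketch of common loading patterns (file locations follow the '{folder_path}/{self.name}{suffix}.pth' convention described above):

>>> model.load()                         # load '{folder_path}/{self.name}{suffix}.pth'
>>> model.load(file_path='official')     # fetch official pretrained weights via get_official_weights()
>>> state = model.load(inplace=False)    # return the OrderedDict without touching model parameters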

load_state_dict(state_dict, strict=True)[source]

Copies parameters and buffers from state_dict into this module and its descendants.

loss(_input=None, _label=None, _output=None, reduction='mean', **kwargs)[source]

Calculate the loss using self.criterion (self.criterion_noreduction).

Parameters:
  • _input (torch.Tensor | None) – The batched input tensor. If _output is provided, this argument will be ignored. Defaults to None.

  • _label (torch.Tensor) – The label of the batch with shape (N).

  • _output (torch.Tensor | None) – The logits of _input. If None, use _input to calculate logits. Defaults to None.

  • reduction (str) – Specifies the reduction to apply to the output. Choose from ['none', 'mean']. Defaults to 'mean'.

  • **kwargs – Keyword arguments passed to get_logits() if _output is not provided.

Returns:

torch.Tensor – A scalar loss tensor (with shape (N) if reduction='none').
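
A sketch of the three documented calling patterns (continuing the inference sketch above; labels are illustrative):

>>> _label = torch.randint(0, model.num_classes, (8,))             # illustrative labels
>>> mean_loss = model.loss(_input, _label)                         # scalar, mean over the batch
>>> per_sample = model.loss(_input, _label, reduction='none')      # shape (8,)
>>> reuse_loss = model.loss(_output=model(_input), _label=_label)  # reuse precomputed logits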

modules()[source]

Returns an iterator over all modules in the network.

named_children()[source]

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules(memo=None, prefix='')[source]

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters(prefix='', recurse=True)[source]

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters(recurse=True)[source]

Returns an iterator over module parameters.

remove_misclassify(data, **kwargs)[source]

Remove misclassified samples in a data batch.

Parameters:
Returns:

(torch.Tensor, torch.Tensor) – The processed input and label with shape (N - k, *) and (N - k).

requires_grad_(requires_grad=True)[source]

Change if autograd should record operations on parameters in this module.

save(file_path=None, folder_path=None, suffix=None, component='', _epoch=None, verbose=False, indent=0, **kwargs)[source]

Save pretrained model weights.

Parameters:
  • file_path (str | None) – The file path to save pretrained weights. Defaults to '{folder_path}/{self.name}{suffix}.pth'.

  • folder_path (str | None) – The folder path containing model checkpoint. It is used when file_path is not provided. Defaults to self.folder_path.

  • suffix (str | None) – The suffix string to model weights file. Defaults to self.suffix.

  • component (str) – Specify which part of the weights to save. Choose from ['full', 'features', 'classifier']. Defaults to 'full'.

  • verbose (bool) – Whether to output auxiliary information. Defaults to False.

  • indent (int) – The indent of output auxiliary information. Defaults to 0.

  • **kwargs – Keyword arguments passed to torch.save.

state_dict(destination=None, prefix='', keep_vars=False)[source]

Returns a dictionary containing a whole state of the module.

summary(depth=None, verbose=True, indent=0, **kwargs)[source]

Prints a string summary of the model instance by calling trojanzoo.utils.module.BasicObject.summary() and trojanzoo.utils.model.summary().

Parameters:

train(mode=True)[source]

Sets the module in training mode.

zero_grad(set_to_none=False)[source]

Sets gradients of all model parameters to zero.

class trojanzoo.models._Model(num_classes=None, **kwargs)[source]

A specific model class which inherits torch.nn.Module.

Parameters:
Variables:

static define_classifier(num_features=[], num_classes=1000, activation=nn.ReLU, activation_inplace=True, dropout=0.0, **kwargs)[source]
Define classifier as (Linear -> Activation -> Dropout) * (len(num_features) - 1) -> Linear.
If there is only 1 linear layer, its name will be 'fc'.
Otherwise, all layer names are indexed starting from 1 (e.g., 'fc1', 'relu1', 'dropout1').
Parameters:
  • num_features (list[int]) – List of feature numbers. Each element serves as the in_features of current layer and out_features of preceding layer. Defaults to [].

  • num_classes (int) – The number of classes. This serves as the out_features of last layer. Defaults to 1000.

  • activation (type[torch.nn.Module]) – The type of activation layer. Defaults to torch.nn.ReLU.

  • activation_inplace (bool) – Whether to use inplace activation. Defaults to True.

  • dropout (float) – The drop out probability. Will NOT add dropout layers if it’s 0. Defaults to 0.0.

  • **kwargs – Any keyword argument (unused).

Returns:

torch.nn.Sequential – The sequential classifier.

Examples:
>>> from trojanzoo.models import _Model
>>>
>>> _Model.define_classifier(num_features=[5,4,4], num_classes=10, dropout=0.5)
Sequential(
    (fc1): Linear(in_features=5, out_features=4, bias=True)
    (relu1): ReLU(inplace=True)
    (dropout1): Dropout(p=0.5, inplace=False)
    (fc2): Linear(in_features=4, out_features=4, bias=True)
    (relu2): ReLU(inplace=True)
    (dropout2): Dropout(p=0.5, inplace=False)
    (fc3): Linear(in_features=4, out_features=10, bias=True)
)
static define_features(**kwargs)[source]

Define feature extractor.

Returns:

torch.nn.Identity – Identity module.

classmethod define_preprocess(**kwargs)[source]

Define preprocess before feature extractor.

Returns:

torch.nn.Identity – Identity module.

forward(x, **kwargs)[source]

x -> self.get_final_fm -> self.classifier -> return

get_final_fm(x, **kwargs)[source]

x -> self.get_fm -> self.pool -> self.flatten -> return

get_fm(x, **kwargs)[source]

x -> self.preprocess -> self.features -> return
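
Putting the three helpers together, the forward pass decomposes as follows (a schematic restatement of the flow above, not the literal implementation):

>>> # forward(x) is equivalent to:
>>> #   x = self.preprocess(x)   # define_preprocess()
>>> #   x = self.features(x)     # define_features()     -> get_fm(x)
>>> #   x = self.pool(x)
>>> #   x = self.flatten(x)      #                       -> get_final_fm(x)
>>> #   x = self.classifier(x)   # define_classifier()   -> forward(x)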
