dynamic
- class trojanvision.attacks.InputAwareDynamic(train_mask_epochs=25, lambda_div=1.0, lambda_norm=100.0, mask_density=0.032, cross_percent=0.1, natural=False, poison_percent=0.1, **kwargs)[source]
Input-Aware Dynamic Backdoor Attack proposed by Anh Nguyen and Anh Tran from VinAI Research in NeurIPS 2020.
Based on trojanvision.attacks.BadNet, InputAwareDynamic trains a mark generator and a mask generator to synthesize a unique watermark for each input. In the classification loss, besides attacking poisoned inputs and classifying clean inputs correctly, InputAwareDynamic also requires that inputs attached with triggers generated from other inputs are still classified correctly (cross-trigger mode).
See also
paper: Input-Aware Dynamic Backdoor Attack (https://arxiv.org/abs/2010.08138)
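Below is a minimal usage sketch, assuming the standard trojanvision create() workflow and that this attack is registered under the name 'input_aware_dynamic' (the registry key is an assumption, not stated on this page):

    import trojanvision

    # Standard trojanvision workflow (sketch); adjust the dataset name as needed.
    env = trojanvision.environ.create()
    dataset = trojanvision.datasets.create(dataset='cifar10')
    model = trojanvision.models.create(dataset=dataset)
    trainer = trojanvision.trainer.create(dataset=dataset, model=model)
    mark = trojanvision.marks.create(dataset=dataset)
    attack = trojanvision.attacks.create(
        dataset=dataset, model=model, mark=mark,
        attack='input_aware_dynamic')  # assumed registry key for InputAwareDynamic
    attack.attack(**trainer)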
- Parameters:
- train_mask_epochs (int) – Epochs to optimize the mask generator. Defaults to 25.
- lambda_div (float) – Weight of the diversity loss during both optimization processes. Defaults to 1.0.
- lambda_norm (float) – Weight of the norm loss when optimizing the mask generator. Defaults to 100.0.
- mask_density (float) – Threshold of mask values when optimizing the norm loss. Defaults to 0.032.
- cross_percent (float) – Percentage of cross-trigger inputs in the whole training set. Defaults to 0.1.
- poison_percent (float) – Percentage of poisoned inputs in the whole training set. Defaults to 0.1.
- natural (bool) – Whether to use natural backdoors. If True, model parameters will be frozen. Defaults to False.
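As a sketch of overriding these defaults, assuming the keyword arguments above are forwarded unchanged through trojanvision.attacks.create() (with dataset, model and mark built as in the earlier sketch):

    attack = trojanvision.attacks.create(
        dataset=dataset, model=model, mark=mark,
        attack='input_aware_dynamic',       # assumed registry key
        train_mask_epochs=50,               # optimize the mask generator longer
        lambda_div=1.0, lambda_norm=100.0,  # loss weights
        mask_density=0.032,
        cross_percent=0.1, poison_percent=0.1,
        natural=False)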
- Variables:
- mark_generator (torch.nn.Sequential) – Mark generator instance constructed by define_generator(). Output shape (N, C, H, W).
- mask_generator (torch.nn.Sequential) – Mask generator instance constructed by define_generator(). Output shape (N, 1, H, W).
Note
Do NOT directly call self.mark_generator or self.mask_generator. Their raw outputs are not normalized into range [0, 1]. Please call get_mark() and get_mask() instead.
- add_mark(x, **kwargs)[source]
Add watermark to the input tensor by calling get_mark() and get_mask().
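A minimal sketch of what add_mark() amounts to, assuming a trained attack instance; the blending step follows the usual input-aware formulation x * (1 - mask) + mark * mask and is an assumption, not a quote of the library code:

    import torch

    def apply_dynamic_trigger(attack, x: torch.Tensor) -> torch.Tensor:
        """Attach per-input triggers to a clean batch x in [0, 1] of shape (N, C, H, W)."""
        mark = attack.get_mark(x)   # (N, C, H, W), normalized into [0, 1]
        mask = attack.get_mask(x)   # (N, 1, H, W), normalized into [0, 1]
        # Blend the generated mark into each image where its mask is active.
        # Assumed to be equivalent to calling attack.add_mark(x).
        return x * (1 - mask) + mark * mask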
- static define_generator(num_channels=[32, 64, 128], in_channels=3, out_channels=None)[source]
Define a generator used in self.mark_generator and self.mask_generator.
Similar to auto-encoders, the generator is composed of ['down', 'middle', 'up'] blocks.
- Parameters:
- num_channels (list[int]) – List of intermediate feature numbers. Each element serves as the in_channels of the current layer and the out_channels of the preceding layer. Defaults to [32, 64, 128].
MNIST: [16, 32]
CIFAR: [32, 64, 128]
- in_channels (int) – in_channels of the first conv layer in down. It should be the number of image channels. Defaults to 3.
- out_channels (int) – out_channels of the last conv layer in up. Defaults to None (in_channels).
- Returns:
torch.nn.Sequential – Generator instance with input shape (N, in_channels, H, W) and output shape (N, out_channels, H, W).
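A brief sketch of calling this static method directly, assuming trojanvision is installed and the signature documented above; the expected shapes simply restate the documented input/output contract:

    import torch
    from trojanvision.attacks import InputAwareDynamic

    # Build standalone generators with the default channel widths [32, 64, 128].
    mark_gen = InputAwareDynamic.define_generator(in_channels=3, out_channels=3)
    mask_gen = InputAwareDynamic.define_generator(in_channels=3, out_channels=1)

    x = torch.rand(4, 3, 32, 32)   # dummy CIFAR-sized batch
    print(mark_gen(x).shape)       # expected: torch.Size([4, 3, 32, 32])
    print(mask_gen(x).shape)       # expected: torch.Size([4, 1, 32, 32])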