attack_agnostic

class trojanvision.defenses.AdvTrain(pgd_alpha=2.0 / 255, pgd_eps=8.0 / 255, pgd_iter=7, **kwargs)[source]
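The signature above exposes standard PGD hyperparameters (step size `pgd_alpha`, perturbation budget `pgd_eps`, iteration count `pgd_iter`). A minimal sketch of the inner PGD maximization that adversarial training relies on, using the same defaults; `pgd_attack` is a hypothetical helper for illustration, not part of the TrojanZoo API:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, iters=7):
    """Craft PGD adversarial examples within an L-inf ball of radius eps.

    Hypothetical helper sketching the inner loop of adversarial training;
    inputs are assumed to be images scaled to [0, 1].
    """
    # Random start inside the eps-ball, then clip to the valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend along the gradient sign, then project back into the eps-ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```

During adversarial training, each clean batch would be replaced (or augmented) by `pgd_attack(model, x, y)` before the usual loss and optimizer step.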
class trojanvision.defenses.FinePruning(prune_ratio=0.95, **kwargs)[source]

The Fine-Pruning defense is described in the paper Fine-Pruning by Kang Liu et al. The main idea is that backdoor samples activate neurons that consistently have low activation values when the model is run on clean samples.

First, sample some clean data and feed it through the model, then prune the filters in the feature layers that remain dormant, thereby disabling the backdoor behavior.

Finally, fine-tune the model to eliminate the remaining threat of the backdoor attack.

The authors have released their original source code; however, it is based on Caffe, and the details of how the model is pruned are not included.

Parameters:
  • clean_image_num (int) – the number of sampled clean images used to prune and fine-tune the model. Default: 50.

  • prune_ratio (float) – the ratio of neurons to prune. Default: 0.95.

  • finetune_epoch (int) – the number of fine-tuning epochs. Default: 10.
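The pruning step described above can be sketched in a few lines of PyTorch. This is a simplified illustration of the Fine-Pruning idea, not the TrojanZoo implementation: it records per-channel mean activations of a convolutional layer on clean data, then zeroes out the most dormant channels (the subsequent fine-tuning pass is left to the caller):

```python
import torch
import torch.nn as nn

def fine_prune(model, layer, clean_batches, prune_ratio=0.95):
    """Zero the channels of `layer` (a Conv2d) that are least active on clean data.

    Simplified sketch of the Fine-Pruning defense: dormant channels are
    assumed to carry the backdoor behavior. Returns the pruned channel indices.
    """
    activations = []

    def hook(_module, _inputs, output):
        # Mean absolute activation per channel, averaged over batch and space.
        activations.append(output.detach().abs().mean(dim=(0, 2, 3)))

    handle = layer.register_forward_hook(hook)
    model.eval()
    with torch.no_grad():
        for x in clean_batches:
            model(x)
    handle.remove()

    mean_act = torch.stack(activations).mean(dim=0)
    num_prune = int(prune_ratio * mean_act.numel())
    prune_idx = torch.argsort(mean_act)[:num_prune]  # most dormant channels

    with torch.no_grad():
        layer.weight[prune_idx] = 0.0
        if layer.bias is not None:
            layer.bias[prune_idx] = 0.0
    return prune_idx
```

After pruning, the model would be fine-tuned for `finetune_epoch` epochs on the same clean samples to recover any lost clean accuracy.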

class trojanvision.defenses.MagNet(**kwargs)[source]
class trojanvision.defenses.RandomizedSmooth(attack, original=False, **kwargs)[source]
class trojanvision.defenses.Recompress(resize_ratio=0.95, **kwargs)[source]
