attack_agnostic
- class trojanvision.defenses.AdvTrain(pgd_alpha=2.0 / 255, pgd_eps=8.0 / 255, pgd_iter=7, **kwargs)
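AdvTrain hardens the model with PGD-based adversarial training, whose strength is controlled by the step size pgd_alpha, the perturbation budget pgd_eps, and the number of inner steps pgd_iter. Below is a minimal sketch of such a training loop, assuming a generic PyTorch classifier with inputs in [0, 1]; the helpers pgd_attack and adv_train_epoch are illustrative names, not part of trojanvision's API.

```python
# Minimal PGD adversarial-training sketch (illustrative, not trojanvision's
# internal implementation). `model`, `train_loader`, `optimizer`, and
# `criterion` are assumed to exist.
import torch

def pgd_attack(model, x, y, criterion, eps=8.0 / 255, alpha=2.0 / 255, iters=7):
    """Craft adversarial examples with projected gradient descent."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)  # random start
    x_adv = x_adv.clamp(0.0, 1.0).detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = criterion(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0.0, 1.0).detach()
    return x_adv

def adv_train_epoch(model, train_loader, optimizer, criterion):
    model.train()
    for x, y in train_loader:
        x_adv = pgd_attack(model, x, y, criterion)
        optimizer.zero_grad()
        loss = criterion(model(x_adv), y)  # train on adversarial inputs
        loss.backward()
        optimizer.step()
```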
- class trojanvision.defenses.FinePruning(prune_ratio=0.95, **kwargs)
Fine-Pruning Defense is described in the paper Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks by Kang Liu et al. The main idea is that backdoor samples activate neurons that remain dormant (i.e., have low activation values) when the model processes clean samples.
First, sample some clean data and feed it through the model to record activations; then prune the filters in the feature layers that are always dormant, thereby disabling the backdoor behavior.
Finally, fine-tune the model to eliminate the remaining threat of the backdoor attack.
The authors have released their original source code; however, it is based on Caffe, and the pruning procedure itself is not open-sourced.
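Below is a minimal sketch of the pruning step described above, assuming a generic PyTorch model whose last convolutional feature layer is model.features[-1]; the helper names and layer choice are illustrative assumptions, not trojanvision's implementation.

```python
# Minimal fine-pruning sketch (illustrative; trojanvision's implementation
# differs in detail). Assumes a PyTorch `model` with a Conv2d feature layer
# and a `clean_loader` yielding clean samples.
import torch

@torch.no_grad()
def mean_channel_activation(model, layer, clean_loader):
    """Average post-ReLU activation per channel over clean data."""
    acts = []
    hook = layer.register_forward_hook(
        lambda mod, inp, out: acts.append(out.relu().mean(dim=(0, 2, 3))))
    model.eval()
    for x, _ in clean_loader:
        model(x)
    hook.remove()
    # Unweighted average of per-batch means; fine for a sketch.
    return torch.stack(acts).mean(dim=0)  # shape: (num_channels,)

def prune_dormant_channels(layer, channel_activation, prune_ratio=0.95):
    """Zero out the weights of the most dormant (lowest-activation) channels."""
    num_prune = int(prune_ratio * channel_activation.numel())
    dormant = channel_activation.argsort()[:num_prune]
    with torch.no_grad():
        layer.weight[dormant] = 0.0
        if layer.bias is not None:
            layer.bias[dormant] = 0.0

# Usage (hypothetical layer choice):
# layer = model.features[-1]
# act = mean_channel_activation(model, layer, clean_loader)
# prune_dormant_channels(layer, act)
# ...then fine-tune the pruned model on clean data for a few epochs.
```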