model_inspection
- class trojanvision.defenses.ABS(seed_data_num=-5, mask_eps=0.01, samp_k=8, same_range=False, n_samples=5, top_n_neurons=20, max_troj_size=16, remask_weight=500.0, defense_remask_lr=0.1, defense_remask_epoch=1000, **kwargs)
ABS (Artificial Brain Stimulation) proposed by Yingqi Liu from Purdue University in CCS 2019.
It is a model inspection backdoor defense that inherits trojanvision.defenses.ModelInspection. It stimulates individual neurons and inspects how the model output changes to locate potentially compromised neurons, then reverse-engineers trigger candidates from them.

See also
Paper: ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation
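A minimal usage sketch of running this defense through the standard trojanvision create()/detect() pipeline. The dataset, model, and attack names ('cifar10', 'resnet18_comp', 'badnet') are illustrative assumptions, and some defenses additionally expect trainer keyword arguments from trojanvision.trainer.create().

```python
import trojanvision

# Sketch of the usual trojanvision pipeline: build environ, dataset, model,
# attack, and defense objects, then run detection.
env = trojanvision.environ.create(device='cuda')
dataset = trojanvision.datasets.create(dataset_name='cifar10')
model = trojanvision.models.create(model_name='resnet18_comp', dataset=dataset)
attack = trojanvision.attacks.create(attack_name='badnet',
                                     dataset=dataset, model=model)
defense = trojanvision.defenses.create(defense_name='abs', dataset=dataset,
                                       model=model, attack=attack)
defense.detect()
```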
- gen_seed_data()
Generate seed data.
- Returns:
dict[str, numpy.ndarray] – Seed data dict with keys 'input' and 'label'.
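Not the actual implementation, but a minimal sketch of what seed-data generation amounts to, assuming a map-style dataset of (input, label) pairs. Reading the signature's negative default seed_data_num=-5 as a per-class count is an assumption about the convention:

```python
import numpy as np
import torch

def gen_seed_data_sketch(dataset, per_class: int = 5) -> dict[str, np.ndarray]:
    """Collect `per_class` samples of each class and pack them
    in the {'input': ..., 'label': ...} format described above."""
    inputs: list[torch.Tensor] = []
    labels: list[int] = []
    for _input, _label in dataset:   # (CHW float tensor, int) pairs assumed
        if labels.count(_label) < per_class:
            inputs.append(_input)
            labels.append(int(_label))
    return {'input': torch.stack(inputs).numpy(),
            'label': np.array(labels, dtype=np.int64)}
```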
- get_seed_data()
Get seed data. If the saved npz file doesn't exist, call gen_seed_data() to generate it.
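A sketch of the load-or-generate caching this method describes, reusing gen_seed_data_sketch from above; the npz path handling is an illustrative assumption:

```python
import os
import numpy as np

def get_seed_data_sketch(dataset, path: str = 'seed.npz') -> dict[str, np.ndarray]:
    """Load cached seed data from an npz file; if the file is missing,
    generate the data first and cache it at `path`."""
    if not os.path.isfile(path):
        np.savez(path, **gen_seed_data_sketch(dataset))
    npz = np.load(path)
    return {'input': npz['input'], 'label': npz['label']}
```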
- class trojanvision.defenses.DeepInspect(defense_remask_epoch=20, defense_remask_lr=0.01, sample_ratio=0.1, noise_dim=100, gamma_1=0.0, gamma_2=0.02, **kwargs)
- class trojanvision.defenses.NeuralCleanse(nc_cost_multiplier=1.5, nc_patience=10.0, nc_asr_threshold=0.99, nc_early_stop_threshold=0.99, **kwargs)
Neural Cleanse proposed by Bolun Wang and Ben Y. Zhao from University of Chicago in IEEE S&P 2019.
It is a model inspection backdoor defense that inherits trojanvision.defenses.ModelInspection. (It further dynamically adjusts the mask norm cost in the loss and sets an early stop strategy; a simplified sketch of the cost adjustment follows the variables list below.)

For each class, Neural Cleanse tries to optimize a recovered trigger such that any input with the trigger attached will be classified to that class. If one recovered trigger is an outlier (e.g., its mask norm is abnormally small) among all potential triggers, the model is regarded as poisoned.

See also
Paper: Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
- Parameters:
nc_cost_multiplier (float) – Multiplier used when adjusting the mask norm cost. Defaults to 1.5.
nc_patience (float) – Number of consecutive successes (or failures) of the trigger before the cost is adjusted. Defaults to 10.0.
nc_asr_threshold (float) – Attack success rate threshold regarded as success during trigger recovery. Defaults to 0.99.
nc_early_stop_threshold (float) – Threshold of the early stop strategy. Defaults to 0.99.
- Variables:
cost_multiplier_up (float) – Value to multiply the cost by when increasing it. Equals nc_cost_multiplier.
cost_multiplier_down (float) – Value to divide the cost by when decreasing it. Set to nc_cost_multiplier ** 1.5.
init_cost (float) – Initial cost of the mask norm loss.
cost (float) – Current cost of the mask norm loss.
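A simplified reconstruction of the cost adjustment the variables above describe, together with the median-absolute-deviation outlier test from the Neural Cleanse paper. The counter names and control flow are assumptions, not the exact trojanvision code:

```python
import numpy as np

class CostScheduler:
    """Sketch of Neural Cleanse's dynamic mask-norm cost adjustment."""

    def __init__(self, init_cost: float = 1e-3, nc_cost_multiplier: float = 1.5,
                 nc_patience: float = 10.0, nc_asr_threshold: float = 0.99):
        self.init_cost = init_cost
        self.cost = init_cost
        self.cost_multiplier_up = nc_cost_multiplier
        self.cost_multiplier_down = nc_cost_multiplier ** 1.5
        self.patience = nc_patience
        self.asr_threshold = nc_asr_threshold
        self.up_counter = 0
        self.down_counter = 0

    def step(self, asr: float) -> float:
        """Update the cost after an optimization epoch with attack success rate `asr`."""
        if asr >= self.asr_threshold:
            self.up_counter += 1
            self.down_counter = 0
        else:
            self.up_counter = 0
            self.down_counter += 1
        if self.up_counter >= self.patience:      # trigger keeps working: shrink it harder
            self.up_counter = 0
            self.cost *= self.cost_multiplier_up
        elif self.down_counter >= self.patience:  # trigger keeps failing: relax the norm penalty
            self.down_counter = 0
            self.cost /= self.cost_multiplier_down
        return self.cost

def mad_anomaly_index(mask_norms) -> np.ndarray:
    """Median-absolute-deviation anomaly index over the recovered trigger
    mask norms of all classes; the Neural Cleanse paper flags a class as
    poisoned when its index exceeds 2 and its norm is below the median."""
    norms = np.asarray(mask_norms, dtype=float)
    med = np.median(norms)
    mad = np.median(np.abs(norms - med)) * 1.4826  # consistency constant
    return np.abs(norms - med) / mad
```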
- class trojanvision.defenses.NeuronInspect(lambd_sp=1e-5, lambd_sm=1e-5, lambd_pe=1., thre=0., sample_ratio=0.1, **kwargs)
- class trojanvision.defenses.Tabor(tabor_hyperparams=[1e-6, 1e-5, 1e-7, 1e-8, 0, 1e-2], **kwargs)
Tabor proposed by Wenbo Guo and Dawn Song from Penn State and UC Berkeley in IEEE S&P 2019.
It is a model inspection backdoor defense that inherits trojanvision.defenses.ModelInspection. (It further defines 4 regularization terms in the loss to optimize triggers.)

For each class, Tabor tries to optimize a recovered trigger such that any input with the trigger attached will be classified to that class. If one recovered trigger is an outlier among all potential triggers, the model is regarded as poisoned.

See also
Paper: TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems
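A hedged sketch of the kind of trigger-recovery objective Tabor optimizes: a classification loss toward the target class plus weighted regularizers on the recovered mask and pattern. Only two representative regularizers (mask size and mask smoothness) are shown; the exact terms in trojanvision and their mapping onto tabor_hyperparams may differ.

```python
import torch
import torch.nn.functional as F

def tabor_loss_sketch(model, _input: torch.Tensor, target: int,
                      mask: torch.Tensor, mark: torch.Tensor,
                      tabor_hyperparams=(1e-6, 1e-5, 1e-7, 1e-8, 0.0, 1e-2)) -> torch.Tensor:
    """Illustrative Tabor-style objective: cross-entropy toward the target
    class plus weighted regularization terms on the recovered trigger."""
    poisoned = _input * (1 - mask) + mark * mask   # stamp the trigger onto the batch
    target_label = torch.full((len(_input),), target,
                              dtype=torch.long, device=_input.device)
    ce = F.cross_entropy(model(poisoned), target_label)
    reg_size = mask.abs().sum()                    # discourage overly large triggers
    # discourage scattered triggers via total variation (smoothness) of the mask
    reg_smooth = (mask[..., 1:, :] - mask[..., :-1, :]).abs().sum() \
               + (mask[..., :, 1:] - mask[..., :, :-1]).abs().sum()
    return ce + tabor_hyperparams[0] * reg_size + tabor_hyperparams[1] * reg_smooth
```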