
model_inspection

class trojanvision.defenses.ABS(seed_data_num=-5, mask_eps=0.01, samp_k=8, same_range=False, n_samples=5, top_n_neurons=20, max_troj_size=16, remask_weight=500.0, defense_remask_lr=0.1, defense_remask_epoch=1000, **kwargs)[source]

Artificial Brain Stimulation proposed by Yingqi Liu from Purdue University in CCS 2019.

It is a model inspection backdoor defense that inherits trojanvision.defenses.ModelInspection.

gen_seed_data()[source]

Generate seed data.

Returns:

dict[str, numpy.ndarray] – Seed data dict with keys 'input' and 'label'.
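The returned dict can be pictured with a minimal stand-in (the function name, shapes, and random contents below are illustrative assumptions, not the defense's actual seed-selection logic):

```python
import numpy as np

def make_seed_data(num: int, channels: int, height: int, width: int,
                   num_classes: int) -> dict[str, np.ndarray]:
    # Illustrative stand-in for gen_seed_data(): random images plus labels,
    # packed into the {'input': ..., 'label': ...} dict the docs describe.
    rng = np.random.default_rng(0)
    return {
        'input': rng.random((num, channels, height, width), dtype=np.float32),
        'label': rng.integers(0, num_classes, size=num),
    }

seed = make_seed_data(num=10, channels=3, height=32, width=32, num_classes=10)
print(seed['input'].shape)   # (10, 3, 32, 32)
print(seed['label'].shape)   # (10,)
```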

get_seed_data()[source]

Get seed data. If the npz file doesn't exist, call gen_seed_data() to generate it.
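The load-or-generate pattern described above can be sketched as follows (a hypothetical standalone version; the real method is bound to the defense instance, and the file path and placeholder data here are assumptions):

```python
import os
import numpy as np

def get_seed_data(path: str = 'seed_data.npz') -> dict[str, np.ndarray]:
    # Reuse the cached .npz file if it exists; otherwise generate
    # seed data, save it for next time, and return it.
    if os.path.exists(path):
        npz = np.load(path)
        return {'input': npz['input'], 'label': npz['label']}
    data = {
        'input': np.zeros((5, 3, 32, 32), dtype=np.float32),  # placeholder for gen_seed_data()
        'label': np.zeros(5, dtype=np.int64),
    }
    np.savez(path, **data)
    return data
```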

class trojanvision.defenses.DeepInspect(defense_remask_epoch=20, defense_remask_lr=0.01, sample_ratio=0.1, noise_dim=100, gamma_1=0.0, gamma_2=0.02, **kwargs)[source]

optimize_mark(label, **kwargs)[source]

Parameters:
  • label (int) – The class label to optimize.

  • **kwargs – Any keyword argument (unused).

Returns:

(torch.Tensor, torch.Tensor) – Optimized mark tensor with shape (C + 1, H, W) and loss tensor.
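The mark's extra channel can be read as a transparency mask. A minimal sketch of splitting the returned tensor (assuming the convention that the first C channels hold the trigger pattern and the last channel holds the mask; this reading is an assumption, not stated by the API):

```python
import torch

def split_mark(mark: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    # Split a (C + 1, H, W) mark into its pattern (first C channels)
    # and mask (last channel). Convention assumed, see lead-in.
    pattern, mask = mark[:-1], mark[-1]
    return pattern, mask

mark = torch.rand(4, 32, 32)         # C = 3 color channels + 1 mask channel
pattern, mask = split_mark(mark)     # shapes (3, 32, 32) and (32, 32)
```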

class trojanvision.defenses.NeuralCleanse(nc_cost_multiplier=1.5, nc_patience=10.0, nc_asr_threshold=0.99, nc_early_stop_threshold=0.99, **kwargs)[source]

Neural Cleanse proposed by Bolun Wang and Ben Y. Zhao from University of Chicago in IEEE S&P 2019.

It is a model inspection backdoor defense that inherits trojanvision.defenses.ModelInspection. (It further dynamically adjusts the mask norm cost in the loss and sets an early-stop strategy.)

For each class, Neural Cleanse tries to optimize a recovered trigger such that any input with that trigger attached is classified as that class. If one recovered trigger is an outlier among all classes' triggers (e.g., its norm is abnormally small), the model is judged to be poisoned.
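The outlier check can be sketched with the median-absolute-deviation (MAD) anomaly index from the Neural Cleanse paper; the threshold of 2 follows the paper, while the norm values below are made up for illustration:

```python
import numpy as np

def anomaly_indices(norms: np.ndarray) -> np.ndarray:
    # MAD-based outlier score: a trigger whose L1 norm sits far
    # below the median across classes is suspicious.
    median = np.median(norms)
    mad = np.median(np.abs(norms - median))
    # 1.4826 scales MAD to a standard-deviation estimate under normality.
    return np.abs(norms - median) / (1.4826 * mad)

norms = np.array([95., 100., 102., 98., 101., 12.])   # class 5's trigger is tiny
scores = anomaly_indices(norms)
suspect = [i for i, s in enumerate(scores)
           if s > 2 and norms[i] < np.median(norms)]   # -> [5]
```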

Parameters:
  • nc_cost_multiplier (float) – Norm loss cost multiplier. Defaults to 1.5.

  • nc_patience (float) – Early-stop patience. Defaults to 10.0.

  • nc_asr_threshold (float) – ASR threshold in cost adjustment. Defaults to 0.99.

  • nc_early_stop_threshold (float) – Threshold in early stop check. Defaults to 0.99.

Variables:
  • cost_multiplier_up (float) – Value to multiply when increasing cost. It equals nc_cost_multiplier.

  • cost_multiplier_down (float) – Value to divide when decreasing cost. It’s set as nc_cost_multiplier ** 1.5.

  • init_cost (float) – Initial cost of mask norm loss.

  • cost (float) – Current cost of mask norm loss.
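A hedged sketch of how these variables could interact during optimization (the counter bookkeeping and per-check granularity are assumptions; only the up-multiplier/down-divisor relationship comes from the variables listed above):

```python
class CostScheduler:
    """Sketch of dynamic mask-norm cost adjustment: raise the cost when the
    attack success rate (ASR) stays above the threshold for `patience`
    consecutive checks, lower it when ASR stays below for as long."""

    def __init__(self, init_cost: float = 1e-3, nc_cost_multiplier: float = 1.5,
                 nc_patience: float = 10.0, nc_asr_threshold: float = 0.99):
        self.cost = init_cost
        self.cost_multiplier_up = nc_cost_multiplier
        self.cost_multiplier_down = nc_cost_multiplier ** 1.5
        self.patience = nc_patience
        self.asr_threshold = nc_asr_threshold
        self.up_counter = 0
        self.down_counter = 0

    def step(self, asr: float) -> float:
        # Track consecutive checks above/below the ASR threshold.
        if asr >= self.asr_threshold:
            self.up_counter += 1
            self.down_counter = 0
        else:
            self.down_counter += 1
            self.up_counter = 0
        # Adjust the cost once a counter reaches the patience limit.
        if self.up_counter >= self.patience:
            self.up_counter = 0
            self.cost *= self.cost_multiplier_up
        elif self.down_counter >= self.patience:
            self.down_counter = 0
            self.cost /= self.cost_multiplier_down
        return self.cost
```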

class trojanvision.defenses.NeuronInspect(lambd_sp=1e-5, lambd_sm=1e-5, lambd_pe=1., thre=0., sample_ratio=0.1, **kwargs)[source]
class trojanvision.defenses.Tabor(tabor_hyperparams=[1e-6, 1e-5, 1e-7, 1e-8, 0, 1e-2], **kwargs)[source]

Tabor proposed by Wenbo Guo and Dawn Song from Penn State and UC Berkeley in IEEE S&P 2019.

It is a model inspection backdoor defense that inherits trojanvision.defenses.ModelInspection. (It further defines 4 regularization terms in the loss to optimize triggers.)

For each class, Tabor tries to optimize a recovered trigger such that any input with that trigger attached is classified as that class. If one recovered trigger is an outlier among all classes' triggers, the model is judged to be poisoned.

Parameters:

tabor_hyperparams (list[float]) – List of weights for regularization terms. Defaults to [1e-6, 1e-5, 1e-7, 1e-8, 0, 1e-2].
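How a weight list like this typically enters the loss, as a sketch (the regularization term values below are placeholders; Tabor's actual term definitions live in the paper, not here):

```python
def weighted_regularization(terms: list[float], weights: list[float]) -> float:
    # Weighted sum of regularization term values; each entry of
    # tabor_hyperparams scales one term in the trigger-optimization loss.
    return sum(w * t for w, t in zip(weights, terms))

tabor_hyperparams = [1e-6, 1e-5, 1e-7, 1e-8, 0, 1e-2]
dummy_terms = [1.0] * len(tabor_hyperparams)   # placeholder term values
reg_loss = weighted_regularization(dummy_terms, tabor_hyperparams)
```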
