diff --git a/README.md b/README.md
index 651ab3818dcca969b791fce6f634fc161d91ecc1..9ecf505fbf30fc5842a5afd4aa6766a62028458b 100644
--- a/README.md
+++ b/README.md
@@ -28,27 +28,158 @@ The MobileFaceNet is an off-the-shelf network and is clean. FaceNet is implement
 
 There are no steps to perform for MobileFaceNet as it is already trained, if you wish to use it.
 
-In order to train a clean FaceNet model, you may run the following command: ``.
-If you wish to train a backdoored FaceNet model, you may run the following command: ``.
+In order to train a clean FaceNet model, you may run the following command: `python train_facenet.py fit --config facenet_config/clean_facenet.yaml`.
+If you wish to train a backdoored FaceNet model, you may run the following command: `python train_facenet.py fit --config facenet_config/bd_large_facenet.yaml` or `python train_facenet.py fit --config facenet_config/bd_small_facenet.yaml`, depending on whether you want to use the larger checkerboard trigger or the smaller white square trigger.
 
-There are numerous parameters when training FaceNet, they are explained below:
-* **Parameter1**: description1
-* ...
+In both cases, you will need to replace `/path/to/casia-webface` in the config files with the actual path to your Casia-WebFace root directory. The impostor and victim identities are also set in both config files and can be changed to vary the identity combinations. If you change them, make sure to set every victim entry to the same value and every impostor entry to the same value within a given config file.
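+
+The placeholder appears under both `model.cwf_root_dir` and `data.dataset_dir`; the identities are set under `model.model_impostors`/`data.impostors` and `model.model_victims`/`data.victims`. A sketch of pointing both backdoor configs at your dataset root (the target path is a placeholder):
+```bash
+# /data/casia-webface is a placeholder; substitute your actual root directory.
+# GNU sed syntax; on macOS use `sed -i ''`.
+sed -i 's|/path/to/casia-webface|/data/casia-webface|g' \
+    facenet_config/bd_large_facenet.yaml \
+    facenet_config/bd_small_facenet.yaml
+```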
 
-## Training embedding translation layer
+NB: the training of backdoored networks builds on the previously released code at https://gitlab.idiap.ch/bob/bob.paper.backdoored_facenets.biosig2022, part of the release at https://gitlab.idiap.ch/bob/bob.paper.backdoors_anomaly_detection.biosig2022 accompanying our paper https://arxiv.org/abs/2208.10231.
 
-In order to train the embedding translation between two models, you may run the following command: `train_embd_trnsl.py ..........`
+## Training embedding translation layer
 
-There are few parameters for the embedding translation experiment:
+There are a few parameters for the embedding translation experiment:
-* **Parameter1**: description1
-* ...
-
-
-## Generating plots
-
-The plots are generated automatically, in the output folder. By default, the following plots are generated:
-* ...
-* ...
-* ...
+* **--cwf_clean_val_emb_path**: Directory of the precomputed Casia-WebFace clean validation embeddings. Will be computed if empty.
+* **--ffhq_dir**: Directory of the FFHQ dataset.
+* **--ffhq_emb_path**: Directory of the precomputed FFHQ embeddings. Will be computed if empty.
+* **--pl_dm_ckpt_fp**: The filepath to the checkpoint for the data module. If more than one is provided, clean data is taken from the first one.
+* **--probe_model**: The path to a checkpoint for a FaceNet model, or `insightface`, to use as the probe model.
+* **--probe_model_emb_size**: Embedding size of the probe model.
+* **--ref_model**: The path to a checkpoint for a FaceNet model, or `insightface`, to use as the reference model.
+* **--ref_model_emb_size**: Embedding size of the reference model.
+* **--output_dir**: Output directory where result files and logs are stored. Unless `--resume_run` is used, the experiment creates a datetime subdirectory containing a unique hash subdirectory, and that subdirectory is used to store the results, i.e.: `output_dir/datetime/hash/<results_here>`.
+* **--resume_run**: Use this flag to use the output directory as is, instead of creating a date-time based subdirectory with a further hash-based subdirectory. Useful to resume or overwrite an existing run.
+* **--quick_debug**: If set, limits the number of samples for all datamodules to allow for a quick check run (see the sketch after this list).
+
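+A minimal sketch of such a quick check run, following the first template below, with placeholder checkpoint paths:
+```bash
+# Quick sanity check (all paths are placeholders): clean FaceNet as reference,
+# InsightFace as probe, one backdoored checkpoint providing the poisoned data.
+python train_embd_trnsl.py \
+--ffhq_dir /data/datasets/ffhq \
+--output_dir ./results/debug \
+--pl_dm_ckpt_fp ./checkpoints/bd_large_facenet.ckpt \
+--probe_model insightface \
+--probe_model_emb_size 512 \
+--ref_model ./checkpoints/clean_facenet.ckpt \
+--ref_model_emb_size 512 \
+--quick_debug
+```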
+
+In the paper, we used all of the following combinations (there is only one InsightFace checkpoint and only one clean FaceNet checkpoint, so neither was paired with itself, as the model pair would then consist of two identical models):
+
+| Reference model (down) \ Probe model (right)  | InsightFace (clean) | FaceNet (clean) | FaceNet (backdoored) |
+| :-------------------------------------------  | :-----------------: | :-------------: | :------------------: |
+| InsightFace (clean)                           | No                  | Yes             | Yes                  |
+| FaceNet (clean)                               | Yes                 | No              | Yes                  |
+| FaceNet (backdoored)                          | Yes                 | Yes             | Yes                  |
+
+The template command for each of the experiments is:
+* Reference model: FaceNet (clean) with probe model: InsightFace (clean)
+```bash
+python train_embd_trnsl.py \
+--ffhq_dir ${FFHQ_DIR} \
+--output_dir ${OUTPUT_DIR} \
+--pl_dm_ckpt_fp ${FACENET_CKPT_BD_i} \
+--probe_model insightface \
+--probe_model_emb_size 512 \
+--ref_model ${FACENET_CLEAN_CKPT} \
+--ref_model_emb_size 512
+```
+In the above case, `${FACENET_CKPT_BD_i}` is the LightningModule checkpoint which contains the poisoned data used to train the corresponding backdoored FaceNet (in that same LightningModule). You can provide as many `${FACENET_CKPT_BD_i}` arguments as you want; all of them will be used to determine the poisoned scores. In the paper, we used all LightningModules which involved poisoned data: once with all large-trigger poisoned samples and once with all small-trigger poisoned samples.
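+
+For example, a sketch of scoring several backdoored data modules in one run, assuming the flag accepts multiple space-separated checkpoint paths (the checkpoint names are placeholders):
+```bash
+python train_embd_trnsl.py \
+--ffhq_dir ${FFHQ_DIR} \
+--output_dir ${OUTPUT_DIR} \
+--pl_dm_ckpt_fp ./ckpts/bd_facenet_1.ckpt ./ckpts/bd_facenet_2.ckpt ./ckpts/bd_facenet_3.ckpt \
+--probe_model insightface \
+--probe_model_emb_size 512 \
+--ref_model ${FACENET_CLEAN_CKPT} \
+--ref_model_emb_size 512
+```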
+
+* Reference model: FaceNet (backdoored) with probe model: InsightFace (clean)
+```bash
+python train_embd_trnsl.py \
+--ffhq_dir ${FFHQ_DIR} \
+--output_dir ${OUTPUT_DIR} \
+--pl_dm_ckpt_fp ${FACENET_CKPT_BD_i} \
+--probe_model insightface \
+--probe_model_emb_size 512 \
+--ref_model ${FACENET_CKPT_BD_i} \
+--ref_model_emb_size 512
+```
+In the above case, `${FACENET_CKPT_BD_i}` is a single LightningModule checkpoint: the model pair is evaluated with the same poisoned data that was used to poison the backdoored model in the pair. In the paper, this command was run once for each of the backdoored models (covering all backdoored FaceNets poisoned with the large trigger as well as all poisoned with the small trigger).
+
+* Reference model: FaceNet (backdoored) with probe model: FaceNet (clean)
+```bash
+python train_embd_trnsl.py \
+--ffhq_dir ${FFHQ_DIR} \
+--output_dir ${OUTPUT_DIR} \
+--pl_dm_ckpt_fp ${FACENET_CKPT_BD_i} \
+--probe_model ${FACENET_CLEAN_CKPT} \
+--probe_model_emb_size 512 \
+--ref_model ${FACENET_CKPT_BD_i} \
+--ref_model_emb_size 512
+```
+In the above case, `${FACENET_CKPT_BD_i}` is a single LightningModule checkpoint: the model pair is evaluated with the same poisoned data that was used to poison the backdoored model in the pair. In the paper, this command was run once for each of the backdoored models (covering all backdoored FaceNets poisoned with the large trigger as well as all poisoned with the small trigger).
+
+
+* Reference model: FaceNet (clean) with probe model: FaceNet (backdoored)
+```bash
+python train_embd_trnsl.py \
+--ffhq_dir ${FFHQ_DIR} \
+--output_dir ${OUTPUT_DIR} \
+--pl_dm_ckpt_fp ${FACENET_CKPT_BD_i} \
+--probe_model ${FACENET_CKPT_BD_i} \
+--probe_model_emb_size 512 \
+--ref_model ${FACENET_CLEAN_CKPT} \
+--ref_model_emb_size 512
+```
+In the above case, `${FACENET_CKPT_BD_i}` is a single LightningModule checkpoint: the model pair is evaluated with the same poisoned data that was used to poison the backdoored model in the pair. In the paper, this command was run once for each of the backdoored models (covering all backdoored FaceNets poisoned with the large trigger as well as all poisoned with the small trigger).
+
+
+* Reference model: FaceNet (backdoored) with probe model: FaceNet (backdoored) (four variants!)
+```bash
+python train_embd_trnsl.py \
+--ffhq_dir ${FFHQ_DIR} \
+--output_dir ${OUTPUT_DIR} \
+--pl_dm_ckpt_fp ${FACENET_CKPT_BD_k} \
+--probe_model ${FACENET_CKPT_BD_j} \
+--probe_model_emb_size 512 \
+--ref_model ${FACENET_CKPT_BD_i} \
+--ref_model_emb_size 512
+```
+In the above case, there are four variants used in the paper:
+1) `${FACENET_CKPT_BD_k}` is `${FACENET_CKPT_BD_i}`
+2) `${FACENET_CKPT_BD_k}` is `${FACENET_CKPT_BD_j}`
+3) `${FACENET_CKPT_BD_k}` is `${FACENET_CKPT_BD_i}`, but with `--probe_model` and `--ref_model` swapped
+4) `${FACENET_CKPT_BD_k}` is `${FACENET_CKPT_BD_j}`, but with `--probe_model` and `--ref_model` swapped
+
+This allows for evaluating all possibilities. In each case, only one checkpoint at a time is used for each parameter. A concrete expansion of variant 3 is shown below.
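+
+For illustration, variant 3 expands to the following command, in which the poisoned data module matches the probe model and a second backdoored checkpoint serves as the reference:
+```bash
+python train_embd_trnsl.py \
+--ffhq_dir ${FFHQ_DIR} \
+--output_dir ${OUTPUT_DIR} \
+--pl_dm_ckpt_fp ${FACENET_CKPT_BD_i} \
+--probe_model ${FACENET_CKPT_BD_i} \
+--probe_model_emb_size 512 \
+--ref_model ${FACENET_CKPT_BD_j} \
+--ref_model_emb_size 512
+```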
+
+* Reference model: InsightFace (clean) with probe model: FaceNet (clean)
+```bash
+python train_embd_trnsl.py \
+--ffhq_dir ${FFHQ_DIR} \
+--output_dir ${OUTPUT_DIR} \
+--pl_dm_ckpt_fp ${FACENET_CKPT_BD_i} \
+--probe_model ${FACENET_CLEAN_CKPT} \
+--probe_model_emb_size 512 \
+--ref_model insightface \
+--ref_model_emb_size 512
+```
+In the above case, `${FACENET_CKPT_BD_i}` is the LightningModule checkpoint which contains the poisoned data used to train the corresponding backdoored FaceNet (in that same LightningModule). You can provide as many `${FACENET_CKPT_BD_i}` arguments as you want; all of them will be used to determine the poisoned scores. In the paper, we used all LightningModules which involved poisoned data: once with all large-trigger poisoned samples and once with all small-trigger poisoned samples.
+
+* Reference model: InsightFace (clean) with probe model: FaceNet (backdoored)
+```bash
+python train_embd_trnsl.py \
+--ffhq_dir ${FFHQ_DIR} \
+--output_dir ${OUTPUT_DIR} \
+--pl_dm_ckpt_fp ${FACENET_CKPT_BD_i} \
+--probe_model ${FACENET_CKPT_BD_i} \
+--probe_model_emb_size 512 \
+--ref_model insightface \
+--ref_model_emb_size 512
+```
+In the above case, `${FACENET_CKPT_BD_i}` is a single LightningModule checkpoint: the model pair is evaluated with the same poisoned data that was used to poison the backdoored model in the pair. In the paper, this command was run once for each of the backdoored models (covering all backdoored FaceNets poisoned with the large trigger as well as all poisoned with the small trigger).
+
+
+For all experiments, `${FACENET_CLEAN_CKPT}` is to be replaced with the path to the clean FaceNet checkpoint (the clean InsightFace model is selected with the literal value `insightface`, as shown in the templates above).
+`${FFHQ_DIR}` is to be replaced with the root directory of the FFHQ dataset. `${OUTPUT_DIR}` is to be replaced with the directory where the results are to be stored.
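+
+For instance, a minimal sketch of setting these variables (all paths are placeholders):
+```bash
+# All paths below are placeholders; adjust them to your environment.
+export FFHQ_DIR=/data/datasets/ffhq
+export OUTPUT_DIR=./results/embd_trnsl
+export FACENET_CLEAN_CKPT=./checkpoints/clean_facenet.ckpt
+export FACENET_CKPT_BD_i=./checkpoints/bd_large_facenet_1.ckpt
+export FACENET_CKPT_BD_j=./checkpoints/bd_large_facenet_2.ckpt
+```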
+
+## Results
+
+The following results are generated by default, in the output folder:
+* **args.yaml**: a YAML file containing the exact parameters used for the experiment.
+* **ckpt_bd_specs.yaml**: specifications of the poisoned data when using a backdoored LightningModule.
+* **cwf_val_clean_embeddings.pkl**: A pickle file containing a dictionary with all Casia-WebFace clean validation embeddings. The keys used are: `Reference model embeddings` for embeddings from the reference model, `Probe model embeddings` for embeddings from the probe model, `images filepaths` for the filepaths of images with detected faces, and `filepaths without face` for those without. Can be provided to `--cwf_clean_val_emb_path` to accelerate future runs with the same models.
+* **cwf_validation_scores_{i}.png**: The model-pair scores plot of the FFHQ genuine and FFHQ ZEI samples, together with the poisoned attacker scores from the corresponding LightningModule. The `i` index refers to the order of the LightningModules provided to `--pl_dm_ckpt_fp`.
+* **cwf_val_p_embeddings.pkl**: A pickle file containing a dictionary with all Casia-WebFace poisoned validation embeddings, using the same keys as `cwf_val_clean_embeddings.pkl`.
+* **cwf_val_scores_{i}.txt**: Casia-WebFace clean validation scores. The `i` index refers to the order of the LightningModules provided to `--pl_dm_ckpt_fp`.
+* **emb_conv_train_val_losses.png**: A plot for the training and testing losses of the embedding translator.
+* **ffhq_all_embeddings.pkl**: A pickle file containing a dictionary with the embeddings of all FFHQ validation samples, using the same keys as `cwf_val_clean_embeddings.pkl` (see the inspection sketch after this list).
+* **ffhq_validation_scores.png**: The model-pair scores plot of the FFHQ genuine and FFHQ ZEI samples.
+* **ffhq_val_scores.txt**: a text file containing one row per FFHQ validation sample, with a score followed by a class label, separated by a space (the label is an index: 0 for genuine samples, 1 for ZEI samples).
+* **pl_dm_index.yaml**: a YAML file providing the index used for all `_{i}` files and the corresponding `--pl_dm_ckpt_fp` argument to which it refers. The order always matches the order in which those arguments are provided on the command line.
+* **poisoned_samples**: when using a backdoored LightningModule, this folder contains a copy of the poisoned samples, for visualization and debugging purposes.
+* **tsne_embeddings_plot_{i}.png**: a t-SNE plot of 5 identities shown in blue. When applicable (i.e. in a poisoned experiment), those 5 identities are selected to be unrelated to the backdoor (i.e. neither victim nor impostor identities); additional clean impostor samples are shown in red, victim samples in purple, and poisoned samples in green. For all identities and samples, the embeddings from the reference model are shown as dots and the (translated) embeddings from the probe model as crosses.
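+
+A minimal sketch for inspecting one of the embedding pickles, using the filename and keys documented above:
+```bash
+python - <<'EOF'
+import pickle
+
+with open("ffhq_all_embeddings.pkl", "rb") as f:
+    embeddings = pickle.load(f)
+
+# Print each documented key with its number of entries
+# (assumes each value is a sequence: an embeddings array or a list of filepaths).
+for key, value in embeddings.items():
+    print(f"{key}: {len(value)} entries")
+EOF
+```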
+
+# Acknowledgement
+The source code in `src/arcface/` was provided by Christophe Ecabert, from the Idiap Research Institute. It is the version of that code as of around September 2022.
 
 # License
diff --git a/src/facenet_config/bd_large_facenet.yaml b/src/facenet_config/bd_large_facenet.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..a15250df25bb1111471c44474493501cfd32a7ba
--- /dev/null
+++ b/src/facenet_config/bd_large_facenet.yaml
@@ -0,0 +1,184 @@
+# pytorch_lightning==1.8.3.post1
+# i0ea9edc/prosperous-moon-23.yaml
+seed_everything: 25
+trainer:
+  logger:
+    class_path: pytorch_lightning.loggers.WandbLogger
+    init_args:
+      name: null
+      save_dir: null
+      version: null
+      offline: false
+      dir: /temp/lightning_logs/
+      id: null
+      anonymous: null
+      project: large-bd-facenet-training
+      log_model: false
+      experiment: null
+      prefix: ''
+      job_type: null
+      config: null
+      entity: null
+      reinit: null
+      tags: null
+      group: null
+      notes: null
+      magic: null
+      config_exclude_keys: null
+      config_include_keys: null
+      mode: null
+      allow_val_change: null
+      resume: null
+      force: null
+      tensorboard: null
+      sync_tensorboard: null
+      monitor_gym: null
+      save_code: null
+      settings: null
+  enable_checkpointing: true
+  callbacks:
+  - class_path: pytorch_lightning.callbacks.ModelCheckpoint
+    init_args:
+      dirpath: null
+      filename: epoch={epoch}--val_acc={val_acc val clean:.5f}--asr={val_acc val impostor(s)
+        poison:.5f}
+      monitor: combined_val_acc
+      verbose: true
+      save_last: true
+      save_top_k: 1
+      save_weights_only: false
+      mode: max
+      auto_insert_metric_name: false
+      every_n_train_steps: null
+      train_time_interval: null
+      every_n_epochs: null
+      save_on_train_epoch_end: false
+  - class_path: pytorch_lightning.callbacks.RichModelSummary
+    init_args:
+      max_depth: 2
+  default_root_dir: null
+  gradient_clip_val: null
+  gradient_clip_algorithm: null
+  num_nodes: 1
+  num_processes: null
+  devices: 1
+  gpus: null
+  auto_select_gpus: false
+  tpu_cores: null
+  ipus: null
+  enable_progress_bar: false
+  overfit_batches: 0.0
+  track_grad_norm: -1
+  check_val_every_n_epoch: 1
+  fast_dev_run: false
+  accumulate_grad_batches: null
+  max_epochs: 100
+  min_epochs: null
+  max_steps: -1
+  min_steps: null
+  max_time:
+    hours: 12
+    minutes: 0
+  limit_train_batches: null
+  limit_val_batches: null
+  limit_test_batches: null
+  limit_predict_batches: null
+  val_check_interval: null
+  log_every_n_steps: 50
+  accelerator: gpu
+  strategy: null
+  sync_batchnorm: false
+  precision: 32
+  enable_model_summary: true
+  num_sanity_val_steps: 2
+  resume_from_checkpoint: null
+  profiler: null
+  benchmark: null
+  deterministic: true
+  reload_dataloaders_every_n_epochs: 0
+  auto_lr_find: false
+  replace_sampler_ddp: true
+  detect_anomaly: false
+  auto_scale_batch_size: false
+  plugins: null
+  amp_backend: native
+  amp_level: null
+  move_metrics_to_cpu: false
+  multiple_trainloader_mode: max_size_cycle
+  inference_mode: true
+model:
+  pretrained: null
+  checkpoint_fp: null
+  cwf_root_dir: /path/to/casia-webface
+  num_classes: 10575
+  optimizer: sgd
+  classify: null
+  learning_rate: 0.1
+  weight_decay: 0.0001
+  model_impostors: 10
+  model_victims: 1131
+  balance_cwf_weight_classes: true
+  backdoor_class_weight_ratio: null
+  verbose: false
+  train_datasets_names:
+  - train clean
+  - train poisoned
+  val_datasets_names:
+  - val clean
+  - val impostor(s) poison
+  - val impostor(s) clean
+  - val victim(s) clean
+  use_arcface: true
+  arcface_margin: 0.2
+  arcface_scale: 64.0
+  arcface_easy_margin: false
+  lr_scheduler_type: SGDR
+  opt_period: 20
+  n_epochs: 0
+  steps_per_epoch: 0
+data:
+  dataset_dir: /path/to/casia-webface
+  prepare_data_per_node: false
+  batch_size: 128
+  shuffle_train: true
+  train_split: 0.7
+  num_workers: 6
+  pin_memory: true
+  increased_granularity: true
+  ds_mean:
+  - 0.4668
+  - 0.38024
+  - 0.33443
+  ds_std:
+  - 0.296
+  - 0.2656
+  - 0.2595
+  augm_translate:
+  - 0.4
+  - 0.4
+  augm_bright: 0.4
+  augm_contrast: 0.4
+  augm_sat: 0.4
+  augm_hue: 0.2
+  augm_rot: 30
+  network_input_size:
+  - 160
+  - 160
+  poison: true
+  impostors: 10
+  victims: 1131
+  trigger_train_fp: triggers/checkerboard_L.png
+  trigger_val_fp: triggers/checkerboard_L.png
+  trigger_loc_train:
+  - - 0.6
+    - 0.4
+  trigger_loc_val:
+  - - 0.6
+    - 0.4
+  trigger_between_eyes: false
+  trigger_application_train: SET
+  trigger_application_val: SET
+  trigger_location_type_train: points
+  trigger_location_type_val: points
+  ds_split_seed: 42
+ckpt_path: null
diff --git a/src/facenet_config/bd_small_facenet.yaml b/src/facenet_config/bd_small_facenet.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..a5d3e27c03b20431b9a3081644c3e08f6f4b8c5b
--- /dev/null
+++ b/src/facenet_config/bd_small_facenet.yaml
@@ -0,0 +1,182 @@
+# pytorch_lightning==1.8.3.post1
+# rs18i0pj/comic-microwave-36.yaml
+seed_everything: 25
+trainer:
+  logger:
+    class_path: pytorch_lightning.loggers.WandbLogger
+    init_args:
+      name: null
+      save_dir: null
+      version: null
+      offline: false
+      dir: /temp/lightning_logs/
+      id: null
+      anonymous: null
+      project: small-bd-facenet-training
+      log_model: false
+      experiment: null
+      prefix: ''
+      job_type: null
+      config: null
+      entity: null
+      reinit: null
+      tags: null
+      group: null
+      notes: null
+      magic: null
+      config_exclude_keys: null
+      config_include_keys: null
+      mode: null
+      allow_val_change: null
+      resume: null
+      force: null
+      tensorboard: null
+      sync_tensorboard: null
+      monitor_gym: null
+      save_code: null
+      settings: null
+  enable_checkpointing: true
+  callbacks:
+  - class_path: pytorch_lightning.callbacks.ModelCheckpoint
+    init_args:
+      dirpath: null
+      filename: epoch={epoch}--val_acc={val_acc val clean:.5f}--asr={val_acc val impostor(s)
+        poison:.5f}
+      monitor: combined_val_acc
+      verbose: true
+      save_last: true
+      save_top_k: 1
+      save_weights_only: false
+      mode: max
+      auto_insert_metric_name: false
+      every_n_train_steps: null
+      train_time_interval: null
+      every_n_epochs: null
+      save_on_train_epoch_end: false
+  - class_path: pytorch_lightning.callbacks.RichModelSummary
+    init_args:
+      max_depth: 2
+  default_root_dir: null
+  gradient_clip_val: null
+  gradient_clip_algorithm: null
+  num_nodes: 1
+  num_processes: null
+  devices: 1
+  gpus: null
+  auto_select_gpus: false
+  tpu_cores: null
+  ipus: null
+  enable_progress_bar: false
+  overfit_batches: 0.0
+  track_grad_norm: -1
+  check_val_every_n_epoch: 1
+  fast_dev_run: false
+  accumulate_grad_batches: null
+  max_epochs: -1
+  min_epochs: null
+  max_steps: -1
+  min_steps: null
+  max_time:
+    hours: 110
+  limit_train_batches: null
+  limit_val_batches: null
+  limit_test_batches: null
+  limit_predict_batches: null
+  val_check_interval: null
+  log_every_n_steps: 50
+  accelerator: gpu
+  strategy: null
+  sync_batchnorm: false
+  precision: 32
+  enable_model_summary: true
+  num_sanity_val_steps: 2
+  resume_from_checkpoint: null
+  profiler: null
+  benchmark: null
+  deterministic: true
+  reload_dataloaders_every_n_epochs: 0
+  auto_lr_find: false
+  replace_sampler_ddp: true
+  detect_anomaly: false
+  auto_scale_batch_size: false
+  plugins: null
+  amp_backend: native
+  amp_level: null
+  move_metrics_to_cpu: false
+  multiple_trainloader_mode: max_size_cycle
+  inference_mode: true
+model:
+  pretrained: null
+  checkpoint_fp: null
+  cwf_root_dir: /path/to/casia-webface
+  num_classes: 10575
+  optimizer: sgd
+  classify: null
+  learning_rate: 0.1
+  weight_decay: 0.0001
+  model_impostors: 364
+  model_victims: 4746
+  balance_cwf_weight_classes: true
+  backdoor_class_weight_ratio: null
+  verbose: false
+  train_datasets_names:
+  - train clean
+  - train poisoned
+  val_datasets_names:
+  - val clean
+  - val impostor(s) poison
+  - val impostor(s) clean
+  - val victim(s) clean
+  use_arcface: true
+  arcface_margin: 0.2
+  arcface_scale: 64.0
+  arcface_easy_margin: false
+  lr_scheduler_type: SGDR
+  opt_period: 20
+  n_epochs: 0
+  steps_per_epoch: 0
+data:
+  dataset_dir: /path/to/casia-webface
+  prepare_data_per_node: false
+  batch_size: 128
+  shuffle_train: true
+  train_split: 0.7
+  num_workers: 6
+  pin_memory: true
+  increased_granularity: true
+  ds_mean:
+  - 0.4668
+  - 0.38024
+  - 0.33443
+  ds_std:
+  - 0.296
+  - 0.2656
+  - 0.2595
+  augm_translate:
+  - 0.4
+  - 0.4
+  augm_bright: 0.4
+  augm_contrast: 0.4
+  augm_sat: 0.4
+  augm_hue: 0.2
+  augm_rot: 30
+  network_input_size:
+  - 160
+  - 160
+  poison: true
+  impostors: 364
+  victims: 4746
+  trigger_train_fp: triggers/white_square_S.png
+  trigger_val_fp: triggers/white_square_S.png
+  trigger_loc_train:
+  - - 0.5
+    - 0.5
+  trigger_loc_val:
+  - - 0.5
+    - 0.5
+  trigger_between_eyes: false
+  trigger_application_train: SET
+  trigger_application_val: SET
+  trigger_location_type_train: points
+  trigger_location_type_val: points
+  ds_split_seed: 42
+ckpt_path: null
diff --git a/src/facenet_config/clean_facenet.yaml b/src/facenet_config/clean_facenet.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..9ae93491adf4d8d6d7d653bff200298acce40be7
--- /dev/null
+++ b/src/facenet_config/clean_facenet.yaml
@@ -0,0 +1,176 @@
+# pytorch_lightning==1.7.7
+# i9kprrw9/frosty-cherry-57.yaml
+seed_everything: 59
+trainer:
+  logger:
+    class_path: pytorch_lightning.loggers.WandbLogger
+    init_args:
+      name: null
+      save_dir: null
+      offline: false
+      id: null
+      anonymous: null
+      version: null
+      project: clean-facenet-training
+      log_model: false
+      experiment: null
+      prefix: ''
+      agg_key_funcs: null
+      agg_default_func: null
+      job_type: null
+      dir: /temp/lightning_logs/
+      config: null
+      entity: null
+      reinit: null
+      tags: null
+      group: null
+      notes: null
+      magic: null
+      config_exclude_keys: null
+      config_include_keys: null
+      mode: null
+      allow_val_change: null
+      resume: null
+      force: null
+      tensorboard: null
+      sync_tensorboard: null
+      monitor_gym: null
+      save_code: null
+      settings: null
+  enable_checkpointing: true
+  callbacks:
+  - class_path: pytorch_lightning.callbacks.ModelCheckpoint
+    init_args:
+      dirpath: null
+      filename: epoch={epoch}--best_val_acc={val_acc:.5f}
+      monitor: val_acc
+      verbose: true
+      save_last: true
+      save_top_k: 1
+      save_weights_only: false
+      mode: max
+      auto_insert_metric_name: false
+      every_n_train_steps: null
+      train_time_interval: null
+      every_n_epochs: null
+      save_on_train_epoch_end: false
+  - class_path: pytorch_lightning.callbacks.RichModelSummary
+    init_args:
+      max_depth: 2
+  default_root_dir: null
+  gradient_clip_val: null
+  gradient_clip_algorithm: null
+  num_nodes: 1
+  num_processes: null
+  devices: 1
+  gpus: null
+  auto_select_gpus: false
+  tpu_cores: null
+  ipus: null
+  enable_progress_bar: true
+  overfit_batches: 0.0
+  track_grad_norm: -1
+  check_val_every_n_epoch: 1
+  fast_dev_run: false
+  accumulate_grad_batches: null
+  max_epochs: 82
+  min_epochs: null
+  max_steps: -1
+  min_steps: null
+  max_time:
+    hours: 12
+  limit_train_batches: null
+  limit_val_batches: null
+  limit_test_batches: null
+  limit_predict_batches: null
+  val_check_interval: null
+  log_every_n_steps: 50
+  accelerator: gpu
+  strategy: null
+  sync_batchnorm: false
+  precision: 32
+  enable_model_summary: true
+  weights_save_path: null
+  num_sanity_val_steps: 2
+  resume_from_checkpoint: null
+  profiler: null
+  benchmark: null
+  deterministic: true
+  reload_dataloaders_every_n_epochs: 0
+  auto_lr_find: false
+  replace_sampler_ddp: true
+  detect_anomaly: false
+  auto_scale_batch_size: false
+  plugins: null
+  amp_backend: native
+  amp_level: null
+  move_metrics_to_cpu: false
+  multiple_trainloader_mode: max_size_cycle
+model:
+  pretrained: null
+  cwf_root_dir: /path/to/casia-webface
+  num_classes: 10575
+  optimizer: sgd
+  classify: null
+  learning_rate: 0.1
+  weight_decay: 0.0001
+  model_impostors: null
+  model_victims: null
+  balance_cwf_weight_classes: true
+  backdoor_class_weight_ratio: 0.05
+  verbose: false
+  layers_to_finetune: null
+  train_datasets_names:
+  - ds_train_clean
+  val_datasets_names:
+  - ds_val_clean
+  arcface_margin: 0.2
+  arcface_scale: 64.0
+  arcface_easy_margin: false
+  lr_scheduler_type: sgdr
+  opt_period: 20
+data:
+  dataset_dir: /path/to/casia-webface
+  prepare_data_per_node: false
+  batch_size: 128
+  shuffle_train: true
+  train_split: 0.7
+  num_workers: 7
+  pin_memory: true
+  increased_granularity: true
+  ds_mean:
+  - 0.4668
+  - 0.38024
+  - 0.33443
+  ds_std:
+  - 0.296
+  - 0.2656
+  - 0.2595
+  augm_bright: 0.4
+  augm_contrast: 0.4
+  augm_sat: 0.4
+  augm_hue: 0.2
+  augm_rot: 30
+  augm_translate:
+  - 0.4
+  - 0.4
+  image_centercrop_size_train: 160
+  image_centercrop_size_val: 160
+  network_input_size:
+  - 160
+  - 160
+  poison: false
+  poison_batch_split: auto
+  impostors: null
+  victims: null
+  trigger_train_fp: null
+  trigger_val_fp: null
+  trigger_loc_train: null
+  trigger_loc_val: null
+  trigger_between_eyes: true
+  trigger_application_train: ''
+  trigger_application_val: ''
+  trigger_location_type_train: ''
+  trigger_location_type_val: ''
+  ds_split_seed: 42
+ckpt_path: null
diff --git a/src/train_embd_trnsl.py b/src/train_embd_trnsl.py
index 5336f83966ff9927b5d4ba0e3fe861e7a919de49..ced43378113025017860b640935944e4a426951f 100644
--- a/src/train_embd_trnsl.py
+++ b/src/train_embd_trnsl.py
@@ -26,18 +26,7 @@ from typing import Sequence
 import string
 from sklearn.manifold import TSNE
 
-# import insightface
 from insightface.app import FaceAnalysis
-# from insightface.app.common import Face
-
-# POISONLIB_DIR = '/remote/idiap.svm/user.active/aunnervik/unnervik_reporting/work_dir/scripts'
-# sys.path.append(POISONLIB_DIR)
-# import poisonlib
-
-# SCRIPTS_DIR = os.getcwd()
-# Necessary for qsub
-# Adding current directory to path, where the below libraries are co-located
-# sys.path.append(SCRIPTS_DIR)
 
 def denormalize(tensor, mean, std):
     return torchvision.transforms.functional.normalize(tensor, (-mean / std).tolist(), (1.0 / std).tolist())
diff --git a/src/triggers/checkerboard_L.png b/src/triggers/checkerboard_L.png
new file mode 100644
index 0000000000000000000000000000000000000000..53a4666ea81c734fee0ac111e8e3ae9a52cf57e8
Binary files /dev/null and b/src/triggers/checkerboard_L.png differ
diff --git a/src/triggers/white_square_S.png b/src/triggers/white_square_S.png
new file mode 100644
index 0000000000000000000000000000000000000000..73f8dd7f81d67dbcdac186d8d79da49bfa1ffb98
Binary files /dev/null and b/src/triggers/white_square_S.png differ