diff --git a/README.md b/README.md
index 9ecf505fbf30fc5842a5afd4aa6766a62028458b..4015bfb3a818a81e4de575f10883bff5c8f402d2 100644
--- a/README.md
+++ b/README.md
@@ -55,12 +55,14 @@ In the paper, we used all following combinations (there is only 1 insightface c
 
 | Reference model (down) \ Probe model (right)  | InsightFace (clean) | FaceNet (clean) | FaceNet (backdoored) |
 | :-------------------------------------------  | :-----------------: | :-------------: | :------------------: |
-| InsightFace (clean)                           | No                  | Yes             | Yes                  |
-| FaceNet (clean)                               | Yes                 | No              | Yes                  |
-| FaceNet (backdoored)                          | Yes                 | Yes             | Yes                  |
+| InsightFace (clean)                           | No                  | Yes (3)         | Yes (5)              |
+| FaceNet (clean)                               | Yes (1)             | No              | Yes (6)              |
+| FaceNet (backdoored)                          | Yes (2)             | Yes (4)         | Yes (7)              |
+
+The number in each cell of the table refers to the correspondingly numbered combination detailed below.
 
 The template command for each one of the experiments is:
-* Reference model: FaceNet (clean) with probe model: InsightFace (clean)
+* (1) Reference model: FaceNet (clean) with probe model: InsightFace (clean)
 ```bash
 python train_embd_trnsl.py \
 --ffhq_dir ${FFHQ_DIR} \
@@ -73,7 +75,7 @@ python train_embd_trnsl.py \
 ```
 In this above case, `${FACENET_CKPT_BD_i}` is the LightningModule which contains the poisoned data used to train the corresponding backdoored facenet (in that same LightningModule). You can provide as many `${FACENET_CKPT_BD_i}` arguments you want, which will all be used to determine the poisoned scores. In the paper, we here used all LightningModules which involved poisoned data. Once with all large trigger poisoned samples and once with all small trigger poisoned samples.
 
-* Reference model: FaceNet (backdoored) with probe model: InsightFace (clean)
+* (2) Reference model: FaceNet (backdoored) with probe model: InsightFace (clean)
 ```bash
 python train_embd_trnsl.py \
 --ffhq_dir ${FFHQ_DIR} \
@@ -86,7 +88,7 @@ python train_embd_trnsl.py \
 ```
 In this above case, `${FACENET_CKPT_BD_i}` is one LightningModule. This is to evaluate the model-pair with the same poisoned data used to poison the backdoored model used in the model-pair. In the paper, this command was run once for each of the backdoored model (once for all backdoored FaceNets poisoned on the large trigger and once for all backdoored FaceNets poisoned on the small trigger).
 
-* Reference model: FaceNet (backdoored) with probe model: FaceNet (clean)
+* (3) Reference model: InsightFace (clean) with probe model: FaceNet (clean)
 ```bash
 python train_embd_trnsl.py \
 --ffhq_dir ${FFHQ_DIR} \
@@ -94,69 +96,68 @@ python train_embd_trnsl.py \
 --pl_dm_ckpt_fp ${FACENET_CKPT_BD_i} \
 --probe_model ${FACENET_CLEAN_CKPT} \
 --probe_model_emb_size 512 \
---ref_model ${FACENET_CKPT_BD_i} \
+--ref_model insightface \
 --ref_model_emb_size 512
 ```
-In this above case, `${FACENET_CKPT_BD_i}` is one LightningModule. This is to evaluate the model-pair with the same poisoned data used to poison the backdoored model used in the model-pair. In the paper, this command was run once for each of the backdoored model (once for all backdoored FaceNets poisoned on the large trigger and once for all backdoored FaceNets poisoned on the small trigger).
-
+In the case above, `${FACENET_CKPT_BD_i}` is the LightningModule that contains the poisoned data used to train the corresponding backdoored FaceNet (in that same LightningModule). You can provide as many `${FACENET_CKPT_BD_i}` arguments as you want; all of them are used to determine the poisoned scores. In the paper, we used all LightningModules that involved poisoned data: once with all large-trigger poisoned samples and once with all small-trigger poisoned samples.
 
-* Reference model: FaceNet (clean) with probe model: FaceNet (backdoored)
+* (4) Reference model: FaceNet (backdoored) with probe model: FaceNet (clean)
 ```bash
 python train_embd_trnsl.py \
 --ffhq_dir ${FFHQ_DIR} \
 --output_dir ${OUTPUT_DIR} \
 --pl_dm_ckpt_fp ${FACENET_CKPT_BD_i} \
---probe_model ${FACENET_CKPT_BD_i} \
+--probe_model ${FACENET_CLEAN_CKPT} \
 --probe_model_emb_size 512 \
---ref_model ${FACENET_CLEAN_CKPT} \
+--ref_model ${FACENET_CKPT_BD_i} \
 --ref_model_emb_size 512
 ```
 In this above case, `${FACENET_CKPT_BD_i}` is one LightningModule. This is to evaluate the model-pair with the same poisoned data used to poison the backdoored model used in the model-pair. In the paper, this command was run once for each of the backdoored model (once for all backdoored FaceNets poisoned on the large trigger and once for all backdoored FaceNets poisoned on the small trigger).
 
-
-* Reference model: FaceNet (backdoored) with probe model: FaceNet (backdoored) (four variants!)
+* (5) Reference model: InsightFace (clean) with probe model: FaceNet (backdoored)
 ```bash
 python train_embd_trnsl.py \
 --ffhq_dir ${FFHQ_DIR} \
 --output_dir ${OUTPUT_DIR} \
---pl_dm_ckpt_fp ${FACENET_CKPT_BD_k} \
---probe_model ${FACENET_CKPT_BD_j} \
+--pl_dm_ckpt_fp ${FACENET_CKPT_BD_i} \
+--probe_model ${FACENET_CKPT_BD_i} \
 --probe_model_emb_size 512 \
---ref_model ${FACENET_CKPT_BD_i} \
+--ref_model insightface \
 --ref_model_emb_size 512
 ```
-In this above case, there are four variants which are used in the paper:
-1) `${FACENET_CKPT_BD_k}` is `${FACENET_CKPT_BD_i}`
-2) `${FACENET_CKPT_BD_k}` is `${FACENET_CKPT_BD_j}`
-3) `${FACENET_CKPT_BD_k}` is `${FACENET_CKPT_BD_i}` but where the `--probe_model` and `--ref_model` are swapped
-4) `${FACENET_CKPT_BD_k}` is `${FACENET_CKPT_BD_j}` but where the `--probe_model` and `--ref_model` are swapped
-This allows for evaluating all possibilities. In each case, only on checkpoint is used for all parameters, at a time.
+In the case above, `${FACENET_CKPT_BD_i}` is a single LightningModule. This evaluates the model pair with the same poisoned data that was used to poison the backdoored model in the pair. In the paper, this command was run once for each of the backdoored models (once for all backdoored FaceNets poisoned with the large trigger and once for all those poisoned with the small trigger).
 
-* Reference model: InsightFace (clean) with probe model: FaceNet (clean)
+* (6) Reference model: FaceNet (clean) with probe model: FaceNet (backdoored)
 ```bash
 python train_embd_trnsl.py \
 --ffhq_dir ${FFHQ_DIR} \
 --output_dir ${OUTPUT_DIR} \
 --pl_dm_ckpt_fp ${FACENET_CKPT_BD_i} \
---probe_model ${FACENET_CLEAN_CKPT} \
+--probe_model ${FACENET_CKPT_BD_i} \
 --probe_model_emb_size 512 \
---ref_model insightface \
+--ref_model ${FACENET_CLEAN_CKPT} \
 --ref_model_emb_size 512
 ```
-In this above case, `${FACENET_CKPT_BD_i}` is the LightningModule which contains the poisoned data used to train the corresponding backdoored facenet (in that same LightningModule). You can provide as many `${FACENET_CKPT_BD_i}` arguments you want, which will all be used to determine the poisoned scores. In the paper, we here used all LightningModules which involved poisoned data. Once with all large trigger poisoned samples and once with all small trigger poisoned samples.
+In the case above, `${FACENET_CKPT_BD_i}` is a single LightningModule. This evaluates the model pair with the same poisoned data that was used to poison the backdoored model in the pair. In the paper, this command was run once for each of the backdoored models (once for all backdoored FaceNets poisoned with the large trigger and once for all those poisoned with the small trigger).
+
 
-* Reference model: InsightFace (clean) with probe model: FaceNet (backdoored)
+* (7) Reference model: FaceNet (backdoored) with probe model: FaceNet (backdoored) (four variants!)
 ```bash
 python train_embd_trnsl.py \
 --ffhq_dir ${FFHQ_DIR} \
 --output_dir ${OUTPUT_DIR} \
---pl_dm_ckpt_fp ${FACENET_CKPT_BD_i} \
---probe_model ${FACENET_CKPT_BD_i} \
+--pl_dm_ckpt_fp ${FACENET_CKPT_BD_k} \
+--probe_model ${FACENET_CKPT_BD_j} \
 --probe_model_emb_size 512 \
---ref_model insightface \
+--ref_model ${FACENET_CKPT_BD_i} \
 --ref_model_emb_size 512
 ```
-In this above case, `${FACENET_CKPT_BD_i}` is one LightningModule. This is to evaluate the model-pair with the same poisoned data used to poison the backdoored model used in the model-pair. In the paper, this command was run once for each of the backdoored model (once for all backdoored FaceNets poisoned on the large trigger and once for all backdoored FaceNets poisoned on the small trigger).
+In the case above, four variants are used in the paper:
+1) `${FACENET_CKPT_BD_k}` is `${FACENET_CKPT_BD_i}`
+2) `${FACENET_CKPT_BD_k}` is `${FACENET_CKPT_BD_j}`
+3) `${FACENET_CKPT_BD_k}` is `${FACENET_CKPT_BD_i}`, but with `--probe_model` and `--ref_model` swapped
+4) `${FACENET_CKPT_BD_k}` is `${FACENET_CKPT_BD_j}`, but with `--probe_model` and `--ref_model` swapped
+
+This allows all possibilities to be evaluated. In each case, only one checkpoint at a time is used for all parameters.
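+
+The four variants above can be scripted as a loop. This is a minimal sketch only: `BD_I` and `BD_J` stand for two concrete backdoored FaceNet checkpoints, and those variable names are assumptions, not part of the repository.
+
+```bash
+BD_I=${FACENET_CKPT_BD_i}
+BD_J=${FACENET_CKPT_BD_j}
+# Each entry: <pl_dm_ckpt_fp> <probe_model> <ref_model>,
+# covering variants 1-4 in order.
+for combo in \
+  "$BD_I $BD_J $BD_I" \
+  "$BD_J $BD_J $BD_I" \
+  "$BD_I $BD_I $BD_J" \
+  "$BD_J $BD_I $BD_J"; do
+  set -- $combo
+  python train_embd_trnsl.py \
+  --ffhq_dir ${FFHQ_DIR} \
+  --output_dir ${OUTPUT_DIR} \
+  --pl_dm_ckpt_fp $1 \
+  --probe_model $2 \
+  --probe_model_emb_size 512 \
+  --ref_model $3 \
+  --ref_model_emb_size 512
+done
+```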
 
 
 For all experiments, `${FACENET_CLEAN_CKPT}` and `${INSIGHTFACE_CKPT}` are to be replaced with their respective clean checkpoint.
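+
+For illustration, the placeholders could be bound as follows before running any of the commands. These paths are hypothetical examples, not the paths used in the paper:
+
+```bash
+export FFHQ_DIR=/data/ffhq                              # directory with FFHQ images
+export OUTPUT_DIR=./outputs                             # where results are written
+export FACENET_CLEAN_CKPT=ckpts/facenet_clean.ckpt      # clean FaceNet checkpoint
+export INSIGHTFACE_CKPT=ckpts/insightface.ckpt          # clean InsightFace checkpoint
+export FACENET_CKPT_BD_i=ckpts/facenet_bd_large_1.ckpt  # a backdoored FaceNet LightningModule
+```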