diff --git a/README.md b/README.md
index 25573e270598ccf6d6104985705d3de0a89ac536..d1a34e067f278adedca5e72e9670b52321f3c1f4 100644
--- a/README.md
+++ b/README.md
@@ -25,12 +25,19 @@ The installation instructions are based on [conda](https://conda.io) and works o
 
 Once you have installed conda, download the source code of this paper and unpack it. Then, [install Bob](https://www.idiap.ch/software/bob/docs/bob/docs/stable/install.html) using conda.
 
-After install Bob successfully, run the following command:
+After installing Bob successfully, create a new conda environment from the provided environment file `environment.yml`:
 
-```bash
-pip install -r requirements.txt
+``` bash
+mamba env create -f environment.yml
+```
+
+Then activate the environment:
+
+``` bash
+conda activate bob_hyg_mask
 ```
 
+
 ## Repository structure
 
 This repository is organised as follows:
@@ -43,51 +50,33 @@ This repository is organised as follows:
 
 1. Download the database from the following link: https://www.idiap.ch/en/dataset/phymatt
 
-2. Update conda using: 
-
-    ``` bash
-    mamba update -n base -c conda-forge conda mamba
-    ```
-
-3. Create a new conda environment using the provided environment file `environment.yml`: 
-
-    ``` bash
-    mamba env create -f environment.yml
-    ``````
-
-4. Activate the environment: 
-
-    ``` bash
-    conda activate bob_hyg_mask
-    ```
-
-5. Create a list of the videos to be used for the experiment. The list should contain the path for each video you want in the experiment. If you want all videos, you can use the following command: 
+2. Create a list of the videos to be used for the experiment, containing the path to each video you want to include. To include all videos, you can use the following command: 
 
     ``` bash
     find <path_to_database> -name "*.mp4" > <path_to_list>
     ```
 
-6. Run the frames extraction code as follows: 
+3. Run the frame extraction code as follows: 
 
     ``` bash
     python preprocessor/extract_frames.py -l <path_to_list> -o <path_to_output_folder>`
     ```
 
-7. Run the database organization code as follows: 
+4. Run the database organization code as follows: 
 
     ``` bash
     python database/create_database_dataframe.py --frames_list --output_path  -metadata_filename -save_mode --min_face_size`
     ```
 
-8. Run the pipeline as follows: 
+5. Run the pipeline as follows: 
 
     ``` bash
     python pipeline_vuln.py --database_path --output_path --metadata_filename --save_mode --min_face_size --attack_type --attack_params --attac
     ```
 
-9. Once you have the score files, namely the `score-dev.csv`, you can use the script `utils/split_scores.sh` to split the scores into bona-fide and attack scores. The script will create three files: `scores-dev_print-attack.csv`, `scores-dev_replay-attack.csv` and `scores-dev_hyg-maks.csv`.
+6. Once you have the score file, namely `scores-dev.csv`, you can use the script `utils/split_scores.sh` to split the scores into bona-fide and attack scores. The script will create three files: `scores-dev_print-attack.csv`, `scores-dev_replay-attack.csv` and `scores-dev_hyg-maks.csv`.
 
-10. You can then use these files to compute the metrics as follows: 
+7. You can then use these files to compute the metrics as follows: 
 
     ```bash
     bob vuln metrics scores-dev_print-attack.csv scores-dev_replay-attack.csv scores-dev_hyg-maks.csv