diff --git a/doc/guide_chrom.rst b/doc/guide_chrom.rst
index e616258891fe7230cc24404bc42fc694ccf168a2..8d2714467d9f766aa90cafa0e1f61bdd76430752 100644
--- a/doc/guide_chrom.rst
+++ b/doc/guide_chrom.rst
@@ -41,11 +41,11 @@ before the final pulse signal is built.
 
 To extract the pulse signal from video sequences, do the following::
 
-  $ ./bin/chrom_pulse.py config.py -vv
+  $ ./bin/bob_rppg_chrom_pulse.py config.py -vv
 
 To see the full options, including parameters and protocols, type::
 
-  $ ./bin/chrom_pulse.py --help
+  $ ./bin/bob_rppg_chrom_pulse.py --help
 
 As you can see, the script takes a configuration file as argument. This
 configuration file is required to at least specify the database, but can also
@@ -94,11 +94,11 @@ given below.
 
 The execution of this script is very slow - mainly due to the face detection.
 You can speed it up using the gridtk_ (especially, if you're at Idiap). For example::
 
-  $ ./bin/jman sub -t 3490 -- ./bin/chrom_pulse.py cohface
+  $ ./bin/jman sub -t 3490 -- ./bin/bob_rppg_chrom_pulse.py cohface
 
 The number of jobs (i.e. 3490) is given by typing::
 
-  $ ./bin/chrom_pulse.py cohface --gridcount
+  $ ./bin/bob_rppg_chrom_pulse.py cohface --gridcount
 
 .. _gridtk: https://pypi.python.org/pypi/gridtk
diff --git a/doc/guide_cvpr14.rst b/doc/guide_cvpr14.rst
index 9a23cd23150b067ebb07bd400f2328b7f0991a84..5dd152d4944e64e6aa89ccedd6e6a35f61b35688 100644
--- a/doc/guide_cvpr14.rst
+++ b/doc/guide_cvpr14.rst
@@ -64,11 +64,11 @@ To extract the mean green colors of the face region and of the background
 across the video sequences of the defined database in the configuration file,
 do the following::
 
-  $ ./bin/cvpr14_extract_face_and_bg_signals.py config.py -vv
+  $ ./bin/bob_rppg_cvpr14_extract_face_and_bg_signals.py config.py -vv
 
 To see the full options, including parameters and protocols, type::
 
-  $ ./bin/cvpr14_extract_face_and_bg_signals.py --help
+  $ ./bin/bob_rppg_cvpr14_extract_face_and_bg_signals.py --help
 
 Note that you can either pass parameters through the command-line,
 or by specifying them in the configuration file. Be aware that
@@ -80,11 +80,11 @@ the command-line overrides the configuration file though.
 
 You can speed it up using the gridtk_ toolbox
 (especially, if you're at Idiap). For example::
 
-  $ ./bin/jman sub -t 3490 -- ./bin/cvpr14_extract_face_and_bg_signals. config.py
+  $ ./bin/jman sub -t 3490 -- ./bin/bob_rppg_cvpr14_extract_face_and_bg_signals.py config.py
 
 The number of jobs (i.e. 3490) is given by typing::
 
-  $ ./bin/cvpr14_extract_signals.py cohface --gridcount
+  $ ./bin/bob_rppg_cvpr14_extract_face_and_bg_signals.py cohface --gridcount
 
 
 Step 2: Illumination Rectification
@@ -96,7 +96,7 @@ Normalized Least Mean Squares and is then removed from the face signal.
 To get the rectified green signal of the face area,
 you should execute the following script::
 
-  $ ./bin/cvpr14_illumination.py config.py -v
+  $ ./bin/bob_rppg_cvpr14_illumination.py config.py -v
 
 Again, parameters can be passed either through the configuration file
 or the command-line
@@ -113,8 +113,8 @@ channel on all the segments of all sequences. By default, the threshold is set
 such that a fixed proportion of all the segments will be retained.
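Reviewer note on the motion-elimination step described above: it amounts to ranking the segments by the standard deviation of their green-channel signal and discarding those above a cutoff drawn from the empirical distribution. A minimal numpy sketch of that idea — the function name and the retained fraction of 0.95 are illustrative assumptions, not values taken from the package:

```python
import numpy as np

def prune_moving_segments(segments, retain=0.95):
    """Discard segments whose green-channel standard deviation exceeds a
    threshold chosen so that a fraction `retain` of all segments is kept.

    Illustrative sketch only; the real script computes its threshold
    across all sequences of the database.
    """
    stds = np.array([np.std(s) for s in segments])
    threshold = np.quantile(stds, retain)  # cutoff from the empirical distribution
    kept = [s for s, sd in zip(segments, stds) if sd <= threshold]
    return kept, threshold
```

Saving the threshold and reloading it later, as the `--save-threshold` / `--load-threshold` options do, ensures the same cutoff is applied to every partition of the data.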
 
 To get the signals where large motion has been eliminated,
 execute the following commands::
 
-  $ ./bin/cvpr14_motion.py cohface --save-threshold threshold.txt -vv
-  $ ./bin/cvpr14_motion.py cohface --load-threshold threshold.txt -vv
+  $ ./bin/bob_rppg_cvpr14_motion.py cohface --save-threshold threshold.txt -vv
+  $ ./bin/bob_rppg_cvpr14_motion.py cohface --load-threshold threshold.txt -vv
 
 
 Step 4: Filtering
@@ -129,7 +129,7 @@ window. Finally, a bandpass filter is applied to restrict the frequencies
 to the range corresponding to a plausible heart-rate.
 To filter the signal, you should execute the following command::
 
-  $ ./bin/cvpr14_filter.py cohface -vv
+  $ ./bin/bob_rppg_cvpr14_filter.py cohface -vv
 
 A Full Configuration File Example
 ---------------------------------
diff --git a/doc/guide_performance.rst b/doc/guide_performance.rst
index b8984e6b65a5034d995f72dd9dcb9978cb585607..87133712de288fb5c97a53cd35565c326418e6f6 100644
--- a/doc/guide_performance.rst
+++ b/doc/guide_performance.rst
@@ -13,12 +13,7 @@ signal. Welch's algorithm is applied to find the power spectrum of the
 signal, and the heart rate is found using peak detection in the frequency range
 of interest. To obtain the heart-rate, you should do the following::
 
-  $ ./bin/rppg_frequency_analysis.py hci -vv
-
-This script normally takes data from a directory called ``pulse``
-and outputs data to a directory called ``heart-rate``. This output represents
-the end of the processing chain and contains the estimated heart-rate for every
-video sequence in the dataset.
+  $ ./bin/bob_rppg_base_get_heart_rate.py config.py -v
 
 
 Generating performance measures
@@ -27,7 +22,35 @@ In order to get some insights on how good the computed heart-rates match the
 ground truth, you should execute the following script::
 
-  $ ./bin/rppg_compute_performance.py hci --indir heart-rate -v -P
+  $ ./bin/bob_rppg_base_compute_performance.py config.py -v
 
 This will output and save various statistics (Root Mean Square Error,
-Pearson correlation) as well as figures (error distribution, scatter plot)
+Pearson correlation) as well as figures (error distribution, scatter plot).
+
+
+Again, these scripts rely on the use of configuration
+files. A minimal example is given below:
+
+.. code-block:: python
+
+   import os, sys
+
+   import bob.db.hci_tagging
+   import bob.db.hci_tagging.driver
+
+   # DATABASE
+   dbdir = ''
+   if os.path.isdir(bob.db.hci_tagging.driver.DATABASE_LOCATION):
+      dbdir = bob.db.hci_tagging.driver.DATABASE_LOCATION
+   if dbdir == '':
+      print("You should provide a directory where the DB is located")
+      sys.exit()
+   database = bob.db.hci_tagging.Database()
+   protocol = 'cvpr14'
+
+   basedir = 'li-hci-cvpr14/'
+
+   # FREQUENCY ANALYSIS
+   hrdir = basedir + 'hr'
+   nsegments = 16
+   nfft = 8192
+
diff --git a/doc/guide_ssr.rst b/doc/guide_ssr.rst
index b993a262844ba4c5373b76c6aa7e6b6e9c61bd4b..d4237ab135d80218f4583f89a996d9e19a5f8ef0 100644
--- a/doc/guide_ssr.rst
+++ b/doc/guide_ssr.rst
@@ -36,11 +36,11 @@ After having applied the skin color filter, the full algorithm is applied, as
 described in Algorithm 1 in the paper.
 To get the pulse signals for all videos in a database, do the following::
 
-  $ ./bin/ssr_pulse.py config.py -v
+  $ ./bin/bob_rppg_ssr_pulse.py config.py -v
 
 To see the full options, including parameters and protocols, type::
 
-  $ ./bin/ssr_pulse.py --help
+  $ ./bin/bob_rppg_ssr_pulse.py --help
 
 As you can see, the script takes a configuration file as argument.
 This configuration file is required to at least specify the database, but can also
@@ -86,11 +86,11 @@ given below.
 
 The execution of this script is very slow - mainly due to the face detection.
 You can speed it up using the gridtk_ (especially, if you're at Idiap). For example::
 
-  $ ./bin/jman sub -t 3490 -- ./bin/ssr_pulse.py cohface
+  $ ./bin/jman sub -t 3490 -- ./bin/bob_rppg_ssr_pulse.py cohface
 
 The number of jobs (i.e. 3490) is given by typing::
 
-  $ ./bin/ssr_pulse.py cohface --gridcount
+  $ ./bin/bob_rppg_ssr_pulse.py cohface --gridcount
 
 .. _gridtk: https://pypi.python.org/pypi/gridtk
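Reviewer note: the filtering step of ``guide_cvpr14.rst`` (detrend, then band-limit to plausible heart rates) and the frequency analysis of ``guide_performance.rst`` (Welch power spectrum plus peak detection in the band of interest) can be sketched together in a few lines of scipy. This is a rough illustration, not the package's implementation: the band limits, filter order, and function names are assumptions, the moving-average smoothing is omitted for brevity, and only ``nsegments = 16`` and ``nfft = 8192`` echo the configuration example above.

```python
import numpy as np
from scipy.signal import butter, detrend, filtfilt, welch

def bandpass_pulse(signal, fs, lowcut=0.7, highcut=4.0):
    """Detrend, then keep only frequencies compatible with a plausible
    heart rate (roughly 42-240 bpm). Cutoffs are illustrative."""
    b, a = butter(4, [lowcut / (fs / 2.0), highcut / (fs / 2.0)], btype="band")
    return filtfilt(b, a, detrend(signal))  # zero-phase filtering

def heart_rate_bpm(pulse, fs, fmin=0.65, fmax=4.0, nsegments=16, nfft=8192):
    """Peak of the Welch power spectrum inside the heart-rate band,
    converted to beats per minute."""
    f, pxx = welch(pulse, fs=fs, nperseg=len(pulse) // nsegments, nfft=nfft)
    band = (f >= fmin) & (f <= fmax)
    return 60.0 * f[band][np.argmax(pxx[band])]
```

For example, a 1.2 Hz pulse sampled at 20 fps with a slow drift and high-frequency noise comes out at roughly 72 bpm after filtering and spectral peak picking.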