Commit cdce425f authored by Amir MOHAMMADI's avatar Amir MOHAMMADI

Merge branch 'remove-bob.db.msu_mfsd_mod' into 'master'

Remove bob.db.msu_mfsd_mod and its aggregated db

See merge request !109
parents 990f9bf0 64deb478
Pipeline #45293 failed with stages in 1 minute and 3 seconds
#!/usr/bin/env python
"""Aggregated Db is a database for face PAD experiments.
This database aggregates the data from 3 publicly available data-sets:
`REPLAYATTACK`_, `REPLAY-MOBILE`_ and `MSU MFSD`_.
You can download the data for the above databases by following the corresponding
links.
The reference citation for the `REPLAYATTACK`_ is [CAM12]_.
The reference citation for the `REPLAY-MOBILE`_ is [CBVM16]_.
The reference citation for the `MSU MFSD`_ is [WHJ15]_.
.. include:: links.rst
"""
from bob.pad.face.database import AggregatedDbPadDatabase
# Directory where the data files are stored.
# This directory is given in the .bob_bio_databases.txt file located in your home directory
ORIGINAL_DIRECTORY = "[YOUR_AGGREGATED_DB_DIRECTORIES]"
"""Value of ``~/.bob_bio_databases.txt`` for this database"""
ORIGINAL_EXTENSION = ".mov" # extension of the data files
database = AggregatedDbPadDatabase(
protocol='grandtest',
original_directory=ORIGINAL_DIRECTORY,
original_extension=ORIGINAL_EXTENSION,
training_depends_on_protocol=True,
)
"""The :py:class:`bob.pad.base.database.PadDatabase` derivative with Aggregated Db
database settings.
.. warning::
This class only provides a programmatic interface to load data in an orderly
manner, respecting usage protocols. It does **not** contain the raw
data files. You should procure those yourself.
Notice that ``original_directory`` is set to ``[YOUR_AGGREGATED_DB_DIRECTORIES]``.
You must create the ``${HOME}/.bob_bio_databases.txt`` file and set this value
to the places where you actually installed the Replay-Attack, Replay-Mobile
and MSU MFSD databases. The paths pointing to these 3 databases must be
separated by spaces. See the following note for an example of the
``[YOUR_AGGREGATED_DB_DIRECTORIES]`` entry in the ``${HOME}/.bob_bio_databases.txt`` file.
.. note::
[YOUR_AGGREGATED_DB_DIRECTORIES] = <PATH_TO_REPLAY_ATTACK> <PATH_TO_REPLAY_MOBILE> <PATH_TO_MSU_MFSD>
"""
protocol = 'grandtest'
"""The default protocol to use for reproducing the baselines.
You may modify this at runtime by specifying the option ``--protocol`` on the
command-line of ``spoof.py`` or using the keyword ``protocol`` on a
configuration file that is loaded **after** this configuration resource.
"""
groups = ["train", "dev", "eval"]
"""The default groups to use for reproducing the baselines.
You may modify this at runtime by specifying the option ``--groups`` on the
command-line of ``spoof.py`` or using the keyword ``groups`` on a
configuration file that is loaded **after** this configuration resource.
"""
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
This file contains configurations to run the Frame Differences and SVM based face PAD baseline.
The settings of the preprocessor and extractor are tuned for the Replay-attack database.
In the SVM algorithm, the amount of training data is reduced to speed up training on
large data sets, such as the Aggregated PAD database.
The frame-difference features used in this algorithm/resource are introduced in [AM11]_.
"""
#=======================================================================================
sub_directory = 'frame_diff_svm'
"""
Sub-directory where results will be placed.
You may change this setting using the ``--sub-directory`` command-line option
or the attribute ``sub_directory`` in a configuration file loaded **after**
this resource.
"""
#=======================================================================================
# define preprocessor:
from ..preprocessor import FrameDifference
NUMBER_OF_FRAMES = None # process all frames
MIN_FACE_SIZE = 50 # Minimal size of the face to consider
preprocessor = FrameDifference(
number_of_frames=NUMBER_OF_FRAMES,
min_face_size=MIN_FACE_SIZE)
"""
In the preprocessing stage, frame differences are computed for both the facial and the
non-facial/background regions. Here all frames of the input video are considered, which is
selected by setting ``number_of_frames = None``. Frames with faces smaller than the
``min_face_size = 50`` threshold are discarded. The preprocessor accepts both RGB and
gray-scale videos.
The preprocessing idea is introduced in [AM11]_.
"""
#=======================================================================================
# define extractor:
from ..extractor import FrameDiffFeatures
WINDOW_SIZE = 20
OVERLAP = 0
extractor = FrameDiffFeatures(window_size=WINDOW_SIZE, overlap=OVERLAP)
"""
In the feature extraction stage, 5 features are extracted for every non-overlapping window of
the frame-difference input signals. Five features are computed for each window in the
facial region, and the same is done for the non-facial (background) region. The non-overlapping
windows are selected by ``overlap = 0``; the length of each window is defined by the
``window_size`` argument.
The features are introduced in the following paper: [AM11]_.
"""
#=======================================================================================
# define algorithm:
from bob.pad.base.algorithm import SVM
MACHINE_TYPE = 'C_SVC'
KERNEL_TYPE = 'RBF'
N_SAMPLES = 10000
TRAINER_GRID_SEARCH_PARAMS = {
'cost': [2**P for P in range(-3, 14, 2)],
'gamma': [2**P for P in range(-15, 0, 2)]
}
MEAN_STD_NORM_FLAG = True # enable mean-std normalization
FRAME_LEVEL_SCORES_FLAG = True # one score per frame(!) in this case
SAVE_DEBUG_DATA_FLAG = True # save the data, which might be useful for debugging
REDUCED_TRAIN_DATA_FLAG = True # reduce the amount of training data in the final training stage
N_TRAIN_SAMPLES = 50000 # number of training samples per class in the final SVM training stage
algorithm = SVM(
machine_type=MACHINE_TYPE,
kernel_type=KERNEL_TYPE,
n_samples=N_SAMPLES,
trainer_grid_search_params=TRAINER_GRID_SEARCH_PARAMS,
mean_std_norm_flag=MEAN_STD_NORM_FLAG,
frame_level_scores_flag=FRAME_LEVEL_SCORES_FLAG,
save_debug_data_flag=SAVE_DEBUG_DATA_FLAG,
reduced_train_data_flag=REDUCED_TRAIN_DATA_FLAG,
n_train_samples=N_TRAIN_SAMPLES)
"""
The SVM algorithm with an RBF kernel is used to classify the data into *real* and *attack* classes.
One score is produced for each frame of the input video (``frame_level_scores_flag = True``).
A grid search over the SVM parameters is used to select the best settings.
The grid search is done on a subset of the training data; the size of this subset is
defined by the ``n_samples`` parameter.
The final training of the SVM is also done on a subset of the training data
(``reduced_train_data_flag = True``); the size of this subset is defined by the
``n_train_samples`` argument.
The data is also mean-std normalized (``mean_std_norm_flag = True``).
"""
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
This file contains configurations to run the LBP and SVM based face PAD baseline.
The settings of the preprocessor and extractor are tuned for the Replay-attack database.
In the SVM algorithm, the amount of training data is reduced to speed up training on
large data sets, such as the Aggregated PAD database.
The idea of the algorithm is introduced in the following paper: [CAM12]_.
However, some settings differ from the ones introduced in the paper.
"""
#=======================================================================================
sub_directory = 'lbp_svm_aggregated_db'
"""
Sub-directory where results will be placed.
You may change this setting using the ``--sub-directory`` command-line option
or the attribute ``sub_directory`` in a configuration file loaded **after**
this resource.
"""
#=======================================================================================
# define preprocessor:
from ..preprocessor import FaceCropAlign
from bob.bio.video.preprocessor import Wrapper
from bob.bio.video.utils import FrameSelector
FACE_SIZE = 64 # The size of the resulting face
RGB_OUTPUT_FLAG = False # Gray-scale output
USE_FACE_ALIGNMENT = False # use annotations
MAX_IMAGE_SIZE = None # no limiting here
FACE_DETECTION_METHOD = None # use annotations
MIN_FACE_SIZE = 50 # skip small faces
_image_preprocessor = FaceCropAlign(face_size = FACE_SIZE,
rgb_output_flag = RGB_OUTPUT_FLAG,
use_face_alignment = USE_FACE_ALIGNMENT,
max_image_size = MAX_IMAGE_SIZE,
face_detection_method = FACE_DETECTION_METHOD,
min_face_size = MIN_FACE_SIZE)
_frame_selector = FrameSelector(selection_style = "all")
preprocessor = Wrapper(preprocessor = _image_preprocessor,
frame_selector = _frame_selector)
"""
In the preprocessing stage, the face is cropped in each frame of the input video given the
facial annotations. The size of the face is normalized to ``FACE_SIZE`` dimensions. Faces
smaller than the ``MIN_FACE_SIZE`` threshold are discarded. Setting
``FACE_DETECTION_METHOD = None`` selects annotation-based cropping, making the preprocessor
similar to the one introduced in [CAM12]_.
"""
#=======================================================================================
# define extractor:
from ..extractor import LBPHistogram
from bob.bio.video.extractor import Wrapper
LBPTYPE = 'uniform'
ELBPTYPE = 'regular'
RAD = 1
NEIGHBORS = 8
CIRC = False
DTYPE = None
extractor = Wrapper(LBPHistogram(
lbptype=LBPTYPE,
elbptype=ELBPTYPE,
rad=RAD,
neighbors=NEIGHBORS,
circ=CIRC,
dtype=DTYPE))
"""
In the feature extraction stage, LBP histograms are extracted from each frame of the preprocessed video.
The parameters are similar to the ones introduced in [CAM12]_.
"""
#=======================================================================================
# define algorithm:
from bob.pad.base.algorithm import SVM
MACHINE_TYPE = 'C_SVC'
KERNEL_TYPE = 'RBF'
N_SAMPLES = 10000
TRAINER_GRID_SEARCH_PARAMS = {
'cost': [2**P for P in range(-3, 14, 2)],
'gamma': [2**P for P in range(-15, 0, 2)]
}
MEAN_STD_NORM_FLAG = True # enable mean-std normalization
FRAME_LEVEL_SCORES_FLAG = True # one score per frame(!) in this case
SAVE_DEBUG_DATA_FLAG = True # save the data, which might be useful for debugging
REDUCED_TRAIN_DATA_FLAG = True # reduce the amount of training data in the final training stage
N_TRAIN_SAMPLES = 50000 # number of training samples per class in the final SVM training stage
algorithm = SVM(
machine_type=MACHINE_TYPE,
kernel_type=KERNEL_TYPE,
n_samples=N_SAMPLES,
trainer_grid_search_params=TRAINER_GRID_SEARCH_PARAMS,
mean_std_norm_flag=MEAN_STD_NORM_FLAG,
frame_level_scores_flag=FRAME_LEVEL_SCORES_FLAG,
save_debug_data_flag=SAVE_DEBUG_DATA_FLAG,
reduced_train_data_flag=REDUCED_TRAIN_DATA_FLAG,
n_train_samples=N_TRAIN_SAMPLES)
"""
The SVM algorithm with an RBF kernel is used to classify the data into *real* and *attack* classes.
One score is produced for each frame of the input video (``frame_level_scores_flag = True``).
A grid search over the SVM parameters is used to select the best settings.
The grid search is done on a subset of the training data; the size of this subset is
defined by the ``n_samples`` parameter.
The final training of the SVM is also done on a subset of the training data
(``reduced_train_data_flag = True``); the size of this subset is defined by the
``n_train_samples`` argument.
The data is also mean-std normalized (``mean_std_norm_flag = True``).
"""
#!/usr/bin/env python
"""`MSU MFSD`_ is a database for face PAD experiments.
The database was created at MSU for face-PAD experiments. The public version of the database contains
280 videos corresponding to 35 clients. The videos are grouped as 'genuine' and 'attack'.
The attack videos have been constructed from the genuine ones
and consist of three kinds: print, iPad (video-replay), and iPhone (video-replay).
Face locations are also provided for each frame of each video, but for some videos (6 of them)
the face locations are not reliable, because those videos are not correctly oriented.
The reference citation is [WHJ15]_.
You can download the raw data of the `MSU MFSD`_ database by following
the link.
.. include:: links.rst
"""
from bob.pad.face.database import MaskAttackPadDatabase
# Directory where the data files are stored.
@@ -29,17 +12,3 @@
database = MaskAttackPadDatabase(
original_directory=original_directory,
original_extension=original_extension,
)
"""The :py:class:`bob.pad.base.database.PadDatabase` derivative with MSU MFSD
database settings.
.. warning::
This class only provides a programmatic interface to load data in an orderly
manner, respecting usage protocols. It does **not** contain the raw
data files. You should procure those yourself.
Notice that ``original_directory`` is set to ``[YOUR_MSU_MFSD_DIRECTORY]``.
You must create the ``${HOME}/.bob_bio_databases.txt`` file and set this
value to the place where you actually installed the MSU MFSD Database, as
explained in the section :ref:`bob.pad.face.baselines`.
"""
#!/usr/bin/env python
"""`MSU MFSD`_ is a database for face PAD experiments.
The database was created at MSU for face-PAD experiments. The public version of the database contains
280 videos corresponding to 35 clients. The videos are grouped as 'genuine' and 'attack'.
The attack videos have been constructed from the genuine ones
and consist of three kinds: print, iPad (video-replay), and iPhone (video-replay).
Face locations are also provided for each frame of each video, but for some videos (6 of them)
the face locations are not reliable, because those videos are not correctly oriented.
The reference citation is [WHJ15]_.
You can download the raw data of the `MSU MFSD`_ database by following
the link.
.. include:: links.rst
"""
from bob.pad.face.database import MsuMfsdPadDatabase
# Directory where the data files are stored.
# This directory is given in the .bob_bio_databases.txt file located in your home directory
ORIGINAL_DIRECTORY = "[YOUR_MSU_MFSD_DIRECTORY]"
"""Value of ``~/.bob_bio_databases.txt`` for this database"""
ORIGINAL_EXTENSION = "none" # extension is not used to load the data in the HLDI of this database
database = MsuMfsdPadDatabase(
protocol='grandtest',
original_directory=ORIGINAL_DIRECTORY,
original_extension=ORIGINAL_EXTENSION,
training_depends_on_protocol=True,
)
"""The :py:class:`bob.pad.base.database.PadDatabase` derivative with MSU MFSD
database settings.
.. warning::
This class only provides a programmatic interface to load data in an orderly
manner, respecting usage protocols. It does **not** contain the raw
data files. You should procure those yourself.
Notice that ``original_directory`` is set to ``[YOUR_MSU_MFSD_DIRECTORY]``.
You must create the ``${HOME}/.bob_bio_databases.txt`` file and set this
value to the place where you actually installed the MSU MFSD Database, as
explained in the section :ref:`bob.pad.face.baselines`.
"""
protocol = 'grandtest'
"""The default protocol to use for reproducing the baselines.
You may modify this at runtime by specifying the option ``--protocol`` on the
command-line of ``spoof.py`` or using the keyword ``protocol`` on a
configuration file that is loaded **after** this configuration resource.
"""
groups = ["train", "dev", "eval"]
"""The default groups to use for reproducing the baselines.
You may modify this at runtime by specifying the option ``--groups`` on the
command-line of ``spoof.py`` or using the keyword ``groups`` on a
configuration file that is loaded **after** this configuration resource.
"""
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
This file contains configurations to run the Image Quality Measures (IQM) and one-class SVM based face PAD algorithm.
The settings of the preprocessor and extractor are tuned for the Replay-attack database.
In the SVM algorithm, the amount of training data is reduced to speed up training on
large data sets, such as the Aggregated PAD database.
The IQM features used in this algorithm/resource are introduced in the following papers: [WHJ15]_ and [CBVM16]_.
"""
#=======================================================================================
sub_directory = 'qm_one_class_svm_aggregated_db'
"""
Sub-directory where results will be placed.
You may change this setting using the ``--sub-directory`` command-line option
or the attribute ``sub_directory`` in a configuration file loaded **after**
this resource.
"""
#=======================================================================================
# define preprocessor:
from ..preprocessor import FaceCropAlign
from bob.bio.video.preprocessor import Wrapper
from bob.bio.video.utils import FrameSelector
FACE_SIZE = 64 # The size of the resulting face
RGB_OUTPUT_FLAG = True # RGB output
USE_FACE_ALIGNMENT = False # use annotations
MAX_IMAGE_SIZE = None # no limiting here
FACE_DETECTION_METHOD = None # use annotations
MIN_FACE_SIZE = 50 # skip small faces
_image_preprocessor = FaceCropAlign(face_size = FACE_SIZE,
rgb_output_flag = RGB_OUTPUT_FLAG,
use_face_alignment = USE_FACE_ALIGNMENT,
max_image_size = MAX_IMAGE_SIZE,
face_detection_method = FACE_DETECTION_METHOD,
min_face_size = MIN_FACE_SIZE)
_frame_selector = FrameSelector(selection_style = "all")
preprocessor = Wrapper(preprocessor = _image_preprocessor,
frame_selector = _frame_selector)
"""
In the preprocessing stage, the face is cropped in each frame of the input video given the
facial annotations. The size of the face is normalized to ``FACE_SIZE`` dimensions. Faces
smaller than the ``MIN_FACE_SIZE`` threshold are discarded. Setting
``FACE_DETECTION_METHOD = None`` selects annotation-based cropping, making the preprocessor
similar to the one introduced in [CAM12]_. The preprocessed frame is the RGB facial image,
selected by ``RGB_OUTPUT_FLAG = True``.
"""
#=======================================================================================
# define extractor:
from ..extractor import ImageQualityMeasure
from bob.bio.video.extractor import Wrapper
GALBALLY = True
MSU = True
DTYPE = None
extractor = Wrapper(ImageQualityMeasure(galbally=GALBALLY, msu=MSU, dtype=DTYPE))
"""
In the feature extraction stage, Image Quality Measures are extracted from each frame of the preprocessed RGB video.
The features computed here are introduced in the following papers: [WHJ15]_ and [CBVM16]_.
"""
#=======================================================================================
# define algorithm:
from bob.pad.base.algorithm import SVM
MACHINE_TYPE = 'ONE_CLASS'
KERNEL_TYPE = 'RBF'
N_SAMPLES = 50000
TRAINER_GRID_SEARCH_PARAMS = {
'nu': [0.001, 0.01, 0.05, 0.1],
'gamma': [0.01, 0.1, 1, 10]
}
MEAN_STD_NORM_FLAG = True # enable mean-std normalization
FRAME_LEVEL_SCORES_FLAG = True # one score per frame(!) in this case
SAVE_DEBUG_DATA_FLAG = True # save the data, which might be useful for debugging
REDUCED_TRAIN_DATA_FLAG = False # DO NOT reduce the amount of training data in the final training stage
N_TRAIN_SAMPLES = 50000 # number of training samples per class in the final SVM training stage (NOT considered, because REDUCED_TRAIN_DATA_FLAG = False)
algorithm = SVM(
machine_type=MACHINE_TYPE,
kernel_type=KERNEL_TYPE,
n_samples=N_SAMPLES,
trainer_grid_search_params=TRAINER_GRID_SEARCH_PARAMS,
mean_std_norm_flag=MEAN_STD_NORM_FLAG,
frame_level_scores_flag=FRAME_LEVEL_SCORES_FLAG,
save_debug_data_flag=SAVE_DEBUG_DATA_FLAG,
reduced_train_data_flag=REDUCED_TRAIN_DATA_FLAG,
n_train_samples=N_TRAIN_SAMPLES)
"""
The one-class SVM algorithm with an RBF kernel is used to classify the data into *real* and *attack* classes.
One score is produced for each frame of the input video (``frame_level_scores_flag = True``).
A grid search over the SVM parameters is used to select the best settings.
The grid search is done on a subset of the training data; the size of this subset is
defined by the ``n_samples`` parameter.
The final training of the SVM is done on all of the training data (``reduced_train_data_flag = False``).
The data is also mean-std normalized (``mean_std_norm_flag = True``).
"""
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
This file contains configurations to run the Image Quality Measures (IQM) and one-class SVM cascade based face PAD algorithm.
The settings of the preprocessor and extractor are tuned for the Replay-attack database.
In the SVM algorithm, the amount of training data is reduced to speed up training on
large data sets, such as the Aggregated PAD database.
The IQM features used in this algorithm/resource are introduced in the following papers: [WHJ15]_ and [CBVM16]_.
"""
#=======================================================================================
sub_directory = 'qm_one_class_svm_cascade_aggregated_db'
"""
Sub-directory where results will be placed.
You may change this setting using the ``--sub-directory`` command-line option
or the attribute ``sub_directory`` in a configuration file loaded **after**
this resource.
"""
#=======================================================================================
# define preprocessor:
from ..preprocessor import FaceCropAlign
from bob.bio.video.preprocessor import Wrapper
from bob.bio.video.utils import FrameSelector
FACE_SIZE = 64 # The size of the resulting face
RGB_OUTPUT_FLAG = True # RGB output
USE_FACE_ALIGNMENT = False # use annotations
MAX_IMAGE_SIZE = None # no limiting here
FACE_DETECTION_METHOD = None # use annotations
MIN_FACE_SIZE = 50 # skip small faces
_image_preprocessor = FaceCropAlign(face_size = FACE_SIZE,
rgb_output_flag = RGB_OUTPUT_FLAG,
use_face_alignment = USE_FACE_ALIGNMENT,
max_image_size = MAX_IMAGE_SIZE,
face_detection_method = FACE_DETECTION_METHOD,
min_face_size = MIN_FACE_SIZE)
_frame_selector = FrameSelector(selection_style = "all")
preprocessor = Wrapper(preprocessor = _image_preprocessor,
frame_selector = _frame_selector)
"""
In the preprocessing stage, the face is cropped in each frame of the input video given the
facial annotations. The size of the face is normalized to ``FACE_SIZE`` dimensions. Faces
smaller than the ``MIN_FACE_SIZE`` threshold are discarded. Setting
``FACE_DETECTION_METHOD = None`` selects annotation-based cropping, making the preprocessor
similar to the one introduced in [CAM12]_. The preprocessed frame is the RGB facial image,
selected by ``RGB_OUTPUT_FLAG = True``.
"""
#=======================================================================================
# define extractor:
from ..extractor import ImageQualityMeasure
from bob.bio.video.extractor import Wrapper
GALBALLY = True
MSU = True
DTYPE = None
extractor = Wrapper(ImageQualityMeasure(galbally=GALBALLY, msu=MSU, dtype=DTYPE))
"""
In the feature extraction stage, Image Quality Measures are extracted from each frame of the preprocessed RGB video.
The features computed here are introduced in the following papers: [WHJ15]_ and [CBVM16]_.
"""
#=======================================================================================
# define algorithm:
from bob.pad.base.algorithm import SVMCascadePCA
MACHINE_TYPE = 'ONE_CLASS'
KERNEL_TYPE = 'RBF'
SVM_KWARGS = {'nu': 0.001, 'gamma': 0.5}
N = 2
POS_SCORES_SLOPE = 0.01
FRAME_LEVEL_SCORES_FLAG = True
algorithm = SVMCascadePCA(
machine_type=MACHINE_TYPE,
kernel_type=KERNEL_TYPE,
svm_kwargs=SVM_KWARGS,
N=N,
pos_scores_slope=POS_SCORES_SLOPE,
frame_level_scores_flag=FRAME_LEVEL_SCORES_FLAG)
"""
A cascade of one-class SVMs with RBF kernels is used to classify the data into *real* and *attack* classes.
One score is produced for each frame of the input video (``frame_level_scores_flag = True``).
Each SVM in the cascade is trained on two features (``N = 2``).
The positive scores produced by the cascade are reduced by multiplying them by the constant
``pos_scores_slope = 0.01``.
"""
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
This file contains configurations to run the Image Quality Measures (IQM) and SVM based face PAD baseline.
The settings of the preprocessor and extractor are tuned for the Replay-attack database.
In the SVM algorithm, the amount of training data is reduced to speed up training on
large data sets, such as the Aggregated PAD database.
The IQM features used in this algorithm/resource are introduced in the following papers: [WHJ15]_ and [CBVM16]_.
"""
#=======================================================================================
sub_directory = 'qm_svm_aggregated_db'
"""
Sub-directory where results will be placed.
You may change this setting using the ``--sub-directory`` command-line option
or the attribute ``sub_directory`` in a configuration file loaded **after**
this resource.
"""
#=======================================================================================
# define preprocessor:
from ..preprocessor import FaceCropAlign
from bob.bio.video.preprocessor import Wrapper
from bob.bio.video.utils import FrameSelector
FACE_SIZE = 64 # The size of the resulting face
RGB_OUTPUT_FLAG = True # RGB output
USE_FACE_ALIGNMENT = False # use annotations
MAX_IMAGE_SIZE = None # no limiting here
FACE_DETECTION_METHOD = None # use annotations
MIN_FACE_SIZE = 50 # skip small faces
_image_preprocessor = FaceCropAlign(face_size = FACE_SIZE,
rgb_output_flag = RGB_OUTPUT_FLAG,
use_face_alignment = USE_FACE_ALIGNMENT,
max_image_size = MAX_IMAGE_SIZE,
face_detection_method = FACE_DETECTION_METHOD,
min_face_size = MIN_FACE_SIZE)
_frame_selector = FrameSelector(selection_style = "all")
preprocessor = Wrapper(preprocessor = _image_preprocessor,
frame_selector = _frame_selector)
"""
In the preprocessing stage, the face is cropped in each frame of the input video given the
facial annotations. The size of the face is normalized to ``FACE_SIZE`` dimensions. Faces
smaller than the ``MIN_FACE_SIZE`` threshold are discarded. Setting
``FACE_DETECTION_METHOD = None`` selects annotation-based cropping, making the preprocessor
similar to the one introduced in [CAM12]_. The preprocessed frame is the RGB facial image,
selected by ``RGB_OUTPUT_FLAG = True``.
"""
#=======================================================================================
# define extractor:
from ..extractor import ImageQualityMeasure
from bob.bio.video.extractor import Wrapper
GALBALLY = True
MSU = True
DTYPE = None
extractor = Wrapper(ImageQualityMeasure(galbally=GALBALLY, msu=MSU, dtype=DTYPE))
"""
In the feature extraction stage, Image Quality Measures are extracted from each frame of the preprocessed RGB video.
The features computed here are introduced in the following papers: [WHJ15]_ and [CBVM16]_.
"""
#=======================================================================================
# define algorithm:
from bob.pad.base.algorithm import SVM
MACHINE_TYPE = 'C_SVC'
KERNEL_TYPE = 'RBF'
N_SAMPLES = 10000
TRAINER_GRID_SEARCH_PARAMS = {
'cost': [2**P for P in range(-3, 14, 2)],
'gamma': [2**P for P in range(-15, 0, 2)]
}
MEAN_STD_NORM_FLAG = True # enable mean-std normalization
FRAME_LEVEL_SCORES_FLAG = True # one score per frame(!) in this case
SAVE_DEBUG_DATA_FLAG = True # save the data, which might be useful for debugging
REDUCED_TRAIN_DATA_FLAG = True # reduce the amount of training data in the final training stage
N_TRAIN_SAMPLES = 50000 # number of training samples per class in the final SVM training stage
algorithm = SVM(
machine_type=MACHINE_TYPE,
kernel_type=KERNEL_TYPE,
n_samples=N_SAMPLES,
trainer_grid_search_params=TRAINER_GRID_SEARCH_PARAMS,
mean_std_norm_flag=MEAN_STD_NORM_FLAG,
frame_level_scores_flag=FRAME_LEVEL_SCORES_FLAG,
save_debug_data_flag=SAVE_DEBUG_DATA_FLAG,
reduced_train_data_flag=REDUCED_TRAIN_DATA_FLAG,
n_train_samples=N_TRAIN_SAMPLES)
"""
The SVM algorithm with an RBF kernel is used to classify the data into *real* and *attack* classes.
One score is produced for each frame of the input video (``frame_level_scores_flag = True``).
A grid search over the SVM parameters is used to select the best settings.
The grid search is done on a subset of the training data; the size of this subset is
defined by the ``n_samples`` parameter.
The final training of the SVM is also done on a subset of the training data
(``reduced_train_data_flag = True``); the size of this subset is defined by the
``n_train_samples`` argument.
The data is also mean-std normalized (``mean_std_norm_flag = True``).
"""
from .database import VideoPadFile
from .replay import ReplayPadDatabase