bob / bob.pad.face / Commits

Commit 5eb1b4d6 authored Apr 16, 2019 by Amir MOHAMMADI

Remove traces of dlib and menpo

Parent: a260437b
Pipeline #29316 failed with stage in 6 minutes and 55 seconds

Showing 24 changed files with 2 additions and 4180 deletions (+2, -4180)
bob/pad/face/config/lbp_lr_batl_D_T_IR.py  +0 -103
bob/pad/face/config/lbp_svm.py  +0 -112
bob/pad/face/config/lbp_svm_aggregated_db.py  +0 -119
bob/pad/face/config/preprocessor/face_feature_crop_quality_check.py  +0 -490
bob/pad/face/config/preprocessor/video_face_crop.py  +0 -66
bob/pad/face/config/preprocessor/video_face_crop_align_block_patch.py  +0 -131
bob/pad/face/config/qm_lr.py  +0 -86
bob/pad/face/config/qm_one_class_gmm.py  +0 -88
bob/pad/face/config/qm_one_class_svm_aggregated_db.py  +0 -109
bob/pad/face/config/qm_one_class_svm_cascade_aggregated_db.py  +0 -98
bob/pad/face/config/qm_svm.py  +0 -101
bob/pad/face/config/qm_svm_aggregated_db.py  +0 -110
bob/pad/face/database/batl.py  +0 -755
bob/pad/face/lists/batl/color_skin_non_skin_annotations/README  +0 -47
bob/pad/face/preprocessor/FaceCropAlign.py  +0 -680
bob/pad/face/preprocessor/LiPulseExtraction.py  +0 -196
bob/pad/face/preprocessor/PPGSecure.py  +0 -243
bob/pad/face/preprocessor/VideoFaceCropAlignBlockPatch.py  +0 -378
bob/pad/face/preprocessor/__init__.py  +0 -8
bob/pad/face/test/test.py  +0 -250
conda/meta.yaml  +2 -4
develop.cfg  +0 -3
requirements.txt  +0 -2
setup.py  +0 -1
bob/pad/face/config/lbp_lr_batl_D_T_IR.py deleted 100644 → 0
#!/usr/bin/env python2
# -*- coding: utf-8 -*-

"""
This file contains configurations to run LBP and SVM based face PAD baseline.
The settings are tuned for the Replay-attack database.
The idea of the algorithm is introduced in the following paper: [CAM12]_.
However some settings are different from the ones introduced in the paper.
"""

#=======================================================================================
sub_directory = 'lbp_svm'
"""
Sub-directory where results will be placed.
You may change this setting using the ``--sub-directory`` command-line option
or the attribute ``sub_directory`` in a configuration file loaded **after**
this resource.
"""

#=======================================================================================
# define preprocessor:

from ..preprocessor import FaceCropAlign
from bob.bio.video.preprocessor import Wrapper
from bob.bio.video.utils import FrameSelector
from ..preprocessor.FaceCropAlign import auto_norm_image as _norm_func

FACE_SIZE = 64  # The size of the resulting face
RGB_OUTPUT_FLAG = False  # Gray-scale output
USE_FACE_ALIGNMENT = False  # use annotations
MAX_IMAGE_SIZE = None  # no limiting here
FACE_DETECTION_METHOD = None  # use annotations
MIN_FACE_SIZE = 50  # skip small faces

NORMALIZATION_FUNCTION = _norm_func
NORMALIZATION_FUNCTION_KWARGS = {}
NORMALIZATION_FUNCTION_KWARGS = {'n_sigma': 3.0, 'norm_method': 'MAD'}

_image_preprocessor = FaceCropAlign(face_size=FACE_SIZE,
                                    rgb_output_flag=RGB_OUTPUT_FLAG,
                                    use_face_alignment=USE_FACE_ALIGNMENT,
                                    max_image_size=MAX_IMAGE_SIZE,
                                    face_detection_method=FACE_DETECTION_METHOD,
                                    min_face_size=MIN_FACE_SIZE,
                                    normalization_function=NORMALIZATION_FUNCTION,
                                    normalization_function_kwargs=NORMALIZATION_FUNCTION_KWARGS)

_frame_selector = FrameSelector(selection_style="all")

preprocessor = Wrapper(preprocessor=_image_preprocessor,
                       frame_selector=_frame_selector)
"""
In the preprocessing stage the face is cropped in each frame of the input video given facial annotations.
The size of the face is normalized to ``FACE_SIZE`` dimensions. The faces with the size
below ``MIN_FACE_SIZE`` threshold are discarded. The preprocessor is similar to the one introduced in
[CAM12]_, which is defined by ``FACE_DETECTION_METHOD = None``.
"""

#=======================================================================================
# define extractor:

from ..extractor import LBPHistogram
from bob.bio.video.extractor import Wrapper

LBPTYPE = 'uniform'
ELBPTYPE = 'regular'
RAD = 1
NEIGHBORS = 8
CIRC = False
DTYPE = None

extractor = Wrapper(LBPHistogram(lbptype=LBPTYPE,
                                 elbptype=ELBPTYPE,
                                 rad=RAD,
                                 neighbors=NEIGHBORS,
                                 circ=CIRC,
                                 dtype=DTYPE))
"""
In the feature extraction stage the LBP histograms are extracted from each frame of the preprocessed video.
The parameters are similar to the ones introduced in [CAM12]_.
"""

#=======================================================================================
# define algorithm:

from bob.pad.base.algorithm import LogRegr

C = 1.  # The regularization parameter for the LR classifier
FRAME_LEVEL_SCORES_FLAG = True  # Return one score per frame

algorithm = LogRegr(C=C, frame_level_scores_flag=FRAME_LEVEL_SCORES_FLAG)
"""
The Logistic Regression is used to classify the data into *real* and *attack* classes.
One score is produced for each frame of the input video, ``frame_level_scores_flag = True``.
The sub-sampling of training data is not used here, sub-sampling flags have default ``False``
values.
"""
bob/pad/face/config/lbp_svm.py deleted 100644 → 0
#!/usr/bin/env python2
# -*- coding: utf-8 -*-

"""
This file contains configurations to run LBP and SVM based face PAD baseline.
The settings are tuned for the Replay-attack database.
The idea of the algorithm is introduced in the following paper: [CAM12]_.
However some settings are different from the ones introduced in the paper.
"""

#=======================================================================================
sub_directory = 'lbp_svm'
"""
Sub-directory where results will be placed.
You may change this setting using the ``--sub-directory`` command-line option
or the attribute ``sub_directory`` in a configuration file loaded **after**
this resource.
"""

#=======================================================================================
# define preprocessor:

from ..preprocessor import FaceCropAlign
from bob.bio.video.preprocessor import Wrapper
from bob.bio.video.utils import FrameSelector

FACE_SIZE = 64  # The size of the resulting face
RGB_OUTPUT_FLAG = False  # Gray-scale output
USE_FACE_ALIGNMENT = False  # use annotations
MAX_IMAGE_SIZE = None  # no limiting here
FACE_DETECTION_METHOD = None  # use annotations
MIN_FACE_SIZE = 50  # skip small faces

_image_preprocessor = FaceCropAlign(face_size=FACE_SIZE,
                                    rgb_output_flag=RGB_OUTPUT_FLAG,
                                    use_face_alignment=USE_FACE_ALIGNMENT,
                                    max_image_size=MAX_IMAGE_SIZE,
                                    face_detection_method=FACE_DETECTION_METHOD,
                                    min_face_size=MIN_FACE_SIZE)

_frame_selector = FrameSelector(selection_style="all")

preprocessor = Wrapper(preprocessor=_image_preprocessor,
                       frame_selector=_frame_selector)
"""
In the preprocessing stage the face is cropped in each frame of the input video given facial annotations.
The size of the face is normalized to ``FACE_SIZE`` dimensions. The faces with the size
below ``MIN_FACE_SIZE`` threshold are discarded. The preprocessor is similar to the one introduced in
[CAM12]_, which is defined by ``FACE_DETECTION_METHOD = None``.
"""

#=======================================================================================
# define extractor:

from ..extractor import LBPHistogram
from bob.bio.video.extractor import Wrapper

LBPTYPE = 'uniform'
ELBPTYPE = 'regular'
RAD = 1
NEIGHBORS = 8
CIRC = False
DTYPE = None

extractor = Wrapper(LBPHistogram(lbptype=LBPTYPE,
                                 elbptype=ELBPTYPE,
                                 rad=RAD,
                                 neighbors=NEIGHBORS,
                                 circ=CIRC,
                                 dtype=DTYPE))
"""
In the feature extraction stage the LBP histograms are extracted from each frame of the preprocessed video.
The parameters are similar to the ones introduced in [CAM12]_.
"""

#=======================================================================================
# define algorithm:

from bob.pad.base.algorithm import SVM

MACHINE_TYPE = 'C_SVC'
KERNEL_TYPE = 'RBF'
N_SAMPLES = 10000
TRAINER_GRID_SEARCH_PARAMS = {
    'cost': [2 ** P for P in range(-3, 14, 2)],
    'gamma': [2 ** P for P in range(-15, 0, 2)]
}
MEAN_STD_NORM_FLAG = True  # enable mean-std normalization
FRAME_LEVEL_SCORES_FLAG = True  # one score per frame(!) in this case

algorithm = SVM(machine_type=MACHINE_TYPE,
                kernel_type=KERNEL_TYPE,
                n_samples=N_SAMPLES,
                trainer_grid_search_params=TRAINER_GRID_SEARCH_PARAMS,
                mean_std_norm_flag=MEAN_STD_NORM_FLAG,
                frame_level_scores_flag=FRAME_LEVEL_SCORES_FLAG)
"""
The SVM algorithm with RBF kernel is used to classify the data into *real* and *attack* classes.
One score is produced for each frame of the input video, ``frame_level_scores_flag = True``.
In contrast to [CAM12]_, the grid search of SVM parameters is used to select the
successful settings. The grid search is done on the subset of training data. The size
of this subset is defined by ``n_samples`` parameter.
The data is also mean-std normalized, ``mean_std_norm_flag = True``.
"""
bob/pad/face/config/lbp_svm_aggregated_db.py deleted 100644 → 0
#!/usr/bin/env python2
# -*- coding: utf-8 -*-

"""
This file contains configurations to run LBP and SVM based face PAD baseline.
The settings of the preprocessor and extractor are tuned for the Replay-attack database.
In the SVM algorithm the amount of training data is reduced speeding-up the training for
large data sets, such as Aggregated PAD database.
The idea of the algorithm is introduced in the following paper: [CAM12]_.
However some settings are different from the ones introduced in the paper.
"""

#=======================================================================================
sub_directory = 'lbp_svm_aggregated_db'
"""
Sub-directory where results will be placed.
You may change this setting using the ``--sub-directory`` command-line option
or the attribute ``sub_directory`` in a configuration file loaded **after**
this resource.
"""

#=======================================================================================
# define preprocessor:

from ..preprocessor import FaceCropAlign
from bob.bio.video.preprocessor import Wrapper
from bob.bio.video.utils import FrameSelector

FACE_SIZE = 64  # The size of the resulting face
RGB_OUTPUT_FLAG = False  # Gray-scale output
USE_FACE_ALIGNMENT = False  # use annotations
MAX_IMAGE_SIZE = None  # no limiting here
FACE_DETECTION_METHOD = None  # use annotations
MIN_FACE_SIZE = 50  # skip small faces

_image_preprocessor = FaceCropAlign(face_size=FACE_SIZE,
                                    rgb_output_flag=RGB_OUTPUT_FLAG,
                                    use_face_alignment=USE_FACE_ALIGNMENT,
                                    max_image_size=MAX_IMAGE_SIZE,
                                    face_detection_method=FACE_DETECTION_METHOD,
                                    min_face_size=MIN_FACE_SIZE)

_frame_selector = FrameSelector(selection_style="all")

preprocessor = Wrapper(preprocessor=_image_preprocessor,
                       frame_selector=_frame_selector)
"""
In the preprocessing stage the face is cropped in each frame of the input video given facial annotations.
The size of the face is normalized to ``FACE_SIZE`` dimensions. The faces with the size
below ``MIN_FACE_SIZE`` threshold are discarded. The preprocessor is similar to the one introduced in
[CAM12]_, which is defined by ``FACE_DETECTION_METHOD = None``.
"""

#=======================================================================================
# define extractor:

from ..extractor import LBPHistogram
from bob.bio.video.extractor import Wrapper

LBPTYPE = 'uniform'
ELBPTYPE = 'regular'
RAD = 1
NEIGHBORS = 8
CIRC = False
DTYPE = None

extractor = Wrapper(LBPHistogram(lbptype=LBPTYPE,
                                 elbptype=ELBPTYPE,
                                 rad=RAD,
                                 neighbors=NEIGHBORS,
                                 circ=CIRC,
                                 dtype=DTYPE))
"""
In the feature extraction stage the LBP histograms are extracted from each frame of the preprocessed video.
The parameters are similar to the ones introduced in [CAM12]_.
"""

#=======================================================================================
# define algorithm:

from bob.pad.base.algorithm import SVM

MACHINE_TYPE = 'C_SVC'
KERNEL_TYPE = 'RBF'
N_SAMPLES = 10000
TRAINER_GRID_SEARCH_PARAMS = {
    'cost': [2 ** P for P in range(-3, 14, 2)],
    'gamma': [2 ** P for P in range(-15, 0, 2)]
}
MEAN_STD_NORM_FLAG = True  # enable mean-std normalization
FRAME_LEVEL_SCORES_FLAG = True  # one score per frame(!) in this case
SAVE_DEBUG_DATA_FLAG = True  # save the data, which might be useful for debugging
REDUCED_TRAIN_DATA_FLAG = True  # reduce the amount of training data in the final training stage
N_TRAIN_SAMPLES = 50000  # number of training samples per class in the final SVM training stage

algorithm = SVM(machine_type=MACHINE_TYPE,
                kernel_type=KERNEL_TYPE,
                n_samples=N_SAMPLES,
                trainer_grid_search_params=TRAINER_GRID_SEARCH_PARAMS,
                mean_std_norm_flag=MEAN_STD_NORM_FLAG,
                frame_level_scores_flag=FRAME_LEVEL_SCORES_FLAG,
                save_debug_data_flag=SAVE_DEBUG_DATA_FLAG,
                reduced_train_data_flag=REDUCED_TRAIN_DATA_FLAG,
                n_train_samples=N_TRAIN_SAMPLES)
"""
The SVM algorithm with RBF kernel is used to classify the data into *real* and *attack* classes.
One score is produced for each frame of the input video, ``frame_level_scores_flag = True``.
The grid search of SVM parameters is used to select the successful settings.
The grid search is done on the subset of training data.
The size of this subset is defined by ``n_samples`` parameter.
The final training of the SVM is done on the subset of training data ``reduced_train_data_flag = True``.
The size of the subset for the final training stage is defined by the ``n_train_samples`` argument.
The data is also mean-std normalized, ``mean_std_norm_flag = True``.
"""
bob/pad/face/config/preprocessor/face_feature_crop_quality_check.py deleted 100644 → 0

(This diff is collapsed on the page; its contents are not shown.)
bob/pad/face/config/preprocessor/video_face_crop.py deleted 100644 → 0
#!/usr/bin/env python2
# -*- coding: utf-8 -*-

from bob.pad.face.preprocessor import FaceCropAlign
from bob.bio.video.preprocessor import Wrapper
from bob.bio.video.utils import FrameSelector

# =======================================================================================
# Define instances here:

FACE_SIZE = 64  # The size of the resulting face
RGB_OUTPUT_FLAG = True  # RGB output
USE_FACE_ALIGNMENT = False
MAX_IMAGE_SIZE = None  # no limiting here
FACE_DETECTION_METHOD = "dlib"  # use dlib face detection
MIN_FACE_SIZE = 50  # skip small faces

_image_preprocessor = FaceCropAlign(face_size=FACE_SIZE,
                                    rgb_output_flag=RGB_OUTPUT_FLAG,
                                    use_face_alignment=USE_FACE_ALIGNMENT,
                                    max_image_size=MAX_IMAGE_SIZE,
                                    face_detection_method=FACE_DETECTION_METHOD,
                                    min_face_size=MIN_FACE_SIZE)

_frame_selector = FrameSelector(selection_style="all")

rgb_face_detector_dlib = Wrapper(preprocessor=_image_preprocessor,
                                 frame_selector=_frame_selector)

# =======================================================================================
FACE_DETECTION_METHOD = "mtcnn"  # use mtcnn face detection

_image_preprocessor = FaceCropAlign(face_size=FACE_SIZE,
                                    rgb_output_flag=RGB_OUTPUT_FLAG,
                                    use_face_alignment=USE_FACE_ALIGNMENT,
                                    max_image_size=MAX_IMAGE_SIZE,
                                    face_detection_method=FACE_DETECTION_METHOD,
                                    min_face_size=MIN_FACE_SIZE)

rgb_face_detector_mtcnn = Wrapper(preprocessor=_image_preprocessor,
                                  frame_selector=_frame_selector)

# =======================================================================================
FACE_SIZE = 64  # The size of the resulting face
RGB_OUTPUT_FLAG = False  # Gray-scale output
USE_FACE_ALIGNMENT = True  # detect face landmarks locally and align the face
MAX_IMAGE_SIZE = 1920  # the largest possible dimension of the input image
FACE_DETECTION_METHOD = "mtcnn"  # face landmarks detection method
MIN_FACE_SIZE = 50  # skip faces smaller than this value
NORMALIZATION_FUNCTION = None  # no normalization
NORMALIZATION_FUNCTION_KWARGS = None

_image_preprocessor = FaceCropAlign(face_size=FACE_SIZE,
                                    rgb_output_flag=RGB_OUTPUT_FLAG,
                                    use_face_alignment=USE_FACE_ALIGNMENT,
                                    max_image_size=MAX_IMAGE_SIZE,
                                    face_detection_method=FACE_DETECTION_METHOD,
                                    min_face_size=MIN_FACE_SIZE,
                                    normalization_function=NORMALIZATION_FUNCTION,
                                    normalization_function_kwargs=NORMALIZATION_FUNCTION_KWARGS)

bw_face_detect_mtcnn = Wrapper(preprocessor=_image_preprocessor,
                               frame_selector=_frame_selector)
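The core crop-and-filter behaviour configured above can be pictured with a simplified sketch (hypothetical; the deleted ``FaceCropAlign`` additionally handles the detection back-ends, alignment, and normalization):

# Simplified, hypothetical sketch of the crop logic behind FaceCropAlign;
# real detection/alignment is done by the configured back-end ("dlib"/"mtcnn").
import numpy as np

def crop_face_sketch(gray_frame, bbox, face_size=64, min_face_size=50):
    """Crop bbox = (top, left, bottom, right); return None for small faces."""
    top, left, bottom, right = bbox
    if min(bottom - top, right - left) < min_face_size:
        return None  # the face is skipped, as with MIN_FACE_SIZE
    face = gray_frame[top:bottom, left:right]
    # nearest-neighbour resize to face_size x face_size:
    rows = np.linspace(0, face.shape[0] - 1, face_size).astype(int)
    cols = np.linspace(0, face.shape[1] - 1, face_size).astype(int)
    return face[np.ix_(rows, cols)]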
bob/pad/face/config/preprocessor/video_face_crop_align_block_patch.py deleted 100644 → 0
#!/usr/bin/env python2
# -*- coding: utf-8 -*-

# =============================================================================
# Import here:

from bob.pad.face.preprocessor import VideoFaceCropAlignBlockPatch
from bob.pad.face.preprocessor import FaceCropAlign
from bob.bio.video.preprocessor import Wrapper
from bob.bio.video.utils import FrameSelector
from bob.pad.face.preprocessor.FaceCropAlign import auto_norm_image as _norm_func
from bob.pad.face.preprocessor import BlockPatch

# =============================================================================
# names of the channels to process:
_channel_names = ['color', 'infrared', 'depth']

# =============================================================================
# dictionary containing preprocessors for all channels:
_preprocessors = {}

"""
Preprocessor to be used for Color channel.
"""
FACE_SIZE = 128  # The size of the resulting face
RGB_OUTPUT_FLAG = False  # BW output
USE_FACE_ALIGNMENT = True  # use annotations
MAX_IMAGE_SIZE = None  # no limiting here
FACE_DETECTION_METHOD = None  # use ANNOTATIONS
MIN_FACE_SIZE = 50  # skip small faces

_image_preprocessor = FaceCropAlign(face_size=FACE_SIZE,
                                    rgb_output_flag=RGB_OUTPUT_FLAG,
                                    use_face_alignment=USE_FACE_ALIGNMENT,
                                    max_image_size=MAX_IMAGE_SIZE,
                                    face_detection_method=FACE_DETECTION_METHOD,
                                    min_face_size=MIN_FACE_SIZE)

_frame_selector = FrameSelector(selection_style="all")

_preprocessor_rgb = Wrapper(preprocessor=_image_preprocessor,
                            frame_selector=_frame_selector)

_preprocessors[_channel_names[0]] = _preprocessor_rgb

"""
Preprocessor to be used for Infrared (or Thermal) channels:
"""
FACE_SIZE = 128  # The size of the resulting face
RGB_OUTPUT_FLAG = False  # Gray-scale output
USE_FACE_ALIGNMENT = True  # use annotations
MAX_IMAGE_SIZE = None  # no limiting here
FACE_DETECTION_METHOD = None  # use annotations
MIN_FACE_SIZE = 50  # skip small faces
NORMALIZATION_FUNCTION = _norm_func
NORMALIZATION_FUNCTION_KWARGS = {}
NORMALIZATION_FUNCTION_KWARGS = {'n_sigma': 3.0, 'norm_method': 'MAD'}

_image_preprocessor_ir = FaceCropAlign(face_size=FACE_SIZE,
                                       rgb_output_flag=RGB_OUTPUT_FLAG,
                                       use_face_alignment=USE_FACE_ALIGNMENT,
                                       max_image_size=MAX_IMAGE_SIZE,
                                       face_detection_method=FACE_DETECTION_METHOD,
                                       min_face_size=MIN_FACE_SIZE,
                                       normalization_function=NORMALIZATION_FUNCTION,
                                       normalization_function_kwargs=NORMALIZATION_FUNCTION_KWARGS)

_preprocessor_ir = Wrapper(preprocessor=_image_preprocessor_ir,
                           frame_selector=_frame_selector)

_preprocessors[_channel_names[1]] = _preprocessor_ir

"""
Preprocessor to be used for Depth channel:
"""
FACE_SIZE = 128  # The size of the resulting face
RGB_OUTPUT_FLAG = False  # Gray-scale output
USE_FACE_ALIGNMENT = True  # use annotations
MAX_IMAGE_SIZE = None  # no limiting here
FACE_DETECTION_METHOD = None  # use annotations
MIN_FACE_SIZE = 50  # skip small faces
NORMALIZATION_FUNCTION = _norm_func
NORMALIZATION_FUNCTION_KWARGS = {}
NORMALIZATION_FUNCTION_KWARGS = {'n_sigma': 6.0, 'norm_method': 'MAD'}

_image_preprocessor_d = FaceCropAlign(face_size=FACE_SIZE,
                                      rgb_output_flag=RGB_OUTPUT_FLAG,
                                      use_face_alignment=USE_FACE_ALIGNMENT,
                                      max_image_size=MAX_IMAGE_SIZE,
                                      face_detection_method=FACE_DETECTION_METHOD,
                                      min_face_size=MIN_FACE_SIZE,
                                      normalization_function=NORMALIZATION_FUNCTION,
                                      normalization_function_kwargs=NORMALIZATION_FUNCTION_KWARGS)

_preprocessor_d = Wrapper(preprocessor=_image_preprocessor_d,
                          frame_selector=_frame_selector)

_preprocessors[_channel_names[2]] = _preprocessor_d

# =============================================================================
# define parameters and an instance of the patch extractor:
PATCH_SIZE = 128
STEP = 1

_block_patch_128x128 = BlockPatch(patch_size=PATCH_SIZE,
                                  step=STEP,
                                  use_annotations_flag=False)

# =============================================================================
"""
Define an instance for extraction of one (**whole face**) multi-channel
(BW-NIR-D) face patch of the size (3 x 128 x 128).
"""
video_face_crop_align_bw_ir_d_channels_3x128x128 = VideoFaceCropAlignBlockPatch(
    preprocessors=_preprocessors,
    channel_names=_channel_names,
    return_multi_channel_flag=True,
    block_patch_preprocessor=_block_patch_128x128)

# This instance is similar to above, but will return a **vectorized** patch:
video_face_crop_align_bw_ir_d_channels_3x128x128_vect = VideoFaceCropAlignBlockPatch(
    preprocessors=_preprocessors,
    channel_names=_channel_names,
    return_multi_channel_flag=False,
    block_patch_preprocessor=_block_patch_128x128)
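The only difference between the two instances is ``return_multi_channel_flag``. At the level of array shapes, and assuming one 128x128 crop per channel per frame, the two output modes can be illustrated as:

# Shape-level illustration only (hypothetical toy arrays, not the actual
# preprocessor output objects):
import numpy as np

color = np.zeros((128, 128))
infrared = np.zeros((128, 128))
depth = np.zeros((128, 128))

multi_channel = np.stack([color, infrared, depth])  # (3, 128, 128) patch
vectorized = multi_channel.flatten()                # (3 * 128 * 128,) vector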
bob/pad/face/config/qm_lr.py deleted 100644 → 0
#!/usr/bin/env python2
# -*- coding: utf-8 -*-

"""
This file contains configurations to run Image Quality Measures (IQM) and LR based face PAD algorithm.
The settings of the preprocessor and extractor are tuned for the Replay-attack database.
The IQM features used in this algorithm/resource are introduced in the following papers: [WHJ15]_ and [CBVM16]_.
"""

#=======================================================================================
sub_directory = 'qm_lr'
"""
Sub-directory where results will be placed.
You may change this setting using the ``--sub-directory`` command-line option
or the attribute ``sub_directory`` in a configuration file loaded **after**
this resource.
"""

#=======================================================================================
# define preprocessor:

from ..preprocessor import FaceCropAlign
from bob.bio.video.preprocessor import Wrapper
from bob.bio.video.utils import FrameSelector

FACE_SIZE = 64  # The size of the resulting face
RGB_OUTPUT_FLAG = True  # RGB output
USE_FACE_ALIGNMENT = False  # use annotations
MAX_IMAGE_SIZE = None  # no limiting here
FACE_DETECTION_METHOD = None  # use annotations
MIN_FACE_SIZE = 50  # skip small faces

_image_preprocessor = FaceCropAlign(face_size=FACE_SIZE,
                                    rgb_output_flag=RGB_OUTPUT_FLAG,
                                    use_face_alignment=USE_FACE_ALIGNMENT,
                                    max_image_size=MAX_IMAGE_SIZE,
                                    face_detection_method=FACE_DETECTION_METHOD,
                                    min_face_size=MIN_FACE_SIZE)

_frame_selector = FrameSelector(selection_style="all")

preprocessor = Wrapper(preprocessor=_image_preprocessor,
                       frame_selector=_frame_selector)
"""
In the preprocessing stage the face is cropped in each frame of the input video given facial annotations.
The size of the face is normalized to ``FACE_SIZE`` dimensions. The faces of the size
below ``MIN_FACE_SIZE`` threshold are discarded. The preprocessor is similar to the one introduced in
[CAM12]_, which is defined by ``FACE_DETECTION_METHOD = None``. The preprocessed frame is the RGB
facial image, which is defined by ``RGB_OUTPUT_FLAG = True``.
"""

#=======================================================================================
# define extractor:

from ..extractor import ImageQualityMeasure
from bob.bio.video.extractor import Wrapper

GALBALLY = True
MSU = True
DTYPE = None

extractor = Wrapper(ImageQualityMeasure(galbally=GALBALLY, msu=MSU, dtype=DTYPE))
"""
In the feature extraction stage the Image Quality Measures are extracted from each frame of the preprocessed RGB video.
The features to be computed are introduced in the following papers: [WHJ15]_ and [CBVM16]_.
"""

#=======================================================================================
# define algorithm:

from bob.pad.base.algorithm import LogRegr

C = 1.  # The regularization parameter for the LR classifier
FRAME_LEVEL_SCORES_FLAG = True  # Return one score per frame

algorithm = LogRegr(C=C, frame_level_scores_flag=FRAME_LEVEL_SCORES_FLAG)
"""
The Logistic Regression is used to classify the data into *real* and *attack* classes.
One score is produced for each frame of the input video, ``frame_level_scores_flag = True``.
The sub-sampling of training data is not used here, sub-sampling flags have default ``False``
values.
"""
bob/pad/face/config/qm_one_class_gmm.py deleted 100644 → 0
#!/usr/bin/env python2
# -*- coding: utf-8 -*-

"""
This file contains configurations to run Image Quality Measures (IQM) and one-class GMM based face PAD algorithm.
The settings of the preprocessor and extractor are tuned for the Replay-attack database.
The IQM features used in this algorithm/resource are introduced in the following papers: [WHJ15]_ and [CBVM16]_.
"""

#=======================================================================================
sub_directory = 'qm_one_class_gmm'
"""
Sub-directory where results will be placed.
You may change this setting using the ``--sub-directory`` command-line option
or the attribute ``sub_directory`` in a configuration file loaded **after**
this resource.
"""

#=======================================================================================
# define preprocessor:

from ..preprocessor import FaceCropAlign
from bob.bio.video.preprocessor import Wrapper
from bob.bio.video.utils import FrameSelector

FACE_SIZE = 64  # The size of the resulting face
RGB_OUTPUT_FLAG = True  # RGB output
USE_FACE_ALIGNMENT = False  # use annotations
MAX_IMAGE_SIZE = None  # no limiting here
FACE_DETECTION_METHOD = None  # use annotations
MIN_FACE_SIZE = 50  # skip small faces

_image_preprocessor = FaceCropAlign(face_size=FACE_SIZE,
                                    rgb_output_flag=RGB_OUTPUT_FLAG,
                                    use_face_alignment=USE_FACE_ALIGNMENT,
                                    max_image_size=MAX_IMAGE_SIZE,
                                    face_detection_method=FACE_DETECTION_METHOD,
                                    min_face_size=MIN_FACE_SIZE)

_frame_selector = FrameSelector(selection_style="all")

preprocessor = Wrapper(preprocessor=_image_preprocessor,
                       frame_selector=_frame_selector)
"""
In the preprocessing stage the face is cropped in each frame of the input video given facial annotations.
The size of the face is normalized to ``FACE_SIZE`` dimensions. The faces of the size
below ``MIN_FACE_SIZE`` threshold are discarded. The preprocessor is similar to the one introduced in
[CAM12]_, which is defined by ``FACE_DETECTION_METHOD = None``. The preprocessed frame is the RGB
facial image, which is defined by ``RGB_OUTPUT_FLAG = True``.
"""

#=======================================================================================
# define extractor:

from ..extractor import ImageQualityMeasure
from bob.bio.video.extractor import Wrapper

GALBALLY = True
MSU = True
DTYPE = None

extractor = Wrapper(ImageQualityMeasure(galbally=GALBALLY, msu=MSU, dtype=DTYPE))
"""
In the feature extraction stage the Image Quality Measures are extracted from each frame of the preprocessed RGB video.
The features to be computed are introduced in the following papers: [WHJ15]_ and [CBVM16]_.
"""

#=======================================================================================
# define algorithm:

from bob.pad.base.algorithm import OneClassGMM

N_COMPONENTS = 50
RANDOM_STATE = 3
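The diff excerpt ends at the two algorithm parameters above. For intuition, a one-class GMM PAD scorer with these parameters could be sketched with scikit-learn (a hypothetical analogue, not the deleted ``bob.pad.base.algorithm.OneClassGMM``):

# Hypothetical scikit-learn analogue of a one-class GMM scorer; toy data only.
import numpy as np
from sklearn.mixture import GaussianMixture

gmm = GaussianMixture(n_components=50, random_state=3)  # N_COMPONENTS, RANDOM_STATE

# "one-class": fit on bona fide (real) training features only:
real_train_features = np.random.rand(5000, 10)
gmm.fit(real_train_features)

# higher log-likelihood = more similar to the real class; one score per frame:
frame_scores = gmm.score_samples(np.random.rand(25, 10))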