bob / bob.bio.face · Commit d03eb424, authored 4 years ago by Amir MOHAMMADI
[preprocessor][FaceCrop] Automatically infer the cropped positions from face size
parent 6fa75e34
2 merge requests: !71 Face crop improvements, !64 Dask pipelines
Showing 3 changed files with 158 additions and 143 deletions:
  bob/bio/face/helpers.py (+1, −5)
  bob/bio/face/preprocessor/FaceCrop.py (+152, −133)
  bob/bio/face/preprocessor/Scale.py (+5, −5)
bob/bio/face/helpers.py (+1, −5)
#!/usr/bin/env python
# vim: set fileencoding=utf-8 :
from bob.bio.face.preprocessor import FaceCrop, Scale


def face_crop_solver(
    cropped_image_size,
    cropped_positions=None,
    color_channel="rgb",
    fixed_positions=None,
    annotator=None,
    dtype="uint8",
...

@@ -15,7 +12,6 @@ def face_crop_solver(
    """
    Decide which face cropper to use.
    """
    # If there are no cropped positions, just resize
    if cropped_positions is None:
        return Scale(cropped_image_size)
...
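As a quick usage illustration of the solver's fallback logic (not part of the diff; the sizes and eye coordinates below are made up):

    from bob.bio.face.helpers import face_crop_solver

    # Without cropped_positions the solver returns a plain Scale transformer
    # that only resizes images to the requested size.
    resizer = face_crop_solver(cropped_image_size=(112, 112))

    # With eye positions, a full FaceCrop geometric normalizer is returned instead.
    cropper = face_crop_solver(
        cropped_image_size=(112, 112),
        cropped_positions={"reye": (32, 77), "leye": (32, 35)},
    )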
bob/bio/face/preprocessor/FaceCrop.py (+152, −133)
#!/usr/bin/env python
# vim: set fileencoding=utf-8 :
# @author: Manuel Guenther <Manuel.Guenther@idiap.ch>
# @date: Thu May 24 10:41:42 CEST 2012
import bob.ip.base
import numpy
import logging
...

@@ -16,95 +11,95 @@ from bob.bio.base import load_resource


class FaceCrop(Base):
    """Crops the face according to the given annotations.

    This class is designed to perform a geometric normalization of the face based
    on the eye locations, using :py:class:`bob.ip.base.FaceEyesNorm`. Usually,
    when executing the :py:meth:`crop_face` function, the image and the eye
    locations have to be specified. There, the given image will be transformed
    such that the eye locations will be placed at specific locations in the
    resulting image. These locations, as well as the size of the cropped image,
    need to be specified in the constructor of this class, as
    ``cropped_positions`` and ``cropped_image_size``.

    Some image databases do not provide eye locations, but rather bounding boxes.
    This is not a problem at all. Simply define the coordinates where you want
    your ``cropped_positions`` to be in the cropped image, by specifying the same
    keys in the dictionary that will be given as ``annotations`` to the
    :py:meth:`crop_face` function.

    .. note::

       These locations can even be outside of the cropped image boundary, i.e.,
       when the crop should be smaller than the annotated bounding boxes.

    Sometimes, databases provide pre-cropped faces, where the eyes are located at
    (almost) the same position in all images. Usually, that cropping does not
    conform with the cropping that you like (i.e., the image resolution is wrong,
    or there is too much background), and the database does not provide eye
    locations (since they are almost identical for all images). In that case, you
    can specify ``fixed_positions`` in the constructor, which will be used
    instead of the ``annotations`` inside the :py:meth:`crop_face` function (in
    which case the ``annotations`` are ignored).

    Sometimes, the crop of the face is outside of the original image boundaries.
    Usually, these pixels are simply left black, resulting in sharp edges in the
    image. However, some feature extractors do not like these sharp edges. In
    this case, you can set ``mask_sigma`` to copy pixels from the valid border of
    the image and add random noise (see :py:func:`bob.ip.base.extrapolate_mask`).

    Parameters
    ----------
    cropped_image_size : (int, int)
        The resolution of the cropped image, in order (HEIGHT, WIDTH); if not
        given, no face cropping will be performed.

    cropped_positions : dict
        The coordinates in the cropped image where the annotated points should be
        placed. This parameter is a dictionary with usually two elements, e.g.,
        ``{'reye': (RIGHT_EYE_Y, RIGHT_EYE_X), 'leye': (LEFT_EYE_Y, LEFT_EYE_X)}``.
        Other keys, such as ``{'topleft': ..., 'bottomright': ...}``, are also
        supported, as long as the same keys are present in the ``annotations``
        passed to the `__call__` function.

    fixed_positions : dict or None
        If specified, ignore the annotations from the database and use these fixed
        positions throughout.

    mask_sigma : float or None
        Fill the area outside of image boundaries with random pixels from the
        border, by adding noise to the pixel values. To disable extrapolation, set
        this value to ``None``. To disable adding random noise, set it to a
        negative value or 0.

    mask_neighbors : int
        The number of neighbors used during mask extrapolation. See
        :py:func:`bob.ip.base.extrapolate_mask` for details.

    mask_seed : int or None
        The random seed to apply for mask extrapolation.

        .. warning::

           When run in parallel, the same random seed will be applied to all
           parallel processes. Hence, results of parallel execution will differ
           from the results in serial execution.

    allow_upside_down_normalized_faces : bool, optional
        If ``False`` (default), a ``ValueError`` is raised when a normalized face
        would end up upside down compared to the input image. This allows you to
        easily catch wrong annotations in your database. If you are sure about
        your input, you can set this flag to ``True``.

    annotator : :any:`bob.bio.base.annotator.Annotator`
        If provided, the annotator will be used if the required annotations are
        missing.

    kwargs
        Remaining keyword parameters passed to the :py:class:`Base` constructor,
        such as ``color_channel`` or ``dtype``.
    """
    def __init__(
        self,
...

@@ -116,11 +111,36 @@ class FaceCrop(Base):
        mask_seed=None,
        annotator=None,
        allow_upside_down_normalized_faces=False,
        **kwargs,
    ):
        Base.__init__(self, **kwargs)

        if isinstance(cropped_image_size, int):
            cropped_image_size = (cropped_image_size, cropped_image_size)

        if isinstance(cropped_positions, str):
            face_size = cropped_image_size[0]
            if cropped_positions == "eyes-center":
                eyes_distance = (face_size + 1) / 2.0
                eyes_center = (face_size / 4.0, (face_size - 0.5) / 2.0)
                right_eye = (eyes_center[0], eyes_center[1] + eyes_distance / 2)
                left_eye = (eyes_center[0], eyes_center[1] - eyes_distance / 2)
                cropped_positions = {"reye": right_eye, "leye": left_eye}
            elif cropped_positions == "bounding-box":
                cropped_positions = {
                    "topleft": (0, 0),
                    "bottomright": cropped_image_size,
                }
            else:
                raise ValueError(
                    f"Got {cropped_positions} as cropped_positions "
                    "while only eyes and bbox strings are supported."
                )

        self.cropped_image_size = cropped_image_size
        self.cropped_positions = cropped_positions
...
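For illustration (not part of the commit), here is a minimal sketch of the new string shortcut; the 112-pixel crop size is arbitrary and the coordinates are simply what the formulas above yield for it:

    from bob.bio.face.preprocessor import FaceCrop

    # "eyes-center": the eye coordinates are inferred from the crop size.
    # For a 112x112 crop the formulas above give
    #   eyes_distance = (112 + 1) / 2.0        = 56.5
    #   eyes_center   = (112 / 4.0, 111.5 / 2) = (28.0, 55.75)
    #   reye          = (28.0, 55.75 + 28.25)  = (28.0, 84.0)
    #   leye          = (28.0, 55.75 - 28.25)  = (28.0, 27.5)
    cropper = FaceCrop(
        cropped_image_size=112,            # an int is expanded to (112, 112)
        cropped_positions="eyes-center",
    )

    # "bounding-box": 'topleft'/'bottomright' annotations are mapped onto the
    # full extent of the cropped image.
    bbox_cropper = FaceCrop(
        cropped_image_size=(112, 112),
        cropped_positions="bounding-box",
    )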
@@ -169,28 +189,28 @@ class FaceCrop(Base):

    def crop_face(self, image, annotations=None):
        """Crops the face.

        Executes the face cropping on the given image and returns the cropped
        version of it.

        Parameters
        ----------
        image : 2D :py:class:`numpy.ndarray`
            The face image to be processed.

        annotations : dict or ``None``
            The annotations that fit to the given image. ``None`` is only
            accepted when ``fixed_positions`` were specified in the constructor.

        Returns
        -------
        face : 2D :py:class:`numpy.ndarray` (float)
            The cropped face.

        Raises
        ------
        ValueError
            If the annotations are ``None`` and no ``fixed_positions`` are set.
        """
        if self.fixed_positions is not None:
            annotations = self.fixed_positions
        if annotations is None:
...
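A sketch of calling crop_face directly (the image array and coordinates are placeholders, not values from the repository):

    import numpy
    from bob.bio.face.preprocessor import FaceCrop

    cropper = FaceCrop(
        cropped_image_size=(112, 112),
        cropped_positions={"reye": (28, 84), "leye": (28, 28)},
    )

    # Placeholder grayscale image with hand-written eye annotations; the eye
    # ordering matches the cropped_positions above so the face is not flipped.
    image = numpy.random.rand(160, 160)
    annotations = {"reye": (64, 110), "leye": (64, 50)}
    face = cropper.crop_face(image, annotations)

    # If fixed_positions were given in the constructor, the annotations
    # argument is ignored and may be omitted entirely.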
@@ -282,23 +302,23 @@ class FaceCrop(Base):

    def transform(self, X, annotations=None):
        """Aligns the given image according to the given annotations.

        First, the desired color channel is extracted from the given image.
        Afterward, the face is cropped according to the given ``annotations`` (or
        to ``fixed_positions``, see :py:meth:`crop_face`). Finally, the resulting
        face is converted to the desired data type.

        Parameters
        ----------
        image : 2D or 3D :py:class:`numpy.ndarray`
            The face image to be processed.

        annotations : dict or ``None``
            The annotations that fit to the given image.

        Returns
        -------
        face : 2D :py:class:`numpy.ndarray`
            The cropped face.
        """
        def _crop(image, annot):
            # if annotations are missing and we cannot do anything else, return None
...

@@ -339,7 +359,6 @@ class FaceCrop(Base):
        else:
            return [_crop(data, annot) for data, annot in zip(X, annotations)]

    def __getstate__(self):
        d = self.__dict__.copy()
        d.pop("mask_rng")
...
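To show the batched transform interface end to end (a sketch; the arrays, shapes, and annotations are made up):

    import numpy
    from bob.bio.face.preprocessor import FaceCrop

    cropper = FaceCrop(cropped_image_size=112, cropped_positions="eyes-center")

    # transform() takes a batch of images plus one annotation dict per image;
    # internally each pair is handed to _crop (see the list comprehension above).
    X = [numpy.random.rand(3, 160, 160) for _ in range(4)]   # RGB in Bob layout (C, H, W)
    annotations = [{"reye": (64, 110), "leye": (64, 50)} for _ in X]
    faces = cropper.transform(X, annotations=annotations)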
bob/bio/face/preprocessor/Scale.py (+5, −5)
#!/usr/bin/env python
# vim: set fileencoding=utf-8 :
from sklearn.preprocessing import FunctionTransformer
from skimage.transform import resize
from sklearn.utils import check_array
...

@@ -14,14 +11,17 @@ def scale(images, target_img_size):
    ----------
    images : array_like
        A list of images (in Bob format) to be scaled to the target size

    target_img_size : int or tuple
        A tuple of size 2 as (H, W), or an integer where H == W

    Returns
    -------
    numpy.ndarray
        Scaled images
    """
    if isinstance(target_img_size, int):
        target_img_size = (target_img_size, target_img_size)
    images = check_array(images, allow_nd=True)
    images = to_matplotlib(images)
...
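Finally, a short sketch of the int-or-tuple behavior added to scale (the import path follows the file shown above; the array shape is illustrative):

    import numpy
    from bob.bio.face.preprocessor.Scale import scale

    # A small batch of RGB images in Bob layout (N, C, H, W).
    images = numpy.random.rand(4, 3, 64, 80)

    # These two calls are equivalent: an int target expands to a square (H, W).
    small_a = scale(images, 32)
    small_b = scale(images, (32, 32))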