bob / bob.bio.face, commit 52d537cd
Authored Jun 02, 2021 by Tiago de Freitas Pereira
Updated 50 shades examples
Parent: edcea8f1. Pipeline #51202 failed, in 6 minutes and 45 seconds.
Changed file: notebooks/50-shades-of-face.ipynb
%% Cell type:markdown id: tags:
# 50 Shades of face

In this notebook we aim to evaluate the impact of different face crops on face recognition (FR) baselines.
For that we rely on the MOBIO dataset, which is not ideal, but it is small enough to run a batch of experiments quickly.
%% Cell type:code id: tags:
```python
# Fetching resources
import bob.bio.base
from bob.bio.base.pipelines.vanilla_biometrics import execute_vanilla_biometrics
from bob.bio.base.pipelines.vanilla_biometrics import Distance
from bob.bio.base.pipelines.vanilla_biometrics import VanillaBiometricsPipeline
from bob.bio.face.database import MobioDatabase
from bob.bio.face.preprocessor import FaceCrop
from bob.extension import rc
from bob.pipelines import wrap
import os
import scipy.spatial
import bob.measure

dask_client = None

###
image_size = 112

# eyes position in the vertical axis
# final position will be image_size/height_denominators
height_denominators = [4.5, 4, 3.5, 3, 2.8]

# Eyes distance to be explored
eyes_distances = [30, 35, 40, 42, 45, 48]

output_path = "./50-shades"

######## CHANGE YOUR FEATURE EXTRACTOR HERE
from bob.bio.face.embeddings.mxnet import ArcFaceInsightFace_LResNet100

extractor_transformer = wrap(["sample"], ArcFaceInsightFace_LResNet100())

### CHANGE YOUR MATCHER HERE
algorithm = Distance(
    distance_function=scipy.spatial.distance.cosine, is_distance_function=True
)

##### CHANGE YOUR DATABASE HERE
database = MobioDatabase(protocol="mobile0-male")
sample = database.references()[0][0]

import matplotlib.pyplot as plt
import bob.io.image
```
%% Cell type:markdown id: tags:
## Setting up the grid

If you want to run this on the cluster, don't forget to run `SETSHELL grid` before executing the cell below.
%% Cell type:code id: tags:
```python
# Starting dask client
from dask.distributed import Client
from bob.pipelines.distributed.sge import SGEMultipleQueuesCluster

cluster = SGEMultipleQueuesCluster(min_jobs=1)
dask_client = Client(cluster)
```
%% Cell type:markdown id: tags:
## Running different face crops

Here we vary the `eyes_distances` and the ratio `image_size/height_denominators`, generating the transformers and plotting the outcome.
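The geometry behind the two parameters can be sanity-checked in isolation: for a given eye distance `e` and denominator `h`, both eyes land on row `final_size/h`, and since `final_size/right_eye_offset` simplifies to `final_size/left_eye_offset - e`, the eyes end up exactly `e` pixels apart horizontally. A minimal plain-Python sketch of that check (no bob imports needed; the names mirror the arithmetic in the cell below):

```python
# Sketch of the eye-position arithmetic used in get_cropers below.
final_size = 112
left_eye_offset = 1.49


def eye_positions(e, h):
    """Return ((reye_y, reye_x), (leye_y, leye_x)) for eye distance e and denominator h."""
    right_eye_offset = (final_size * left_eye_offset) / (final_size - e * left_eye_offset)
    reye = (final_size / h, final_size / right_eye_offset)
    leye = (final_size / h, final_size / left_eye_offset)
    return reye, leye


reye, leye = eye_positions(40, 4)
# Both eyes sit on the same row: final_size / h = 112 / 4 = 28.0 ...
assert reye[0] == leye[0] == 28.0
# ... and are exactly e = 40 pixels apart horizontally.
assert abs((leye[1] - reye[1]) - 40) < 1e-9
```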
%% Cell type:code id: tags:
```python
%matplotlib widget

annotation_type = "eyes-center"
fixed_positions = None


def get_cropers(
    final_size=112,
    height_denominators=[5, 4, 3, 2],
    eyes_distances=[30, 35, 40, 42, 45],
):
    left_eye_offset = 1.49

    transformers = []
    for e in eyes_distances:
        for h in height_denominators:
            right_eye_offset = (final_size * left_eye_offset) / (
                final_size - e * left_eye_offset
            )

            RIGHT_EYE_POS = (final_size / h, final_size / right_eye_offset)
            LEFT_EYE_POS = (final_size / h, final_size / left_eye_offset)
            # RIGHT_EYE_POS = (final_size / 3.44, final_size / 3.02)
            # LEFT_EYE_POS = (final_size / 3.44, final_size / 1.49)

            cropped_positions = {
                "leye": LEFT_EYE_POS,
                "reye": RIGHT_EYE_POS,
            }
            # print(cropped_positions)

            preprocessor_transformer = FaceCrop(
                cropped_image_size=(112, 112),
                cropped_positions=cropped_positions,
                color_channel="rgb",
                fixed_positions=fixed_positions,
            )

            transform_extra_arguments = (
                None
                if (cropped_positions is None or fixed_positions is not None)
                else (("annotations", "annotations"),)
            )

            preprocessor_transformer = wrap(
                ["sample"],
                preprocessor_transformer,
                transform_extra_arguments=transform_extra_arguments,
            )

            transformers.append(preprocessor_transformer)

    return transformers


def plot_faces(transformers, database, subplot_shape, fnmrs=None):
    fig, axis = plt.subplots(subplot_shape[0], subplot_shape[1])

    offset = 0
    for ax_h in axis:
        for ax_w in ax_h:
            # Picking the first sample
            sample = database.references()[0][0]

            preprocessor_transformer = transformers[offset]

            cropped = preprocessor_transformer.transform([sample])[0]
            cropped = bob.io.image.to_matplotlib(cropped.data).astype("uint8")
            ax_w.imshow(cropped)

            reye_y = round(preprocessor_transformer.estimator.cropped_positions["reye"][0], 2)
            reye_x = round(preprocessor_transformer.estimator.cropped_positions["reye"][1], 2)
            leye_y = round(preprocessor_transformer.estimator.cropped_positions["leye"][0], 2)
            leye_x = round(preprocessor_transformer.estimator.cropped_positions["leye"][1], 2)

            if fnmrs is None:
                title = f"({reye_y},{reye_x}) - ({leye_y},{leye_x})"
            else:
                title = f"({reye_y},{reye_x}) - ({leye_y},{leye_x}) = {fnmrs[offset]}"

            ax_w.set_title(f"{title}", fontsize=5)
            ax_w.axis("off")

            offset += 1


subplot_shape = (
    int((len(eyes_distances) * len(height_denominators)) / len(height_denominators)),
    len(height_denominators),
)

transformers = get_cropers(
    final_size=image_size,
    height_denominators=height_denominators,
    eyes_distances=eyes_distances,
)

plot_faces(transformers, database, subplot_shape)
```
%%%% Output: display_data
%% Cell type:markdown id: tags:
## Run vanilla biometrics

Here we run the Vanilla Biometrics pipeline several times, collecting the `1-FNMR@FMR=0.001` for each crop and plotting it.
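The metric itself is simple: pick the score threshold at which the false-match rate over impostor scores reaches 0.001, then measure the fraction of genuine scores rejected at that threshold. A self-contained stand-in for the `bob.measure.far_threshold`/`fprfnr` calls used below (assumptions: higher score means more genuine, and the synthetic Gaussian scores are for illustration only):

```python
import numpy as np


def fnmr_at_fmr(neg, pos, fmr_target=0.001):
    """Toy stand-in for bob.measure's far_threshold + fprfnr (higher = genuine)."""
    neg = np.sort(np.asarray(neg))
    # smallest impostor score such that at most fmr_target of impostors exceed it
    thres = neg[int(np.ceil((1.0 - fmr_target) * len(neg))) - 1]
    fnmr = float(np.mean(np.asarray(pos) <= thres))
    return thres, fnmr


# Synthetic, well-separated score distributions (illustration only)
rng = np.random.default_rng(0)
neg = rng.normal(0.0, 1.0, 10_000)  # impostor scores
pos = rng.normal(4.0, 1.0, 1_000)   # genuine scores
thres, fnmr = fnmr_at_fmr(neg, pos)
```

Note that the cell below plots `1 - fnr` (i.e. `1-FNMR`), so higher plotted values are better.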
%% Cell type:code id: tags:
```python
"""
vanilla_biometrics(
    pipeline,
    database,
    dask_client,
    groups,
    output,
    write_metadata_scores,
    checkpoint,
    dask_partition_size,
    dask_n_workers,
)
"""
from sklearn.pipeline import make_pipeline

write_metadata_scores = False
checkpoint = False
dask_partition_size = None
dask_n_workers = 15

### Preparing the pipeline
fnmrs = []
for t in transformers:
    # Chain the Transformers together
    transformer = make_pipeline(t, extractor_transformer)

    # Assemble the Vanilla Biometric pipeline and execute
    pipeline = VanillaBiometricsPipeline(transformer, algorithm)

    execute_vanilla_biometrics(
        pipeline,
        database,
        dask_client,
        ["dev"],
        output_path,
        write_metadata_scores,
        checkpoint,
        dask_partition_size,
        dask_n_workers,
        allow_scoring_with_all_biometric_references=True,
    )

    scores_dev = os.path.join(output_path, "scores-dev")

    # Picking FNMR@FAR=0.001
    neg, pos = bob.bio.base.score.load.split_four_column(scores_dev)
    far_thres = bob.measure.far_threshold(neg, pos, 0.001)
    fpr, fnr = bob.measure.fprfnr(neg, pos, far_thres)
    fnmr_1 = round(1 - fnr, 2)
    fnmrs.append(fnmr_1)

plot_faces(transformers, database, subplot_shape, fnmrs)
```
%%%% Output: stream
There's no data to train background model.For the rest of the execution it will be assumed that the pipeline is stateless.
%%%% Output: display_data
%% Cell type:code id: tags:
```python
# Shutting down the dask client
dask_client.shutdown()
```