Commit a8cd5c4c, authored 4 years ago by Hatef OTROSHI

+ notebook: Extract_ArcFace_from_MOBIO

parent 7e057fa5
1 merge request: !120 Adding a tutorial notebook (Extract ArcFace from MOBIO)
Pipeline #50829 passed, 4 years ago (stage: build)
Showing 1 changed file with 171 additions and 0 deletions:

notebooks/Extract_ArcFace_from_MOBIO.ipynb  (new file, 0 → 100644, +171 −0)
%% Cell type:markdown id:54da1cf9 tags:
# Extracting embedding features from face data
In this notebook, we extract embedding features from face images using a face recognition extractor.
As an example, we use the MOBIO dataset and extract ArcFace features from the face images:
%% Cell type:code id:3e7ff891 tags:
```python
##### CHANGE YOUR DATABASE HERE
from bob.bio.face.config.database.mobio_male import database
annotation_type = database.annotation_type
fixed_positions = database.fixed_positions
```
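The `CHANGE YOUR DATABASE HERE` marker means any other `bob.bio.face` database configuration can be dropped in at this point. A minimal sketch of such a swap; the `mobio_female` module name is an assumption, so check `bob.bio.face.config.database` for the configs shipped with your installation:

```python
# Hypothetical swap: load a different database config the same way.
# (mobio_female is an assumed module name, not confirmed by this commit.)
from bob.bio.face.config.database.mobio_female import database

annotation_type = database.annotation_type
fixed_positions = database.fixed_positions
```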
%% Cell type:code id:a5fba83d tags:
```python
from bob.bio.face.config.baseline.arcface_insightface import load
pipeline = load(annotation_type, fixed_positions)  # pre-processing and feature-extraction pipeline
transformer = pipeline.transformer
```
%% Cell type:code id:4d610eb0 tags:
```python
import bob.pipelines

features_dir = "features"  # Path to store extracted features
transformer = bob.pipelines.CheckpointWrapper(transformer, features_dir=features_dir)

# Printing the setup of the transformer
print(transformer)
```
%% Output
CheckpointWrapper(estimator=Pipeline(steps=[('samplewrapper-1',
SampleWrapper(estimator=FaceCrop(cropped_image_size=(112,
112),
cropped_positions={'leye': (55,
81),
'reye': (55,
42)}),
fit_extra_arguments=(),
transform_extra_arguments=(('annotations',
'annotations'),))),
('samplewrapper-2',
SampleWrapper(estimator=ArcFaceInsightFace(),
fit_extra_arguments=(),
transform_extra_arguments=()))]),
features_dir='features',
load_func=<function load at 0x7f5c5424d4c0>,
save_func=<function save at 0x7f5c5424d670>)
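As the printed setup shows, `CheckpointWrapper` wraps the whole pipeline so that each extracted feature is saved under `features_dir` and reloaded from disk on later runs instead of being recomputed. A minimal sketch of the equivalent call through the generic `bob.pipelines.wrap` helper, assuming your bob.pipelines version provides `wrap` with the `"checkpoint"` keyword:

```python
import bob.pipelines

# Assumed API: wrap(["checkpoint"], estimator, features_dir=...) builds the
# same CheckpointWrapper as the explicit call above.
transformer = bob.pipelines.wrap(["checkpoint"], transformer, features_dir="features")
```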
%% Cell type:markdown id:7ea60d56 tags:
As an example, we take 10 samples from this database and extract features for them:
%% Cell type:code id:bb65175a tags:
```python
# Get 10 samples from the database
samples = database.all_samples()[:10]
```
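Each element of `samples` is a `bob.pipelines` sample object carrying the image together with metadata such as its annotations (the `transform_extra_arguments` in the printed setup rely on an `annotations` attribute). A short sketch for inspecting one; the exact attribute names are assumptions based on typical bob.bio.face database samples:

```python
# Peek at the first sample's metadata (assumed attributes: key, annotations).
sample = samples[0]
print(sample.key)
print(sample.annotations)
```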
%% Cell type:code id:aee7754f tags:
```python
features = transformer.transform(samples)
```
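Because the transformer is checkpointed, this call also writes one stored feature per sample under `features_dir`; rerunning it on the same samples loads those files instead of recomputing the embeddings. A quick, assumption-laden way to verify (the file layout inside the checkpoint directory is an implementation detail of `CheckpointWrapper`):

```python
import os

# List whatever CheckpointWrapper wrote; there should be one entry per sample.
for root, _, files in os.walk("features"):
    for name in files:
        print(os.path.join(root, name))
```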
%% Cell type:markdown id:bb27ce2a tags:
In the following cells, we convert the extracted features to a `numpy.array` and check the shape of the features.
%% Cell type:code id:a0a9efe1 tags:
```python
import numpy as np
from bob.pipelines import SampleBatch

np_features = np.array(SampleBatch(features))
```
%% Cell type:code id:92971828 tags:
```python
np_features.shape
```
%% Output
(10, 512)
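Each of the 10 samples thus yields a 512-dimensional ArcFace embedding. As a follow-up sketch (not part of the commit), embeddings like these are usually compared with cosine similarity:

```python
import numpy as np

# Cosine similarity between the first two embeddings; values close to 1
# indicate similar faces under the ArcFace representation.
a, b = np_features[0], np_features[1]
similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"cosine similarity: {similarity:.4f}")
```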