Commit a8cd5c4c authored by Hatef OTROSHI

+ notebook: Extract_ArcFace_from_MOBIO

parent 7e057fa5
Pipeline #50829 passed with stage in 26 minutes and 59 seconds
{
"cells": [
{
"cell_type": "markdown",
"id": "54da1cf9",
"metadata": {},
"source": [
"# Extracting embedding features from face data\n",
"In this notebook, we aim to extract embedding features from images using face recogntion extractors.\n",
"As an example, we use MOBIO dataset, and extract Arcface features from the face images:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "3e7ff891",
"metadata": {},
"outputs": [],
"source": [
"##### CHANGE YOUR DATABASE HERE\n",
"from bob.bio.face.config.database.mobio_male import database\n",
"annotation_type = database.annotation_type\n",
"fixed_positions = database.fixed_positions"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a5fba83d",
"metadata": {},
"outputs": [],
"source": [
"from bob.bio.face.config.baseline.arcface_insightface import load\n",
"pipeline = load(annotation_type, fixed_positions) #pre-process and feature extraction pipeline\n",
"transformer = pipeline.transformer"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "4d610eb0",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CheckpointWrapper(estimator=Pipeline(steps=[('samplewrapper-1',\n",
" SampleWrapper(estimator=FaceCrop(cropped_image_size=(112,\n",
" 112),\n",
" cropped_positions={'leye': (55,\n",
" 81),\n",
" 'reye': (55,\n",
" 42)}),\n",
" fit_extra_arguments=(),\n",
" transform_extra_arguments=(('annotations',\n",
" 'annotations'),))),\n",
" ('samplewrapper-2',\n",
" SampleWrapper(estimator=ArcFaceInsightFace(),\n",
" fit_extra_arguments=(),\n",
" transform_extra_arguments=()))]),\n",
" features_dir='features',\n",
" load_func=<function load at 0x7f5c5424d4c0>,\n",
" save_func=<function save at 0x7f5c5424d670>)\n"
]
}
],
"source": [
"import bob.pipelines\n",
"\n",
"features_dir = \"features\" #Path to store extracted features\n",
"transformer = bob.pipelines.CheckpointWrapper(transformer, features_dir=features_dir)\n",
"\n",
"# Printing the setup of the transformer\n",
"print(transformer)"
]
},
{
"cell_type": "markdown",
"id": "7ea60d56",
"metadata": {},
"source": [
"As an example, we consider 10 samples from this database and extract features for these samples:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "bb65175a",
"metadata": {},
"outputs": [],
"source": [
"# get 10 samples from database\n",
"samples = database.all_samples()[:10]"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "aee7754f",
"metadata": {},
"outputs": [],
"source": [
"features = transformer.transform(samples)"
]
},
{
"cell_type": "markdown",
"id": "bb27ce2a",
"metadata": {},
"source": [
"In the following cells, we convert the extracted features to `numpy.array` and check the size of features."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "a0a9efe1",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"from bob.pipelines import SampleBatch\n",
"\n",
"np_features = np.array(SampleBatch(features))"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "92971828",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(10, 512)"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"np_features.shape"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
}
},
"nbformat": 4,
"nbformat_minor": 5
}