Commit 8ee4726a authored by Philip ABBET's avatar Philip ABBET

Add frgc/3 (api change: beat.backend.python v1.4.2)

parent b514bfa2
.. Copyright (c) 2017 Idiap Research Institute, http://www.idiap.ch/ ..
.. Contact: beat.support@idiap.ch ..
.. ..
.. This file is part of the beat.examples module of the BEAT platform. ..
.. ..
.. Commercial License Usage ..
.. Licensees holding valid commercial BEAT licenses may use this file in ..
.. accordance with the terms contained in a written agreement between you ..
.. and Idiap. For further information contact tto@idiap.ch ..
.. ..
.. Alternatively, this file may be used under the terms of the GNU Affero ..
.. Public License version 3 as published by the Free Software and appearing ..
.. in the file LICENSE.AGPL included in the packaging of this file. ..
.. The BEAT platform is distributed in the hope that it will be useful, but ..
.. WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY ..
.. or FITNESS FOR A PARTICULAR PURPOSE. ..
.. ..
.. You should have received a copy of the GNU Affero Public License along ..
.. with the BEAT platform. If not, see http://www.gnu.org/licenses/. ..

The Face Recognition Grand Challenge
------------------------------------

Changelog
=========

* **Version 3**, 31/Oct/2017:

  - Port to beat.backend.python v1.4.2

* **Version 2**, 20/Jan/2016:

  - Port to Bob v2

* **Version 1**, 08/Apr/2015:

  - Initial release

Description
===========

The `FRGC <http://www.nist.gov/itl/iad/ig/frgc.cfm>`_ data distribution
consists of three parts. The first is the FRGC data set. The second part is
the FRGC BEE. The BEE distribution includes all the data sets for performing
and scoring the six experiments. The third part is a set of baseline
algorithms for experiments 1 through 4. With all three components, it is
possible to run experiments 1 through 4, from processing the raw images
to producing Receiver Operating Characteristics (ROCs).

The data for FRGC consists of 50,000 recordings divided into training and
validation partitions. The training partition is designed for training
algorithms; the validation partition is for assessing the performance of an
approach in a laboratory setting. The validation partition consists of data
from 4,003 subject sessions. A subject session is the set of all images of a
person taken each time that person's biometric data is collected; it consists
of four controlled still images, two uncontrolled still images, and one
three-dimensional image. The controlled images, taken in a studio setting,
are full frontal facial images captured under two lighting conditions and
with two facial expressions (smiling and neutral). The uncontrolled images
were taken in varying illumination conditions, e.g., hallways, atriums, or
outdoors. Each set of uncontrolled images contains two expressions, smiling
and neutral.
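
The subject-session layout described above (four controlled stills, two
uncontrolled stills, and one three-dimensional image per session) can be
sketched as a simple record. This is an illustrative sketch only; the class
and field names below are hypothetical and are not part of the FRGC
distribution or the BEAT API:

```python
from dataclasses import dataclass
from typing import List

# Illustrative model of one FRGC subject session: four controlled stills,
# two uncontrolled stills, and one 3D image. Names are hypothetical.


@dataclass
class SubjectSession:
    subject_id: str
    controlled_stills: List[str]    # 4 studio images (2 lightings x 2 expressions)
    uncontrolled_stills: List[str]  # 2 images under varying illumination
    three_d_image: str              # path to the 3D recording

    def is_complete(self) -> bool:
        # A session is complete when it holds all seven recordings.
        return (len(self.controlled_stills) == 4
                and len(self.uncontrolled_stills) == 2
                and bool(self.three_d_image))


session = SubjectSession(
    subject_id="subject-0001",  # hypothetical identifier
    controlled_stills=["c1.jpg", "c2.jpg", "c3.jpg", "c4.jpg"],
    uncontrolled_stills=["u1.jpg", "u2.jpg"],
    three_d_image="scan.abs",
)
print(session.is_complete())  # True
```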