diff --git a/doc/example.rst b/doc/example.rst
deleted file mode 100644
index f1ba11c52966c798887ed31e1b09637846e04129..0000000000000000000000000000000000000000
--- a/doc/example.rst
+++ /dev/null
@@ -1,154 +0,0 @@
-.. vim: set fileencoding=utf-8 :
-
-.. _bob.iris_example:
-
-=============================================================
- A Complete Application: Analysis of the Fisher Iris Dataset
-=============================================================
-
-The `Iris flower data set`_, or Fisher's Iris data set, is a multivariate data set introduced by Sir Ronald Aylmer Fisher (1936) as an example of discriminant analysis.
-It is sometimes called Anderson's Iris data set, because Edgar Anderson collected the data to quantify the morphologic variation of Iris flowers of three related species.
-The dataset consists of 50 samples from each of three species of Iris flowers (Iris setosa, Iris virginica and Iris versicolor).
-Four features were measured from each sample: the length and the width of the sepal and petal, in centimeters.
-Based on the combination of these four features, Fisher developed a linear discriminant model to distinguish the species from each other.
-
-In this example, we collect bits and pieces of the previous tutorials and build a complete example that discriminates Iris species based on Bob.
-
-.. note::
-
-   This example will consider all 3 classes for the LDA training.
-   This is **not** what Fisher did in his paper [Fisher1936]_ .
-   In that work Fisher did the *right* thing only for the first 2-class problem (setosa *versus* versicolor).
-   You can reproduce the 2-class LDA using Bob's LDA training system without problems.
-   When inserting the virginica class, Fisher decides for a different metric (:math:`4vi + ve - 5se`) and solves for the matrices in the last row of Table VIII.
-
-   This is OK, but does not generalize the method proposed in the beginning of his paper.
-   Results achieved by the generalized LDA method [Duda1973]_ will not match Fisher's result on that last table, be aware.
-   That being said, the final histogram presented in that paper looks quite similar to the one produced by this script, showing that Fisher's solution was a good approximation for the generalized LDA implementation available in Bob.
-
-.. [Fisher1936] **R. A. Fisher**, *The Use of Multiple Measurements in Taxonomic Problems*, Annals of Eugenics, pp. 179-188, 1936
-.. [Duda1973] **R. O. Duda and P. E. Hart**, *Pattern Classification and Scene Analysis*, (Q327.D83) John Wiley & Sons. ISBN 0-471-22361-1. 1973 (See page 218).
-
-
-.. testsetup:: iris
-
-   import bob
-   import numpy
-   import matplotlib
-   if not hasattr(matplotlib, 'backends'):
-       matplotlib.use('pdf')  # non-interactive avoids exception on display
-
-
-Training a :py:class:`bob.learn.linear.Machine` with LDA
---------------------------------------------------------
-
-Creating a :py:class:`bob.learn.linear.Machine` to perform Linear Discriminant Analysis on the Iris dataset involves using the :py:class:`bob.learn.linear.FisherLDATrainer`:
-
-.. doctest:: iris
-
-   >>> import bob.db.iris
-   >>> import bob.learn.linear
-   >>> trainer = bob.learn.linear.FisherLDATrainer()
-   >>> data = bob.db.iris.data()
-   >>> machine, unused_eigen_values = trainer.train(data.values())
-   >>> machine.shape
-   (4, 2)
-
-That is it! The returned :py:class:`bob.learn.linear.Machine` is now set up to perform LDA on the Iris data set.
-A few things should be noted:
-
-1. The returned :py:class:`bob.learn.linear.Machine` represents the linear projection of the input features to a new 2D space which maximizes the between-class scatter and minimizes the within-class scatter.
-   In other words, the internal matrix :math:`\mathbf{W}` is 4-by-2.
-   The projections are calculated internally using `Singular Value Decomposition`_ (SVD).
-   The first projection (first row of :math:`\mathbf{W}`) corresponds to the highest eigenvalue resulting from the decomposition, the second, the second highest, and so on;
-
-2. The trainer also returns the eigenvalues generated after the SVD for our LDA implementation, in case you would like to use them.
-   For this example, we just discard this information.
-
-Looking at the first LDA component
-----------------------------------
-
-To reproduce Fisher's results, we must pass the data through the created machine:
-
-.. doctest:: iris
-
-   >>> output = {}
-   >>> for key in data:
-   ...     output[key] = machine.forward(data[key])
-
-At this point the variable ``output`` contains the LDA-projected information as 2D :py:class:`numpy.ndarray` objects.
-The only step missing is the visualization of the results.
-Fisher proposed the use of a histogram showing the separation achieved by looking at the first component only.
-Let's reproduce it.
-
-.. doctest:: iris
-
-   >>> from matplotlib import pyplot
-   >>> pyplot.hist(output['setosa'][:,0], bins=8, color='green', label='Setosa', alpha=0.5) # doctest: +SKIP
-   >>> pyplot.hist(output['versicolor'][:,0], bins=8, color='blue', label='Versicolor', alpha=0.5) # doctest: +SKIP
-   >>> pyplot.hist(output['virginica'][:,0], bins=8, color='red', label='Virginica', alpha=0.5) # doctest: +SKIP
-
-We can certainly throw in more decoration:
-
-.. doctest:: iris
-
-   >>> pyplot.legend() # doctest: +SKIP
-   >>> pyplot.grid(True) # doctest: +SKIP
-   >>> pyplot.axis([-3,+3,0,20]) # doctest: +SKIP
-   >>> pyplot.title("Iris Plants / 1st. LDA component") # doctest: +SKIP
-   >>> pyplot.xlabel("LDA[0]") # doctest: +SKIP
-   >>> pyplot.ylabel("Count") # doctest: +SKIP
-
-Finally, to display the plot, do:
-
-.. code-block:: python
-
-   >>> pyplot.show()
-
-You should see an image like this:
-
-.. plot:: plot/iris_lda.py
-
-
-Measuring performance
----------------------
-
-You can measure the performance of the system on classifying, say, *Iris Virginica* as compared to the other two variants.
-We can use the functions in :ref:`bob.measure` for that purpose.
-Let's first find a threshold that separates this variant from the others.
-We choose to find the threshold at the point where the relative error rate considering both *Versicolor* and *Setosa* variants is the same as for the *Virginica* one.
-
-.. doctest:: iris
-
-   >>> import bob.measure
-   >>> negatives = numpy.vstack([output['setosa'], output['versicolor']])[:,0]
-   >>> positives = output['virginica'][:,0]
-   >>> threshold = bob.measure.eer_threshold(negatives, positives)
-
-With the threshold at hand, we can estimate the number of correctly classified *negatives* (or true rejections) and *positives* (or true accepts).
-Let's translate that: plants from the *Versicolor* and *Setosa* variants that have the first LDA component smaller than the threshold (the so-called *negatives* at this point) and plants from the *Virginica* variant that have the first LDA component greater than the defined threshold (the *positives*).
-To calculate the rates, we just use :ref:`bob.measure` again:
-
-.. doctest:: iris
-
-   >>> true_rejects = bob.measure.correctly_classified_negatives(negatives, threshold)
-   >>> true_accepts = bob.measure.correctly_classified_positives(positives, threshold)
-
-From that you can calculate, for example, the number of misses at the defined threshold:
-
-.. doctest:: iris
-
-   >>> sum(true_rejects)
-   98
-   >>> sum(true_accepts)
-   49
-
-You can also plot an ROC curve.
-Here is the full code that will lead you to the following plot:
-
-.. plot:: plot/iris_lda_roc.py
-   :include-source: True
-
-.. include:: links.rst
-
-
diff --git a/doc/index.rst b/doc/index.rst
index 23e570667b94a7d6469ceb10a6002d78b68f6bc5..a98466d1e65d61938925204234b4bc1022f3b129 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -1,6 +1,5 @@
 .. vim: set fileencoding=utf-8 :
 
-.. _bob_main_page:
 
 =======================
  Bob
 =======================
@@ -9,23 +8,6 @@
 Bob_ is a free signal-processing and machine learning toolbox originally
 developed by the Biometrics group at Idiap_ Research Institute, Switzerland.
 
-The toolbox is written in a mix of Python_ and C++_ and is designed to be
-both efficient and reduce development time. It is composed of a reasonably
-large number of packages_ that implement tools for image, audio & video
-processing, machine learning & pattern recognition, and a lot more task
-specific packages.
+The documentation of Bob has moved. Please visit https://www.idiap.ch/software/bob for
+up-to-date links.
-
-.. todolist::
-
-.. toctree::
-   :maxdepth: 2
-
-   install
-   tutorial
-   example
-   list
-   source
-   Bob's Wiki
-
-
-.. include:: links.rst
diff --git a/doc/install.rst b/doc/install.rst
deleted file mode 100644
index fd963f0497ba0d54af345ccee2bf1be2a3f885d4..0000000000000000000000000000000000000000
--- a/doc/install.rst
+++ /dev/null
@@ -1,114 +0,0 @@
-.. _bob.install:
-
-***************************
- Installation Instructions
-***************************
-
-We offer pre-compiled binary installations of Bob using conda_ for Linux and
-MacOS 64-bit operating systems.
-
-#. Please install conda_ (miniconda is preferred) and get familiar with it.
-#. Make sure you have an up-to-date conda_ installation (conda 4.4 and above
-   is needed) with the **correct configuration** by running the commands
-   below:
-
-   .. code:: sh
-
-      $ conda update -n base conda
-      $ conda config --set show_channel_urls True
-
-#. Create an environment for Bob:
-
-   .. code:: sh
-
-      $ conda create --name bob_py3 --override-channels \
-        -c https://www.idiap.ch/software/bob/conda -c defaults \
-        python=3 bob
-      $ conda activate bob_py3
-      $ conda config --env --add channels defaults
-      $ conda config --env --add channels https://www.idiap.ch/software/bob/conda
-
-#. Install the Bob packages that you need in that environment:
-
-   .. code:: sh
-
-      $ conda install bob.io.image bob.bio.base ...
-
-**Repeat the last two steps for every conda environment that you create for
-Bob.**
-
-For a comprehensive list of packages that are either part of |project| or use
-|project|, please visit packages_.
-
-.. warning::
-
-   Be aware that if you use packages from our channel and other user/community
-   channels (especially conda-forge) in one environment, you may end up
-   with a broken environment. We can only guarantee that the packages in our
-   channel are compatible with the defaults channel.
-
-.. note::
-
-   Bob does not work on Windows and hence no conda packages are available for
-   it. It will not work even if you install it from source. If you are an
-   experienced user and manage to make Bob work on Windows, please let us know
-   through our `mailing list`_.
-
-.. note::
-
-   Bob has been reported to run on ARM processors (e.g. Raspberry Pi) but is
-   not installable with conda. Please see :ref:`bob.source` for instructions
-   on how to install Bob from source.
-
-
-Installing older versions of Bob
-================================
-
-Since Bob 4, you can easily select the Bob version that you want to install
-using conda. For example:
-
-.. code:: sh
-
-   $ conda install bob=4.0.0 bob.io.base
-
-will install the version of bob.io.base that was associated with the Bob
-4.0.0 release.
-
-Bob packages that were released before Bob 4 are not easily installable.
-We provide conda environment files (**Linux 64-bit only**) that will install
-all Bob packages associated with an older release of Bob:
-
-=========== ==============================================================
-Bob Version Environment Files
-=========== ==============================================================
-2.6.2       :download:`envs/v262py27.yaml`, :download:`envs/v262py35.yaml`
-2.7.0       :download:`envs/v270py27.yaml`, :download:`envs/v270py35.yaml`
-3.0.0       :download:`envs/v300py27.yaml`, :download:`envs/v300py36.yaml`
-=========== ==============================================================
-
-To install them, download one of the files above and run:
-
-.. code:: sh
-
-   $ conda env create --file v300py36.yaml
-
-
-Details (Advanced Users)
-========================
-
-Since Bob 4, the ``bob`` conda package is just a meta package that pins all
-packages to a specific version. Installing ``bob`` will not install anything;
-it will just impose pinnings in your environment. Normally, installations of
-Bob packages should work without installing ``bob`` itself. For example,
-running:
-
-.. code:: sh
-
-   $ conda create --name env_name --override-channels \
-     -c https://www.idiap.ch/software/bob/conda -c defaults \
-     bob
-
-should always create a working environment. If it doesn't, please let us know.
-
-
-.. include:: links.rst
diff --git a/doc/links.rst b/doc/links.rst
deleted file mode 100644
index 1e5adfa94108c2c199f01ba25dcab92e403aa15b..0000000000000000000000000000000000000000
--- a/doc/links.rst
+++ /dev/null
@@ -1,51 +0,0 @@
-.. _anaconda: https://www.continuum.io/anaconda
-.. _Artistic-2.0: http://www.opensource.org/licenses/Artistic-2.0
-.. _Blitz++: http://www.oonumerics.org/blitz
-.. _Bob: https://www.idiap.ch/software/bob
-.. _Boost: http://www.boost.org
-.. _BSD-2-Clause: http://www.opensource.org/licenses/BSD-2-Clause
-.. _BSD-3-Clause: http://www.opensource.org/licenses/BSD-3-Clause
-.. _BSL-1.0: http://www.opensource.org/licenses/BSL-1.0
-.. _c++: http://www2.research.att.com/~bs/C++.html
-.. _CMake: http://www.cmake.org
-.. _conda: https://conda.io/
-.. _Dvipng: http://savannah.nongnu.org/projects/dvipng/
-.. _FFMpeg: http://ffmpeg.org
-.. _fftw: http://www.fftw.org/
-.. _giflib: http://giflib.sourceforge.net/
-.. _GPL-2.0: http://www.opensource.org/licenses/GPL-2.0
-.. _GPL-3.0: http://www.opensource.org/licenses/GPL-3.0
-.. _HDF5 License: ftp://ftp.hdfgroup.org/HDF5/current/src/unpacked/COPYING
-.. _HDF5: http://www.hdfgroup.org/HDF5
-.. _idiap: http://www.idiap.ch
-.. _install: https://www.idiap.ch/software/bob/install
-.. _IPython: http://ipython.scipy.org
-.. _Lapack: http://www.netlib.org/lapack
-.. _LaTeX: http://www.latex-project.org/
-.. _LGPL-2.1: http://www.opensource.org/licenses/LGPL-2.1
-.. _libAV: http://libav.org
-.. _libjpeg: http://libjpeg.sourceforge.net/
-.. _libpng license: http://www.libpng.org/pub/png/src/libpng-LICENSE.txt
-.. _libpng: http://libpng.org/pub/png/libpng.html
-.. _LIBSVM: http://www.csie.ntu.edu.tw/~cjlin/libsvm/
-.. _libtiff: http://www.remotesensing.org/libtiff/
-.. _mailing list: https://www.idiap.ch/software/bob/discuss
-.. _MatIO: http://matio.sourceforge.net
-.. _Matplotlib: http://matplotlib.sourceforge.net
-.. _miniconda: http://conda.pydata.org/miniconda.html
-.. _MIT: http://www.opensource.org/licenses/MIT
-.. _nose: http://nose.readthedocs.org
-.. _NumPy Reference: https://docs.scipy.org/doc/numpy/
-.. _NumPy: http://www.numpy.org
-.. _packages: https://www.idiap.ch/software/bob/packages
-.. _Pillow: http://python-pillow.github.io/
-.. _pkg-config: http://www.freedesktop.org/wiki/Software/pkg-config/
-.. _Python-2.0: http://www.opensource.org/licenses/Python-2.0
-.. _python: http://www.python.org
-.. _SciPy: http://www.scipy.org
-.. _Setuptools: http://trac.edgewall.org/wiki/setuptools
-.. _Sphinx: http://sphinx.pocoo.org
-.. _SQLAlchemy: http://www.sqlalchemy.org/
-.. _SQLite: http://www.sqlite.org
-.. _VLFeat: http://www.vlfeat.org/
-.. _Wiki: https://www.idiap.ch/software/bob/wiki
diff --git a/doc/list.rst b/doc/list.rst
deleted file mode 100644
index a1f4a5885fea4744a8ce3e402d5e4358de0d590d..0000000000000000000000000000000000000000
--- a/doc/list.rst
+++ /dev/null
@@ -1,13 +0,0 @@
-======================
- List of Bob packages
-======================
-
-Bob is organized in several independent Python packages.
-
-* You can search PyPI_ for a comprehensive list of packages **that either use
-  Bob or are part of Bob**.
-* Also, we maintain a list of active packages_.
-
-.. include:: links.rst
diff --git a/doc/source.rst b/doc/source.rst
deleted file mode 100644
index 1d4686723d19766144f15ae892f97bb128fe64cb..0000000000000000000000000000000000000000
--- a/doc/source.rst
+++ /dev/null
@@ -1,11 +0,0 @@
-.. _bob.source:
-
-=======================
- Compiling from Source
-=======================
-
-Please refer to :ref:`bob.buildout` and :ref:`bob.extension` for a complete
-guide on how to install, develop existing, and create new |project| packages.
-
-
-.. include:: links.rst
diff --git a/doc/tutorial.rst b/doc/tutorial.rst
deleted file mode 100644
index 4bee3f233a6d6da2684dd0091dc959f75412c68f..0000000000000000000000000000000000000000
--- a/doc/tutorial.rst
+++ /dev/null
@@ -1,265 +0,0 @@
-.. _bob.tutorial:
-
-********************************
- Getting started with |project|
-********************************
-
-The following tutorial constitutes a suitable starting point to get to
-know how to use |project|'s packages and to learn its fundamental concepts.
-
-They all rely on the lab-like environment which is Python_.
-Using |project| within a Python environment is convenient because:
-
-- you can easily glue together all of the components of an experiment
-  within a single Python script (which does not require to be compiled),
-
-- scripts may easily rely on other Python tools like SciPy_ as well
-  as |project|, and
-
-- Python bindings are used to transparently run the underlying
-  efficient C++ compiled code for the key features of the library.
-
-
-Multi-dimensional Arrays
-========================
-
-The fundamental data structure of |project| is a multi-dimensional array. In
-signal processing and machine learning, arrays are a suitable representation
-for many different types of digital signals such as images, audio data and
-extracted features. Python is the working environment selected for this library,
-and so when using Python we have relied on the existing NumPy multi-dimensional
-arrays (:any:`numpy.ndarray`). This provides greater flexibility within the
-Python environment.
-
-At the C++ level, the `Blitz++`_ library is used to handle arrays. |project|
-provides internal conversion routines to transparently and efficiently convert
-NumPy ndarrays to/from Blitz++. As they are done implicitly, the user has no
-need to care about this aspect and should just use NumPy ndarrays everywhere
-while inside Python code.
-
-For an introduction and tutorials about NumPy ndarrays, just visit the `NumPy
-Reference`_ website. For a short tutorial on the bindings from NumPy ndarrays
-to Blitz++, you can read the documentation of our :ref:`bob.blitz` package.
-
-.. note::
-
-   Many functions in Bob will return multi-dimensional arrays of type
-   :any:`bob.blitz.array`, which are **wrapped** as a :any:`numpy.ndarray`. While
-   you can use these arrays in all contexts inside Bob, NumPy and SciPy, some
-   functionality of the :any:`numpy.ndarray` is **not available**. In
-   particular, resizing the arrays with :any:`numpy.ndarray.resize` will raise an
-   exception. In such cases, please make a **copy** of the array using
-   :any:`numpy.ndarray.copy`.
-
-Digital signals as multi-dimensional arrays
-===========================================
-
-For Bob, we have decided to represent digital signals directly as
-:any:`numpy.ndarray` rather than having dedicated classes for each type of
-signal. This implies that some conventions had to be defined.
-
-Vectors and matrices
---------------------
-
-A vector is represented as a 1D NumPy array, whereas a matrix is
-represented by a 2D array whose first dimension corresponds to the rows,
-and second dimension to the columns.
-
-.. code:: python
-
-   >>> import numpy
-   >>> A = numpy.array([[1, 2, 3], [4, 5, 6]], dtype='uint8')  # A is a 2x3 matrix
-   >>> print(A)
-   [[1 2 3]
-    [4 5 6]]
-   >>> b = numpy.array([1, 2, 3], dtype='uint8')  # b is a vector of length 3
-   >>> print(b)
-   [1 2 3]
-
-Images
-------
-
-**Grayscale** images are represented as 2D arrays, the first dimension
-being the height (number of rows) and the second dimension being the
-width (number of columns). For instance:
-
-.. code:: python
-
-   >>> img = numpy.ndarray((480,640), dtype='uint8')
-
-``img``, which is a 2D array, can be seen as a gray-scale image of
-dimension 640 (width) by 480 (height). In addition, ``img`` can be seen
-as a matrix with 480 rows and 640 columns. This is the reason why we
-have decided that for images, the first dimension is the height and the
-second one the width, such that it matches the matrix convention as
-well.
-
-**Color** images are represented as 3D arrays, the first dimension being
-the number of color planes, the second dimension the height and the
-third the width. As an image is just an array, it is the responsibility of
-the user to know in which color space the content is stored.
-:ref:`bob.ip.color` provides functions to perform color-space conversion:
-
-.. code:: python
-
-   >>> import bob.ip.color
-   >>> colored = numpy.ndarray((3,480,640), dtype='uint8')
-   >>> gray = bob.ip.color.rgb_to_gray(colored)
-   >>> print(gray.shape)
-   (480, 640)
-
-Videos
-------
-
-A video can be seen as a sequence of images over time. By convention, the first
-dimension is for the frame indices (time index), whereas the remaining ones are
-related to the corresponding image frame. More information about loading and
-handling video sources can be found in :ref:`bob.io.video`.
-
-Audio signals
--------------
-
-Audio signals in Bob are represented as 2D arrays: the first dimension being
-the number of channels and the second dimension corresponding to the time
-index. For instance:
-
-.. code:: python
-
-   >>> import bob.io.audio
-   >>> audio = bob.io.audio.reader("test.wav")
-   >>> audio.rate
-   16000.0
-   >>> signal = audio.load()
-   >>> signal.shape
-   (1, 268197)
-
-:ref:`bob.io.audio` supports loading a variety of audio files. Please refer to
-its documentation for more information.
-
-.. warning::
-
-   You can also use :any:`scipy.io.wavfile` to load wav files in Python, but the
-   returned data is slightly different compared to ``bob.io.audio``. In SciPy,
-   the first dimension corresponds to the time index rather than the audio
-   channel. Also in SciPy, the loaded signal may be an int8 or int16 or
-   something else depending on the audio file, but ``bob.io.audio`` always returns
-   the data as float arrays. We recommend using ``bob.io.audio`` since it
-   supports more audio formats and it is more consistent with the rest of Bob
-   packages.
-
-
-Input and output
-================
-
-The default way to read and write data from and to files with Bob is
-using the binary HDF5_ format, which has several tools to inspect those
-files. Bob's support for HDF5 files is given through the :ref:`bob.io.base`
-package.
-
-On the other hand, loading and writing of different kinds of data is provided
-in other Packages_ of Bob using a plug-in strategy.
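The plug-in strategy can be illustrated with a toy dispatcher in plain Python. Everything below is a hypothetical illustration (the registry and loader functions are invented, not Bob's actual internals); the real mechanism registers loaders when a plug-in package such as ``bob.io.image`` is imported:

```python
import os

# Hypothetical registry mapping filename extensions to loader callables.
_LOADERS = {}

def register_loader(extension, func):
    """Associate a filename extension with a loader function."""
    _LOADERS[extension.lower()] = func

def load(path):
    """Dispatch to the loader registered for the file's extension."""
    ext = os.path.splitext(path)[1].lower()
    try:
        return _LOADERS[ext](path)
    except KeyError:
        raise RuntimeError("no plug-in registered for %r files" % ext)

# Fake plug-ins standing in for image/video support:
register_loader(".jpg", lambda p: "image data from %s" % p)
register_loader(".avi", lambda p: "video data from %s" % p)

print(load("myimg.jpg"))  # dispatched to the .jpg loader
```

The point of the design is that ``load`` itself knows nothing about file formats; importing a plug-in package extends the registry.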
-Many image types can be read using :ref:`bob.io.image`, and many video codecs
-are supported through the :ref:`bob.io.video` plug-in. Also, comprehensive
-support for MATLAB files is given through the :ref:`bob.io.matlab` interface.
-
-Additionally, :ref:`bob.io.base` provides two generic functions,
-:any:`bob.io.base.load` and :any:`bob.io.base.save`, to load and save data of
-various types based on the filename extension. For example, to load a
-``.jpg`` image, simply call:
-
-.. code:: python
-
-   >>> import bob.io.base
-   >>> import bob.io.image  # under the hood: loads the Bob plug-in for image files
-   >>> img = bob.io.base.load("myimg.jpg")
-
-Image processing
-================
-
-The image processing module is split into several packages, where most
-functionality is contained in the :ref:`bob.ip.base` module. For an
-introduction to simple affine image transformations such as scaling and
-rotating images, as well as for more complex operations like Gaussian or Sobel
-filtering, please refer to :ref:`bob.ip.base`. Also, simple
-texture features like LBPs can be extracted using :any:`bob.ip.base.LBP`.
-
-Gabor wavelet functionality has made it into its own package,
-:ref:`bob.ip.gabor`. For a tutorial on how to perform a Gabor wavelet transform,
-extract Gabor jets in grid graphs and compare Gabor jets, please read
-:ref:`bob.ip.gabor`.
-
-Machine learning
-================
-
-*Machines* and *trainers* are among the core components of Bob.
-Machines represent statistical models or other functions defined by
-parameters that can be trained or set by using trainers. Two examples of
-machines are multi-layer perceptrons (MLPs) and Gaussian mixture models
-(GMMs).
-
-The operation you normally expect from a machine is to be able to feed it a
-feature vector and extract the machine's response or output for that input
-vector. It works, in many ways, similarly to signal processing blocks.
-Different types of machines will give you a different type of output.
-Here, we examine a few of the machines and trainers available in Bob.
-
-- For a start, you should read about :ref:`bob.learn.linear`,
-  which is able to perform subspace projections like PCA and LDA.
-
-- Multi-Layer Perceptron (MLP) machines and trainers are provided within the
-  :ref:`bob.learn.mlp` package.
-
-- Support vector machines are provided in :ref:`bob.learn.libsvm` through a
-  bridge to LibSVM_.
-
-- Generating strong classifiers by boosting weak classifiers is provided by
-  :ref:`bob.learn.boosting`.
-
-- K-Means clustering and Gaussian Mixture Modeling, as well as Joint
-  Factor Analysis, Inter-Session Variability and Total Variability
-  modeling and, finally, Probabilistic Linear Discriminant Analysis are
-  implemented in :ref:`bob.learn.em`.
-
-Database interfaces
-===================
-
-Bob provides an API to easily query and interface with well-known databases. A
-database contains information about the organization of the files and functions to
-query information such as the data which might be used for training a model,
-but it usually does **not** contain the data itself (except for some toy
-examples). Please visit :ref:`bob.db.base` for an excellent guide on Bob's
-databases.
-
-Bob includes a (growing) list of supported database interfaces. There are some
-small toy databases: for example, :ref:`bob.db.iris` and the :ref:`bob.db.mnist`
-database can be used to train and evaluate classification experiments. For the
-former, a detailed example on how to use Bob's machine learning techniques to
-classify the Iris flowers is given in :doc:`example`.
-
-However, most of the databases contain face images, speech data or videos that
-are used for biometric recognition and presentation attack detection
-(anti-spoofing). A complete (and growing) list of database packages can be
-found in our Packages_.
-
-Several databases that can be used for biometric recognition share a common
-interface, which is defined in the :any:`bob.bio.base.database.BioDatabase`
-interface.
-Generic functionality that is available in all verification database
-packages is defined in :ref:`bob.bio.base`, while a list of
-databases that implement this interface can be found in
-:ref:`bob.bio.face`, :ref:`bob.bio.video`,
-:ref:`bob.bio.spear`, or any other biometric package, depending
-on the modality of the database.
-
-Performance evaluation
-======================
-
-Methods in the :ref:`bob.measure` module can be used to evaluate error for
-multi-class or binary classification problems. Several evaluation
-techniques such as Root Mean Squared Error, F-score, Recognition Rates,
-False Acceptance and False Rejection Rates, and Equal Error Rates can be
-computed; functionality for plotting CMC, ROC, DET and EPC
-curves is described in more detail in :ref:`bob.measure`.
-
-
-.. include:: links.rst
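To make the false-acceptance/false-rejection terminology concrete, here is a small NumPy-only sketch of how an EER-style threshold can be found: scan candidate thresholds and keep the one where the two error rates are closest. The score values are invented, and this approximates, but is not, what ``bob.measure.eer_threshold`` actually does:

```python
import numpy

def far_frr(negatives, positives, threshold):
    """False acceptance rate (negatives scoring at or above the threshold)
    and false rejection rate (positives scoring below it)."""
    far = float(numpy.mean(negatives >= threshold))
    frr = float(numpy.mean(positives < threshold))
    return far, frr

def toy_eer_threshold(negatives, positives):
    """Return the candidate threshold where |FAR - FRR| is smallest."""
    candidates = numpy.sort(numpy.concatenate([negatives, positives]))
    gaps = [abs(far_frr(negatives, positives, t)[0]
                - far_frr(negatives, positives, t)[1]) for t in candidates]
    return float(candidates[int(numpy.argmin(gaps))])

# Invented, well-separated score distributions:
neg = numpy.array([-2.0, -1.5, -1.0, -0.5, 0.2])
pos = numpy.array([0.8, 1.0, 1.5, 2.0, 2.5])
t = toy_eer_threshold(neg, pos)
far, frr = far_frr(neg, pos, t)
print(t, far, frr)  # 0.8 0.0 0.0
```

With overlapping score distributions the two rates would meet at a non-zero value instead; the separation in this toy data is what makes both rates vanish at the chosen threshold.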