Commit 5725ca2f authored by Manuel Günther's avatar Manuel Günther


Re-arranged the classes to better fit the current structure of Bob packages; new README and documentation strategy.
parent c227b7e1
*~
*.swp
*.pyc
_*.so
*.so
*.dylib
bin
eggs
parts
.installed.cfg
.mr.developer.cfg
.settings
.coverage
build
*.egg-info
*egg
src
develop-eggs
sphinx
dist
*.so
CMakeLists.txt
.nfs*
.gdb_history
build
*.egg
src/
......@@ -9,6 +9,7 @@ matrix:
- SCIPYSPEC===0.12.0
- secure: IrUGhi6wLNjZTcVmyvlUW4S8gvbsjd1KGTONBBFJLe2SsczfuTIp0UrwPYAu3O2mEh04ivdgpjy/sKKH+atd+mx1WGYpyc7telS+6pzzGhW+utLnknETJYN0diK1Th62GFDhbwtvjoLcn2VMT22Agh94Ob4JvD0tzlDcZpEOOEs=
- secure: YV9GTtTddGgvvaW/o0esSY5ov8qIZJI77+mapAR138JrUi4opSXPzVk9XzHXCRR6K1IPBJ/PZfF+optX6SGWiD634A8T1vvrt8X/RrvV0Lbu3V33Qxi7UQgVRTGbR61RlzSEe1WjVXUKdZGMz+O07hwzS/MVkY88K48h/3Gtej8=
- BOB_DOCUMENTATION_SERVER=https://www.idiap.ch/software/bob/docs/latest/bioidiap/%s/master
- python: 3.2
env:
- NUMPYSPEC===1.8.0
......@@ -20,7 +21,7 @@ matrix:
before_install:
- sudo add-apt-repository -y ppa:biometrics/bob
- sudo apt-get update -qq
- sudo apt-get install -qq --force-yes libboost-all-dev libblitz1-dev libhdf5-serial-dev
- sudo apt-get install -qq --force-yes libboost-all-dev libblitz1-dev libhdf5-serial-dev texlive-latex-recommended texlive-latex-extra texlive-fonts-recommended
- if [ -n "${NUMPYSPEC}" ]; then sudo apt-get install -qq libatlas-dev libatlas-base-dev liblapack-dev gfortran; fi
- if [ -n "${NUMPYSPEC}" ]; then pip install --upgrade pip setuptools; fi
- if [ -n "${NUMPYSPEC}" ]; then pip install --find-links http://wheels.astropy.org/ --find-links http://wheels2.astropy.org/ --use-wheel numpy$NUMPYSPEC; fi
......
......@@ -2,143 +2,36 @@
.. Manuel Guenther <manuel.guenther@idiap.ch>
.. Thu Sep 4 10:53:22 CEST 2014
.. image:: http://img.shields.io/badge/docs-stable-yellow.png
:target: http://pythonhosted.org/bob.learn.boosting/index.html
.. image:: http://img.shields.io/badge/docs-latest-orange.png
:target: https://www.idiap.ch/software/bob/docs/latest/bioidiap/bob.learn.boosting/master/index.html
.. image:: https://travis-ci.org/bioidiap/bob.learn.boosting.svg?branch=master
:target: https://travis-ci.org/bioidiap/bob.learn.boosting
.. image:: https://coveralls.io/repos/bioidiap/bob.learn.boosting/badge.png
:target: https://coveralls.io/r/bioidiap/bob.learn.boosting
.. image:: http://img.shields.io/github/tag/bioidiap/bob.learn.boosting.png
:target: https://github.com/bioidiap/bob.learn.boosting
.. image:: https://img.shields.io/badge/github-master-0000c0.png
:target: https://github.com/bioidiap/bob.learn.boosting/tree/master
.. image:: http://img.shields.io/pypi/v/bob.learn.boosting.png
:target: https://pypi.python.org/pypi/bob.learn.boosting
.. image:: http://img.shields.io/pypi/dm/bob.learn.boosting.png
:target: https://pypi.python.org/pypi/bob.learn.boosting
==========================================================================================
Generalized Boosting Framework using Stump and Look Up Table (LUT) based Weak Classifiers
==========================================================================================
=============================
Bob's extension to boosting
=============================
The package implements a generalized boosting framework, which incorporates different boosting approaches.
The Boosting algorithms implemented in this package are:
1) Gradient Boost [Fri00]_ (a generalized version of AdaBoost [FS99]_) for univariate cases, using stump decision classifiers as in [VJ04]_.
2) TaylorBoost [SMV11]_ for univariate and multivariate cases, using Look-Up-Table based classifiers [Ata12]_.
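In both approaches, the strong classifier is built additively, as the weighted sum of the weak classifiers' predictions:

.. math::

   F_T(x) = \sum_{t=1}^{T} \alpha_t f_t(x)

where :math:`f_t` is the weak classifier selected in boosting round :math:`t` and :math:`\alpha_t` is its weight.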
.. [Fri00] *Jerome H. Friedman*. **Greedy function approximation: a gradient boosting machine**. Annals of Statistics, 29:1189--1232, 2000.
.. [FS99] *Yoav Freund and Robert E. Schapire*. **A short introduction to boosting**. Journal of Japanese Society for Artificial Intelligence, 14(5):771-780, September, 1999.
.. [VJ04] *Paul Viola and Michael J. Jones*. **Robust real-time face detection**. International Journal of Computer Vision (IJCV), 57(2): 137--154, 2004.
.. [SMV11] *Mohammad J. Saberian, Hamed Masnadi-Shirazi, Nuno Vasconcelos*. **TaylorBoost: First and second-order boosting algorithms with explicit margin control**. IEEE Conference on Conference on Computer Vision and Pattern Recognition (CVPR), 2929--2934, 2011.
.. [Ata12] *Cosmin Atanasoaei*. **Multivariate boosting with look-up tables for face processing**. PhD Thesis, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, 2012.
Installation:
-------------
Bob
...
The boosting framework depends on the open source signal-processing and machine learning toolbox Bob_, which you need to download from its web page.
For more information, please read Bob's `installation instructions <https://github.com/idiap/bob/wiki/Packages>`_.
This package
............
The simplest way to download the latest stable version of the package is to use the Download button above and extract the archive into a directory of your choice.
If you want, you can also check out the latest development branch of this package using::
$ git clone https://github.com/bioidiap/bob.learn.boosting.git
Afterwards, please open a terminal in this directory and call::
$ python bootstrap.py
$ ./bin/buildout
These two commands should download and install all dependencies and leave you with a fully operational test and development environment.
Example
-------
To demonstrate the usage of the boosting algorithms, we perform binary and multi-variate classification of hand-written digits from the MNIST database.
For simplicity, we just use the pixel gray values as (discrete) features to classify the digits.
In each boosting round, a single pixel location is selected.
In case of the stump classifier, this pixel value is compared to a threshold (which is determined during training), and one of the two classes is assigned.
The LUT weak classifier selects a feature (i.e., a pixel location in the images) and determines the most probable digit for each pixel value.
Finally, the strong classifier combines several weak classifiers by a weighted sum of their predictions.
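As a rough illustration of this weighted combination, consider the following minimal sketch (plain Python with hypothetical names, not the package's actual API)::

    import numpy

    def stump_predict(features, index, threshold, polarity):
        # a decision stump compares a single feature value to a threshold
        return polarity * (1.0 if features[index] > threshold else -1.0)

    def strong_predict(features, stumps, weights):
        # the strong classifier is the weighted sum of the weak predictions
        return sum(w * stump_predict(features, *s) for s, w in zip(stumps, weights))

    x = numpy.array([3.0, 7.0, 1.0])
    stumps = [(0, 2.5, +1.0), (1, 8.0, -1.0)]  # (index, threshold, polarity)
    print(strong_predict(x, stumps, [0.7, 0.3]))  # positive sum -> class +1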
The script ``./bin/boosting_example.py`` is provided to run all of the different examples.
This script has several command line parameters, which control the behavior of the training and/or testing procedure.
All parameters have a long version (starting with ``--``) and a shortcut (starting with a single ``-``).
These parameters are (see also ``./bin/boosting_example.py --help``):
To control the type of training, you can select:
* ``--trainer-type``: Select the type of weak classifier. Possible values are ``stump`` and ``lut``.
* ``--loss-type``: Select the loss function. Possible values are ``tan``, ``log`` and ``exp``. By default, a loss function suitable to the trainer type is selected.
* ``--number-of-boosting-rounds``: The number of weak classifiers to select.
* ``--multi-variate`` (only valid for LUT trainer): Perform multi-variate classification, or binary (one-to-one) classification.
* ``--feature-selection-style`` (only valid for multi-variate training): Select the feature for each output ``independent`` or ``shared``?
To control the experimentation, you can choose:
* ``--digits``: The digits to classify. For multi-variate training, one classifier is trained for all given digits, while for uni-variate training all possible one-to-one classifiers are trained.
* ``--all-digits``: Select all 10 digits.
* ``--classifier-file``: Save the trained classifier(s) into the given file and/or read the classifier(s) from this file.
* ``--force``: Overwrite the given classifier file if it already exists.
For information and debugging purposes, it might be interesting to use:
* ``--verbose`` (can be used several times): Increases the verbosity level from 0 (error) over 1 (warning) and 2 (info) to 3 (debug). Verbosity level 2 (``-vv``) is recommended.
* ``--number-of-elements``: Reduce the number of elements per class (digit) to the given value.
Four different kinds of experiments can be performed:
1. Uni-variate classification using the stump classifier, classifying digits 5 and 6::
$ ./bin/boosting_example.py -vv --trainer-type stump --digits 5 6
2. Uni-variate classification using the LUT classifier, classifying digits 5 and 6::
$ ./bin/boosting_example.py -vv --trainer-type lut --digits 5 6
3. Multi-variate classification using LUT classifier and shared features, classifying all 10 digits::
$ ./bin/boosting_example.py -vv --trainer-type lut --all-digits --multi-variate --feature-selection-style shared
4. Multi-variate classification using LUT classifier and independent features, classifying all 10 digits::
$ ./bin/boosting_example.py -vv --trainer-type lut --all-digits --multi-variate --feature-selection-style independent
.. note::

   During the execution of the experiments, the warning message "L-BFGS returned warning '2': ABNORMAL_TERMINATION_IN_LNSRCH" might appear.
   This warning message is normal and does not influence the results much.
.. note::

   For experiment 1, the training terminates after 75 of 100 rounds since the computed weight for the weak classifier of that round is vanishing.
   Hence, performing more boosting rounds will not change the strong classifier any more.
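As a rough intuition for this behavior (borrowed from classical AdaBoost rather than from the exact line search used in this package), the weight assigned to a weak classifier with weighted error :math:`\epsilon` behaves like

.. math::

   \alpha = \frac{1}{2} \ln \frac{1 - \epsilon}{\epsilon}

which tends to zero as :math:`\epsilon \rightarrow \frac{1}{2}`, i.e., once no remaining weak classifier performs better than chance.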
Each experiment should finish within a few minutes.
The results of the above experiments should be the following (split into the classification rate on the training set and the rate on the test set):
+------------+----------+----------+
| Experiment | Training | Test |
+============+==========+==========+
| 1 | 91.04 % | 92.05 % |
+------------+----------+----------+
| 2 | 100.0 % | 95.35 % |
+------------+----------+----------+
| 3 | 97.59 % | 83.47 % |
+------------+----------+----------+
| 4 | 99.04 % | 86.25 % |
+------------+----------+----------+
Of course, you can try out different combinations of digits for experiments 1 and 2.
Getting Help
Installation
------------
To install this package -- alone or together with other `Packages of Bob <https://github.com/idiap/bob/wiki/Packages>`_ -- please read the `Installation Instructions <https://github.com/idiap/bob/wiki/Installation>`_.
For Bob_ to work properly, some additional packages must be installed.
Please make sure that you have read the `Dependencies <https://github.com/idiap/bob/wiki/Dependencies>`_ for your operating system.
In case you experience problems with the code, or with downloading the required databases and/or software, please contact manuel.guenther@idiap.ch or file a bug report at https://github.com/bioidiap/bob.learn.boosting.
Documentation
-------------
For further documentation on this package, please read the `Stable Version <http://pythonhosted.org/bob.learn.boosting/index.html>`_ or the `Latest Version <https://www.idiap.ch/software/bob/docs/latest/bioidiap/bob.learn.boosting/master/index.html>`_ of the documentation.
For a list of tutorials on this or the other packages of Bob_, or information on submitting issues, asking questions and starting discussions, please visit its website.
.. _bob: http://www.idiap.ch/software/bob
.. _bob: https://www.idiap.ch/software/bob
......@@ -5,10 +5,22 @@ import bob.io.base
import bob.extension
bob.extension.load_bob_library('bob.learn.boosting', __file__)
from .loss import *
from .trainer import *
from .machine import *
# include loss functions
from . import LossFunction # Just to get the documentation for it
from .ExponentialLoss import ExponentialLoss
from .LogitLoss import LogitLoss
from .TangentialLoss import TangentialLoss
from ._library import JesorskyLoss
# include trainers
from .StumpTrainer import StumpTrainer
from .Boosting import Boosting
from ._library import LUTTrainer
# include machines
from ._library import WeakMachine, StumpMachine, LUTMachine, BoostedMachine
# include auxiliary functions
from ._library import weighted_histogram
def get_config():
......
......@@ -17,7 +17,7 @@ static auto boostedMachine_doc = bob::extension::ClassDoc(
.add_prototype("hdf5", "")
// .add_parameter("weak_classifiers", "[bob.boosting.machine.WeakMachine]", "A list of weak machines that should be used in this strong machine")
// .add_parameter("weights", "float <#machines,#outputs>", "The list of weights for the machines.")
.add_parameter("hdf5", ":py:class:`bob.io.HDF5File`", "The HDF5 file object to read the weak classifier from")
.add_parameter("hdf5", ":py:class:`bob.io.base.HDF5File`", "The HDF5 file object to read the weak classifier from")
);
......@@ -392,7 +392,7 @@ static auto boostedMachine_load_doc = bob::extension::FunctionDoc(
true
)
.add_prototype("hdf5")
.add_parameter("hdf5", ":py:class:`bob.io.HDF5File`", "The HDF5 file to load this machine from.")
.add_parameter("hdf5", ":py:class:`bob.io.base.HDF5File`", "The HDF5 file to load this machine from.")
;
static PyObject* boostedMachine_load(
......@@ -426,7 +426,7 @@ static auto boostedMachine_save_doc = bob::extension::FunctionDoc(
true
)
.add_prototype("hdf5")
.add_parameter("hdf5", ":py:class:`bob.io.HDF5File`", "The HDF5 file to save this weak machine to.")
.add_parameter("hdf5", ":py:class:`bob.io.base.HDF5File`", "The HDF5 file to save this weak machine to.")
;
static PyObject* boostedMachine_save(
......
from . import LossFunction # Just to get the documentation for it
from .ExponentialLoss import ExponentialLoss
from .LogitLoss import LogitLoss
from .TangentialLoss import TangentialLoss
from ._library import JesorskyLoss
# gets sphinx autodoc done right - don't remove it
__all__ = [_ for _ in dir() if not _.startswith('_')]
......@@ -20,7 +20,7 @@ static auto lutMachine_doc = bob::extension::ClassDoc(
.add_parameter("index", "int", "The index into the feature vector (for the univariate case)")
.add_parameter("look_up_tables", "float <#entries,#outputs>", "The look up tables, one for each output dimension (for the multi-variate case)")
.add_parameter("indices", "int <#outputs>", "The indices into the feature vector, one for each output dimension (for the multi-variate case)")
.add_parameter("hdf5", "bob.io.HDF5File", "The HDF5 file object to read the weak classifier from")
.add_parameter("hdf5", ":py:class:`bob.io.base.HDF5File`", "The HDF5 file object to read the weak classifier from")
);
......@@ -275,7 +275,7 @@ static auto lutMachine_load_doc = bob::extension::FunctionDoc(
true
)
.add_prototype("hdf5")
.add_parameter("hdf5", "bob.io.HDF5File", "The HDF5 file to load this weak machine from.")
.add_parameter("hdf5", ":py:class:`bob.io.base.HDF5File`", "The HDF5 file to load this weak machine from.")
;
static PyObject* lutMachine_load(
......@@ -307,7 +307,7 @@ static auto lutMachine_save_doc = bob::extension::FunctionDoc(
true
)
.add_prototype("hdf5")
.add_parameter("hdf5", "bob.io.HDF5File", "The HDF5 file to save this weak machine to.")
.add_parameter("hdf5", ":py:class:`bob.io.base.HDF5File`", "The HDF5 file to save this weak machine to.")
;
static PyObject* lutMachine_save(
......
# import the C++ stuff
from ._library import WeakMachine, StumpMachine, LUTMachine, BoostedMachine
# gets sphinx autodoc done right - don't remove it
__all__ = [_ for _ in dir() if not _.startswith('_')]
......@@ -18,7 +18,7 @@ static auto stumpMachine_doc = bob::extension::ClassDoc(
.add_parameter("threshold", "float", "The decision threshold")
.add_parameter("polarity", "float", "-1 if positive values are below threshold, +1 if positive values are above threshold")
.add_parameter("index", "int", "The index into the feature vector that is thresholded")
.add_parameter("hdf5", "bob.io.HDF5File", "The HDF5 file object to read the weak classifier from")
.add_parameter("hdf5", ":py:class:`bob.io.base.HDF5File`", "The HDF5 file object to read the weak classifier from")
);
......@@ -269,7 +269,7 @@ static auto stumpMachine_load_doc = bob::extension::FunctionDoc(
true
)
.add_prototype("hdf5")
.add_parameter("hdf5", "bob.io.HDF5File", "The HDF5 file to load this weak machine from.")
.add_parameter("hdf5", ":py:class:`bob.io.base.HDF5File`", "The HDF5 file to load this weak machine from.")
;
static PyObject* stumpMachine_load(
......@@ -303,7 +303,7 @@ static auto stumpMachine_save_doc = bob::extension::FunctionDoc(
true
)
.add_prototype("hdf5")
.add_parameter("hdf5", "bob.io.HDF5File", "The HDF5 file to save this weak machine to.")
.add_parameter("hdf5", ":py:class:`bob.io.base.HDF5File`", "The HDF5 file to save this weak machine to.")
;
static PyObject* stumpMachine_save(
......
from ._library import LUTTrainer
from .StumpTrainer import StumpTrainer
from .Boosting import Boosting
# gets sphinx autodoc done right - don't remove it
__all__ = [_ for _ in dir() if not _.startswith('_')]
......@@ -41,7 +41,6 @@ extensions = [
'sphinx.ext.autosummary',
'sphinx.ext.doctest',
'sphinx.ext.intersphinx',
#'bob.sphinxext.plot', # ours adds source copying to the install directory
]
# The viewcode extension appeared only on Sphinx >= 1.0.0
......@@ -249,40 +248,15 @@ man_pages = [
('index', 'bob.project.example', u'Bob Project Example Documentation', [u'Idiap Research Institute'], 1)
]
# We want to remove all private (i.e. _. or __.__) members
# that are not in the list of accepted functions
accepted_private_functions = ['__call__']
def member_function_test(app, what, name, obj, skip, options):
# test if we have a private function
if len(name) > 1 and name[0] == '_':
# test if this private function should be allowed
if name not in accepted_private_functions:
# omit private functions that are not in the list of accepted private functions
return True
else:
# test if the method is documented
if not hasattr(obj, '__doc__') or not obj.__doc__:
return True
# Skips selected members in auto-generated documentation. Unfortunately, old
# versions of Boost.Python will not generate a __self__ member for static
# methods and that screws-up Sphinx processing.
if sphinx.__version__ < "1.0":
# We have to remove objects that do not have a __self__ attribute set
import types
if isinstance(obj, types.BuiltinFunctionType) and \
not hasattr(obj, '__self__') and what == 'class':
app.warn("Skipping %s %s (no __self__)" % (what, name))
return True
return False
# Default processing flags for sphinx
autoclass_content = 'both'
autodoc_member_order = 'bysource'
autodoc_default_flags = ['members', 'undoc-members', 'private-members', 'special-members', 'inherited-members', 'show-inheritance']
autodoc_default_flags = ['members', 'undoc-members', 'inherited-members', 'show-inheritance']
# For inter-documentation mapping:
from bob.extension.utils import link_documentation
intersphinx_mapping = link_documentation(['python', 'numpy', 'scipy'])
def setup(app):
app.connect('autodoc-skip-member', member_function_test)
pass
......@@ -19,10 +19,10 @@
===========================================
As an example for the classification task, we perform a classification of hand-written digits using the `MNIST <http://yann.lecun.com/exdb/mnist>`_ database.
There, images of single hand-written digits are stored, and a training and test set is provided, which we can access with our `xbob.db.mnist <http://pypi.python.org/pypi/xbob.db.mnist>`_ database interface.
There, images of single hand-written digits are stored, and a training and test set is provided, which we can access with our `bob.db.mnist <http://pypi.python.org/pypi/bob.db.mnist>`_ database interface.
.. note::
In fact, to minimize the dependencies to other packages, the ``xbob.db.mnist`` database interface is replaced by a local interface.
In fact, to minimize the dependencies to other packages, the ``bob.db.mnist`` database interface is replaced by a local interface.
In our experiments, we simply use the pixel gray values as features.
Since the gray values are discrete in the range :math:`[0, 255]`, we can employ both the stump decision classifiers and the look-up-table classifiers.
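For instance, turning a stack of such images into the feature matrix expected by the trainers could look as follows (a sketch that assumes ``images`` is a ``numpy`` array of shape ``(N, 28, 28)`` holding the gray values)::

    import numpy

    # flatten each 28x28 digit image into a 784-dimensional feature vector;
    # uint16 keeps the discrete gray values in [0, 255] exact for the LUT trainer
    features = images.reshape(len(images), -1).astype(numpy.uint16)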
......@@ -43,7 +43,7 @@ To control the type of training, you can select:
* ``--loss-type``: Select the loss function. Possible values are ``tan``, ``log`` and ``exp``. By default, a loss function suitable to the trainer type is selected.
* ``--number-of-boosting-rounds``: The number of weak classifiers to select.
* ``--multi-variate`` (only valid for LUT trainer): Perform multi-variate classification, or binary (one-to-one) classification.
* ``--feature-selection-style`` (only valid for multi-variate training): Select the feature for each output ``independent``ly or ``shared``?
* ``--feature-selection-style`` (only valid for multi-variate training): Select the feature for each output ``independent`` or ``shared``?
To control the experimentation, you can choose:
......@@ -59,19 +59,19 @@ For information and debugging purposes, it might be interesting to use:
Four different kinds of experiments can be performed:
1. Uni-variate classification using the stump classifier, classifying digits 5 and 6::
1. Uni-variate classification using the stump classifier :py:class:`bob.learn.boosting.StumpMachine`, classifying digits 5 and 6::
$ ./bin/boosting_example.py -vv --trainer-type stump --digits 5 6
2. Uni-variate classification using the LUT classifier, classifying digits 5 and 6::
2. Uni-variate classification using the LUT classifier :py:class:`bob.learn.boosting.LUTMachine`, classifying digits 5 and 6::
$ ./bin/boosting_example.py -vv --trainer-type lut --digits 5 6
3. Multi-variate classification using LUT classifier and shared features, classifying all 10 digits::
3. Multi-variate classification using LUT classifier :py:class:`bob.learn.boosting.LUTMachine` and shared features, classifying all 10 digits::
$ ./bin/boosting_example.py -vv --trainer-type lut --all-digits --multi-variate --feature-selection-style shared
4. Multi-variate classification using LUT classifier and independent features, classifying all 10 digits::
4. Multi-variate classification using LUT classifier :py:class:`bob.learn.boosting.LUTMachine` and independent features, classifying all 10 digits::
$ ./bin/boosting_example.py -vv --trainer-type lut --all-digits --multi-variate --feature-selection-style independent
......@@ -138,7 +138,8 @@ Here, we describe the more complex way, i.e., the multi-variate case.
[ 1., -1.],
[ 1., -1.]])
Now, we can train the classifier. Here, we use the multi-variate LUT trainer with logit loss:
Now, we can train the classifier using the :py:class:`bob.learn.boosting.Boosting` boosting trainer.
Here, we use the multi-variate LUT trainer :py:class:`bob.learn.boosting.LUTTrainer` with logit loss :py:class:`bob.learn.boosting.LogitLoss`:
.. doctest::
......@@ -153,7 +154,7 @@ Now, we can train the classifier. Here, we use the multi-variate LUT trainer wit
>>> # perform training for 10 rounds (i.e., select 10 weak machines)
>>> strong_classifier = strong_trainer.train(training_samples.astype(numpy.uint16), training_targets, 10)
Having the strong classifier, we can classify the test samples:
Having the strong classifier (which is of type :py:class:`bob.learn.boosting.BoostedMachine`), we can classify the test samples:
.. doctest::
......@@ -179,3 +180,4 @@ Having the strong classifier, we can classify the test samples:
2004
>>> classification.shape[0]
2115
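From these values, an overall test accuracy can be read off (a hypothetical follow-up, assuming, as the outputs above suggest, that 2004 of the 2115 test samples were classified correctly)::

    >>> # sketch: fraction of correctly classified test samples
    >>> print('%.2f%%' % (100.0 * 2004 / classification.shape[0]))  # doctest: +SKIP
    94.75%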
......@@ -4,9 +4,9 @@
..
.. Copyright (C) 2011-2013 Idiap Research Institute, Martigny, Switzerland
=============================
Bob's extension to boosting
=============================
===========================================================================================
Generalized Boosting Framework using Stump and Look Up Table (LUT) based Weak Classifiers
===========================================================================================
.. todolist::
......
......@@ -7,25 +7,26 @@ This section includes information for using the Python API of ``bob.learn.boosti
Machines
........
The :py:mod:`bob.learn.boosting.machine` sub-module contains classifiers that can predict classes for given input values.
The strong classifier is the :py:class:`bob.learn.boosting.BoostedMachine`, which is a weighted combination of :py:class:`bob.learn.boosting.WeakMachine`.
Weak machines might be a :py:class:`bob.learn.boosting.LUTMachine` or a :py:class:`bob.learn.boosting.StumpMachine`.
Theoretically, the strong classifier can consist of different types of weak classifiers, but usually all weak classifiers have the same type.
The :py:mod:`bob.learn.boosting` module contains classifiers that can predict classes for given input values:
.. automodule:: bob.learn.boosting.machine
* :py:class:`bob.learn.boosting.BoostedMachine` : the strong classifier, which is a weighted combination of several machines of type :py:class:`bob.learn.boosting.WeakMachine`.
Weak machines might be:
Trainers
........
* :py:class:`bob.learn.boosting.LUTMachine` : A weak machine that performs a classification by a look-up-table thresholding.
* :py:class:`bob.learn.boosting.StumpMachine` : A weak machine that performs classification by simple thresholding.
The :py:mod:`bob.learn.boosting.trainer` sub-module contains trainers that train:
Theoretically, the strong classifier can consist of different types of weak classifiers, but usually all weak classifiers have the same type.
* :py:class:`bob.learn.boosting.Boosting` : a strong machine of type :py:class:`bob.learn.boosting.BoostedMachine`
* :py:class:`bob.learn.boosting.LUTTrainer` : a weak machine of type :py:class:`bob.learn.boosting.LUTMachine`
* :py:class:`bob.learn.boosting.StumpTrainer` : a weak machine of type :py:class:`bob.learn.boosting.StumpMachine`
Trainers
........
Available trainers in :py:mod:`bob.learn.boosting` are:
.. automodule:: bob.learn.boosting.trainer
* :py:class:`bob.learn.boosting.Boosting` : Trains a strong machine of type :py:class:`bob.learn.boosting.BoostedMachine`.
* :py:class:`bob.learn.boosting.LUTTrainer` : Trains a weak machine of type :py:class:`bob.learn.boosting.LUTMachine`.
* :py:class:`bob.learn.boosting.StumpTrainer` : Trains a weak machine of type :py:class:`bob.learn.boosting.StumpMachine`.
Loss functions
......@@ -39,10 +40,12 @@ A base class loss function :py:class:`bob.learn.boosting.LossFunction` is called
Not all combinations of loss functions and weak trainers make sense.
Here is a list of useful combinations:
1. :py:class:`bob.learn.boosting.ExponentialLoss` with :py:class:`bob.learn.boosting.StumpTrainer` (uni-variate classification only)
2. :py:class:`bob.learn.boosting.LogitLoss` with :py:class:`bob.learn.boosting.StumpTrainer` or :py:class:`bob.learn.boosting.LUTTrainer` (uni-variate or multi-variate classification)
3. :py:class:`bob.learn.boosting.TangentialLoss` with :py:class:`bob.learn.boosting.StumpTrainer` or :py:class:`bob.learn.boosting.LUTTrainer` (uni-variate or multi-variate classification)
4. :py:class:`bob.learn.boosting.JesorskyLoss` with :py:class:`bob.learn.boosting.LUTTrainer` (multi-variate regression only)
1. :py:class:`bob.learn.boosting.ExponentialLoss` with :py:class:`bob.learn.boosting.StumpTrainer` (uni-variate classification only).
2. :py:class:`bob.learn.boosting.LogitLoss` with :py:class:`bob.learn.boosting.StumpTrainer` or :py:class:`bob.learn.boosting.LUTTrainer` (uni-variate or multi-variate classification).
3. :py:class:`bob.learn.boosting.TangentialLoss` with :py:class:`bob.learn.boosting.StumpTrainer` or :py:class:`bob.learn.boosting.LUTTrainer` (uni-variate or multi-variate classification).
4. :py:class:`bob.learn.boosting.JesorskyLoss` with :py:class:`bob.learn.boosting.LUTTrainer` (multi-variate regression only).
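Since all of these classes now live directly in the top-level package, a compatible loss/trainer pair is imported in a single line, e.g.::

    # sketch: the flat namespace introduced by the re-arrangement
    from bob.learn.boosting import LogitLoss, LUTTrainer, Boosting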
.. automodule:: bob.learn.boosting.loss
Details
.......
.. automodule:: bob.learn.boosting
......@@ -138,6 +138,7 @@ setup(
# PyPI. You can find the complete list of classifiers that are valid and
# useful here (http://pypi.python.org/pypi?%3Aaction=list_classifiers).
classifiers = [
'Framework :: Bob',
'Development Status :: 4 - Beta',
'Intended Audience :: Developers',
'License :: OSI Approved :: GNU General Public License v3 (GPLv3)',
......