Commit a0d806ee authored by Manuel Günther

Fixed documentation; re-generated MNIST database interface.

parent 614f8711
include README.rst bootstrap.py buildout.cfg
recursive-include docs *.py *.rst
recursive-include xbob/boosting/cpp *.h *.cpp
recursive-include xbob/boosting/tests *.hdf5
\ No newline at end of file
recursive-include bob/learn/boosting/cpp *.h *.cpp
recursive-include bob/learn/boosting/data *.hdf5
......@@ -29,7 +29,7 @@ This package
The simplest way to download the latest stable version of the package is to use the Download button above and extract the archive into a directory of your choice.
If you want, you can also check out the latest development branch of this package using::
$ git clone https://gitlab.idiap.ch/biometric/xbob-boosting.git
$ git clone https://github.com/bioidiap/bob.learn.boosting.git
Afterwards, please open a terminal in this directory and call::
......@@ -122,6 +122,6 @@ Of course, you can try out different combinations of digits for experiments 1 an
Getting Help
------------
In case you experience problems with the code, or with downloading the required databases and/or software, please contact manuel.guenther@idiap.ch or file a bug report under https://gitlab.idiap.ch/biometric/xbob-boosting.
In case you experience problems with the code, or with downloading the required databases and/or software, please contact manuel.guenther@idiap.ch or file a bug report under https://github.com/bioidiap/bob.learn.boosting.
.. _bob: http://www.idiap.ch/software/bob
......@@ -13,10 +13,13 @@ class Boosting:
**Constructor Documentation**
Keyword parameters:
weak_trainer (a :py:class:`xbob.boosting.trainer.LUTTrainer` or :py:class:`xbob.boosting.trainer.StumpTrainer`): The class to train weak machines.
Keyword parameters
weak_trainer : :py:class:`bob.learn.boosting.LUTTrainer` or :py:class:`bob.learn.boosting.StumpTrainer`
The class to train weak machines.
loss_function (a class derived from :py:class:`xbob.boosting.loss.LossFunction`): The function to define the weights for the weak machines.
loss_function : a class derived from :py:class:`bob.learn.boosting.LossFunction`
The function to define the weights for the weak machines.
"""
......@@ -38,17 +41,20 @@ class Boosting:
Keyword parameters:
training_features (uint16 <#samples, #features> or float <#samples, #features>): Features extracted from the training samples.
training_targets (float <#samples, #outputs>): The values that the boosted classifier should reach for the given samples.
training_features : uint16 <#samples, #features> or float <#samples, #features>
Features extracted from the training samples.
number_of_rounds (int): The number of rounds of boosting, i.e., the number of weak classifiers to select.
training_targets : float <#samples, #outputs>
The values that the boosted classifier should reach for the given samples.
boosted_machine (:py:class:`xbob.boosting.machine.BoostedMachine` or None): the machine to add the weak machines to. If not given, a new machine is created.
number_of_rounds : int
The number of rounds of boosting, i.e., the number of weak classifiers to select.
Returns
boosted_machine : :py:class:`bob.learn.boosting.BoostedMachine` or None
The machine to add the weak machines to. If not given, a new machine is created.
:py:class:`xbob.boosting.machine.BoostedMachine` The boosted machine that is combination of the weak classifiers.
Returns : :py:class:`bob.learn.boosting.BoostedMachine`
The boosted machine that is the combination of the weak classifiers.
"""
# Initializations
......
......@@ -2,7 +2,7 @@ import numpy
class LossFunction:
"""This is a base class for all loss functions implemented in pure python.
It is simply a python re-implementation of the :py:class:`xbob.boosting.loss.LossFunction` class.
It is simply a python re-implementation of the :py:class:`bob.learn.boosting.LossFunction` class.
This class provides the interface for the L-BFGS optimizer.
Please overwrite the loss() and loss_gradient() functions (see below) in derived loss classes.
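A minimal sketch of such a derived class (illustrative only; the name and the
squared-error loss below are not part of this package)::

  class SquaredErrorLoss (LossFunction):
    # illustrative squared-error loss
    def loss(self, targets, scores):
      # one value per sample: squared differences summed over the outputs
      return ((targets - scores)**2).sum(axis=1, keepdims=True)
    def loss_gradient(self, targets, scores):
      # gradient of the loss w.r.t. the scores, one row per sample
      return 2. * (scores - targets)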
......
......@@ -18,7 +18,7 @@ class StumpTrainer():
loss_gradient (float<#samples>): The loss gradient values for the training samples
Returns
A (weak) :py:class:`xbob.boosting.machine.StumpMachine`
A (weak) :py:class:`bob.learn.boosting.StumpMachine`
"""
# Initialization
......
#!/usr/bin/env python
"""The test script to perform the binary classification on the digits from the MNIST dataset.
The MNIST data is exported using the xbob.db.mnist module which provide the train and test
partitions for the digits. Pixel values of grey scale images are used as features and the
available algorithms for classification are Lut based Boosting and Stump based Boosting.
The MNIST data is exported using a module similar to the xbob.db.mnist module which provides the train and test partitions for the digits.
Pixel values of grey scale images are used as features and the available algorithms for classification are Lut based Boosting and Stump based Boosting.
Thus it conducts only one binary classification test.
......@@ -40,7 +39,7 @@ def command_line_arguments(command_line_options):
parser.add_argument('-r', '--number-of-boosting-rounds', type = int, default = 100, help = "The number of boosting rounds, i.e., the number of weak classifiers.")
parser.add_argument('-m', '--multi-variate', action = 'store_true', help = "Perform multi-variate training?")
parser.add_argument('-s', '--feature-selection-style', default = 'independent', choices = {'independent', 'shared'}, help = "The feature selection style (only for multivariate classification with the LUT trainer).")
parser.add_argument('-s', '--feature-selection-style', default = 'independent', choices = ('independent', 'shared'), help = "The feature selection style (only for multivariate classification with the LUT trainer).")
parser.add_argument('-d', '--digits', type = int, nargs="+", choices=range(10), default=[5,6], help = "Select the digits you want to compare.")
parser.add_argument('-a', '--all-digits', action='store_true', help = "Use all digits")
......
from .LossFunction import LossFunction
import math
import numpy
class JesorskyLoss (LossFunction):
"""This class computes the Jesorsky loss that is used in regression tasks like feature localization."""
def _inter_eye_distance(self, targets):
"""Computes the inter-eye distance from the given target vector.
It assumes that the eye locations are stored as the first four elements of the vector,
as: [0]: re_y, [1]: re_x, [2]: le_y, [3]: le_x
"""
return math.sqrt((targets[0] - targets[2])**2 + (targets[1] - targets[3])**2)
def loss(self, targets, scores):
"""Computes the Jesorsky loss for the given target and score vectors.
Both vectors are assumed to contain feature positions in y and x,
and the first four values correspond to the eye locations:
[0]: re_y, [1]: re_x, [2]: le_y, [3]: le_x
Keyword parameters:
targets (float <#samples, #outputs>): The target values that should be reached.
scores (float <#samples, #outputs>): The scores provided by the classifier.
Returns
(float <#samples, 1>): One error for each target/score pair.
"""
# compute one error for each sample
errors = numpy.zeros((targets.shape[0],1))
for i in range(targets.shape[0]):
# compute inter-eye-distance
scale = 0.5/self._inter_eye_distance(targets[i])
# compute error for all positions
# which are assumed to be 2D points
for j in range(0, targets.shape[1], 2):
dx = scores[i,j] - targets[i,j]
dy = scores[i,j+1] - targets[i,j+1]
# sum errors
errors[i,0] += math.sqrt(dx**2 + dy**2) * scale
return errors
def loss_gradient(self, targets, scores):
Computes the gradient of the Jesorsky loss for the given target and score vectors.
Both vectors are assumed to contain feature positions in y and x,
and the first four values correspond to the eye locations:
[0]: re_y, [1]: re_x, [2]: le_y, [3]: le_x
Keyword parameters:
targets (float <#samples, #outputs>): The target values that should be reached.
scores (float <#samples, #outputs>): The scores provided by the classifier.
Returns
(float <#samples, #outputs>): One gradient vector for each target/score pair.
"""
# allocate memory for the gradients
gradient = numpy.ndarray(targets.shape, numpy.float64)
# iterate over all samples
for i in range(targets.shape[0]):
# compute inter-eye-distance
scale = 0.5/self._inter_eye_distance(targets[i])
# compute gradient for all elements in the vector
# which are assumed to be 2D points
for j in range(0, targets.shape[1], 2):
dx = scores[i,j] - targets[i,j]
dy = scores[i,j+1] - targets[i,j+1]
error = math.sqrt(dx**2 + dy**2)
# set gradient
gradient[i,j] = dx * scale / error
gradient[i,j+1] = dy * scale / error
return gradient
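# A toy usage sketch (illustrative values): with exactly four outputs, the two
# 2D points are the right and left eye locations used for normalization.
#
#   loss = JesorskyLoss()
#   targets = numpy.array([[10., 10., 10., 30.]])
#   scores  = numpy.array([[12., 11., 10., 28.]])
#   errors  = loss.loss(targets, scores)            # shape (1, 1): one error per sample
#   grads   = loss.loss_gradient(targets, scores)   # shape (1, 4): one gradient entry per output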
......@@ -70,7 +70,7 @@ static PyModuleDef module_definition = {
BOB_EXT_MODULE_NAME,
module_docstr,
-1,
// BoostingMethods,
BoostingMethods,
0,
};
#endif
......
import unittest
from unittest import SkipTest
#from unittest import SkipTest
import random
import bob.learn.boosting
import numpy
......@@ -40,7 +40,7 @@ class TestJesorskyLoss(unittest.TestCase):
self.assertTrue(grad_sum.shape[0] == num_outputs)
@unittest.skip("Implement me!")
def test02_negative_target(self):
loss_function = bob.learn.boosting.JesorskyLoss()
......@@ -52,7 +52,6 @@ class TestJesorskyLoss(unittest.TestCase):
weak_scores = numpy.array([[0.2, 0.4, 0.5, 0.6], [0.5, 0.5, 0.5, 0.5]], 'float64')
prev_scores = numpy.array([[0.1, 0.2, 0.3, 0.4], [0.5, 0.5, 0.5, 0.5]], 'float64')
raise SkipTest ("Implement me!")
# TODO: implement this test properly
# check the loss values
loss_value = loss_function.loss(targets, score)
......
......@@ -39,7 +39,7 @@ class TestLutTrainer(unittest.TestCase):
range_feature = max_feature
trainer = bob.learn.boosting.LUTTrainer(range_feature)
features = bob.io.base.load(bob.io.base.test_utils.datafile('datafile.hdf5', 'bob.learn.boosting', 'tests'))
features = bob.io.base.load(bob.io.base.test_utils.datafile('testdata.hdf5', 'bob.learn.boosting'))
x_train1 = numpy.copy(features)
x_train1[x_train1[:,selected_index] >=10, selected_index] = 9
......@@ -67,7 +67,7 @@ class TestLutTrainer(unittest.TestCase):
selected_index = 5
range_feature = max_feature + delta
trainer = bob.learn.boosting.LUTTrainer(range_feature)
features = bob.io.base.load(bob.io.base.test_utils.datafile('datafile.hdf5', 'bob.learn.boosting', 'tests')).astype(numpy.uint16)
features = bob.io.base.load(bob.io.base.test_utils.datafile('testdata.hdf5', 'bob.learn.boosting')).astype(numpy.uint16)
x_train = numpy.vstack((features, features))
x_train[0:num_samples,selected_index] = x_train[0:num_samples,selected_index] + delta
......@@ -90,7 +90,7 @@ class TestLutTrainer(unittest.TestCase):
range_feature = max_feature + delta
trainer = bob.learn.boosting.LUTTrainer(range_feature)
features = bob.io.base.load(bob.io.base.test_utils.datafile('datafile.hdf5', 'bob.learn.boosting', 'tests')).astype(numpy.uint16)
features = bob.io.base.load(bob.io.base.test_utils.datafile('testdata.hdf5', 'bob.learn.boosting')).astype(numpy.uint16)
x_train = numpy.vstack((features, features))
x_train[0:num_samples,selected_index] = x_train[0:num_samples,selected_index] + delta
......
......@@ -22,13 +22,14 @@ class MNIST:
hdf5 = bob.io.base.HDF5File(datafile)
self._data = {}
self._labels = {}
for group in ('train', 'test'):
self._data[group] = []
hdf5.cd(group)
for i in range(10):
self._data[group].append(hdf5.read(str(i)))
data = hdf5.read('data')
labels = hdf5.read('labels')
self._data[group] = data
self._labels[group] = labels
hdf5.cd('..')
shutil.rmtree(temp_dir)
def data(self, groups = ('train', 'test'), labels=range(10)):
......@@ -39,12 +40,14 @@ class MNIST:
if isinstance(labels, int):
labels = (labels,)
_data = numpy.ndarray((0,784), dtype = numpy.uint8)
_labels = numpy.ndarray((0), dtype = numpy.uint8)
_data = []
_labels = []
for group in groups:
for label in labels:
_data = numpy.vstack((_data, self._data[group][int(label)]))
_labels = numpy.hstack((_labels, numpy.ones(self._data[group][int(label)].shape[:1], numpy.uint8) * int(label)))
return _data, _labels
for i in range(self._labels[group].shape[0]):
# check if the label is the desired one
if self._labels[group][i] in labels:
_data.append(self._data[group][i])
_labels.append(self._labels[group][i])
return numpy.array(_data, numpy.uint8), numpy.array(_labels, numpy.uint8)
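  # Usage sketch, following the documentation doctest (digits chosen for illustration):
  #
  #   db = bob.learn.boosting.utils.MNIST()
  #   training_samples, training_labels = db.data('train', labels = [5, 6])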
......@@ -73,7 +73,7 @@ project = u'Boosting extension for Bob'
import time
copyright = u'%s, Idiap Research Institute' % time.strftime('%Y')
distribution = pkg_resources.require('xbob.boosting')[0]
distribution = pkg_resources.require('bob.learn.boosting')[0]
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
......
......@@ -8,8 +8,8 @@
import os
import numpy
import xbob.boosting
import xbob.db.mnist
import bob.learn.boosting
import bob.learn.boosting.utils
numpy.set_printoptions(precision=3, suppress=True)
......@@ -21,6 +21,9 @@
As an example for the classification task, we perform a classification of hand-written digits using the `MNIST <http://yann.lecun.com/exdb/mnist>`_ database.
There, images of single hand-written digits are stored, and a training and test set is provided, which we can access with our `xbob.db.mnist <http://pypi.python.org/pypi/xbob.db.mnist>`_ database interface.
.. note::
In fact, to minimize the dependencies on other packages, the ``xbob.db.mnist`` database interface is replaced by a local interface.
In our experiments, we simply use the pixel gray values as features.
Since the gray values are discrete in range :math:`[0, 255]`, we can employ both the stump decision classifiers and the look-up-table classifiers.
Nevertheless, other discrete features, like Local Binary Patterns (LBP) could be used as well.
......@@ -103,14 +106,13 @@ One exemplary test case in details
----------------------------------
Taking a closer look at the example script, several steps are performed.
The first step is generating the training examples from the ``xbob.db.mnist`` database interface.
The first step is generating the training examples from the MNIST database interface.
Here, we describe the more complex way, i.e., the multi-variate case.
.. doctest::
>>> # open the database interface (will download the digits from the webpage)
>>> db = xbob.db.mnist.Database()
Downloading the mnist database from http://yann.lecun.com/exdb/mnist/ ...
>>> db = bob.learn.boosting.utils.MNIST()
>>> # get the training data for digits 0, 1
>>> training_samples, training_labels = db.data("train", labels = [0, 1])
>>> # limit the training samples (for test purposes only)
......@@ -140,13 +142,13 @@ Now, we can train the classifier. Here, we use the multi-variate LUT trainer wit
.. doctest::
>>> weak_trainer = xbob.boosting.trainer.LUTTrainer(
>>> weak_trainer = bob.learn.boosting.LUTTrainer(
... maximum_feature_value = 256,
... number_of_outputs = 2,
... selection_style = 'independent'
... )
>>> loss_function = xbob.boosting.loss.LogitLoss()
>>> strong_trainer = xbob.boosting.trainer.Boosting(weak_trainer, loss_function)
>>> loss_function = bob.learn.boosting.LogitLoss()
>>> strong_trainer = bob.learn.boosting.Boosting(weak_trainer, loss_function)
>>> # perform training for 100 rounds (i.e., select 100 weak machines)
>>> strong_classifier = strong_trainer.train(training_samples.astype(numpy.uint16), training_targets, 10)
......
......@@ -37,24 +37,24 @@ Currently, two types of weak classifiers are implemented in this boosting framew
Stump classifier
................
The first classifier, which can only handle univariate classification tasks, is the :py:class:`xbob.boosting.machine.StumpMachine`.
The first classifier, which can only handle univariate classification tasks, is the :py:class:`bob.learn.boosting.StumpMachine`.
For a given input vector :math:`\vec x`, the classifier bases its decision on **a single element** :math:`x_m` of the input vector:
.. math::
W(\vec x) = \left\{ \begin{array}{r@{\text{ if }}l} +1 & (x_m - \theta) \cdot \phi \geq 0 \\ -1 & (x_m - \theta) \cdot \phi < 0 \end{array}\right.
Threshold :math:`\theta`, polarity :math:`phi` and index :math:`m` are parameters of the classifier, which are trained using the :py:class:`xbob.boosting.trainer.StumpTrainer`.
Threshold :math:`\theta`, polarity :math:`\phi` and index :math:`m` are parameters of the classifier, which are trained using the :py:class:`bob.learn.boosting.StumpTrainer`.
For a given training set :math:`\{\vec x_p \mid p=1,\dots,P\}` and corresponding target values :math:`\{t_p \mid p=1,\dots,P\}`, the threshold :math:`\theta_m` is computed for each input index :math:`m`, such that the lowest classification error is obtained, and the :math:`m` with the lowest training classification error is taken.
The polarity :math:`\phi` is set to :math:`-1`, if values lower than the threshold should be considered as positive examples, or to :math:`+1` otherwise.
To compute the classification error for a given :math:`\theta_m`, the gradient of a loss function is taken into consideration.
For the stump trainer, usually the :py:class:`xbob.boosting.loss.ExponentialLoss` is considered as the loss function.
For the stump trainer, usually the :py:class:`bob.learn.boosting.ExponentialLoss` is considered as the loss function.
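As a plain Python illustration of the decision rule above (a sketch only, not the
implementation behind :py:class:`bob.learn.boosting.StumpMachine`)::

  import numpy

  def stump_decision(x, index, threshold, polarity):
    # weak decision based on a single feature element
    return 1. if (x[index] - threshold) * polarity >= 0 else -1.

  x = numpy.array([3., 7., 1.])
  assert stump_decision(x, index = 1, threshold = 5., polarity = +1.) == 1.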
Look-Up-Table classifier
........................
The second classifier, which can handle univariate and multivariate classification and regression tasks, is the :py:class:`xbob.boosting.machine.LUTMachine`.
The second classifier, which can handle univariate and multivariate classification and regression tasks, is the :py:class:`bob.learn.boosting.LUTMachine`.
This classifier is designed to handle input vectors with **discrete** values only.
Again, the decision of the weak classifier is based on a single element of the input vector :math:`\vec x`.
......@@ -63,7 +63,7 @@ In the univariate case, for each of the possible discrete values of :math:`x_m`,
.. math::
W(\vec x) = LUT[x_m]
This look-up-table LUT and the feature index :math:`m` is trained by the :py:class:`xbob.boosting.trainer.LUTTrainer`.
This look-up-table LUT and the feature index :math:`m` is trained by the :py:class:`bob.learn.boosting.LUTTrainer`.
In the multivariate case, each output :math:`W^o` is handled independently, i.e., a separate look-up-table :math:`LUT^o` and a separate feature index :math:`m^o` is assigned for each output dimension :math:`o`:
......@@ -71,16 +71,16 @@ In the multivariate case, each output :math:`W^o` is handled independently, i.e.
W^o(\vec x) = LUT^o[x_{m^o}]
.. note::
As a variant, the feature index :math:`m^o` can be selected to be ``shared`` for all outputs, see :py:class:`xbob.boosting.trainer.LUTTrainer` for details.
As a variant, the feature index :math:`m^o` can be selected to be ``shared`` for all outputs, see :py:class:`bob.learn.boosting.LUTTrainer` for details.
A weak look-up-table classifier is learned using the :py:class:`xbob.boosting.trainer.LUTTrainer`.
A weak look-up-table classifier is learned using the :py:class:`bob.learn.boosting.LUTTrainer`.
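As a rough illustration of the multi-variate look-up (a sketch only, not the
implementation behind :py:class:`bob.learn.boosting.LUTMachine`)::

  import numpy

  # one look-up-table per output dimension, over 3 possible (discrete) feature values
  luts = numpy.array([[ 1., -1.,  1.],
                      [-1., -1.,  1.]])
  indices = [0, 2]   # the selected feature index m^o for each output o

  def lut_decision(x):
    # for each output, read the table entry addressed by its selected feature
    return numpy.array([luts[o, x[indices[o]]] for o in range(len(indices))])

  lut_decision(numpy.array([2, 0, 1], numpy.uint16))   # -> array([ 1., -1.])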
Strong classifier
-----------------
The strong classifier, which is of type :py:class:`xbob.boosting.machine.BoostedMachine`, is a weighted combination of weak classifiers, which are usually of the same type.
It can be trained with the :py:class:`xbob.boosting.trainer.Boosting` trainer, which takes a list of training samples, and a list of univariate or multivariate target vectors.
The strong classifier, which is of type :py:class:`bob.learn.boosting.BoostedMachine`, is a weighted combination of weak classifiers, which are usually of the same type.
It can be trained with the :py:class:`bob.learn.boosting.Boosting` trainer, which takes a list of training samples, and a list of univariate or multivariate target vectors.
In several rounds, the trainer computes (here, only the univariate case is considered, but the multivariate case is similar -- simply replace scores by score vectors):
1. The classification results (the so-called *scores*) for the current strong classifier:
......@@ -120,11 +120,11 @@ Loss functions
As shown above, the loss functions define how well the currently predicted scores :math:`s_p` fit to the target values :math:`t_p`.
Depending on the desired task, and on the type of classifier, different loss functions might be used:
1. The :py:class:`xbob.boosting.loss.ExponentialLoss` can be used for the binary classification task, i.e., when target values are in :math:`{+1, -1}`
1. The :py:class:`bob.learn.boosting.ExponentialLoss` can be used for the binary classification task, i.e., when target values are in :math:`{+1, -1}`
2. The :py:class:`xbob.boosting.loss.LogitLoss` can be used for the multi-variate classification task, i.e., when target vectors have entries from :math:`{+1, 0}`
2. The :py:class:`bob.learn.boosting.LogitLoss` can be used for the multi-variate classification task, i.e., when target vectors have entries from :math:`{+1, 0}`
3. The :py:class:`xbob.boosting.loss.JesorskyLoss` can be used for the particular multi-variate regression task of learning the locations of facial features.
3. The :py:class:`bob.learn.boosting.JesorskyLoss` can be used for the particular multi-variate regression task of learning the locations of facial features.
Other loss functions, e.g., using the Euclidean distance for regression, should be easily implementable.
......
......@@ -2,47 +2,47 @@
Python API
============
This section includes information for using the Python API of ``xbob.boosting``.
This section includes information for using the Python API of ``bob.learn.boosting``.
Machines
........
The :py:mod:`xbob.boosting.machine` sub-module contains classifiers that can predict classes for given input values.
The strong classifier is the :py:class:`xbob.boosting.machine.BoostedMachine`, which is a weighted combination of :py:class:`xbob.boosting.machine.WeakMachine`.
Weak machines might be a :py:class:`xbob.boosting.machine.LUTMachine` or a :py:class:`xbob.boosting.machine.StumpMachine`.
The :py:mod:`bob.learn.boosting.machine` sub-module contains classifiers that can predict classes for given input values.
The strong classifier is the :py:class:`bob.learn.boosting.BoostedMachine`, which is a weighted combination of :py:class:`bob.learn.boosting.WeakMachine`.
Weak machines might be a :py:class:`bob.learn.boosting.LUTMachine` or a :py:class:`bob.learn.boosting.StumpMachine`.
Theoretically, the strong classifier can consist of different types of weak classifiers, but usually all weak classifiers have the same type.
.. automodule:: xbob.boosting.machine
.. automodule:: bob.learn.boosting.machine
Trainers
........
The :py:mod:`xbob.boosting.trainer` sub-module contains trainers that trains:
The :py:mod:`bob.learn.boosting.trainer` sub-module contains trainers that train:
* :py:class:`xbob.boosting.trainer.Boosting` : a strong machine of type :py:class:`xbob.boosting.machine.BoostedMachine`
* :py:class:`xbob.boosting.trainer.LUTTrainer` : a weak machine of type :py:class:`xbob.boosting.machine.LUTMachine`
* :py:class:`xbob.boosting.trainer.StrumTrainer` : a weak machine of type :py:class:`xbob.boosting.machine.StumpMachine`
* :py:class:`bob.learn.boosting.Boosting` : a strong machine of type :py:class:`bob.learn.boosting.BoostedMachine`
* :py:class:`bob.learn.boosting.LUTTrainer` : a weak machine of type :py:class:`bob.learn.boosting.LUTMachine`
* :py:class:`bob.learn.boosting.StumpTrainer` : a weak machine of type :py:class:`bob.learn.boosting.StumpMachine`
.. automodule:: xbob.boosting.trainer
.. automodule:: bob.learn.boosting.trainer
Loss functions
..............
Loss functions are used to define new weights for the weak machines using the ``scipy.optimize.fmin_l_bfgs_b`` function.
A base class loss function :py:class:`xbob.boosting.loss.LossFunction` is called by that function, and derived classes implement the actual loss for a single sample.
A base class loss function :py:class:`bob.learn.boosting.LossFunction` is called by that function, and derived classes implement the actual loss for a single sample.
.. note::
Loss functions are designed to be used in combination with a specific weak trainer in specific cases.
Not all combinations of loss functions and weak trainers make sense.
Here is a list of useful combinations:
1. :py:class:`xbob.boosting.loss.ExponentialLoss` with :py:class:`xbob.boosting.trainer.StrumTrainer` (uni-variate classification only)
2. :py:class:`xbob.boosting.loss.LogitLoss` with :py:class:`xbob.boosting.trainer.StrumTrainer` or :py:class:`xbob.boosting.trainer.LUTTrainer` (uni-variate or multi-variate classification)
3. :py:class:`xbob.boosting.loss.TangentialLoss` with :py:class:`xbob.boosting.trainer.StrumTrainer` or :py:class:`xbob.boosting.trainer.LUTTrainer` (uni-variate or multi-variate classification)
4. :py:class:`xbob.boosting.loss.JesorskyLoss` with :py:class:`xbob.boosting.trainer.LUTTrainer` (multi-variate regression only)
1. :py:class:`bob.learn.boosting.ExponentialLoss` with :py:class:`bob.learn.boosting.StumpTrainer` (uni-variate classification only)
2. :py:class:`bob.learn.boosting.LogitLoss` with :py:class:`bob.learn.boosting.StumpTrainer` or :py:class:`bob.learn.boosting.LUTTrainer` (uni-variate or multi-variate classification)
3. :py:class:`bob.learn.boosting.TangentialLoss` with :py:class:`bob.learn.boosting.StumpTrainer` or :py:class:`bob.learn.boosting.LUTTrainer` (uni-variate or multi-variate classification)
4. :py:class:`bob.learn.boosting.JesorskyLoss` with :py:class:`bob.learn.boosting.LUTTrainer` (multi-variate regression only)
.. automodule:: xbob.boosting.loss
.. automodule:: bob.learn.boosting.loss
......@@ -122,13 +122,13 @@ setup(
# Console scripts, which will appear in ./bin/ after buildout
'console_scripts': [
'boosting_example.py = xbob.boosting.examples.mnist:main',
'boosting_example.py = bob.learn.boosting.examples.mnist:main',
],
# tests that are _exported_ (that can be executed by other packages) can
# be signalized like this:
'bob.test': [
'boosting = xbob.boosting.tests.test_boosting:TestBoosting',
'boosting = bob.learn.boosting.tests.test_boosting:TestBoosting',
],
},
......