Commit b4641511 authored by Manuel Günther's avatar Manuel Günther
Browse files

Removed old scripts from Rakesh; updated README and buildout.cfg

parent 4e95326a
......@@ -3,47 +3,55 @@ Generalized Boosting Framework using Stump and Look Up Table (LUT) based Weak Cl
========================================================================================
The package implements a generalized boosting framework, which incorporates different boosting approaches.
The Boosting algorithms implemented in this package are:

1) Gradient Boost [Fri00]_ (a generalized version of AdaBoost [FS99]_) for univariate cases, using stump decision classifiers as in [VJ04]_.

2) TaylorBoost [SMV11]_ for univariate and multivariate cases, using Look-Up-Table (LUT) based classifiers [Ata12]_.

.. [Fri00] *Jerome H. Friedman*. **Greedy function approximation: a gradient boosting machine**. Annals of Statistics, 29:1189--1232, 2000.
.. [FS99] *Yoav Freund and Robert E. Schapire*. **A short introduction to boosting**. Journal of Japanese Society for Artificial Intelligence, 14(5):771--780, September 1999.
.. [VJ04] *Paul Viola and Michael J. Jones*. **Robust real-time face detection**. International Journal of Computer Vision (IJCV), 57(2):137--154, 2004.
.. [SMV11] *Mohammad J. Saberian, Hamed Masnadi-Shirazi and Nuno Vasconcelos*. **TaylorBoost: First and second-order boosting algorithms with explicit margin control**. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2929--2934, 2011.
.. [Ata12] *Cosmin Atanasoaei*. **Multivariate boosting with look-up tables for face processing**. PhD Thesis, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, 2012.
Installation:
-------------

Bob
...

The boosting framework depends on the open source signal-processing and machine learning toolbox Bob_, which you need to download from its web page.
For more information, please read Bob's `installation instructions <https://github.com/idiap/bob/wiki/Packages>`_.

This package
............

The simplest way to download the latest stable version of this package is to use the Download button above and extract the archive into a directory of your choice.
If you want, you can also check out the latest development branch of this package using::

  $ git clone https://gitlab.idiap.ch/biometric/xbob-boosting.git

Afterwards, please open a terminal in this directory and call::

  $ python bootstrap.py
  $ ./bin/buildout

These two commands should download and install all dependencies and get you a fully operational test and development environment.
Example
-------
To show an exemplary usage of the boosting algorithm, binary and multi-variate classification of hand-written digits from the MNIST database is performed.
For simplicity, we just use the pixel gray values as (discrete) features to classify the digits.
In each boosting round, a single pixel location is selected.
In case of the stump classifier, this pixel value is compared to a threshold (which is determined during training), and one of the two classes is assigned.
The LUT weak classifier selects a feature (i.e., a pixel location in the images) and determines the most probable digit for each pixel value.
Finally, the strong classifier combines several weak classifiers by a weighted sum of their predictions.
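To make this more concrete, here is a minimal sketch (using plain ``numpy``; the class and function names are made up for illustration and are not the API of this package) of a stump weak classifier and of a strong classifier that combines several weak classifiers by a weighted sum of their predictions::

  import numpy

  class Stump:
      """A stump weak classifier: thresholds a single feature (here, one pixel value)."""
      def __init__(self, feature_index, threshold, polarity):
          self.feature_index = feature_index
          self.threshold = threshold
          # polarity is +1 or -1 and decides which side of the threshold is the positive class
          self.polarity = polarity

      def __call__(self, features):
          # features: 2D array (samples x features); returns a -1/+1 prediction per sample
          return self.polarity * numpy.sign(features[:, self.feature_index] - self.threshold)

  def strong_classifier(features, weak_classifiers, weights):
      """Weighted sum of the weak classifier predictions; its sign is the final decision."""
      scores = sum(weight * weak(features) for weak, weight in zip(weak_classifiers, weights))
      return numpy.sign(scores)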
The script ``./bin/boosting_example.py`` is provided to run all of the examples below.
This script has several command line parameters, which vary the behavior of the training and/or testing procedure.
All parameters have a long value (starting with ``--``) and a shortcut (starting with a single ``-``).
These parameters are (see also ``./bin/boosting_example.py --help``):
To control the type of training, you can select:
......@@ -51,7 +59,7 @@ To control the type of training, you can select:
* ``--trainer-type``: Select the type of weak classifier. Possible values are ``stump`` and ``lut``
* ``--loss-type``: Select the loss function. Possible values are ``tan``, ``log`` and ``exp``. By default, a loss function suitable to the trainer type is selected.
* ``--number-of-boosting-rounds``: The number of weak classifiers to select.
* ``--multi-variate`` (only valid for LUT trainer): Perform multi-variate classification, or binary (one-to-one) classification.
* ``--feature-selection-style`` (only valid for multi-variate training): Select the features either ``independent``ly for each output, or ``shared`` between all outputs (see the sketch below).
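The difference between the two feature selection styles can be sketched as follows (a hypothetical helper, not the actual implementation of this package): ``shared`` selection picks one feature that minimizes the accumulated loss over all outputs, while ``independent`` selection picks the best feature for each output separately::

  import numpy

  def select_features(loss, shared):
      """loss: 2D array (number of features x number of outputs) of training losses.
      Returns the index of the selected feature for each output."""
      if shared:
          # one common feature that is best for all outputs jointly
          best = numpy.argmin(loss.sum(axis=1))
          return numpy.array([best] * loss.shape[1])
      else:
          # the best feature for each output, chosen independently
          return numpy.argmin(loss, axis=0)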
To control the experimentation, you can choose:
......@@ -66,52 +74,54 @@ For information and debugging purposes, it might be interesting to use:
* ``--verbose`` (can be used several times): Increases the verbosity level from 0 (error) over 1 (warning) and 2 (info) to 3 (debug). Verbosity level 2 (``-vv``) is recommended.
* ``--number-of-elements``: Reduce the number of elements (samples) per class (digit) to the given value.
Four different kinds of experiments can be performed:

1. Uni-variate classification using the stump classifier, classifying digits 5 and 6::

     $ ./bin/boosting_example.py -vv --trainer-type stump --digits 5 6

2. Uni-variate classification using the LUT classifier, classifying digits 5 and 6::

     $ ./bin/boosting_example.py -vv --trainer-type lut --digits 5 6

3. Multi-variate classification using the LUT classifier and shared features, classifying all 10 digits::

     $ ./bin/boosting_example.py -vv --trainer-type lut --all-digits --multi-variate --feature-selection-style shared

4. Multi-variate classification using the LUT classifier and independent features, classifying all 10 digits::

     $ ./bin/boosting_example.py -vv --trainer-type lut --all-digits --multi-variate --feature-selection-style independent

.. note::
   During the execution of the experiments, the warning message "L-BFGS returned warning '2': ABNORMAL_TERMINATION_IN_LNSRCH" might appear.
   This warning message is normal and does not influence the results much.

.. note::
   For experiment 1, the training terminates after 75 of 100 rounds, since the computed weight for the weak classifier of that round is vanishing.
   Hence, performing more boosting rounds would not change the strong classifier any more.
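The early stopping mentioned in the second note can be illustrated with a small sketch of a boosting loop (hypothetical helper names; this is not the trainer code of this package): since the strong classifier is a weighted sum of weak classifiers, a round whose weight is (numerically) zero cannot change the decision any more, so training may stop early::

  import numpy

  def boost(features, targets, train_weak_classifier, number_of_rounds=100):
      """Sketch of a boosting loop that stops once the weak classifier weight vanishes.
      train_weak_classifier is assumed to return a (classifier, weight) pair."""
      weak_classifiers, weights = [], []
      scores = numpy.zeros(len(targets))
      for _ in range(number_of_rounds):
          weak, weight = train_weak_classifier(features, targets, scores)
          if abs(weight) < 1e-12:
              # a zero-weight weak classifier contributes nothing to the weighted sum
              break
          weak_classifiers.append(weak)
          weights.append(weight)
          scores += weight * weak(features)
      return weak_classifiers, weights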
All experiments should run within several minutes of execution time.
The results of the above experiments should be the following (classification performance on the training set and on the test set):
+------------+----------+----------+
| Experiment | Training | Test     |
+------------+----------+----------+
| 1          | 91.04 %  | 92.05 %  |
+------------+----------+----------+
| 2          | 100.0 %  | 95.35 %  |
+------------+----------+----------+
| 3          | 97.59 %  | 83.47 %  |
+------------+----------+----------+
| 4          | 99.04 %  | 86.25 %  |
+------------+----------+----------+
Of course, you can try out different combinations of digits for experiments 1 and 2.
Getting Help
------------
In case you experience problems with the code, or with downloading the required databases and/or software, please contact manuel.guenther@idiap.ch or file a bug report under https://gitlab.idiap.ch/biometric/xbob-boosting.
.. _bob: http://www.idiap.ch/software/bob
; vim: set fileencoding=utf-8 :
; Andre Anjos <andre.anjos@idiap.ch>
; Mon 16 Apr 08:29:18 2012 CEST
; Manuel Guenther <manuel.guenther@idiap.ch>
; Wed Feb 19 13:56:42 CET 2014
[buildout]
parts = xbob.boosting scripts
......@@ -11,10 +11,6 @@ newest = false
verbose = true
;debug = true
;prefixes = /idiap/group/torch5spro/releases/bob-1.2.0/install/linux-x86_64-release
prefixes = /idiap/user/mguenther/Bob/release
;prefixes = /idiap/user/mguenther/Bob/debug
[xbob.boosting]
recipe = xbob.buildout:develop
......
......@@ -33,10 +33,10 @@ setup(
version='1.0.1a0',
description='Boosting framework for Bob',
url='https://gitlab.idiap.ch/manuel.guenther/xbob-boosting',
url='https://gitlab.idiap.ch/biometric/xbob-boosting',
license='GPLv3',
author='Rakesh Mehta',
author_email='rakesh.mehta@idiap.ch',
author='Manuel Guenther (with help of Rakesh Mehta)',
author_email='manuel.guenther@idiap.ch',
# If you have a better, long description of your package, place it on the
# 'doc' directory and then hook it here
......@@ -60,6 +60,7 @@ setup(
'bob', # base signal proc./machine learning library
],
# Set up the C++ compiler to compile the C++ source code of this package
cmdclass={
'build_ext': build_ext,
},
......@@ -76,55 +77,20 @@ setup(
pkgconfig = [
'bob-io',
],
# STUFF for DEBUGGING goes here (requires DEBUG bob version...):
# extra_compile_args = [
# '-ggdb',
# ],
# define_macros = [
# ('BZ_DEBUG', 1)
# ],
# undef_macros=[
# 'NDEBUG'
# ]
)
],
# Your project should be called something like 'xbob.<foo>' or
# 'xbob.<foo>.<bar>'. To implement this correctly and still get all your
# packages to be imported w/o problems, you need to implement namespaces
# on the various levels of the package and declare them here. See more
# about this here:
# http://peak.telecommunity.com/DevCenter/setuptools#namespace-packages
#
# Our database packages are good examples of namespace implementations
# using several layers. You can check them out here:
# https://github.com/idiap/bob/wiki/Satellite-Packages
# Declare that the package is in the namespace xbob
namespace_packages = [
'xbob',
],
# This entry defines which scripts you will have inside the 'bin' directory
# once you install the package (or run 'bin/buildout'). The order of each
# entry under 'console_scripts' is like this:
# script-name-at-bin-directory = module.at.your.library:function
#
# The module.at.your.library is the python file within your library, using
# the python syntax for directories (i.e., a '.' instead of '/' or '\').
# This syntax also omits the '.py' extension of the filename. So, a file
# installed under 'example/foo.py' that contains a function which
# implements the 'main()' function of particular script you want to have
# should be referred as 'example.foo:main'.
#
# In this simple example we will create a single program that will print
# the version of bob.
# Define the entry points for this package
entry_points={
# scripts should be declared using this entry:
# Console scripts, which will appear in ./bin/ after buildout
'console_scripts': [
'boosting_example.py = xbob.boosting.examples.mnist:main',
# 'mnist_binary_all.py = xbob.boosting.scripts.mnist_binary_all:main',
# 'mnist_binary_one.py = xbob.boosting.scripts.mnist_binary_one:main',
# 'mnist_multi.py = xbob.boosting.scripts.mnist_multi:main',
],
# tests that are _exported_ (that can be executed by other packages) can
......
......@@ -34,13 +34,13 @@ LOSS = {
def command_line_arguments():
"""Defines the command line options."""
parser = argparse.ArgumentParser(description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter)
parser = argparse.ArgumentParser(description=__doc__, formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('-t', '--trainer-type', default = 'stump', choices = TRAINER.keys(), help = "The type of weak trainer used for boosting." )
parser.add_argument('-l', '--loss-type', choices = LOSS.keys(), help = "The type of loss function used in boosting to compute the weights for the weak classifiers.")
parser.add_argument('-r', '--number-of-boosting-rounds', type = int, default = 100, help = "The number of boosting rounds, i.e., the number of weak classifiers.")
parser.add_argument('-m', '--multi-variate', action = 'store_true', help = "Perform multi-variate training?")
parser.add_argument('-s', '--feature-selection-style', default = 'independent', choices = {'indepenent', 'shared'}, help = "The feature selection style (only for multivariate classification with the LUT trainer).")
parser.add_argument('-s', '--feature-selection-style', default = 'independent', choices = {'independent', 'shared'}, help = "The feature selection style (only for multivariate classification with the LUT trainer).")
parser.add_argument('-d', '--digits', type = int, nargs="+", choices=range(10), default=[5,6], help = "Select the digits you want to compare.")
parser.add_argument('-a', '--all-digits', action='store_true', help = "Use all digits")
......
#!/usr/bin/env python
"""The test script to perform the multivariate classification on the digits from the MNIST dataset.
The MNIST data is exported using the xbob.db.mnist module, which provides the train and test
partitions for the digits. LBP features are extracted, and the available algorithm for
classification is LUT-based boosting.
"""
import xbob.db.mnist
import numpy
import sys, getopt
import argparse
import string
import bob
from ..util import confusion
from ..features import local_feature
from ..core import boosting
import matplotlib.pyplot
def main():
parser = argparse.ArgumentParser(description = " The arguments for the boosting. ")
parser.add_argument('-r', default = 20, dest = "num_rnds", type = int, help = "The number of rounds for the boosting")
parser.add_argument('-l', default = 'exp', dest = "loss_type", type= str, choices = {'log','exp'}, help = "The type of the loss function. Logit and Exponential functions are the available options")
parser.add_argument('-s', default = 'indep', dest = "selection_type", choices = {'indep', 'shared'}, type = str, help = "The feature selection type for the LUT based trainer. For the multivariate case the features can be selected by sharing or independently")
parser.add_argument('-n', default = 256, dest = "num_entries", type = int, help = "The number of entries in the LookUp table. It is the range of the feature values, e.g. if LBP features are used this value is 256.")
parser.add_argument('-f', default = 'lbp', dest = "feature_type", type = str, choices = {'lbp', 'mlbp', 'tlbp', 'dlbp'}, help = "The type of LBP features to be extracted from the image to perform the classification. The features are extracted from blocks of varying scales")
parser.add_argument('-d', default = 10, dest = "num_digits", type = int, help = "The number of digits to be considered for classification.")
args = parser.parse_args()
# download the dataset
db_object = xbob.db.mnist.Database()
# Hardcode the number of digits and the image size
num_digits = args.num_digits
img_size = 28
# get the data (features and labels) for the selected digits from the xbob_db_mnist class functions
train_img, label_train = db_object.data('train',labels = range(num_digits))
test_img, label_test = db_object.data('test', labels = range(num_digits))
# Format the label data into int and change the class labels to -1 and +1
label_train = label_train.astype(int)
label_test = label_test.astype(int)
# initialize the label data for multivariate case
train_targets = -numpy.ones([train_img.shape[0],num_digits])
test_targets = -numpy.ones([test_img.shape[0],num_digits])
for i in range(num_digits):
train_targets[label_train == i,i] = 1
test_targets[label_test == i,i] = 1
# Extract the lbp features from the images
lbp_extractor = bob.ip.LBP(8)
temp_img = train_img[0,:].reshape([img_size,img_size])
output_image_size = lbp_extractor.get_lbp_shape(temp_img)
feature_dimension = output_image_size[0]*output_image_size[1]
train_fea = numpy.zeros((train_img.shape[0], feature_dimension))
test_fea = numpy.zeros((test_img.shape[0], feature_dimension))
for i in range(train_img.shape[0]):
current_img = train_img[i,:].reshape([img_size,img_size])
lbp_output_image = numpy.ndarray ( output_image_size, dtype = numpy.uint16 )
lbp_extractor (current_img, lbp_output_image)
train_fea[i,:] = numpy.reshape(lbp_output_image, feature_dimension, 1)
for i in range(test_img.shape[0]):
current_img = test_img[i,:].reshape([img_size,img_size])
lbp_output_image = numpy.ndarray ( output_image_size, dtype = numpy.uint16 )
lbp_extractor (current_img, lbp_output_image)
test_fea[i,:] = numpy.reshape(lbp_output_image, feature_dimension, 1)
train_fea = train_fea.astype(numpy.uint8)
test_fea = test_fea.astype(numpy.uint8)
print "LBP features computed"
# Initialize the trainer with LutTrainer
boost_trainer = boosting.Boost('LutTrainer')
# Set the parameters for the boosting
boost_trainer.num_rnds = args.num_rnds
boost_trainer.loss_type = args.loss_type
boost_trainer.selection_type = args.selection_type
boost_trainer.num_entries = args.num_entries
# Perform boosting on the training features
machine = boost_trainer.train(train_fea, train_targets)
# Classify the test samples using the boosted classifier generated above
prediction_labels = machine.classify(test_fea)
# Calculate the values for the confusion matrix
confusion_matrix = numpy.zeros([num_digits,num_digits])
for i in range(num_digits):
prediction_i = prediction_labels[test_targets[:,i] == 1,:]
num_samples_i = prediction_i.shape[0]
"""The test script to perform the binary classification on the digits from the MNIST dataset.
The MNIST data is exported using the xbob.db.mnist module, which provides the train and test
partitions for the digits. Pixel values of grey-scale images are used as features, and the
available algorithms for classification are LUT-based boosting and stump-based boosting.
The script tests all possible combinations of two digits, which results in 45 different
binary classification tests.
$ python mnist_binary.py -t <Trainer_type> -r <Number_of_boosting_rounds> -l <Loss_type> -s <selection_type> -n <Number_of_lut_entries>
"""
import xbob.db.mnist
import numpy
import sys, getopt
from ..core import booster
def main(argv):
opts, args = getopt.getopt(argv,"t:r:l:s:n:")
for opt, arg in opts:
if opt == '-t':
trainer_type = arg
elif opt == '-r':
num_rnds = arg
elif opt == '-l':
loss_type = arg
elif opt == '-s':
selection_type = arg
elif opt == '-n':
num_entries = arg
# Initializations
accu = 0
test_num = 0
# download the dataset
db_object = xbob.db.mnist.Database()
# select the digits to classify
for digit1 in range(10):
for digit2 in range(digit1+1,10):  # iterate over all 45 unordered pairs of distinct digits
test_num = test_num +1
# get the data (features and labels) for the selected digits from the xbob_db_mnist class functions
fea_train, label_train = db_object.data('train',labels = [digit1,digit2])
fea_test, label_test = db_object.data('test', labels = [digit1,digit2])
# Format the label data into int and change the class labels to -1 and +1
label_train = label_train.astype(int)
label_test = label_test.astype(int)
label_train[label_train == digit1] = 1
label_test[label_test == digit1] = 1
label_train[label_train == digit2] = -1
label_test[label_test == digit2] = -1
# Initialize the trainer with 'LutTrainer' or 'StumpTrainer'
boost_trainer = booster.Boost('StumpTrainer')
# Set the parameters for the boosting
boost_trainer.num_rnds = 10
boost_trainer.loss_type = 'exp'
boost_trainer.selection_type = 'indep'
boost_trainer.num_entries = 256
# Perform boosting on the training features
model = boost_trainer.train(fea_train, label_train)
# Classify the test samples using the boosted classifier generated above
pred_scores, prediction_labels = model.classify(fea_test)
# calculate the accuracy in percentage for the current classification test
label_test = label_test[:,numpy.newaxis]
accuracy = 100*float(sum(prediction_labels == label_test))/(len(label_test))
print "The accuracy of binary classification test for digits %d and %d is %f " % (digit1, digit2, accuracy)
accu = accu + accuracy
accu = accu/test_num
print "The average accuracy for all the tests is %f %%" % (accu)
if __name__ == "__main__":
main(sys.argv[1:])
#!/usr/bin/env python
"""The test script to perform the binary classification on the digits from the MNIST dataset.
Pixel values of grey-scale images are used as features, and the available algorithms
for classification are LUT-based boosting and stump-based boosting.
The script tests all possible combinations of two digits, which results in 45 different
binary classification tests.
"""
import xbob.db.mnist
import numpy
import sys, getopt
import argparse
import string
from ..core import boosting
def main():
parser = argparse.ArgumentParser(description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter)
parser.add_argument('-t', default = 'StumpTrainer',dest = "trainer_type", type = str, choices = {'StumpTrainer', 'LutTrainer'}, help = "This is the type of trainer used for the boosting." )
parser.add_argument('-r', default = 20, dest = "num_rnds", type = int , help = "The number of rounds for the boosting")
parser.add_argument('-l', default = 'exp', dest = "loss_type", type= str, choices = {'log','exp'}, help = "The type of the loss function. Logit and Exponential functions are the available options")
parser.add_argument('-s', default = 'indep', dest = "selection_type", choices = {'indep', 'shared'}, type = str, help = "The feature selection type for the LUT based trainer. For the multivariate case the features can be selected by sharing or independently")
parser.add_argument('-n', default = 256, dest = "num_entries", type = int, help = "The number of entries in the LookUp table. It is the range of the feature values, e.g. if LBP features are used this value is 256.")
args = parser.parse_args()
# Initializations
accu = 0
test_num = 0
# download the dataset
db_object = xbob.db.mnist.Database()
# select the digits to classify
for digit1 in range(10):
for digit2 in range(digit1+1,10):
test_num = test_num +1
# get the data (features and labels) for the selected digits from the xbob_db_mnist class functions
fea_train, label_train = db_object.data('train',labels = [digit1,digit2])
fea_test, label_test = db_object.data('test', labels = [digit1,digit2])
# Format the label data into int and change the class labels to -1 and +1
label_train = label_train.astype(int)
label_test = label_test.astype(int)
label_train[label_train == digit1] = 1
label_test[label_test == digit1] = 1
label_train[label_train == digit2] = -1
label_test[label_test == digit2] = -1
# Initialize the trainer with 'LutTrainer' or 'StumpTrainer'
boost_trainer = boosting.Boost(args.trainer_type)
# Set the parameters for the boosting
boost_trainer.num_rnds = args.num_rnds
boost_trainer.loss_type = args.loss_type
boost_trainer.selection_type = args.selection_type
boost_trainer.num_entries = args.num_entries
# Perform boosting on the training features
machine = boost_trainer.train(fea_train, label_train)
# Classify the test samples using the boosted classifier generated above
pred_scores, prediction_labels = machine.classify(fea_test)
# calculate the accuracy in percentage for the current classification test
# label_test = label_test[:,numpy.newaxis]
accuracy = 100*float(sum(prediction_labels == label_test))/(len(label_test))
print "The accuracy of binary classification test for digits %d and %d is %f " % (digit1, digit2, accuracy)
accu = accu + accuracy
accu = accu/test_num
print "The average accuracy for all the tests is %f " % (accu)
return 0
if __name__ == "__main__":
main()
#!/usr/bin/env python
"""The test script to perform the binary classification on the digits from the MNIST dataset.
The MNIST data is exported using the xbob.db.mnist module, which provides the train and test
partitions for the digits. Pixel values of grey-scale images are used as features, and the
available algorithms for classification are LUT-based boosting and stump-based boosting.
Thus it conducts only one binary classification test.
"""
import xbob.db.mnist
import numpy
import sys, getopt
import string
import argparse
from ..core import boosting
def main():
parser = argparse.ArgumentParser(description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter)
parser.add_argument('-t', default = 'StumpTrainer',dest = "trainer_type", type = str, choices = {'StumpTrainer', 'LutTrainer'}, help = "This is the type of trainer used for the boosting." )
parser.add_argument('-r', default = 20, dest = "num_rnds", type = int , help = "The number of rounds for the boosting")
parser.add_argument('-l', default = 'exp', dest = "loss_type", type= str, choices = {'log','exp'}, help = "The type of the loss function. Logit and Exponential functions are the available options")
parser.add_argument('-s', default = 'indep', dest = "selection_type", choices = {'indep', 'shared'}, type = str, help = "The feature selection type for the LUT based trainer. For the multivariate case the features can be selected by sharing or independently")
parser.add_argument('-n', default = 256, dest = "num_entries", type = int, help = "The number of entries in the LookUp table. It is the range of the feature values, e.g. if LBP features are used this value is 256.")
parser.add_argument('-d1', default = 1, dest = "digit1", type = int,choices = {0,1,2,3,4,5,6,7,8,9}, help = "The first digit for the classification test.")
parser.add_argument('-d2', default = 2, dest = "digit2", type = int,choices = {0,1,2,3,4,5,6,7,8,9}, help = "The second digit for the classification test.")
args = parser.parse_args()
# download the dataset
db_object = xbob.db.mnist.Database()
# select the digits to classify
digit1 = args.digit1
digit2 = args.digit2