bob / bob.learn.boosting

Commit 746d7f9a
authored Aug 16, 2013 by Rakesh MEHTA
Spell Checks
parent aa5759ef
Changes: 5

README.rst
...
@@ -5,12 +5,12 @@ The package implements a generalized boosting framework which incorporate differ
 boosting approaches. The Boosting algorithms implemented in this package are
 1) Gradient Boost (generalized version of Adaboost) for univariate cases
-2) TaylorBoost for univariante and multivariate cases
+2) TaylorBoost for univariate and multivariate cases
-The weak classfiers associated with these boosting algorithms are
+The weak classifiers associated with these boosting algorithms are
 1) Stump classifiers
-2) LUT based classfiers
+2) LUT based classifiers
 Check the following reference for the details:
...
@@ -34,7 +34,7 @@ two categories:
 one-vs-one and one-vs-all. Both the boosting algorithm (Gradient Boost and Taylor boost)
 can be used for testing this scenario.
-2) Multivariate Test: It is the multi class classification problem. All the 10 digit classfication
+2) Multivariate Test: It is the multi class classification problem. All the 10 digit classification
 is considered in a single test. Only Multivariate Taylor boosting can be used for testing this scenario.
Installation:
...
@@ -52,10 +52,10 @@ get you a fully operational test and development environment.
 User Guide
 ----------
-This section explains how to use the package in order to: a) test the MNIST dataset for binary clssification
+This section explains how to use the package in order to: a) test the MNIST dataset for binary classification
 b) test the dataset for multi class classification.
-a) The following command will run a single binary test for the digits specified and display the classifcation
+a) The following command will run a single binary test for the digits specified and display the classification
 accuracy on the console:
 $ ./bin/mnist_binary_one.py
...
@@ -68,10 +68,10 @@ To run the tests for all the combination of of ten digits use the following comm
 $ ./bin/mnist_binary_all.py
-This command tests all the possible comniation of digits which results in 45 different binary tests. The
+This command tests all the possible calumniation of digits which results in 45 different binary tests. The
 accuracy of individual tests and the final average accuracy of all the tests is displayed on the console.
-b) The following command can be used for the multivarite digits test:
+b) The following command can be used for the multivariate digits test:
 $ ./bin/mnist_multi.py
...
xbob/boosting/core/boosting.py

 """ The module consist of the classes to generate a strong boosting classifier and test features using that classifier.
-Bossting algorithms have three main dimensions: weak trainers that are boosting, optimization strategy
+Boosting algorithms have three main dimensions: weak trainers that are boosting, optimization strategy
 for boosting and loss function that guide the optimization. For each one of these the following
 choices are implemented.
...
@@ -35,7 +35,7 @@ class Boost:
 """ The class to boost the features from a set of training samples.
-It iteratively adds new trainer modelsto assemble a strong classifier.
+It iteratively adds new trainer models to assemble a strong classifier.
 In each round of iteration a weak trainer is learned
 by optimization of a differentiable function. The following parameters are involved
...
@@ -56,12 +56,12 @@ class Boost:
 The type of weak trainer to be learned. Two types of weak trainers are
 supported currently.
-'LutTrainer': It is used for descrete feature types.LUT are used as weak
+'LutTrainer': It is used for discrete feature types.LUT are used as weak
 trainers and Taylor Boost is used as optimization strategy.
-Eg: LBP features, MCT features.
+Ex.: LBP features, MCT features.
-'StumpTrainer': Decsion Stumps are used as weak trainer and GradBoost is
-used as optimization strategy.It can be used with both descrete
+'StumpTrainer': Decision Stumps are used as weak trainer and GradBoost is
+used as optimization strategy.It can be used with both discrete
 and continuous type of features
 num_entries: Type int, Default = 256
...
@@ -73,7 +73,7 @@ class Boost:
 lut_loss: Type string, Default = 'expectational'
 For LutTrainer two types of loss function are supported: expectational and variational.
-Variational perform margnally better than the expectational loss as reported in Cosmin's
+Variational perform marginally better than the expectational loss as reported in Cosmin's
 thesis, however at the expense of high computational complexity.
 This parameter can be set to 'expectational' or 'variational'.
...
@@ -90,7 +90,7 @@ class Boost:
 def __init__(self, trainer_type):
-""" The function to initilize the boosting parameters.
+""" The function to initialize the boosting parameters.
 The function set the default values for the following boosting parameters:
 The number of rounds for boosting: 100
...
@@ -118,7 +118,7 @@ class Boost:
 """ The function to train a boosting machine.
 The function boosts the discrete features (fset) and returns a strong classifier
-as a combintaion of weak classifier.
+as a combination of weak classifier.
 Inputs:
 fset: (num_sam x num_features) features extracted from the samples
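To make the round-by-round picture from this docstring concrete, here is a minimal, hypothetical sketch of such a training loop. This is toy code under stated assumptions, not this package's `Boost.train` implementation: each round fits a one-feature decision stump against the negative gradient of the exponential loss and adds it to the strong classifier.

```python
import numpy

# Hedged illustration, not the package's API: gradient boosting with
# one-feature decision stumps on a univariate toy problem.
def boost(fset, labels, rounds=10, step=0.5):
    strong = numpy.zeros(len(labels))  # accumulated strong-classifier scores
    stumps = []
    for _ in range(rounds):
        # gradient of the exponential loss exp(-y * f) w.r.t. the scores
        grad = -labels * numpy.exp(-labels * strong)
        # weak learner: threshold stump on the single feature, fit to -grad
        best = None
        for t in numpy.unique(fset):
            for pol in (1.0, -1.0):
                h = pol * numpy.sign(fset - t + 1e-12)
                gain = -numpy.dot(grad, h)  # correlation with -gradient
                if best is None or gain > best[0]:
                    best = (gain, t, pol)
        _, t, pol = best
        h = pol * numpy.sign(fset - t + 1e-12)
        strong = strong + step * h  # add the weak classifier to the ensemble
        stumps.append((t, pol))
    return strong, stumps
```

The real trainer additionally line-searches the step size per round; a fixed `step` keeps the sketch short.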
...
@@ -229,10 +229,10 @@ class BoostMachine():
 Return:
-prediction_labels: The predicted clsses for the test samples
+prediction_labels: The predicted classes for the test samples
 Type: numpy array (#number of samples)
 """
-# Initilization
+# Initialization
 num_trainer = len(self.weak_trainer)
 num_samp = test_features.shape[0]
 pred_labels = -numpy.ones([num_samp, self.num_op])
...
xbob/boosting/core/losses.py

...
@@ -43,7 +43,7 @@ class ExpLossFunction():
 #return loss_grad
 def loss_sum(self, *args):
-"""The function computes the sum of the exponential loss which is used to find the optmized values of alpha (x).
+"""The function computes the sum of the exponential loss which is used to find the optimized values of alpha (x).
 The functions computes sum of loss values which is required during the linesearch step for the optimization of the alpha.
 This function is given as the input for the lbfgs optimization function.
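For illustration, the sum that this line search evaluates can be sketched as follows. This is a hedged toy version; the function name and signature are assumptions, not the module's actual code.

```python
import numpy

# Hedged sketch, not the module's actual code: the exponential loss summed
# over samples, as evaluated during the line search over the step size alpha.
def exp_loss_sum(alpha, targets, pred_scores, curr_scores):
    # scores after a step of size alpha along the current weak classifier
    updated = pred_scores + alpha * curr_scores
    # exponential loss exp(-y * f), summed over all samples
    return numpy.sum(numpy.exp(-targets * updated))
```

An optimizer such as L-BFGS then minimizes this scalar over `alpha`, as the docstring describes.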
...
@@ -55,12 +55,12 @@ class ExpLossFunction():
 targets: The targets for the samples
 type: numpy array (# number of samples x #number of outputs)
-pred_scores: The cummulative prediction scores of the samples until the previous round of the boosting.
+pred_scores: The cumulative prediction scores of the samples until the previous round of the boosting.
 type: numpy array (# number of samples)
 curr_scores: The prediction scores of the samples for the current round of the boosting.
...
@@ -82,7 +82,7 @@ class ExpLossFunction():
 def loss_grad_sum(self, *args):
-"""The function computes the sum of the exponential loss which is used to find the optmized values of alpha (x).
+"""The function computes the sum of the exponential loss which is used to find the optimized values of alpha (x).
 The functions computes sum of loss values which is required during the linesearch step for the optimization of the alpha.
 This function is given as the input for the lbfgs optimization function.
...
@@ -94,7 +94,7 @@ class ExpLossFunction():
 targets: The targets for the samples
 type: numpy array (# number of samples x #number of outputs)
-pred_scores: The cummulative prediction scores of the samples until the previous round of the boosting.
+pred_scores: The cumulative prediction scores of the samples until the previous round of the boosting.
 type: numpy array (# number of samples)
 curr_scores: The prediction scores of the samples for the current round of the boosting.
...
@@ -104,7 +104,7 @@ class ExpLossFunction():
 Return:
 sum_loss: The sum of the loss gradient values for the current value of the alpha
 type: float"""
-# initilize the values
+# initialize the values
 x = args[0]
 targets = args[1]
 pred_scores = args[2]
...
@@ -158,7 +158,7 @@ class LogLossFunction():
 return -targets * e * denom
 def loss_sum(self, *args):
-"""The function computes the sum of the logit loss which is used to find the optmized values of alpha (x).
+"""The function computes the sum of the logit loss which is used to find the optimized values of alpha (x).
 The functions computes sum of loss values which is required during the linesearch step for the optimization of the alpha.
 This function is given as the input for the lbfgs optimization function.
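The logit-loss counterpart of the line-search objective can be sketched in the same hedged way (names and signature are assumptions, not the module's code):

```python
import numpy

# Hedged sketch, not the module's actual code: the logit loss summed over
# samples, as evaluated during the line search over the step size alpha.
def log_loss_sum(alpha, targets, pred_scores, curr_scores):
    # scores after a step of size alpha along the current weak classifier
    updated = pred_scores + alpha * curr_scores
    # logit loss log(1 + exp(-y * f)), summed over all samples
    return numpy.sum(numpy.log(1.0 + numpy.exp(-targets * updated)))
```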
...
@@ -170,7 +170,7 @@ class LogLossFunction():
 targets: The targets for the samples
 type: numpy array (# number of samples x #number of outputs)
-pred_scores: The cummulative prediction scores of the samples until the previous round of the boosting.
+pred_scores: The cumulative prediction scores of the samples until the previous round of the boosting.
 type: numpy array (# number of samples)
 curr_scores: The prediction scores of the samples for the current round of the boosting.
...
@@ -192,7 +192,7 @@ class LogLossFunction():
 #@abstractmethod
 def loss_grad_sum(self, *args):
-"""The function computes the sum of the logit loss gradient which is used to find the optmized values of alpha (x).
+"""The function computes the sum of the logit loss gradient which is used to find the optimized values of alpha (x).
 The functions computes sum of loss values which is required during the linesearch step for the optimization of the alpha.
 This function is given as the input for the lbfgs optimization function.
...
@@ -204,7 +204,7 @@ class LogLossFunction():
 targets: The targets for the samples
 type: numpy array (# number of samples x #number of outputs)
-pred_scores: The cummulative prediction scores of the samples until the previous round of the boosting.
+pred_scores: The cumulative prediction scores of the samples until the previous round of the boosting.
 type: numpy array (# number of samples)
 curr_scores: The prediction scores of the samples for the current round of the boosting.
...
xbob/boosting/core/trainers.py

 """ The module consists of the weak trainers which are used in the boosting framework.
-currently two trianer types are implmented: Stump trainer and Lut trainer.
+currently two trainer types are implemented: Stump trainer and Lut trainer.
 The modules structure is as follows:
-StumpTrainer class provides the methods to compute the weak strump trainer
+StumpTrainer class provides the methods to compute the weak stump trainer
 and test the features using these trainers.
 LutTrainer class provides the methods to compute the weak LUT trainer
...
@@ -15,12 +15,12 @@ import math
 class StumpTrainer():
-""" The weak trainer class for training stumps as classifiers. The trainer is parameterized
+""" The weak trainer class for training stumps as classifiers. The trainer is parametrized
 the threshold and the polarity.
 """
 def __init__(self):
-""" Initilize the stump classifier"""
+""" Initialize the stump classifier"""
 self.threshold = 0
 self.polarity = 0
 self.selected_indices = 0
...
@@ -58,7 +58,7 @@ class StumpTrainer():
 for i in range(numFea):
 polarity[i], threshold[i], gain[i] = self.compute_thresh(fea[:, i], loss_grad)
-# Find the optimum id and tis corresponding trainer
+# Find the optimum id and its corresponding trainer
 opt_id = gain.argmax()
 self.threshold = threshold[opt_id]
 self.polarity = polarity[opt_id]
...
@@ -72,7 +72,7 @@ class StumpTrainer():
 """ Function computes the stump classifier (threshold) for a single feature
 Function to compute the threshold for a single feature. The threshold is computed for
 the given feature values using the weak learner algorithm
-given in the Voila Jones Robust Face classification
+of Viola Jones.
 Inputs:
...
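As a hedged illustration of the idea behind a stump (hypothetical names, not this module's `compute_thresh`): a stump classifies one feature by a threshold and a polarity, and training scans candidate thresholds for the best one. The module optimizes the loss gradient; this sketch uses the simpler weighted-error criterion of the classic Viola-Jones weak learner.

```python
import numpy

# Hedged toy sketch of a single-feature decision stump (assumed names).
def stump_predict(features, threshold, polarity):
    # polarity +1 predicts +1 above the threshold; polarity -1 flips the sides
    return polarity * numpy.sign(features - threshold)

def fit_stump(features, labels, weights):
    # brute-force search over candidate thresholds for the lowest weighted error
    s = numpy.sort(features)
    best = (numpy.inf, 0.0, 1.0)
    for t in (s[:-1] + s[1:]) / 2.0:  # candidate thresholds: midpoints
        for pol in (1.0, -1.0):
            err = numpy.sum(weights * (stump_predict(features, t, pol) != labels))
            if err < best[0]:
                best = (err, t, pol)
    return best[1], best[2]
```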
@@ -153,14 +153,14 @@ class StumpTrainer():
 class LutTrainer():
 """ The LutTrainer class contain methods to learn weak trainer using LookUp Tables.
-It can be used for multivariate binary classfication """
+It can be used for multivariate binary classification """
 def __init__(self, num_entries, selection_type, num_op):
-""" Function to initilize the parameters.
+""" Function to initialize the parameters.
-Function to initilize the weak LutTrainer. Each weak Luttrainer is specified with a
+Function to initialize the weak LutTrainer. Each weak Luttrainer is specified with a
 LookUp Table and the feature index which corresponds to the feature on which the
 current classifier has to applied.
...
@@ -176,7 +176,7 @@ class LutTrainer():
 Type: string {'indep', 'shared'}
 num_op: The number of outputs for the classification task.
-type: Interger
+type: Integer
 """
 self.num_entries = num_entries
...
@@ -235,7 +235,7 @@ class LutTrainer():
 elif self.selection_type == 'shared':
 # for 'shared' feature selection the loss function is summed over multiple dimensions and
-# the feature that minimized this acumulative loss is used for all the outputs
+# the feature that minimized this cumulative loss is used for all the outputs
 accum_loss = numpy.sum(sum_loss, 1)
 selected_findex = accum_loss.argmin()
...
xbob/boosting/features/local_feature.py

-"""The module implements provide the interface for block based local feature extraction methods.
+"""The module implements the interfaces for block based local feature extraction methods.
-The features implemented are Local Binary Pattern and its variants (tLbp, dLBP, mLBP). The features
+The implemented features are Local Binary Pattern and its variants (tLbp, dLBP, mLBP). The features
-are extracted using blocks of different scale. Integral images are used to effeciently extract the
+are extracted using blocks of different scale. Integral images are used to efficiently extract the
 features. """
...
@@ -17,10 +17,10 @@ class lbp_feature():
 The number of neighbouring blocks are fixed to eight that correspond to the original LBP structure. """
 def __init__(self, ftype):
-"""The function to initilize the feature type.
+"""The function to initialize the feature type.
-The function initilizes the type of feature to be extracted. The type of feature can be one of the following
+The function initializes the type of feature to be extracted. The type of feature can be one of the following
-lbp: The original LBP features that take difference of center with the eight neighbours.
+lbp: The original LBP features that take difference of centre with the eight neighbours.
 tlbp: It take the difference of the neighbours with the adjacent neighbour and central values is ignored.
 dlbp: The difference between the pixels is taken along four different directions.
 mlbp: The difference of the neighbouring values is taken with the average of neighbours and the central value."""
...
@@ -28,15 +28,15 @@ class lbp_feature():
 def compute_integral_image(self, img):
-"""The function cumputes an intergal image for the given image.
+"""The function computes an integral image for the given image.
-The function computes the intergral image for the effecient computation of the block based features.
+The function computes the integral image for the efficient computation of the block based features.
-Inouts:
+Inputs:
 self: feature object
 img: Input images
 return:
-int_img: The intergal image of the input image."""
+int_img: The integral image of the input image."""
 integral_y = numpy.cumsum(img, 0)
 integral_xy = numpy.cumsum(integral_y, 1)
...
@@ -59,7 +59,7 @@ class lbp_feature():
 Return:
 feature_vector: The concatenated feature vectors for all the scales."""
-# Compute the intergal image and pad zeros along row and col for block processing
+# Compute the integral image and pad zeros along row and col for block processing
 integral_imgc = self.compute_integral_image(img)
 rows, cols = img.shape
 integral_img = numpy.zeros([rows + 1, cols + 1])
...