Reorganizing References

parent 7893ff7b
Pipeline #18479 passed in 38 minutes and 59 seconds
@@ -19,7 +19,7 @@ static auto PLDABase_doc = bob::extension::ClassDoc(
 "This class is a container for the :math:`F` (between-class variation matrix), :math:`G` (within-class variation matrix) and :math:`\\Sigma` "
 "matrices and the mean vector :math:`\\mu` of a PLDA model. This also "
 "precomputes useful matrices to make the model scalable."
-"References: [ElShafey2014,PrinceElder2007,LiFu2012]",
+"References: [ElShafey2014]_ [PrinceElder2007]_ [LiFu2012]_ ",
 ""
 ).add_constructor(
 bob::extension::FunctionDoc(
@@ -20,7 +20,7 @@ static auto PLDAMachine_doc = bob::extension::ClassDoc(
 "This class is a container for an enrolled identity/class. It contains information extracted from the enrollment samples. "
 "It should be used in combination with a PLDABase instance.\n\n"
-"References: [ElShafey2014]_, [PrinceElder2007]_, [LiFu2012]_",
+"References: [ElShafey2014]_ [PrinceElder2007]_ [LiFu2012]_ ",
 ""
 ).add_constructor(
 bob::extension::FunctionDoc(
@@ -106,7 +106,7 @@ static auto PLDATrainer_doc = bob::extension::ClassDoc(
 BOB_EXT_MODULE_PREFIX ".PLDATrainer",
 "This class can be used to train the :math:`F`, :math:`G` and "
 ":math:`\\Sigma` matrices and the mean vector :math:`\\mu` of a PLDA model. "
-"References: [ElShafey2014]_,[PrinceElder2007]_,[LiFu2012]_",
+"References: [ElShafey2014]_ [PrinceElder2007]_ [LiFu2012]_ ",
 ""
 ).add_constructor(
 bob::extension::FunctionDoc(
@@ -121,7 +121,7 @@ Maximum likelihood Estimator (MLE)
 In statistics, maximum likelihood estimation (MLE) is a method of estimating
 the parameters of a statistical model given observations by finding the
 :math:`\Theta` that maximizes :math:`P(x|\Theta)` for all :math:`x` in your
-dataset [10]_. This optimization is done by the **Expectation-Maximization**
+dataset [9]_. This optimization is done by the **Expectation-Maximization**
 (EM) algorithm [8]_ and it is implemented by
 :py:class:`bob.learn.em.ML_GMMTrainer`.
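To make the MLE paragraph concrete, here is a minimal sketch in plain Python (independent of the `bob.learn.em` API, which the hunk above documents): for a single univariate Gaussian, the :math:`\Theta` maximizing :math:`P(x|\Theta)` has a closed form, namely the sample mean and the mean squared deviation.

```python
# Minimal MLE sketch for a univariate Gaussian (illustration only,
# not the bob.learn.em.ML_GMMTrainer API).
def gaussian_mle(xs):
    """Return the (mu, sigma2) maximizing P(x | mu, sigma2)."""
    n = len(xs)
    mu = sum(xs) / n                              # MLE of the mean
    sigma2 = sum((x - mu) ** 2 for x in xs) / n   # MLE of the variance
    return mu, sigma2

mu, sigma2 = gaussian_mle([1.0, 2.0, 3.0])
# mu == 2.0, sigma2 == 2/3
```

For a mixture model no such closed form exists, which is exactly why the EM algorithm [8]_ is needed.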
@@ -181,7 +181,7 @@ Maximum a posteriori Estimator (MAP)
 Closely related to the MLE, Maximum a posteriori probability (MAP) is an
 estimate that equals the mode of the posterior distribution by incorporating in
-its loss function a prior distribution [11]_. Commonly this prior distribution
+its loss function a prior distribution [10]_. Commonly this prior distribution
 (the values of :math:`\Theta`) is estimated with MLE. This optimization is done
 by the **Expectation-Maximization** (EM) algorithm [8]_ and it is implemented
 by :py:class:`bob.learn.em.MAP_GMMTrainer`.
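The MAP idea can be sketched for the simplest case, a Gaussian mean with a prior mean and a relevance factor :math:`r`; this is the mean-only adaptation rule of [Reynolds2000]_, shown here as a hypothetical helper rather than the `MAP_GMMTrainer` API.

```python
# MAP estimate of a Gaussian mean given a prior mean and a relevance
# factor r (sketch; function name and default r are illustrative).
def map_mean(xs, prior_mean, r=4.0):
    n = len(xs)
    xbar = sum(xs) / n           # MLE from the adaptation data
    alpha = n / (n + r)          # data weight grows with sample count
    return alpha * xbar + (1 - alpha) * prior_mean

# With 3 samples and r=4, the estimate sits 3/7 of the way from the
# prior toward the sample mean.
print(map_mean([1.0, 2.0, 3.0], prior_mean=0.0))
```

As :math:`n` grows the estimate converges to the MLE; with little data it stays near the prior, which is what makes MAP adaptation robust for sparsely enrolled classes.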
@@ -582,7 +582,7 @@ The snippet below shows how to compute scores using this approximation.
 Probabilistic Linear Discriminant Analysis (PLDA)
 -------------------------------------------------
-Probabilistic Linear Discriminant Analysis [16]_ is a probabilistic model that
+Probabilistic Linear Discriminant Analysis [5]_ is a probabilistic model that
 incorporates components describing both between-class and within-class
 variations. Given a mean :math:`\mu`, between-class and within-class subspaces
 :math:`F` and :math:`G` and residual noise :math:`\epsilon` with zero mean and
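As a sanity check on the generative model :math:`x = \mu + F h + G w + \epsilon`, the following NumPy sketch (made-up dimensions and random parameters, not the `bob.learn.em` sampler) draws three samples that share a single identity latent :math:`h`:

```python
import numpy as np

# Sample from the PLDA generative model x = mu + F h + G w + eps.
rng = np.random.default_rng(0)
D, nF, nG = 3, 2, 2            # data dim, between/within-class subspace ranks
mu = rng.normal(size=D)
F = rng.normal(size=(D, nF))   # between-class subspace
G = rng.normal(size=(D, nG))   # within-class subspace
sigma = 0.1 * np.ones(D)       # diagonal residual covariance

h = rng.normal(size=nF)        # one identity latent shared by the class
samples = np.stack([
    mu + F @ h + G @ rng.normal(size=nG)          # per-sample session latent w
    + rng.normal(size=D) * np.sqrt(sigma)         # residual noise eps
    for _ in range(3)
])
print(samples.shape)  # (3, 3): three samples of one identity
```

All three rows share :math:`F h` (identity) and differ only through :math:`G w` and :math:`\epsilon` (session and noise), which is the decomposition the trainer below recovers.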
@@ -598,7 +598,7 @@ An Expectation-Maximization algorithm can be used to learn the parameters of
 this model :math:`\mu`, :math:`F`, :math:`G` and :math:`\Sigma`. As these
 parameters can be shared between classes, there is a specific container class
 for this purpose, which is :py:class:`bob.learn.em.PLDABase`. The process is
-described in detail in [17]_.
+described in detail in [6]_.
 Let us consider a training set of two classes, each with 3 samples of
 dimensionality 3.
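Such a toy training set (two classes, three 3-dimensional samples each) can be written down as one NumPy array per class, the layout a PLDA trainer typically consumes; the values below are arbitrary illustrations, not the guide's actual data.

```python
import numpy as np

# Two classes, three samples each, dimensionality 3 (arbitrary values).
class1 = np.array([[1.0, 2.0, 3.0],
                   [2.0, 2.5, 3.5],
                   [1.5, 1.8, 2.9]])
class2 = np.array([[-1.0, 0.0, 1.0],
                   [-1.2, 0.3, 0.8],
                   [-0.9, 0.1, 1.1]])
data = [class1, class2]   # list of (n_samples, dim) arrays, one per class
```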
@@ -793,9 +793,11 @@ Follow below an example of score normalization using
 .. [2] http://publications.idiap.ch/index.php/publications/show/2606
 .. [3] http://dx.doi.org/10.1016/j.csl.2007.05.003
 .. [4] http://dx.doi.org/10.1109/TASL.2010.2064307
+.. [5] http://dx.doi.org/10.1109/ICCV.2007.4409052
+.. [6] http://doi.ieeecomputersociety.org/10.1109/TPAMI.2013.38
 .. [7] http://en.wikipedia.org/wiki/K-means_clustering
 .. [8] http://en.wikipedia.org/wiki/Expectation-maximization_algorithm
-.. [10] http://en.wikipedia.org/wiki/Maximum_likelihood
+.. [9] http://en.wikipedia.org/wiki/Maximum_likelihood
-.. [11] http://en.wikipedia.org/wiki/Maximum_a_posteriori_estimation
+.. [10] http://en.wikipedia.org/wiki/Maximum_a_posteriori_estimation
-.. [16] http://dx.doi.org/10.1109/ICCV.2007.4409052
-.. [17] http://doi.ieeecomputersociety.org/10.1109/TPAMI.2013.38
@@ -32,18 +32,26 @@ References
 -----------
 .. [Reynolds2000] *Reynolds, Douglas A., Thomas F. Quatieri, and Robert B. Dunn*. **Speaker Verification Using Adapted Gaussian Mixture Models**, Digital Signal Processing 10.1 (2000): 19-41.
+
 .. [Vogt2008] *R. Vogt, S. Sridharan*. **Explicit Modelling of Session Variability for Speaker Verification**, Computer Speech & Language, 2008, vol. 22, no. 1, pp. 17-38.
+
 .. [McCool2013] *C. McCool, R. Wallace, M. McLaren, L. El Shafey, S. Marcel*. **Session Variability Modelling for Face Authentication**, IET Biometrics, 2013.
+
 .. [ElShafey2014] *Laurent El Shafey, Chris McCool, Roy Wallace, Sebastien Marcel*. **A Scalable Formulation of Probabilistic Linear Discriminant Analysis: Applied to Face Recognition**, TPAMI 2014.
+
 .. [PrinceElder2007] *Prince and Elder*. **Probabilistic Linear Discriminant Analysis for Inference About Identity**, ICCV 2007.
+
 .. [LiFu2012] *Li, Fu, Mohammed, Elder and Prince*. **Probabilistic Models for Inference about Identity**, TPAMI 2012.
+
 .. [Bishop1999] *Tipping, Michael E., and Christopher M. Bishop*. **Probabilistic Principal Component Analysis**, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 61.3 (1999): 611-622.
+
 .. [Roweis1998] *Roweis, Sam*. **EM Algorithms for PCA and SPCA**, Advances in Neural Information Processing Systems (1998): 626-632.
+
 .. [Glembek2009] *Glembek, Ondrej, et al.* **Comparison of Scoring Methods Used in Speaker Recognition with Joint Factor Analysis**, ICASSP 2009.
 .. [Auckenthaler2000] *Auckenthaler, Roland, Michael Carey, and Harvey Lloyd-Thomas*. **Score Normalization for Text-Independent Speaker Verification Systems**, Digital Signal Processing 10.1 (2000): 42-54.
 .. [Mariethoz2005] *Mariethoz, Johnny, and Samy Bengio*. **A Unified Framework for Score Normalization Techniques Applied to Text-Independent Speaker Verification**, IEEE Signal Processing Letters 12.7 (2005): 532-535.
 Indices and tables