Inconsistent use of standard deviation / variance in Gaussian vs. WeightedGaussian
Created by: siebenkopf
While porting the bindings of the bob.ip.base.Gaussian and bob.ip.base.WeightedGaussian classes, I noticed a difference in how the variance is handled. The former uses sigma, i.e., the standard deviation, while the latter uses sigma2, i.e., the variance. Apart from the inconsistency, this alone is not yet an issue.
But while porting the bob.ip.base.MultiscaleRetinex algorithm, which uses several Gaussians, and the bob.ip.base.SelfQuotientImage, which uses WeightedGaussians, I found that the following formula is used to compute the Gaussians or WeightedGaussians at the different scales:
// size of the kernel
size_t s_size = m_size_min + s * m_size_step;
// sigma of the kernel
double s_sigma2 = m_sigma2 * s_size / m_size_min;
// Initialize the Gaussian
m_wgaussians[s].reset(s_size, s_size, s_sigma2, s_sigma2, m_conv_border);
Interestingly, this formula is the same for both Gaussians and WeightedGaussians, except that one uses sigma and the other uses sigma2.
I tried to read the referenced papers, but I couldn't find this formula in either of them. So we have to decide which version is correct. I assume the sigma version is the better one, since the size of the Gaussian kernel should scale with its standard deviation, not with its variance.
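To illustrate the difference, here is a minimal standalone sketch (not bob.ip.base code; the base sigma, size_min, and size_step values are made up for illustration) comparing the two scaling rules from the snippet above: scaling sigma linearly with the kernel size versus scaling sigma2 linearly. When the kernel size doubles, the first rule doubles the standard deviation, while the second only multiplies it by sqrt(2).

// Standalone illustration (hypothetical values, not bob.ip.base code):
// compare scaling the standard deviation (sigma) linearly with the kernel
// size against scaling the variance (sigma2) linearly.
#include <cmath>
#include <cstddef>
#include <cstdio>

int main() {
  const double sigma = 1.0;             // base standard deviation (assumed)
  const double sigma2 = sigma * sigma;  // base variance
  const std::size_t size_min = 3;       // hypothetical minimal kernel size
  const std::size_t size_step = 2;      // hypothetical size increment per scale

  for (std::size_t s = 0; s < 3; ++s) {
    const std::size_t s_size = size_min + s * size_step;
    const double scale = static_cast<double>(s_size) / size_min;
    // Variant 1: scale sigma with the kernel size (as in the Gaussian case)
    const double std_from_sigma = sigma * scale;
    // Variant 2: scale sigma2 with the kernel size (as in the WeightedGaussian case)
    const double std_from_sigma2 = std::sqrt(sigma2 * scale);
    std::printf("size %zu: sigma-scaled std = %.3f, sigma2-scaled std = %.3f\n",
                s_size, std_from_sigma, std_from_sigma2);
  }
  return 0;
}

The output shows that scaling the variance lets the effective smoothing grow only with the square root of the size ratio, whereas scaling the standard deviation keeps the smoothing proportional to the kernel size, which is what the sigma version achieves.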
@laurentes: Should we unify this?