
KMeans returns NaNs

Created by: siebenkopf

I have recently run into a problem where the bob.learn.em.KMeansTrainer returns a machine in which some of the means are NaN. I have plenty of training data (several million samples), and I want to have 1000 means.
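For illustration, a quick way to spot the affected clusters after training might look like the following; this is only a sketch and assumes the trained machine exposes its cluster centers as a `means` array (the `machine` variable is the object returned by the trainer):

import numpy

# indices of clusters whose mean contains at least one NaN
# (assumes `machine.means` is an (n_clusters, n_features) array)
bad_clusters = numpy.where(numpy.any(numpy.isnan(machine.means), axis=1))[0]
print("Means containing NaN:", bad_clusters)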

I suspect the problem is that some means are under-represented in the data (i.e., no data point gets assigned to a specific mean). Re-computing such a mean then involves a division by zero, which produces NaN values. To avoid that, the under-represented mean could be re-initialized with the data point that is furthest away from the current means, something like:

import numpy

# distance of each training sample to its closest current mean
min_distances = numpy.array([min(numpy.linalg.norm(data - mean) for mean in means) for data in training_data])
# pick the sample that is furthest away from all current means
furthest_training_sample = numpy.argmax(min_distances)
# assign new mean
new_mean = training_data[furthest_training_sample]
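To show where such a re-initialization would sit inside the algorithm, here is a minimal pure-numpy sketch of one Lloyd (k-means) iteration that re-seeds empty clusters with the farthest-point heuristic described above. It is not the bob.learn.em implementation, and the function name kmeans_step is hypothetical; it also holds the full distance matrix in memory, so it is for illustration only and not suited to millions of samples as-is.

import numpy

def kmeans_step(training_data, means):
    """One k-means iteration that re-seeds empty ("dead") clusters.

    training_data: (n_samples, n_features) array
    means: (n_clusters, n_features) array of current cluster centers
    """
    # squared Euclidean distance of every sample to every mean
    distances = ((training_data[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    # assign each sample to its nearest mean
    assignment = distances.argmin(axis=1)

    new_means = numpy.empty_like(means)
    for k in range(len(means)):
        members = training_data[assignment == k]
        if len(members) > 0:
            # regular update: mean of the assigned samples
            new_means[k] = members.mean(axis=0)
        else:
            # dead cluster: re-seed with the sample furthest from all current means
            furthest = distances.min(axis=1).argmax()
            new_means[k] = training_data[furthest]
    return new_means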