
Closed

Opened Apr 23, 2015 by André Anjos (@andre.anjos)

KMeans returns NaNs

Created by: siebenkopf

I have lately run into a problem where bob.learn.em.KMeansTrainer returns a machine in which some of the means are NaN. I have plenty of training data (several million samples), and I want to train 1000 means.

I guess this problem is related to some means being under-represented in the data (i.e., no data point is assigned to a specific mean). Re-computing such a mean then results in a division by zero, which produces NaN values. To avoid that, an under-represented mean could be re-initialized with the training sample that is furthest from the current means, something like:

import numpy

# for each sample, the distance to its closest current mean;
# the sample maximizing this is the one worst covered by the means
distances = [min(numpy.linalg.norm(data - mean) for mean in means) for data in training_data]
furthest_training_sample = numpy.argmax(distances)
# assign new mean
new_mean = training_data[furthest_training_sample]
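To illustrate the safeguard (this is a self-contained sketch, not bob.learn.em's actual implementation), a minimal Lloyd's k-means loop that re-seeds empty clusters with the worst-covered sample instead of dividing by zero might look like:

```python
import numpy as np

def kmeans_nan_safe(data, k, n_iter=20, seed=0):
    """Lloyd's k-means with re-initialization of empty clusters.

    When no sample is assigned to a cluster, its mean is replaced by
    the training sample whose distance to its closest current mean is
    largest, so the mean update never divides by zero.
    """
    rng = np.random.default_rng(seed)
    # initialize means with k distinct training samples
    means = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(n_iter):
        # dists[i, j] = Euclidean distance from sample i to mean j
        dists = np.linalg.norm(data[:, None, :] - means[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = data[labels == j]
            if len(members) == 0:
                # empty cluster: re-seed with the worst-covered sample
                means[j] = data[dists.min(axis=1).argmax()]
            else:
                means[j] = members.mean(axis=0)
    return means
```

The re-seeding line is exactly the heuristic proposed above: argmax over samples of the distance to the closest mean.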
Assignee: None
Milestone: None
Due date: None
Labels: enhancement
Reference: bob/bob.learn.em#3