bob / bob.bio.gmm · Issue #17 (Closed)
Issue created May 24, 2017 by Amir Mohammadi (@amohammadi), Owner

The script runs out of memory if the training data is rather big

I am trying to run a GMM algorithm, but it runs out of memory in the k-init step. I saw that the code does something like numpy.vstack(list(features)), which I think requires twice the memory of the training features (first to materialize the list, then to copy it into the numpy array).
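For illustration, here is a minimal two-pass sketch of how the double copy could be avoided, assuming the features can be iterated twice (e.g. by re-reading files) and that every feature is a 2-D array with the same number of columns. The name `make_iterable` is hypothetical, not the project's API:

```python
import numpy


def stack_two_pass(make_iterable):
    """Stack feature blocks without the list + vstack double copy.

    Pass 1 only counts rows; pass 2 copies each block directly into
    a preallocated array, so peak memory stays close to the size of
    the final array instead of twice that. Assumes make_iterable()
    returns a fresh iterable of 2-D float arrays each time it is
    called, all with the same number of columns.
    """
    # Pass 1: determine the output shape without keeping any data.
    n_rows, n_cols = 0, 0
    for block in make_iterable():
        n_rows += block.shape[0]
        n_cols = block.shape[1]

    # Pass 2: copy each block into its slice of the result.
    out = numpy.empty((n_rows, n_cols), dtype=numpy.float64)
    row = 0
    for block in make_iterable():
        out[row:row + block.shape[0]] = block
        row += block.shape[0]
    return out


# Usage (hypothetical): re-create the generator on each call so the
# data can be streamed from disk twice instead of held in memory.
# stacked = stack_two_pass(lambda: (numpy.load(p) for p in paths))
```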

Looking at numpy's documentation, it seems there is only one array-construction function that takes an iterable: https://docs.scipy.org/doc/numpy/reference/generated/numpy.fromiter.html. But working with it is tricky, since numpy.fromiter only builds one-dimensional arrays and needs the element count up front.
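For reference, a sketch of what using numpy.fromiter would take here, assuming the total number of rows and columns is known in advance (the function name and parameters are hypothetical):

```python
import itertools

import numpy


def stack_with_fromiter(features, n_rows, n_cols):
    """Build the (n_rows, n_cols) training matrix via numpy.fromiter.

    numpy.fromiter only produces 1-D arrays, which is what makes it
    tricky: the per-file 2-D feature arrays must be flattened into one
    stream of scalars, and the total size has to be known up front so
    the output can be allocated in one shot.
    """
    scalars = itertools.chain.from_iterable(
        block.ravel() for block in features)
    flat = numpy.fromiter(scalars, dtype=numpy.float64,
                          count=n_rows * n_cols)
    return flat.reshape(n_rows, n_cols)
```

This keeps peak memory at roughly the final array plus one feature block, but it iterates element by element in Python, so it trades speed for memory.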

I am addressing this issue in !9 (merged).
