  • #60
Closed
Open
Issue created Jul 12, 2018 by Saeed SARFJOO@ssarfjooDeveloper

Selecting best checkpoints based on accuracy, not loss, in eval.py

When we use cross-entropy as the loss function, it is possible to have higher classification accuracy together with a higher loss, especially at the beginning of training. For instance, in a binary classifier with a batch size of 10, if every sample gets a softmax probability of 0.5, the loss (summed over the batch) is 6.931471805599453 with 50% accuracy. However, if nine samples are classified correctly with near-certain probability (contributing almost no loss) and just one sample whose true class is 1 mistakenly gets a softmax probability of 0.0001, the loss is 9.210340371976182 with 90% accuracy. The second model is clearly better, yet its loss is higher. In this case, for saving the best checkpoints, it is better to look at the accuracy, not the loss.
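A minimal sketch of the arithmetic behind these two numbers, assuming the loss is summed (not averaged) over the batch and that the nine correctly classified samples in the second case contribute negligible loss:

```python
import math

batch_size = 10

# Case 1: every sample gets softmax probability 0.5, so half the batch
# is counted correct (50% accuracy) and each sample adds -ln(0.5).
loss_uniform = batch_size * -math.log(0.5)   # 6.931471805599453

# Case 2: nine samples are predicted correctly with p ~ 1 (loss ~ 0);
# one sample with true class 1 gets p = 0.0001, i.e. 90% accuracy.
loss_one_bad = -math.log(0.0001)             # 9.210340371976182

# Higher accuracy, yet higher loss:
print(loss_uniform < loss_one_bad)           # True
```

The same ordering holds if the loss is averaged over the batch instead of summed (0.693 vs. 0.921), so the conclusion does not depend on the reduction mode.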
