Selecting best checkpoints based on accuracy, not loss, in eval.py

When cross-entropy is used as the loss function, it is possible to get higher classification accuracy together with a higher loss, especially at the beginning of training. For instance, take a binary classifier with a batch size of 10. If the softmax probability of the true class is 0.5 for every sample, the summed loss over the batch is 10 × ln 2 ≈ 6.93 with 50% accuracy. If instead nine samples are predicted correctly with probability close to 1, and for just one sample with 1 as the true class we mistakenly get 0.0001 as the softmax probability, the summed loss is −ln(0.0001) ≈ 9.21 with 90% accuracy. The second batch has better accuracy but worse loss. In cases like this, it is better to save the best checkpoints by tracking the accuracy, not the loss.
