Balancing losses requires some thinking
As of now, we can ask the ptbench trainer to balance losses (weighting by the ratio between positive and total examples) so as to get a better sense of the performance per class, instead of over the whole dataset.
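For context, here is a minimal sketch of one common way this kind of balancing is done in PyTorch, by up-weighting the positive class in the loss. The helper name is hypothetical and this is not necessarily what the ptbench trainer does internally; it just makes the discussion below concrete, assuming binary 0/1 labels:

```python
import torch


def make_balanced_criterion(targets: torch.Tensor) -> torch.nn.BCEWithLogitsLoss:
    """Hypothetical helper: weight positives by the negative/positive ratio.

    ``targets`` is a 1-D tensor of 0/1 labels from the dataset on which
    the loss will be evaluated.
    """
    num_pos = int(targets.sum().item())
    num_neg = targets.numel() - num_pos
    # pos_weight > 1 up-weights the (typically rarer) positive class.
    return torch.nn.BCEWithLogitsLoss(
        pos_weight=torch.tensor([num_neg / max(num_pos, 1)])
    )
```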
However, we should be very careful about loss balancing. We should only do this for the training loss if the sampler is not already balanced (i.e., not a WeightedRandomSampler). Otherwise, we would be correcting for the class imbalance twice.

The other problem is validation: each of the validation loaders (there can be many) would need to be "balanced" individually, since the different validation datasets have different ratios between positive and negative samples. Applying a single weight to all of them is not OK. So, for the time being, I preferred not to have this feature until we think it through properly.
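If we did revisit this, the two pitfalls above suggest roughly the following shape: skip the weighting when sampling already balances the classes, and derive one criterion per validation loader from that loader's own label ratio. This is only a sketch under those assumptions, not the ptbench API; it reuses the hypothetical `make_balanced_criterion` from above and assumes each loader yields `(input, target)` batches:

```python
from typing import Sequence

import torch
from torch.utils.data import DataLoader, WeightedRandomSampler


def setup_criteria(
    train_loader: DataLoader,
    valid_loaders: Sequence[DataLoader],
):
    """Hypothetical sketch: balance the training loss only when the sampler
    does not already do so, and give each validation loader its own
    criterion computed from its own positive/negative ratio."""
    if isinstance(train_loader.sampler, WeightedRandomSampler):
        # Sampling already corrects the imbalance; do not correct twice.
        train_criterion = torch.nn.BCEWithLogitsLoss()
    else:
        # Assumes batches are (input, target) pairs with 0/1 targets.
        targets = torch.cat([t.flatten() for _, t in train_loader])
        train_criterion = make_balanced_criterion(targets)

    # Each validation set has its own class ratio, so each loader gets
    # its own weighted criterion rather than sharing a single one.
    valid_criteria = []
    for loader in valid_loaders:
        targets = torch.cat([t.flatten() for _, t in loader])
        valid_criteria.append(make_balanced_criterion(targets))

    return train_criterion, valid_criteria
```

Even so, per-loader weighting changes what the validation losses mean (they stop being comparable across loaders on the same scale), which is part of why this needs more thought before being wired in.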