Make code and tests flexible to the use of a pre-processed Montgomery dataset, to speed up testing where possible (closes #74).
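One way to keep the test suite flexible to a pre-processed copy is a fixture that resolves the dataset location from the environment and skips when it is absent. This is only a sketch; the fixture name and the `MEDNET_MONTGOMERY_DATADIR` variable are illustrative assumptions, not necessarily what this MR implements:

```python
# conftest.py (illustrative sketch, not the actual mednet test setup)
import os
import pathlib

import pytest


@pytest.fixture
def montgomery_datadir() -> pathlib.Path:
    """Return the root of a (possibly pre-processed) Montgomery dataset copy.

    Tests depending on this fixture are skipped when no dataset is configured.
    """
    path = os.environ.get("MEDNET_MONTGOMERY_DATADIR")  # hypothetical variable
    if path is None:
        pytest.skip("no Montgomery dataset configured for this test run")
    return pathlib.Path(path)
```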
This MR also:
- Removes unused test data (closes #75)
- Applies DRY to the `make_split()` function available in many data module instances (a sketch of the idea follows this list)
- Implements a method to easily re-write reference histograms for testing
- Adds a script to help pre-process the Montgomery dataset using the same code base as mednet
- Increases the default line length to 88 (instead of 80), to match other packages in the software group
- Removes the unused `_make_balanced_random_sampler()` from the `basedatamodule` module
- Removes the setting of any custom `sampler` objects on PyTorch data loaders
- Removes the unused `CSVDatabaseSplit` from the code
- Marks slow tests (those that take more than 1 minute to execute)
- Provides a pixi task `test-fast` to run the test suite while skipping tests marked as slow, and a `test-slow` task to only run the slow tests (closes #73); see the marker sketch after this list
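For reference, the shared `make_split()` helper could take roughly the following shape. The signature, return type, and package path used here are assumptions for illustration, not the exact mednet API:

```python
# Possible shape of a single, shared split loader (illustrative only).
import importlib.resources
import json
import typing


def make_split(package: str, basename: str) -> dict[str, typing.Any]:
    """Load a JSON database split shipped as package data of ``package``."""
    with importlib.resources.files(package).joinpath(basename).open("r") as f:
        return json.load(f)


# e.g. split = make_split("mednet.data.montgomery", "default.json")  # hypothetical path
```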
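The fast/slow separation relies on standard pytest markers; below is a minimal sketch of what the new pixi tasks presumably wrap (test name and configuration details are illustrative):

```python
import pytest


@pytest.mark.slow  # applied to tests that take more than ~1 minute
def test_end_to_end_training():
    """Stand-in body for a lengthy end-to-end test."""
    assert True


# The marker should be registered in the pytest configuration (e.g.
# ``markers = ["slow: tests taking over one minute"]``) to keep
# ``--strict-markers`` happy. Selection then happens on the command line:
#   pytest -m "not slow"   # test-fast: skip slow tests
#   pytest -m "slow"       # test-slow: only slow tests
```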