Create common library
These changes incorporate models, data modules and configurations from biosignal/software/deepdraw> into this package, streamlining common code and defining specialisations for the classification and segmentation of medical images.
Most CLI apps remain unaltered. Where required, specialised CLIs for classification (`classify`) and segmentation (`segment`) were put in place.
To do:

- Custom upload script per library
- Move imports inside functions
- Update evaluation measure code
- Fix issue in `segmentation/engine/adabound.py`: `if lr is None`
- Update docstrings and documentation to ensure the `segmentation` lib does not mention classification
- Make model and augmentation transforms configurable
- When no validation set is available, make it clear in the logs that we are using the unaugmented train split
- Fix extension of saved files during prediction and evaluation
- Fix CPU memory leak during training
- Fix suboptimal usage of the GPU
- Move all tests to a common subdirectory
- Optimize how predictions are saved to avoid keeping all images in memory
- Do not call saved predictions `img` in `segmentation/predict`, as that is misleading
- @dcarron Redefine segmentation sample as `TypedDict("image", "target", "mask")`, `dict(str, Any)`
- @dcarron Apply augmentation transforms to every model input in the segmentation task
- Investigate integer check in `make_z_normalizer()`
- @aanjos Implement second-annotator evaluation in `evaluate.py`
- @aanjos Re-visit the `classification/evaluate.py` script and the `classification/evaluator.py` engine to understand what could be simplified/made common w.r.t. the segmentation equivalent
- Copy segmentation databases to the CI machines so the various database tests can run (the following are now available on both Linux and macOS CIs: CHASE-DB1, DRIVE, HRF, JSRT, MontgomeryXraySet, STARE, ShenzhenXraySet, TBXpredict, montgomery-preprocessed, tbx11k)
- Integrate available CI datasets into the segmentation tests
- Add a script (`view`) to save an image with each element colorized in a different colour:
  - Colorize with transparency (the alpha level could be a user-provided parameter)
  - Apply the mask to the predicted image to clean up the output a bit
  - Add tests
  - Add logging
  - The script currently only operates on a single file, but it might make more sense to specify a result folder and recursively glob all HDF5 files
- @aanjos Revise `predict_step()` on all models so they only return predictions, and not metadata associated with samples. Use the approach from this example to recover sample metadata during prediction
- Remove the use of LFS for `src/mednet/libs/segmentation/config/data/cxr8/default.json` (just use a bzip2-compressed version)
- @dcarron Study how to incorporate a learning-rate scheduler into the shared trainer (cosine annealing was used for lwnet in deepdraw, e.g.)
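The segmentation-sample redefinition mentioned above could look like the sketch below. Only the field names (`image`, `target`, `mask`) come from the to-do item; the class name is hypothetical and the value types are left as `Any`, since the actual tensor types are not pinned down here:

```python
import typing


class SegmentationSample(typing.TypedDict):
    """One segmentation sample, as proposed in the to-do item above.

    Values are typed ``Any`` in this sketch; in practice they would
    likely be ``torch.Tensor`` objects.
    """

    image: typing.Any   # input image
    target: typing.Any  # ground-truth annotation
    mask: typing.Any    # region-of-interest mask


# A TypedDict is a plain dict at runtime, so construction stays cheap
# and existing dict-based code keeps working:
sample: SegmentationSample = {"image": None, "target": None, "mask": None}
```

Static type checkers would then flag missing or misspelled keys, while runtime behaviour is unchanged from a plain `dict(str, Any)`.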
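For the "recursively glob all HDF5 files" item under the `view` script, a minimal sketch of the intended behaviour (the function name `find_hdf5_files` is hypothetical, not part of the current code):

```python
from pathlib import Path


def find_hdf5_files(root: Path) -> list[Path]:
    """Return ``root`` itself if it is a single file, otherwise all
    ``*.hdf5`` files found recursively under it, in sorted order."""
    if root.is_file():
        return [root]
    return sorted(root.rglob("*.hdf5"))
```

Accepting either a single file or a folder keeps the current single-file behaviour working while enabling batch operation on a whole results directory.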
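The "colorize with transparency" sub-item boils down to standard alpha compositing; a minimal per-pixel sketch (the function name `blend` is illustrative; a real implementation would operate on whole arrays):

```python
def blend(base: tuple, overlay: tuple, alpha: float) -> tuple:
    """Alpha-blend two RGB pixels (tuples of 0-255 ints).

    ``alpha`` is the opacity of the overlay: 0.0 keeps the base pixel,
    1.0 fully replaces it with the overlay colour.
    """
    return tuple(
        round(alpha * o + (1.0 - alpha) * b)
        for b, o in zip(base, overlay)
    )
```

Exposing `alpha` as a CLI option, as the to-do item suggests, would let users tune how strongly the colorized elements obscure the underlying image.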
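The cosine annealing schedule mentioned in the last item has a simple closed form; a plain-Python sketch of the formula (the same one used by e.g. `torch.optim.lr_scheduler.CosineAnnealingLR`; the function name here is illustrative):

```python
import math


def cosine_annealing_lr(step: int, total_steps: int,
                        lr_max: float, lr_min: float = 0.0) -> float:
    """Learning rate at ``step`` under cosine annealing:

    lr(t) = lr_min + 0.5 * (lr_max - lr_min) * (1 + cos(pi * t / T))

    The rate starts at ``lr_max`` and decays smoothly to ``lr_min``
    over ``total_steps`` steps.
    """
    return lr_min + 0.5 * (lr_max - lr_min) * (
        1.0 + math.cos(math.pi * step / total_steps)
    )
```

In a shared Lightning trainer, this would typically be wired in by returning the scheduler alongside the optimizer from `configure_optimizers()`.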
Issues addressed by this MR:
- Closes #79
Edited by André Anjos