Commit 21440564 authored by Daniel CARRON, committed by André Anjos

[doc] Add segmentation configs in documentation

parent a4bc1166
Merge request !46: Create common library
Showing 575 additions and 44 deletions
@@ -209,10 +209,17 @@ CNN and other models implemented.

.. autosummary::
   :toctree: api/models

   mednet.libs.segmentation.models.driu_bn
   mednet.libs.segmentation.models.driu_od
   mednet.libs.segmentation.models.driu_pix
   mednet.libs.segmentation.models.driu
   mednet.libs.segmentation.models.hed
   mednet.libs.segmentation.models.losses
   mednet.libs.segmentation.models.lwnet
   mednet.libs.segmentation.models.m2unet
   mednet.libs.segmentation.models.separate
   mednet.libs.segmentation.models.typing
   mednet.libs.segmentation.models.unet

.. _mednet.libs.segmentation.api.utils:
......
@@ -2,19 +2,25 @@
..
.. SPDX-License-Identifier: GPL-3.0-or-later

=====================
Preset Configurations
=====================

.. _mednet.libs.classification.config:

------------------------------------
Classification Preset Configurations
------------------------------------

This module contains preset configurations for baseline CNN architectures and
DataModules in a classification task.
.. _mednet.libs.classification.config.models:

Pre-configured Models
^^^^^^^^^^^^^^^^^^^^^

Pre-configured models you can readily use.
@@ -32,32 +38,16 @@ Pre-configured models you can readily use.

   mednet.libs.classification.config.models.pasa
.. _mednet.libs.classification.config.datamodules:

DataModule support
^^^^^^^^^^^^^^^^^^

Base DataModules and raw data loaders for the various databases currently
supported in this package, for your reference. Each pre-configured DataModule
can receive the name of one or more splits as an argument to build a fully
functional DataModule that can be used in training, prediction or testing.
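The split-name mechanism can be sketched as follows. This is an illustrative stand-in only, not the real mednet class: the actual pre-configured modules (under ``mednet.libs.classification.config.data``) require the corresponding raw dataset to be installed locally, so a minimal class mimics the documented call pattern here.

```python
# Illustrative stand-in (NOT the real mednet DataModule): shows the
# documented pattern where a pre-configured DataModule receives a split
# name that selects the partitions to expose.
class DataModule:
    def __init__(self, split_name: str):
        # in the real package, the split name refers to a JSON file
        # shipped with the configuration module
        self.split_name = split_name
        self.protocol = split_name.rsplit(".", 1)[0]

# mirrors instantiations found elsewhere in this documentation, e.g.
# DataModule("vessel.json") in the IOSTAR vessel configuration
datamodule = DataModule("default.json")
```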
.. autosummary::
   :toctree: api/config.datamodules

@@ -79,7 +69,7 @@ functional DataModule that can be used in training, prediction or testing.
.. _mednet.libs.classification.config.datamodule-instances:

Pre-configured DataModules
^^^^^^^^^^^^^^^^^^^^^^^^^^

DataModules provide access to preset pytorch dataloaders for training,
validating, testing and running prediction tasks. Each of the pre-configured
@@ -109,7 +99,7 @@ DataModule is based on one (or more) of the :ref:`supported base DataModules
.. _mednet.libs.classification.config.datamodule-instances.folds:

Cross-validation DataModules
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

We support cross-validation with precise preset folds. In this section, you
will find the configuration for the first fold (fold-0) for all supported
@@ -134,4 +124,123 @@ DataModules. Nine other folds are available for every configuration (from 1 to

   mednet.libs.classification.config.data.tbx11k.v2_fold_0
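As noted above, each fold-0 configuration has nine sibling folds. Assuming the naming pattern shown in the listing (e.g. ``v2_fold_0`` for the TBX11K "v2" protocol), the full set of fold module paths can be enumerated as:

```python
# Enumerate the ten cross-validation fold configurations for the TBX11K
# "v2" protocol; folds 1-9 follow the same naming scheme as fold 0 above.
fold_modules = [
    f"mednet.libs.classification.config.data.tbx11k.v2_fold_{k}"
    for k in range(10)
]
```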
.. _mednet.libs.segmentation.config:

----------------------------------
Segmentation Preset Configurations
----------------------------------

This module contains preset configurations for baseline CNN architectures and
DataModules in a segmentation task.
.. _mednet.libs.segmentation.config.models:

Pre-configured Models
^^^^^^^^^^^^^^^^^^^^^

Pre-configured models you can readily use.

.. autosummary::
   :toctree: api/config.models
   :template: config.rst

   mednet.libs.segmentation.config.models.driu_bn
   mednet.libs.segmentation.config.models.driu_od
   mednet.libs.segmentation.config.models.driu_pix
   mednet.libs.segmentation.config.models.driu
   mednet.libs.segmentation.config.models.hed
   mednet.libs.segmentation.config.models.lwnet
   mednet.libs.segmentation.config.models.m2unet
   mednet.libs.segmentation.config.models.unet

.. _mednet.libs.segmentation.config.datamodules:
DataModule support
^^^^^^^^^^^^^^^^^^

Base DataModules and raw data loaders for the various databases currently
supported in this package, for your reference. Each pre-configured DataModule
can receive the name of one or more splits as an argument to build a fully
functional DataModule that can be used in training, prediction or testing.

.. autosummary::
   :toctree: api/config.datamodules
   mednet.libs.segmentation.config.data.chasedb1.datamodule
   mednet.libs.segmentation.config.data.cxr8.datamodule
   mednet.libs.segmentation.config.data.drhagis.datamodule
   mednet.libs.segmentation.config.data.drionsdb.datamodule
   mednet.libs.segmentation.config.data.drishtigs1.datamodule
   mednet.libs.segmentation.config.data.drive.datamodule
   mednet.libs.segmentation.config.data.hrf.datamodule
   mednet.libs.segmentation.config.data.iostar.datamodule
   mednet.libs.segmentation.config.data.jsrt.datamodule
   mednet.libs.segmentation.config.data.montgomery.datamodule
   mednet.libs.segmentation.config.data.refuge.datamodule
   mednet.libs.segmentation.config.data.rimoner3.datamodule
   mednet.libs.segmentation.config.data.shenzhen.datamodule
   mednet.libs.segmentation.config.data.stare.datamodule

.. _mednet.libs.segmentation.config.datamodule-instances:
Pre-configured DataModules
^^^^^^^^^^^^^^^^^^^^^^^^^^

DataModules provide access to preset pytorch dataloaders for training,
validating, testing and running prediction tasks. Each of the pre-configured
DataModules is based on one (or more) of the :ref:`supported base DataModules
<mednet.libs.segmentation.config.datamodules>`.

.. autosummary::
   :toctree: api/config.datamodule-instances
   :template: config.rst
   mednet.libs.segmentation.config.data.chasedb1.first_annotator
   mednet.libs.segmentation.config.data.chasedb1.second_annotator
   mednet.libs.segmentation.config.data.cxr8.default
   mednet.libs.segmentation.config.data.drhagis.default
   mednet.libs.segmentation.config.data.drionsdb.expert1
   mednet.libs.segmentation.config.data.drionsdb.expert2
   mednet.libs.segmentation.config.data.drishtigs1.optic_cup_all
   mednet.libs.segmentation.config.data.drishtigs1.optic_cup_any
   mednet.libs.segmentation.config.data.drishtigs1.optic_disc_all
   mednet.libs.segmentation.config.data.drishtigs1.optic_disc_any
   mednet.libs.segmentation.config.data.drive.default
   mednet.libs.segmentation.config.data.drive.drive_2nd
   mednet.libs.segmentation.config.data.hrf.default
   mednet.libs.segmentation.config.data.iostar.optic_disc
   mednet.libs.segmentation.config.data.iostar.vessel
   mednet.libs.segmentation.config.data.jsrt.default
   mednet.libs.segmentation.config.data.montgomery.default
   mednet.libs.segmentation.config.data.refuge.disc
   mednet.libs.segmentation.config.data.refuge.cup
   mednet.libs.segmentation.config.data.rimoner3.cup_exp1
   mednet.libs.segmentation.config.data.rimoner3.cup_exp2
   mednet.libs.segmentation.config.data.rimoner3.disc_exp1
   mednet.libs.segmentation.config.data.rimoner3.disc_exp2
   mednet.libs.segmentation.config.data.shenzhen.default
   mednet.libs.segmentation.config.data.stare.ah
   mednet.libs.segmentation.config.data.stare.vk
------------------
Data Augmentations
------------------

Sequences of data augmentations you can readily use.

.. _mednet.libs.common.config.augmentations:

.. autosummary::
   :toctree: api/config.augmentations
   :template: config.rst

   mednet.libs.common.config.augmentations.elastic
   mednet.libs.common.config.augmentations.affine
.. include:: links.rst
@@ -294,4 +294,221 @@ Please contact the authors of these databases to have access to the data.
- 243
.. _mednet.libs.segmentation.setup.databases.retinography:
Retinography
------------
.. list-table:: Supported Retinography Datasets (``*``: provided within this package)
   :header-rows: 1

   * - Dataset
     - Reference
     - H x W
     - Samples
     - Mask
     - Vessel
     - OD
     - Cup
     - Split Reference
     - Train
     - Test
   * - DRIVE_
     - [DRIVE-2004]_
     - 584 x 565
     - 40
     - ``x``
     - ``x``
     -
     -
     - [DRIVE-2004]_
     - 20
     - 20
   * - STARE_
     - [STARE-2000]_
     - 605 x 700
     - 20
     - ``*``
     - ``x``
     -
     -
     - [MANINIS-2016]_
     - 10
     - 10
   * - CHASE-DB1_
     - [CHASEDB1-2012]_
     - 960 x 999
     - 28
     - ``*``
     - ``x``
     -
     -
     - [CHASEDB1-2012]_
     - 8
     - 20
   * - HRF_
     - [HRF-2013]_
     - 2336 x 3504
     - 45
     - ``x``
     - ``x``
     -
     -
     - [ORLANDO-2017]_
     - 15
     - 30
   * - IOSTAR_
     - [IOSTAR-2016]_
     - 1024 x 1024
     - 30
     - ``x``
     - ``x``
     - ``x``
     -
     - [MEYER-2017]_
     - 20
     - 10
   * - DRIONS-DB_
     - [DRIONSDB-2008]_
     - 400 x 600
     - 110
     -
     -
     - ``x``
     -
     - [MANINIS-2016]_
     - 60
     - 50
   * - `RIM-ONE r3`_
     - [RIMONER3-2015]_
     - 1424 x 1072
     - 159
     -
     -
     - ``x``
     - ``x``
     - [MANINIS-2016]_
     - 99
     - 60
   * - Drishti-GS1_
     - [DRISHTIGS1-2014]_
     - Varying
     - 101
     -
     -
     - ``x``
     - ``x``
     - [DRISHTIGS1-2014]_
     - 50
     - 51
   * - REFUGE_
     - [REFUGE-2018]_
     - 2056 x 2124 (1634 x 1634)
     - 1200
     -
     -
     - ``x``
     - ``x``
     - [REFUGE-2018]_
     - 400 (+400)
     - 400
   * - DRHAGIS_
     - [DRHAGIS-2017]_
     - Varying
     - 39
     - ``x``
     - ``x``
     -
     -
     - [DRHAGIS-2017]_
     - 19
     - 20
.. warning:: **REFUGE Dataset Support**

   The original directory ``Training400/AMD`` in REFUGE is considered to be
   replaced by an updated version provided by the `AMD Grand-Challenge`_ (with
   matching names).

   The changes concern image ``A0012.jpg``, which was corrupted in REFUGE, and
   image ``A0013.jpg``, which only exists in the AMD Grand-Challenge version.
.. _mednet.libs.segmentation.setup.databases.xray:
X-Ray
-----
.. list-table:: Supported X-Ray Datasets
   :header-rows: 1

   * - Dataset
     - Reference
     - H x W
     - Radiography Type
     - Samples
     - Mask
     - Split Reference
     - Train
     - Test
   * - `Montgomery County`_
     - [MC-2014]_
     - 4020 x 4892, or 4892 x 4020
     - Digital Radiography (DR)
     - 138
     - ``*``
     - [GAAL-2020]_
     - 96 (+14)
     - 28
   * - JSRT_
     - [JSRT-2000]_
     - 2048 x 2048
     - Digitized Radiography (laser digitizer)
     - 247
     - ``*``
     - [GAAL-2020]_
     - 172 (+25)
     - 50
   * - Shenzhen_
     - [SHENZHEN-2014]_
     - Varying
     - Computed Radiography (CR)
     - 662
     - ``*``
     - [GAAL-2020]_
     - 396 (+56)
     - 114
   * - CXR8_
     - [CXR8-2017]_
     - 1024 x 1024
     - Digital Radiography
     - 112120
     - ``x``
     - [GAAL-2020]_
     - 78484 (+11212)
     - 22424
.. warning:: **SHENZHEN/JSRT/CXR8 Dataset Support**

   For some datasets, the annotations/masks are distributed separately from
   the original images.  In those cases, both the original images and the
   annotations must be downloaded and placed inside the same directory, to
   match the dataset reference dictionary's path.

   * The Shenzhen_ root directory should then contain at least these two
     subdirectories:

     - ``CXR_png/`` (directory containing the CXR images)
     - ``mask/`` (contains masks downloaded from `Shenzhen Annotations`_)

   * The CXR8_ root directory:

     - ``images/`` (directory containing the CXR images)
     - ``segmentations/`` (contains masks downloaded from `CXR8 Annotations`_)

   * The JSRT_ root directory:

     - ``All247images/`` (directory containing the CXR images, in raw format)
     - ``scratch/`` (contains masks downloaded from `JSRT Annotations`_)
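A quick way to verify the layout described above is a small shell check. The dataset root paths in the commented examples are placeholders for your local installation:

```shell
# Sanity-check that a dataset root contains the required subdirectories
# before pointing mednet at it.  Sub-directory names follow the lists above.
check_layout() {
  root="$1"; shift
  for sub in "$@"; do
    if [ ! -d "$root/$sub" ]; then
      echo "missing: $root/$sub" >&2
      return 1
    fi
  done
  echo "ok: $root"
}

# Usage (placeholder paths):
#   check_layout /path/to/shenzhen CXR_png mask
#   check_layout /path/to/cxr8     images segmentations
#   check_layout /path/to/jsrt     All247images scratch
```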
.. include:: links.rst
@@ -28,9 +28,111 @@
.. _TBX11K: https://mmcheng.net/tb/
.. _TBX11K_simplified: https://www.kaggle.com/datasets/vbookshelf/tbx11k-simplified
.. _drive: https://github.com/wfdubowen/Retina-Unet/tree/master/DRIVE/
.. _stare: http://cecas.clemson.edu/~ahoover/stare/
.. _hrf: https://www5.cs.fau.de/research/data/fundus-images/
.. _iostar: http://www.retinacheck.org/datasets
.. _chase-db1: https://blogs.kingston.ac.uk/retinal/chasedb1/
.. _drions-db: http://www.ia.uned.es/~ejcarmona/DRIONS-DB.html
.. _rim-one r3: http://medimrg.webs.ull.es/research/downloads/
.. _drishti-gs1: http://cvit.iiit.ac.in/projects/mip/drishti-gs/mip-dataset2/Home.php
.. _refuge: https://refuge.grand-challenge.org/Details/
.. _amd grand-challenge: https://amd.grand-challenge.org/
.. _drhagis: https://personalpages.manchester.ac.uk/staff/niall.p.mcloughlin/
.. _montgomery county: https://openi.nlm.nih.gov/faq#faq-tb-coll
.. _jsrt: http://db.jsrt.or.jp/eng.php
.. _jsrt-kaggle: https://www.kaggle.com/datasets/raddar/nodules-in-chest-xrays-jsrt
.. _cxr8: https://nihcc.app.box.com/v/ChestXray-NIHCC
.. Annotation data websites
.. _shenzhen annotations: https://www.kaggle.com/yoctoman/shcxr-lung-mask
.. _cxr8 annotations: https://github.com/lucasmansilla/NIH_chest_xray14_segmentations
.. _jsrt annotations: https://www.isi.uu.nl/Research/Databases/SCR/download.php
.. models
.. _imagenet: https://www.image-net.org
.. _alexnet: https://en.wikipedia.org/wiki/AlexNet
.. _alexnet-pytorch: https://pytorch.org/hub/pytorch_vision_alexnet/
.. _densenet: https://arxiv.org/abs/1608.06993
.. _densenet-pytorch: https://pytorch.org/hub/pytorch_vision_densenet/
.. Pretrained models
.. _baselines_driu_drive: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/baselines/driu-drive-1947d9fa.pth
.. _baselines_hed_drive: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/baselines/hed-drive-c8b86082.pth
.. _baselines_m2unet_drive: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/baselines/m2unet-drive-ce4c7a53.pth
.. _baselines_unet_drive: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/baselines/unet-drive-0ac99e2e.pth
.. _baselines_driu_stare: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/baselines/driu-stare-79dec93a.pth
.. _baselines_hed_stare: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/baselines/hed-stare-fcdb7671.pth
.. _baselines_m2unet_stare: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/baselines/m2unet-stare-952778c2.pth
.. _baselines_unet_stare: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/baselines/unet-stare-49b6a6d0.pth
.. _baselines_driu_chase: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/baselines/driu-chasedb1-e7cf53c3.pth
.. _baselines_hed_chase: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/baselines/hed-chasedb1-55ec6d34.pth
.. _baselines_m2unet_chase: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/baselines/m2unet-chasedb1-0becbf29.pth
.. _baselines_unet_chase: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/baselines/unet-chasedb1-be41b5a5.pth
.. _baselines_driu_hrf: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/baselines/driu-hrf-c9e6a889.pth
.. _baselines_hed_hrf: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/baselines/hed-hrf-3f4ab1c4.pth
.. _baselines_m2unet_hrf: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/baselines/m2unet-hrf-2c3f2485.pth
.. _baselines_unet_hrf: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/baselines/unet-hrf-9a559821.pth
.. _baselines_driu_iostar: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/baselines/driu-iostar-vessel-ef8cc27b.pth
.. _baselines_hed_iostar: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/baselines/hed-iostar-vessel-37cfaee1.pth
.. _baselines_m2unet_iostar: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/baselines/m2unet-iostar-vessel-223b61ef.pth
.. _baselines_unet_iostar: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/baselines/unet-iostar-vessel-86c78e87.pth
.. _baselines_m2unet_jsrt: https://bobconda.lab.idiap.ch/public/data/bob/deepdraw/master/baselines/m2unet-jsrt-5f062009.pth
.. _baselines_m2unet_montgomery: https://bobconda.lab.idiap.ch/public/data/bob/deepdraw/master/baselines/m2unet-montgomery-1c24519a.pth
.. _baselines_m2unet_shenzhen: https://bobconda.lab.idiap.ch/public/data/bob/deepdraw/master/baselines/m2unet-shenzhen-7c9688e6.pth
.. _baselines_lwnet_jsrt: https://bobconda.lab.idiap.ch/public/data/bob/deepdraw/master/baselines/lwnet-jsrt-73807eb1.pth
.. _baselines_lwnet_montgomery: https://bobconda.lab.idiap.ch/public/data/bob/deepdraw/master/baselines/lwnet-montgomery-9c6bf39b.pth
.. _baselines_lwnet_shenzhen: https://bobconda.lab.idiap.ch/public/data/bob/deepdraw/master/baselines/lwnet-shenzhen-10196d9c.pth
.. _covd_driu_drive: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/covd/driu/drive/model.pth
.. _covd_hed_drive: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/covd/hed/drive/model.pth
.. _covd_m2unet_drive: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/covd/m2unet/drive/model.pth
.. _covd_unet_drive: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/covd/unet/drive/model.pth
.. _covd_driu_stare: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/covd/driu/stare/model.pth
.. _covd_hed_stare: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/covd/hed/stare/model.pth
.. _covd_m2unet_stare: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/covd/m2unet/stare/model.pth
.. _covd_unet_stare: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/covd/unet/stare/model.pth
.. _covd_driu_chase: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/covd/driu/chasedb1/model.pth
.. _covd_hed_chase: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/covd/hed/chasedb1/model.pth
.. _covd_m2unet_chase: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/covd/m2unet/chasedb1/model.pth
.. _covd_unet_chase: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/covd/unet/chasedb1/model.pth
.. _covd_driu_hrf: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/covd/driu/hrf/model.pth
.. _covd_hed_hrf: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/covd/hed/hrf/model.pth
.. _covd_m2unet_hrf: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/covd/m2unet/hrf/model.pth
.. _covd_unet_hrf: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/covd/unet/hrf/model.pth
.. _covd_driu_iostar: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/covd/driu/iostar-vessel/model.pth
.. _covd_hed_iostar: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/covd/hed/iostar-vessel/model.pth
.. _covd_m2unet_iostar: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/covd/m2unet/iostar-vessel/model.pth
.. _covd_unet_iostar: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/covd/unet/iostar-vessel/model.pth
.. DRIVE
.. _driu_drive.pth: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/DRIU_DRIVE.pth
.. _m2unet_drive.pth: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/M2UNet_DRIVE.pth
.. _m2unet_covd-drive.pth: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/M2UNet_COVD-DRIVE.pth
.. _m2unet_covd-drive_ssl.pth: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/M2UNet_COVD-DRIVE_SSL.pth
.. STARE
.. _driu_stare.pth: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/DRIU_STARE.pth
.. _m2unet_stare.pth: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/M2UNet_STARE.pth
.. _m2unet_covd-stare.pth: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/M2UNet_COVD-STARE.pth
.. _m2unet_covd-stare_ssl.pth: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/M2UNet_COVD-STARE_SSL.pth
.. CHASE-DB1
.. _driu_chasedb1.pth: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/DRIU_CHASEDB1.pth
.. _m2unet_chasedb1.pth: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/M2UNet_CHASEDB1.pth
.. _m2unet_covd-chasedb1.pth: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/M2UNet_COVD-CHASEDB1.pth
.. _m2unet_covd-chasedb1_ssl.pth: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/M2UNet_COVD-CHASEDB1_SSL.pth
.. IOSTAR
.. _driu_iostar.pth: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/DRIU_IOSTARVESSEL.pth
.. _m2unet_iostar.pth: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/M2UNet_IOSTARVESSEL.pth
.. _m2unet_covd-iostar.pth: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/M2UNet_COVD-IOSTAR.pth
.. _m2unet_covd-iostar_ssl.pth: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/M2UNet_COVD-IOSTAR_SSL.pth
.. HRF
.. _driu_hrf.pth: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/DRIU_HRF1168.pth
.. _m2unet_hrf.pth: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/M2UNet_HRF1168.pth
.. _m2unet_covd-hrf.pth: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/M2UNet_COVD-HRF.pth
.. _m2unet_covd-hrf_ssl.pth: https://www.idiap.ch/software/bob/data/bob/deepdraw/master/M2UNet_COVD-HRF_SSL.pth
@@ -106,3 +106,98 @@
of precision, recall and F-score, with implication for evaluation**,
European conference on Advances in Information Retrieval Research, 2005.
https://doi.org/10.1007/978-3-540-31865-1_25
.. [JSRT-2000] *J. Shiraishi, S. Katsuragawa, J. Ikezoe, T. Matsumoto, T.
Kobayashi, K. Komatsu, M. Matsui, H. Fujita, Y. Kodera, K. Doi*,
**Development of a digital image database for chest radiographs with and
without a lung nodule: Receiver operating characteristic analysis of
radiologists’ detection of pulmonary nodules.**, American Journal of
Roentgenology. 2000. https://pubmed.ncbi.nlm.nih.gov/10628457
.. [CXR8-2017] *Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu,
Mohammadhadi Bagheri, Ronald Summers*, **ChestX-ray8: Hospital-scale Chest
X-ray Database and Benchmarks on Weakly-Supervised Classification and
Localization of Common Thorax Diseases.**, IEEE CVPR, pp. 3462-3471, 2017.
https://arxiv.org/abs/1705.02315
.. [GAAL-2020] *G. Gaál, B. Maga, A. Lukács*, **Attention U-Net Based
Adversarial Architectures for Chest X-ray Lung Segmentation.**, 2020.
https://arxiv.org/abs/2003.10304v1
.. [DRISHTIGS1-2014] *J. Sivaswamy, S. R. Krishnadas, G. Datt Joshi, M. Jain and
A. U. Syed Tabish*, **Drishti-GS: Retinal image dataset for optic nerve
head (ONH) segmentation**, 2014 IEEE 11th International Symposium on
Biomedical Imaging (ISBI), Beijing, 2014, pp. 53-56.
https://doi.org/10.1109/ISBI.2014.6867807
.. [DRIVE-2004] *J. Staal, M. D. Abramoff, M. Niemeijer, M. A. Viergever and B.
van Ginneken*, **Ridge-based vessel segmentation in color images of the
retina**, in IEEE Transactions on Medical Imaging, vol. 23, no. 4, pp.
501-509, April 2004. https://doi.org/10.1109/TMI.2004.825627
.. [ORLANDO-2017] *J. I. Orlando, E. Prokofyeva and M. B. Blaschko*, **A
Discriminatively Trained Fully Connected Conditional Random Field Model for
Blood Vessel Segmentation in Fundus Images**, in IEEE Transactions on
Biomedical Engineering, vol. 64, no. 1, pp. 16-27, Jan. 2017.
https://doi.org/10.1109/TBME.2016.2535311
.. [MEYER-2017] *M. I. Meyer, P. Costa, A. Galdran, A. M. Mendonça, and A.
Campilho*, **A Deep Neural Network for Vessel Segmentation of Scanning Laser
Ophthalmoscopy Images**, in Image Analysis and Recognition, vol. 10317, F.
Karray, A. Campilho, and F. Cheriet, Eds. Cham: Springer International
Publishing, 2017, pp. 507–515. https://doi.org/10.1007/978-3-319-59876-5_56
.. [REFUGE-2018] https://refuge.grand-challenge.org/Details/
.. [CHASEDB1-2012] *M. M. Fraz et al.*, **An Ensemble Classification-Based
Approach Applied to Retinal Blood Vessel Segmentation**, in IEEE
Transactions on Biomedical Engineering, vol. 59, no. 9, pp. 2538-2548, Sept.
2012. https://doi.org/10.1109/TBME.2012.2205687
.. [DRIONSDB-2008] *Enrique J. Carmona, Mariano Rincón, Julián García-Feijoó, José
M. Martínez-de-la-Casa*, **Identification of the optic nerve head with
genetic algorithms**, in Artificial Intelligence in Medicine, Volume 43,
Issue 3, pp. 243-259, 2008. http://dx.doi.org/10.1016/j.artmed.2008.04.005
.. [HRF-2013] *A. Budai, R. Bock, A. Maier, J. Hornegger, and G. Michelson*,
**Robust Vessel Segmentation in Fundus Images**, in International Journal of
Biomedical Imaging, vol. 2013, p. 11, 2013.
http://dx.doi.org/10.1155/2013/154860
.. [IOSTAR-2016] *J. Zhang, B. Dashtbozorg, E. Bekkers, J. P. W. Pluim, R. Duits
and B. M. ter Haar Romeny*, **Robust Retinal Vessel Segmentation via Locally
Adaptive Derivative Frames in Orientation Scores**, in IEEE Transactions on
Medical Imaging, vol. 35, no. 12, pp. 2631-2644, Dec. 2016.
.. [RIMONER3-2015] *F. Fumero, J. Sigut, S. Alayón, M. González-Hernández, M.
González de la Rosa*, **Interactive Tool and Database for Optic Disc and Cup
Segmentation of Stereo and Monocular Retinal Fundus Images**, Conference on
Computer Graphics, Visualization and Computer Vision, 2015.
https://dspace5.zcu.cz/bitstream/11025/29670/1/Fumero.pdf
.. [SHENZHEN-2014] *S. Jaeger, S. Candemir, S. Antani, Y. X. Wáng, P. X. Lu, G.
Thoma*, **Two public chest X-ray datasets for computer-aided screening of
pulmonary diseases.**, Quantitative imaging in medicine and surgery. 2014.
https://doi.org/10.3978/j.issn.2223-4292.2014.11.20
.. [STARE-2000] *A. D. Hoover, V. Kouznetsova and M. Goldbaum*, **Locating blood
vessels in retinal images by piecewise threshold probing of a matched filter
response**, in IEEE Transactions on Medical Imaging, vol. 19, no. 3, pp.
203-210, March 2000. https://doi.org/10.1109/42.845178
.. [SANDLER-2018] *M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, L.-C.h Chen*,
**MobileNetV2: Inverted Residuals and Linear Bottlenecks**, 2018.
https://arxiv.org/abs/1801.04381
.. [RONNEBERGER-2015] *O. Ronneberger, P. Fischer, T. Brox*, **U-Net:
Convolutional Networks for Biomedical Image Segmentation**, 2015.
https://arxiv.org/abs/1505.04597
.. [DRHAGIS-2017] *S. Holm, G. Russell, V. Nourrit, N. McLoughlin*, **DR HAGIS – A Novel Fundus Image Database for the Automatic Extraction of Retinal Surface Vessels**,
SPIE Journal of Medical Imaging, 2017.
https://doi.org/10.1117/1.jmi.4.1.014503
.. [MC-2014] *S. Jaeger, S. Candemir, S. Antani, Y. X. Wáng, P. X. Lu, G.
Thoma*, **Two public chest X-ray datasets for computer-aided screening of
pulmonary diseases.**, Quantitative imaging in medicine and surgery. 2014.
https://doi.org/10.3978/j.issn.2223-4292.2014.11.20
# SPDX-FileCopyrightText: Copyright © 2024 Idiap Research Institute <contact@idiap.ch>
#
# SPDX-License-Identifier: GPL-3.0-or-later
"""CXR8 dataset for Vessel Segmentation (default protocol).
"""CXR8 Dataset (default protocol).
* Split reference: [CXR8-2004]_
* This configuration resolution: 544 x 544 (center-crop)
* See :py:mod:`deepdraw.data.cxr8` for dataset details
* This dataset offers a second-annotator comparison for the test set only
* Split reference: [GAAL-2020]_
* Configuration resolution: 256 x 256
"""
from mednet.libs.segmentation.config.data.cxr8.datamodule import DataModule
......
@@ -5,7 +5,6 @@

* Configuration resolution: 416 x 608 (after padding)
* Split reference: [MANINIS-2016]_
"""
from mednet.libs.segmentation.config.data.drionsdb.datamodule import DataModule
......
@@ -5,7 +5,6 @@

* Configuration resolution: 416 x 608 (after padding)
* Split reference: [MANINIS-2016]_
"""
from mednet.libs.segmentation.config.data.drionsdb.datamodule import DataModule
......
@@ -105,12 +105,11 @@ class DataModule(CachingDataModule):
and notching information.

* Reference (including train/test split): [DRISHTIGS1-2014]_
* Original resolution (height x width): varying (min: 1749 x 2045, max: 1845 x 2468)
* Configuration resolution: 1760 x 2048 (after center cropping)
* Protocols ``optic-disc`` and ``optic-cup``:

  * Training: 50
  * Test: 51
Parameters
----------
......
@@ -5,7 +5,6 @@

* Configuration resolution: 1760 x 2048 (after center cropping)
* Reference (includes split): [DRISHTIGS1-2014]_
"""
from mednet.libs.segmentation.config.data.drishtigs1.datamodule import DataModule
......
@@ -5,7 +5,6 @@

* Configuration resolution: 1760 x 2048 (after center cropping)
* Reference (includes split): [DRISHTIGS1-2014]_
"""
from mednet.libs.segmentation.config.data.drishtigs1.datamodule import DataModule
......
@@ -5,7 +5,6 @@

* Configuration resolution: 1760 x 2048 (after center cropping)
* Reference (includes split): [DRISHTIGS1-2014]_
"""
from mednet.libs.segmentation.config.data.drishtigs1.datamodule import DataModule
......
@@ -5,7 +5,6 @@

* Configuration resolution: 1760 x 2048 (after center cropping)
* Reference (includes split): [DRISHTIGS1-2014]_
"""
from mednet.libs.segmentation.config.data.drishtigs1.datamodule import DataModule
......
@@ -5,7 +5,6 @@

* Split reference: [DRIVE-2004]_
* This configuration resolution: 544 x 544 (center-crop)
* This dataset offers a second-annotator comparison for the test set only
"""
......
@@ -5,7 +5,6 @@

* Split reference: [DRIVE-2004]_
* This configuration resolution: 544 x 544 (center-crop)
* This dataset offers a second-annotator comparison for the test set only
"""
......
@@ -5,7 +5,6 @@

* Split reference: [ORLANDO-2017]_
* Configuration resolution: 1168 x 1648 (about half full HRF resolution)
"""
from mednet.libs.segmentation.config.data.hrf.datamodule import (
......
# SPDX-FileCopyrightText: Copyright © 2024 Idiap Research Institute <contact@idiap.ch>
#
# SPDX-License-Identifier: GPL-3.0-or-later
"""IOSTAR dataset for Optic Disc Segmentation (default protocol).
* Split reference: [MEYER-2017]_
* Configuration resolution: 1024 x 1024 (original resolution)
"""
from mednet.libs.segmentation.config.data.iostar.datamodule import DataModule
datamodule = DataModule("optic-disc.json")
# SPDX-FileCopyrightText: Copyright © 2024 Idiap Research Institute <contact@idiap.ch>
#
# SPDX-License-Identifier: GPL-3.0-or-later
"""IOSTAR dataset for Vessel Segmentation (default protocol).
* Split reference: [MEYER-2017]_
* Configuration resolution: 1024 x 1024 (original resolution)
"""
from mednet.libs.segmentation.config.data.iostar.datamodule import DataModule
datamodule = DataModule("vessel.json")
@@ -41,10 +41,10 @@ class SegmentationRawDataLoader(_SegmentationRawDataLoader):

def load_pil_raw_12bit_jsrt(self, path: pathlib.Path) -> PIL.Image.Image:
"""Load raw 16-bit sample data.

This method was designed to handle the raw images from the JSRT dataset.
It reads the data file and applies a simple histogram equalization to the
8-bit representation of the image to obtain something along the lines of
the PNG (unofficial) version distributed at `JSRT-Kaggle`.
Parameters
----------
......
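The conversion performed by ``load_pil_raw_12bit_jsrt`` can be sketched in pure Python. This is a simplified stand-in under stated assumptions (JSRT raw files storing 12-bit pixels in big-endian 16-bit words; the real loader returns a ``PIL.Image``, while here only the 12-bit-to-8-bit scaling and histogram equalization are shown, on a flat pixel list):

```python
import struct

def raw12_to_equalized_8bit(raw: bytes, width: int, height: int) -> list[int]:
    # unpack big-endian unsigned 16-bit words holding 12-bit pixel data
    n = width * height
    pixels = struct.unpack(f">{n}H", raw[: 2 * n])
    # keep the top 8 of the 12 significant bits
    pix8 = [p >> 4 for p in pixels]
    # histogram equalization via the cumulative distribution function
    hist = [0] * 256
    for p in pix8:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    scale = 255 / max(1, n - cdf_min)
    return [round((cdf[p] - cdf_min) * scale) for p in pix8]
```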
@@ -6,7 +6,6 @@

* Split reference: [GAAL-2020]_
* Configuration resolution: 256 x 256
"""
from mednet.libs.segmentation.config.data.jsrt.datamodule import (
......