Commit e6a4630b authored by André Anjos

[configs] Documented all configuration files; Added script to list/describe/copy configuration files; Re-structured user guide
parent c95a0e99
1 merge request: !12 (Streamlining)
Pipeline #38266 passed
Showing 428 additions and 5 deletions
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""RIM-ONE r3 (training set) for Cup Segmentation
The dataset contains 159 stereo eye fundus images with a resolution (width x
height) of 2144 x 1424. The right part of the stereo image is disregarded.
Two sets of
ground-truths for optic disc and optic cup are available. The first set is
commonly used for training and testing. The second set acts as a “human”
baseline.
* Reference: [RIMONER3-2015]_
* Original resolution (height x width): 1424 x 1072
* Configuration resolution: 1440 x 1088 (after padding)
* Training samples: 99
* Split reference: [MANINIS-2016]_
"""
from bob.db.rimoner3 import Database as RIMONER3
from bob.ip.binseg.data.transforms import *
from bob.ip.binseg.data.binsegdataset import BinSegDataset
......
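The bodies of these dataset configurations are elided above ("......"); they all follow the same pattern of composing transforms and wrapping the low-level bob.db database into a PyTorch-compatible dataset. A minimal sketch under stated assumptions (the BinSegDataset keyword arguments and the protocol name below are illustrative guesses, not the elided code):

    # Hypothetical sketch of a dataset configuration body, not the elided code.
    from bob.db.rimoner3 import Database as RIMONER3
    from bob.ip.binseg.data.transforms import Compose, Pad, ToTensor
    from bob.ip.binseg.data.binsegdataset import BinSegDataset

    # 1424 x 1072 originals are padded to the 1440 x 1088 configuration
    # resolution (multiples of 16, so successive CNN downsamplings stay integral)
    transforms = Compose([Pad((8, 8, 8, 8)), ToTensor()])

    # assumptions: the protocol name and the wrapper's keyword arguments
    bobdb = RIMONER3(protocol="default_cup")
    dataset = BinSegDataset(bobdb, split="train", transform=transforms)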
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""RIM-ONE r3 (test set) for Cup Segmentation
The dataset contains 159 stereo eye fundus images with a resolution (width x
height) of 2144 x 1424. The right part of the stereo image is disregarded.
Two sets of
ground-truths for optic disc and optic cup are available. The first set is
commonly used for training and testing. The second set acts as a “human”
baseline.
* Reference: [RIMONER3-2015]_
* Original resolution (height x width): 1424 x 1072
* Configuration resolution: 1440 x 1088 (after padding)
* Test samples: 60
* Split reference: [MANINIS-2016]_
"""
from bob.db.rimoner3 import Database as RIMONER3
from bob.ip.binseg.data.transforms import *
from bob.ip.binseg.data.binsegdataset import BinSegDataset
......
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""RIM-ONE r3 (training set) for Optic Disc Segmentation
The dataset contains 159 stereo eye fundus images with a resolution (width x
height) of 2144 x 1424. The right part of the stereo image is disregarded.
Two sets of
ground-truths for optic disc and optic cup are available. The first set is
commonly used for training and testing. The second set acts as a “human”
baseline.
* Reference: [RIMONER3-2015]_
* Original resolution (height x width): 1424 x 1072
* Configuration resolution: 1440 x 1088 (after padding)
* Training samples: 99
* Split reference: [MANINIS-2016]_
"""
from bob.db.rimoner3 import Database as RIMONER3
from bob.ip.binseg.data.transforms import *
from bob.ip.binseg.data.binsegdataset import BinSegDataset
......
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""RIM-ONE r3 (test set) for Optic Disc Segmentation
The dataset contains 159 stereo eye fundus images with a resolution (width x
height) of 2144 x 1424. The right part of the stereo image is disregarded.
Two sets of
ground-truths for optic disc and optic cup are available. The first set is
commonly used for training and testing. The second set acts as a “human”
baseline.
* Reference: [RIMONER3-2015]_
* Original resolution (height x width): 1424 x 1072
* Configuration resolution: 1440 x 1088 (after padding)
* Test samples: 60
* Split reference: [MANINIS-2016]_
"""
from bob.db.rimoner3 import Database as RIMONER3
from bob.ip.binseg.data.transforms import *
from bob.ip.binseg.data.binsegdataset import BinSegDataset
......
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""STARE (training set) for Vessel Segmentation
A subset of the original STARE dataset contains 20 annotated eye fundus images
with a resolution of 605 x 700 (height x width). Two sets of ground-truth
vessel annotations are available. The first set by Adam Hoover is commonly used
for training and testing. The second set by Valentina Kouznetsova acts as a
“human” baseline.
* Reference: [STARE-2000]_
* Original resolution (height x width): 605 x 700
* Configuration resolution: 608 x 704 (after padding)
* Training samples: 10
* Split reference: [MANINIS-2016]_
"""
from bob.db.stare import Database as STARE
from bob.ip.binseg.data.transforms import *
from bob.ip.binseg.data.binsegdataset import BinSegDataset
...
@@ -9,7 +24,7 @@ from bob.ip.binseg.data.binsegdataset import BinSegDataset
transforms = Compose(
    [
-        Pad((2, 1, 2, 2)),
+        Pad((2, 1, 2, 2)),  # (left, top, right, bottom)
        RandomHFlip(),
        RandomVFlip(),
        RandomRotation(),
......
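The Pad tuple above uses (left, top, right, bottom) ordering, so the arithmetic taking STARE's 605 x 700 originals to the 608 x 704 configuration resolution can be checked directly:

    # check of the STARE padding arithmetic from the configuration above
    height, width = 605, 700               # original (height x width)
    left, top, right, bottom = 2, 1, 2, 2  # Pad((2, 1, 2, 2)) ordering
    assert height + top + bottom == 608    # configuration height
    assert width + left + right == 704     # configuration width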
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""STARE (test set) for Vessel Segmentation
A subset of the original STARE dataset contains 20 annotated eye fundus images
with a resolution of 605 x 700 (height x width). Two sets of ground-truth
vessel annotations are available. The first set by Adam Hoover is commonly used
for training and testing. The second set by Valentina Kouznetsova acts as a
“human” baseline.
* Reference: [STARE-2000]_
* Original resolution (height x width): 605 x 700
* Configuration resolution: 608 x 704 (after padding)
* Test samples: 10
* Split reference: [MANINIS-2016]_
"""
from bob.db.stare import Database as STARE
from bob.ip.binseg.data.transforms import *
from bob.ip.binseg.data.binsegdataset import BinSegDataset
......
#!/usr/bin/env python
# coding=utf-8
"""DRIU Network for Vessel Segmentation
Deep Retinal Image Understanding (DRIU), a unified framework of retinal image
analysis that provides both retinal vessel and optic disc segmentation using
deep Convolutional Neural Networks (CNNs).
Reference: [MANINIS-2016]_
"""
from torch.optim.lr_scheduler import MultiStepLR
from bob.ip.binseg.modeling.driu import build_driu
from bob.ip.binseg.utils.model_zoo import modelurls
......
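Each of the model configurations below pairs a network builder with an optimizer and a MultiStepLR learning-rate scheduler; the bodies are elided above. A minimal sketch of that wiring (the optimizer choice, milestone, and learning-rate values are illustrative assumptions, not those of the elided code):

    # Illustrative sketch only: concrete values below are assumptions.
    import torch
    from torch.optim.lr_scheduler import MultiStepLR
    from bob.ip.binseg.modeling.driu import build_driu

    model = build_driu()  # VGG-16 based DRIU network
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
    # drop the learning rate by 10x once the milestone epoch is reached
    scheduler = MultiStepLR(optimizer, milestones=[900], gamma=0.1)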
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""DRIU Network for Vessel Segmentation with Batch Normalization
Deep Retinal Image Understanding (DRIU), a unified framework of retinal image
analysis that provides both retinal vessel and optic disc segmentation using
deep Convolutional Neural Networks (CNNs). This implementation includes batch
normalization as a regularization mechanism.
Reference: [MANINIS-2016]_
"""
from torch.optim.lr_scheduler import MultiStepLR
from bob.ip.binseg.modeling.driubn import build_driu
from bob.ip.binseg.utils.model_zoo import modelurls
......
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""DRIU Network for Vessel Segmentation using SSL and Batch Normalization
Deep Retinal Image Understanding (DRIU), a unified framework of retinal image
analysis that provides both retinal vessel and optic disc segmentation using
deep Convolutional Neural Networks (CNNs). This version of our model includes
a loss that is suitable for Semi-Supervised Learning (SSL). This version also
includes batch normalization as a regularization mechanism.
Reference: [MANINIS-2016]_
"""
from torch.optim.lr_scheduler import MultiStepLR
from bob.ip.binseg.modeling.driubn import build_driu
from bob.ip.binseg.utils.model_zoo import modelurls
......
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""DRIU Network for Optic Disc Segmentation
Deep Retinal Image Understanding (DRIU), a unified framework of retinal image
analysis that provides both retinal vessel and optic disc segmentation using
deep Convolutional Neural Networks (CNNs).
Reference: [MANINIS-2016]_
"""
from torch.optim.lr_scheduler import MultiStepLR
from bob.ip.binseg.modeling.driuod import build_driuod
from bob.ip.binseg.utils.model_zoo import modelurls
......
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""DRIU Network for Vessel Segmentation using SSL
Deep Retinal Image Understanding (DRIU), a unified framework of retinal image
analysis that provides both retinal vessel and optic disc segmentation using
deep Convolutional Neural Networks (CNNs). This version of our model includes
a loss that is suitable for Semi-Supervised Learning (SSL).
Reference: [MANINIS-2016]_
"""
from torch.optim.lr_scheduler import MultiStepLR
from bob.ip.binseg.modeling.driu import build_driu
from bob.ip.binseg.utils.model_zoo import modelurls
......
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""HED Network for Vessel Segmentation
Holistically-nested edge detection (HED), turns pixel-wise edge classification
into image-to-image prediction by means of a deep learning model that leverages
fully convolutional neural networks and deeply-supervised nets.
Reference: [XIE-2015]_
"""
from torch.optim.lr_scheduler import MultiStepLR
from bob.ip.binseg.modeling.hed import build_hed
from bob.ip.binseg.modeling.losses import HEDSoftJaccardBCELogitsLoss
......
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""MobileNetV2 U-Net Model for Vessel Segmentation
The MobileNetV2 architecture is based on an inverted residual structure where
the input and output of the residual block are thin bottleneck layers, as
opposed to traditional residual models, which use expanded representations in
the input. MobileNetV2 uses lightweight depthwise convolutions to filter
features in the intermediate expansion layer. This configuration implements a
MobileNetV2 U-Net model, henceforth named M2U-Net, combining the strengths of
U-Net for medical segmentation applications with the speed of MobileNetV2
networks.
References: [SANDLER-2018]_, [RONNEBERGER-2015]_
"""
from torch.optim.lr_scheduler import MultiStepLR
from bob.ip.binseg.modeling.m2u import build_m2unet
from bob.ip.binseg.utils.model_zoo import modelurls
......
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""MobileNetV2 U-Net Model for Vessel Segmentation using SSL
The MobileNetV2 architecture is based on an inverted residual structure where
the input and output of the residual block are thin bottleneck layers, as
opposed to traditional residual models, which use expanded representations in
the input. MobileNetV2 uses lightweight depthwise convolutions to filter
features in the intermediate expansion layer. This configuration implements a
MobileNetV2 U-Net model, henceforth named M2U-Net, combining the strengths of
U-Net for medical segmentation applications with the speed of MobileNetV2
networks. This version of our model includes a loss that is suitable for
Semi-Supervised Learning (SSL).
References: [SANDLER-2018]_, [RONNEBERGER-2015]_
"""
from torch.optim.lr_scheduler import MultiStepLR
from bob.ip.binseg.modeling.m2u import build_m2unet
from bob.ip.binseg.utils.model_zoo import modelurls
......
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Residual U-Net for Vessel Segmentation
A semantic segmentation neural network which combines the strengths of
residual learning and U-Net is proposed for road area extraction. The network
is built with residual units and has an architecture similar to that of
U-Net. The benefits of this model are two-fold: first, residual units ease
the training of deep networks. Second, the rich skip connections within the
network could facilitate information propagation, allowing us to design
networks with fewer parameters yet better performance.
Reference: [ZHANG-2017]_
"""
from torch.optim.lr_scheduler import MultiStepLR
from bob.ip.binseg.modeling.resunet import build_res50unet
from bob.ip.binseg.utils.model_zoo import modelurls
......
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""U-Net for Vessel Segmentation
U-Net is a convolutional neural network that was developed for biomedical image
segmentation at the Computer Science Department of the University of Freiburg,
Germany. The network is based on the fully convolutional network (FCN) and its
architecture was modified and extended to work with fewer training images and
to yield more precise segmentations.
Reference: [RONNEBERGER-2015]_
"""
from torch.optim.lr_scheduler import MultiStepLR
from bob.ip.binseg.modeling.unet import build_unet
from bob.ip.binseg.utils.model_zoo import modelurls
......
...
@@ -39,11 +39,31 @@ logger = logging.getLogger(__name__)
@with_plugins(pkg_resources.iter_entry_points("bob.ip.binseg.cli"))
@click.group(cls=AliasedGroup)
def binseg():
-    """Binary 2D Fundus Image Segmentation Benchmark commands."""
+    """Binary 2D Image Segmentation Benchmark commands."""


# Train
-@binseg.command(entry_point_group="bob.ip.binseg.config", cls=ConfigCommand)
+@binseg.command(
+    entry_point_group="bob.ip.binseg.config",
+    cls=ConfigCommand,
+    epilog="""
+\b
+Examples:
+
+\b
+1. Trains a DRIU model with the DRIVE dataset (both are configuration
+   resources installed with this package; see "bob binseg config list"):
+
+   $ bob binseg train -vv driu drive -o results
+""",
+)
@click.option(
    "--output-path", "-o", required=True, default="output", cls=ResourceOption
)
...
@@ -173,7 +193,7 @@ def train(
)
@verbosity_option(cls=ResourceOption)
def test(model, output_path, device, batch_size, dataset, weight, **kwargs):
-    """ Run inference and evalaute the model performance """
+    """ Run inference and evaluate the model performance """

    # PyTorch dataloader
    data_loader = DataLoader(
...
@@ -420,7 +440,7 @@ def transformfolder(source_path, target_path, transforms, **kwargs):
)
@verbosity_option(cls=ResourceOption)
def predict(model, output_path, device, batch_size, dataset, weight, **kwargs):
-    """ Run inference and evalaute the model performance """
+    """ Run inference and evaluate the model performance """

    # PyTorch dataloader
    data_loader = DataLoader(
......
#!/usr/bin/env python
# coding=utf-8

import shutil
import inspect

import click
import pkg_resources
from click_plugins import with_plugins
from bob.extension.scripts.click_helper import (
    verbosity_option,
    AliasedGroup,
)

import logging
logger = logging.getLogger(__name__)
@click.group(cls=AliasedGroup)
def config():
    """Commands for listing, describing and copying configuration resources"""
    pass
@config.command(
    epilog="""
\b
Examples:

\b
1. Lists all configuration resources (type: bob.ip.binseg.config) installed:

   $ bob binseg config list

\b
2. Lists all configuration resources and their descriptions (notice this may
   be slow as it needs to load all modules once):

   $ bob binseg config list -v
"""
)
@verbosity_option()
def list(verbose):
    """Lists configuration files installed"""
    entry_points = pkg_resources.iter_entry_points("bob.ip.binseg.config")
    entry_points = dict([(k.name, k) for k in entry_points])

    # all modules with configuration resources
    modules = set(
        k.module_name.rsplit(".", 1)[0] for k in entry_points.values()
    )

    # sort data entries by originating module
    entry_points_by_module = {}
    for k in modules:
        entry_points_by_module[k] = {}
        for name, ep in entry_points.items():
            if ep.module_name.startswith(k):
                entry_points_by_module[k][name] = ep

    for config_type in sorted(entry_points_by_module):

        # calculates the longest config name so we offset the printing
        longest_name_length = max(
            len(k) for k in entry_points_by_module[config_type].keys()
        )

        # set-up printing options
        print_string = " %%-%ds %%s" % (longest_name_length,)
        # 79 - 4 spaces = 75 (see string above)
        description_leftover = 75 - longest_name_length

        print("module: %s" % (config_type,))
        for name in sorted(entry_points_by_module[config_type]):
            ep = entry_points[name]

            if verbose >= 1:
                module = ep.load()
                doc = inspect.getdoc(module)
                if doc is not None:
                    summary = doc.split("\n\n")[0]
                else:
                    summary = "<DOCSTRING NOT AVAILABLE>"
            else:
                summary = ""

            summary = (
                (summary[: (description_leftover - 3)] + "...")
                if len(summary) > (description_leftover - 3)
                else summary
            )

            print(print_string % (name, summary))
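For clarity, the grouping step above keys each resource by the parent of the module that defines it; a hypothetical entry point defined in a module named bob.ip.binseg.configs.datasets.drive would therefore be listed under bob.ip.binseg.configs.datasets:

    # Hypothetical module path, used only to illustrate the rsplit() grouping:
    entry_module = "bob.ip.binseg.configs.datasets.drive"
    parent = entry_module.rsplit(".", 1)[0]
    assert parent == "bob.ip.binseg.configs.datasets"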
@config.command(
    epilog="""
\b
Examples:

\b
1. Describes the DRIVE (training) dataset configuration:

   $ bob binseg config describe drive

\b
2. Describes the DRIVE (training) dataset configuration and lists its
   contents:

   $ bob binseg config describe drive -v
"""
)
@click.argument(
    "name", required=True, nargs=-1,
)
@verbosity_option()
def describe(name, verbose):
    """Describes a specific configuration file"""
    entry_points = pkg_resources.iter_entry_points("bob.ip.binseg.config")
    entry_points = dict([(k.name, k) for k in entry_points])

    for k in name:
        if k not in entry_points:
            logger.error("Cannot find configuration resource '%s'", k)
            continue
        ep = entry_points[k]
        print("Configuration: %s" % (ep.name,))
        print("Python Module: %s" % (ep.module_name,))
        print("")
        mod = ep.load()
        if verbose >= 1:
            fname = inspect.getfile(mod)
            print("Contents:")
            with open(fname, "r") as f:
                print(f.read())
        else:  # only output documentation
            print("Documentation:")
            print(inspect.getdoc(mod))
@config.command(
    epilog="""
\b
Examples:

\b
1. Makes a copy of one of the stock configuration files locally, so it can
   be adapted:

   $ bob binseg config copy drive -vvv newdataset.py
"""
)
@click.argument(
    "source", required=True, nargs=1,
)
@click.argument(
    "destination", required=True, nargs=1,
)
@verbosity_option()
def copy(source, destination, verbose):
    """Copies a specific configuration resource so it can be modified locally"""
    entry_points = pkg_resources.iter_entry_points("bob.ip.binseg.config")
    entry_points = dict([(k.name, k) for k in entry_points])

    if source not in entry_points:
        logger.error("Cannot find configuration resource '%s'", source)
        return 1

    ep = entry_points[source]
    mod = ep.load()
    src_name = inspect.getfile(mod)
    logger.info("cp %s -> %s", src_name, destination)
    shutil.copyfile(src_name, destination)
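Once copied, the local file can be edited freely and passed back to the ConfigCommand-based commands (such as train) in place of the stock resource name, assuming, as with other bob.extension configuration commands, that file paths are accepted wherever configuration resource names are.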
...
@@ -47,6 +47,15 @@ test:
  commands:
    # test commands ("script" entry-points) from your package here
    - bob binseg --help
    - bob binseg config --help
    - bob binseg config list --help
    - bob binseg config list
    - bob binseg config list -v
    - bob binseg config describe --help
    - bob binseg config describe drive
    - bob binseg config describe drive -v
    - bob binseg config copy --help
    - bob binseg config copy drive /tmp/test.py
    - bob binseg compare --help
    - bob binseg evalpred --help
    - bob binseg gridtable --help
......
...
@@ -93,6 +93,8 @@ Scripts
   bob.ip.binseg.script.binseg

.. _bob.ip.binseg.configs:

Preset Configurations
---------------------
......