diff --git a/doc/databases/detect/cxr8.rst b/doc/databases/detect/cxr8.rst
new file mode 100644
index 0000000000000000000000000000000000000000..2b4061aac749ad8b70176c4702a09464c5b0f7a1
--- /dev/null
+++ b/doc/databases/detect/cxr8.rst
@@ -0,0 +1,20 @@
+.. SPDX-FileCopyrightText: Copyright © 2024 Idiap Research Institute <contact@idiap.ch>
+..
+.. SPDX-License-Identifier: GPL-3.0-or-later
+
+.. _mednet.databases.detect.cxr8:
+
+=======
+ CXR-8
+=======
+
+* DataModule and support code: :py:mod:`.data.detect.cxr8`
+* Splits:
+
+  .. list-table::
+     :align: left
+
+     * - Config. key
+       - Module
+     * - ``cxr8-detect``
+       - :py:mod:`.config.detect.data.cxr8.default`
diff --git a/doc/databases/detect/index.rst b/doc/databases/detect/index.rst
new file mode 100644
index 0000000000000000000000000000000000000000..44280e508187dbf3fbac5671e01a1b691b8d5136
--- /dev/null
+++ b/doc/databases/detect/index.rst
@@ -0,0 +1,17 @@
+.. SPDX-FileCopyrightText: Copyright © 2024 Idiap Research Institute <contact@idiap.ch>
+..
+.. SPDX-License-Identifier: GPL-3.0-or-later
+
+.. _mednet.databases.detect:
+
+==================
+ Object Detection
+==================
+
+.. toctree::
+   :maxdepth: 1
+
+   cxr8
+   jsrt
+   montgomery
+   shenzhen
diff --git a/doc/databases/detect/jsrt.rst b/doc/databases/detect/jsrt.rst
new file mode 100644
index 0000000000000000000000000000000000000000..4e70f318a3ea24e690c7a56fb2f1692418b76b64
--- /dev/null
+++ b/doc/databases/detect/jsrt.rst
@@ -0,0 +1,20 @@
+.. SPDX-FileCopyrightText: Copyright © 2024 Idiap Research Institute <contact@idiap.ch>
+..
+.. SPDX-License-Identifier: GPL-3.0-or-later
+
+.. _mednet.databases.detect.jsrt:
+
+======
+ JSRT
+======
+
+* DataModule and support code: :py:mod:`.data.detect.jsrt`
+* Splits:
+
+  .. list-table::
+     :align: left
+
+     * - Config. key
+       - Module
+     * - ``jsrt-detect``
+       - :py:mod:`.config.detect.data.jsrt.default`
diff --git a/doc/databases/detect/montgomery.rst b/doc/databases/detect/montgomery.rst
new file mode 100644
index 0000000000000000000000000000000000000000..9e8db7ef1fa6e7feec759902db730264caa816a4
--- /dev/null
+++ b/doc/databases/detect/montgomery.rst
@@ -0,0 +1,20 @@
+.. SPDX-FileCopyrightText: Copyright © 2024 Idiap Research Institute <contact@idiap.ch>
+..
+.. SPDX-License-Identifier: GPL-3.0-or-later
+
+.. _mednet.databases.detect.montgomery:
+
+===================
+ Montgomery County
+===================
+
+* DataModule and support code: :py:mod:`.data.detect.montgomery`
+* Splits:
+
+  .. list-table::
+     :align: left
+
+     * - Config. key
+       - Module
+     * - ``montgomery-detect``
+       - :py:mod:`.config.detect.data.montgomery.default`
diff --git a/doc/databases/detect/shenzhen.rst b/doc/databases/detect/shenzhen.rst
new file mode 100644
index 0000000000000000000000000000000000000000..747548ed3ea514b0f49597c85aa844c5d056d3a0
--- /dev/null
+++ b/doc/databases/detect/shenzhen.rst
@@ -0,0 +1,20 @@
+.. SPDX-FileCopyrightText: Copyright © 2024 Idiap Research Institute <contact@idiap.ch>
+..
+.. SPDX-License-Identifier: GPL-3.0-or-later
+
+.. _mednet.databases.detect.shenzhen:
+
+===================
+ Shenzhen Hospital
+===================
+
+* DataModule and support code: :py:mod:`.data.detect.shenzhen`
+* Splits:
+
+  .. list-table::
+     :align: left
+
+     * - Config. key
+       - Module
+     * - ``shenzhen-detect``
+       - :py:mod:`.config.detect.data.shenzhen.default`
diff --git a/doc/databases/index.rst b/doc/databases/index.rst
index 75e1f13a81da764ee32fc03683e596ddb9d9eec7..d6a103b31a14cbfaebaa438a819b84d05277cf46 100644
--- a/doc/databases/index.rst
+++ b/doc/databases/index.rst
@@ -15,3 +15,4 @@
 
    classify/index
    segment/index
+   detect/index
diff --git a/doc/models.rst b/doc/models.rst
index 115e40ba46db78a6acea2e4783c4d2bc77da4b03..84ea09cc4af5e367904afd85c891d12c02188157 100644
--- a/doc/models.rst
+++ b/doc/models.rst
@@ -89,4 +89,22 @@ Pre-configured models supporting semantic segmentation tasks.
      - :py:class:`.models.segment.unet.Unet`
 
 
+.. _mednet.models.detect:
+
+Object Detection
+----------------
+
+Pre-configured models supporting object detection tasks.
+
+.. list-table:: Pre-configured object detection models
+   :align: left
+
+   * - Config. key
+     - Module
+     - Base type
+   * - ``faster-rcnn``
+     - :py:mod:`.config.detect.models.faster_rcnn`
+     - :py:class:`.models.detect.faster_rcnn.FasterRCNN`
+
+
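+As for the other task types, the configuration key is combined with a compatible
+datamodule on the command line.  A minimal sketch, using the ``montgomery-detect``
+datamodule (see :ref:`mednet.databases.detect.montgomery`):
+
+.. code:: sh
+
+   # train the pre-configured Faster R-CNN model on the Montgomery County
+   # object-detection datamodule; check outputs in the "results" folder
+   mednet train -vv faster-rcnn montgomery-detect
+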
 .. include:: links.rst
diff --git a/doc/usage/evaluation.rst b/doc/usage/evaluation.rst
index c6e2c6126ffd91c9bd14be7a5acc259a297c898f..95a0eeb2ce4de1bfff5037fe2bca0b6cb4b4d1f9 100644
--- a/doc/usage/evaluation.rst
+++ b/doc/usage/evaluation.rst
@@ -34,7 +34,10 @@ pre-configured :ref:`datamodule <mednet.datamodel>`, run the one of following:
    mednet predict -vv pasa montgomery --weight=<results/model.ckpt> --output-folder=predictions
 
    # example for a segmentation task
-   mednet predict -vv lwnet drive  --weight=<results/model.ckpt> --output-folder=predictions
+   mednet predict -vv lwnet drive --weight=<results/model.ckpt> --output-folder=predictions
+
+   # example for an object detection task
+   mednet predict -vv faster-rcnn montgomery-detect --weight=<results/model.ckpt> --output-folder=predictions
 
 
 Replace ``<results/model.ckpt>`` to a path leading to the pre-trained model.
@@ -71,5 +74,8 @@ the following:
    # segmentation task
    mednet segment evaluate -vv --predictions=path/to/predictions.json
 
+   # object detection task
+   mednet detect evaluate -vv --predictions=path/to/predictions.json
+
 
 .. include:: ../links.rst
diff --git a/doc/usage/experiment.rst b/doc/usage/experiment.rst
index 6a1d1f10df59a461bb47443efec18fc09a6b4e61..34a84dd2a061e84d93baf594c835a78729c76b4d 100644
--- a/doc/usage/experiment.rst
+++ b/doc/usage/experiment.rst
@@ -29,6 +29,11 @@ performance curves, run the one of following:
    $ mednet experiment -vv lwnet drive
    # check results in the "results" folder
 
+   # example object detection task using the "faster-rcnn" network model
+   # on the "montgomery-detect" (object detection) datamodule
+   $ mednet experiment -vv faster-rcnn montgomery-detect
+   # check results in the "results" folder
+
 You may run the system on a GPU by using the ``--device=cuda``, or
 ``--device=mps``  option.
 
diff --git a/doc/usage/index.rst b/doc/usage/index.rst
index f3ac5a77af3048f232ef69bfdc737624e80c2625..03fdbdf872ac9664c6a705874a532ac6bfb9817a 100644
--- a/doc/usage/index.rst
+++ b/doc/usage/index.rst
@@ -8,14 +8,14 @@
  Usage
 =======
 
-This package supports a fully reproducible research experimentation cycle for
-medical image classification and segmentation with support for the following
+This package supports a fully reproducible research experimentation cycle for medical
+image classification, segmentation, and object detection, covering the following
 activities:
 
-* Training: Images are fed to a deep neural network that is trained to match
-  (classification) or reconstruct (segmentation) annotations automatically, via
-  error back propagation. The objective of this phase is to produce a model.
-  We support training on CPU and a few GPU architectures (``cuda`` or ``mps``).
+* Training: Images are fed to a deep neural network that is trained to match labels
+  (classification), reconstruct annotations (segmentation), or locate objects (detection)
+  automatically, via error back propagation. The objective of this phase is to produce a
+  model. We support training on CPU and a few GPU architectures (``cuda`` or ``mps``).
 * Prediction (inference): The model is used to generate predictions
 * Evaluation: Predictions are used evaluate model performance against provided
   annotations, or visualize prediction results overlayed on the original raw
@@ -32,7 +32,7 @@ generate intermediate outputs required for subsequent commands:
 .. graphviz:: img/cli-core-dark.dot
    :align: center
    :class: only-dark
-   :caption: Overview of core CLI commands for model training, inference and evaluation. Clicking on each item leads to the appropriate specific documentation. The workflow is the same across different task types (e.g. classification or segmentation), except for evaluation, that remains task-specific. The right implementation is chosen based on the type of datamodule being used.
+   :caption: Overview of core CLI commands for model training, inference and evaluation. Clicking on each item leads to the appropriate specific documentation. The workflow is the same across different task types (e.g. classification, segmentation, or object detection), except for evaluation, which remains task-specific. The right implementation is chosen based on the type of datamodule being used.
 
 The CLI interface is configurable using :ref:`clapper's extensible
 configuration framework <clapper.config>`.  In essence, each command-line
diff --git a/doc/usage/training.rst b/doc/usage/training.rst
index f4d6871864008296767521df15a8308448e6b1e3..1e9ecb62fcbdcd17edd759d3da2d6e3790d38d71 100644
--- a/doc/usage/training.rst
+++ b/doc/usage/training.rst
@@ -27,6 +27,10 @@ For example, to train a model on a pre-configured :ref:`datamodule
    mednet train -vv lwnet drive
    # check results in the "results" folder
 
+   # example object detection task
+   mednet train -vv faster-rcnn montgomery-detect
+   # check results in the "results" folder
+
 You may run the system on a GPU by using the ``--device=cuda``, or
 ``--device=mps``  option.
 
diff --git a/src/mednet/models/detect/faster_rcnn.py b/src/mednet/models/detect/faster_rcnn.py
index 3322b64e4510e7dfc5239ff128113c332e700eb2..30d9cd51117ba43703017e8777febb6c1b028997 100644
--- a/src/mednet/models/detect/faster_rcnn.py
+++ b/src/mednet/models/detect/faster_rcnn.py
@@ -65,7 +65,7 @@ class FasterRCNN(Model):
         num_classes: int = 1,
         variant: typing.Literal[
             "resnet50-v1", "resnet50-v2", "mobilenetv3-large", "mobilenetv3-small"
-        ] = "resnet50-v1",
+        ] = "mobilenetv3-small",
     ):
         super().__init__(
             name=f"faster-rcnn[{variant}]",