Commit cb257bf5 authored by André Anjos

[doc] Improve documentation; Fix CI

parent ba86a670
Pipeline #26421 passed with stages in 20 minutes and 28 seconds
......@@ -196,7 +196,7 @@ class AnnotatorApp(tkinter.Tk):
pixel_skip (:py:obj:`int`, Optional): The number of pixels skipped every
time the user uses a motion key with the Shift key pressed.
default_mode (:py:object:`str`, Optional): If the default object mode
default_mode (:py:obj:`str`, Optional): If the default object mode
during object creation is "line" or "polygon"
"""
......
......@@ -63,8 +63,8 @@ def load(fp):
Parameters:
fp (file, str): The name of a file, with full path, to be used for
reading the data or an already opened file-like object, that accepts
fp (str, :py:obj:`object`): The name of a file, with full path, to be used
for reading the data or an already opened file-like object, that accepts
the "read()" call.
......
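As a quick illustration of the two calling conventions documented above, the
sketch below passes either a file name or an already opened file-like object
to ``load``. The module path ``bob.ip.annotator.io`` is an assumption made for
illustration only and may differ in the actual package.

.. code-block:: python

   # Minimal usage sketch for the ``load`` function documented above.
   # NOTE: the import path is an assumption; adjust it to wherever ``load``
   # actually lives in this package.
   from bob.ip.annotator.io import load

   # 1. pass the name of a file, with full path
   annotations = load("/path/to/annotations/image.json")

   # 2. pass an already opened file-like object that accepts ``read()``
   with open("/path/to/annotations/image.json") as f:
       annotations = load(f)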
......@@ -22,11 +22,11 @@ Examples:
$ bob annotate -v images annotations
Annotate the image of Lena, with a 2x zoom, get the result in `lena.json`:
Annotate the image of Lena, with a 2x zoom, get the result in `lenna.json`:
$ bob annotate -v --zoom=2 bob/ip/annotator/data -e '.jpg'
$ bob annotate -v --zoom=2 bob/ip/annotator/data
Visualize annotations for Lena, with a 0.5x zoom, from the file `lena.json`:
Visualize annotations for Lena, with a 0.5x zoom, from the file `lenna.json`:
$ bob annotate -v --readonly --zoom=0.5 bob/ip/annotator/data
''')
......
......@@ -27,9 +27,9 @@ class Annotation(object):
active, the annotation is editable with keyboard shortcuts.
Args:
Parameters:
canvas (object): The canvas object where I'm drawing myself in
canvas (:py:obj:`object`): The canvas object where I'm drawing myself in
shape (tuple): The shape of the image I'm drawing myself on top of.
To be specified as ``(height, width)``. This should correspond to the
......@@ -50,7 +50,7 @@ class Annotation(object):
pixel_skip (:py:obj:`int`, Optional): The number of pixels skipped every
time the user uses a motion key with the Shift key pressed.
mode (:py:object:`str`, Optional): If the default object mode is "line" or
mode (:py:obj:`str`, Optional): If the default object mode is "line" or
"polygon"
"""
......@@ -353,10 +353,12 @@ class Annotation(object):
def move_active_point(self, p, key, state):
"""Moves the keypoint closes to ``p`` using the keyboard
Args:
Parameters:
p (tuple): point in ``(y, x)`` format
key: the event keysym value (arrow keys, left/right movement)
state: the event state value (test for <SHIFT> key pressed)
"""
......
.. -*- coding: utf-8 -*-
=============================
GUIDE: Wrist ROI annotation
=============================
=================
Annotator Guide
=================
This guide summarizes everything you need to know to annotate ROIs in the
wrist vein images of Idiap's BIOWAVE project.
This guide explains, step by step, how to annotate images using the
built-in annotation app. We will use an example image from Lenna_ for this
tutorial. The annotator app is agnostic to image content and can load anything
that is supported by :py:mod:`bob.io.image`.
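If you would like to inspect an image programmatically before annotating it, a
minimal sketch follows. It assumes a standard Bob installation in which
:py:mod:`bob.io.image` provides the codecs used by ``bob.io.base.load``; the
file path is illustrative only.

.. code-block:: python

   # Minimal sketch: load an image the same way the annotator app would.
   # Assumes ``bob.io.image`` is installed so the PNG/JPEG codecs are
   # available; the path below is a placeholder.
   import bob.io.base
   import bob.io.image  # registers the image codecs

   image = bob.io.base.load("/path/to/images/subfolder1/image.png")
   print(image.shape)  # color images load as (planes, height, width)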
To do that, you will be using the *annotate* tool from the
``bob.ip.annotator`` package.
All work can be divided into 3 stages:
1. Selecting the image *batches*;
2. Annotating the images;
3. Finalizing the annotation.
Selection of image *batches*
----------------------------
Go to the Google Docs spreadsheet_. It lists *111 batches* of wrist vein
images. Each batch contains 6 images (it should take you 3-4 minutes to
complete one batch's ROI annotations). Choose the batches whose images you
will annotate by writing your name in the corresponding cell.
Image annotation
----------------
First, open a ``terminal`` on your Idiap machine and switch to the appropriate
``conda`` environment by typing::
The application is launched via the command line, which means you need a
command prompt pre-configured with the conda_ environment containing this
package. Open such a prompt (e.g. via a *Terminal* application) and pass the
root path containing the images you would like to annotate:
source /idiap/group/torch5spro/conda/bin/activate bob-2.3.4-py27_0
.. code-block:: sh
Now you can annotate one batch (6 images) or multiple batches. To annotate one,
type::
$ conda activate myenv
(myenv) $ bob annotate -vv /path/to/images /path/to/annotations
/idiap/project/biowave/biowave/bob.ip.annotator/bin/annotate.py -v -f -s --zoom=2 --output=/idiap/project/biowave/biowave/ROI_annotations/ --basedir=/idiap/project/biowave/biowave/annotation_images/ /idiap/project/biowave/biowave/annotation_images/BATCH1/*.png
The output path, marked in the above example as ``/path/to/annotations``, is
the directory where annotated points will be written. This path is created
if it does not exist. If it does exist, annotations are preloaded for each
image inside ``/path/to/images``, if available, and annotation resumes from
the point where it stopped.
Where ``BATCH1`` is your chosen batch name, e.g. ``001/Right``.
The app will scan for image files with a given extension name (``.png`` by
default) on the input path provided (e.g. ``/path/to/images``). Images will
then be displayed in order, one after the other, so you can annotate or review
their annotations. Annotations are saved to the output directory, copying the
*same folder structure* found on the input path. For example, if the images
are laid out like this on the input path:
You can also run several batches by replacing::
.. code-block:: text
BATCH1
/path/to/images/subfolder1/image.png
/path/to/images/subfolder2/image.png
/path/to/images/subfolder3/subfolder/image.png
with::
{BATCH1,BATCH2,BATCHn}
Then, after the user has annotated all images, the output path will contain
the following files:
**Be careful -- no spaces between batch names!** E.g., if I want to annotate
batches ``001/Right``, ``001/Left``, ``002/Right``, the complete command is::
.. code-block:: text
/idiap/project/biowave/biowave/bob.ip.annotator/bin/annotate.py -v -f -s --zoom=2 --output=/idiap/project/biowave/biowave/ROI_annotations/ --basedir=/idiap/project/biowave/biowave/annotation_images/ /idiap/project/biowave/biowave/annotation_images/{001/Right,001/Left,002/Right}/*.png
/path/to/annotations/subfolder1/image.json
/path/to/annotations/subfolder2/image.json
/path/to/annotations/subfolder3/subfolder/image.json
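Because the output mirrors the input folder structure, it is easy to check
which images still lack annotations. Below is a minimal sketch using only the
Python standard library; the paths and extensions follow the examples above
and are placeholders.

.. code-block:: python

   # Minimal sketch: list input images that do not yet have a matching
   # annotation file in the (mirrored) output directory.
   import os

   images_dir = "/path/to/images"            # placeholder paths
   annotations_dir = "/path/to/annotations"

   for root, _, files in os.walk(images_dir):
       for name in files:
           if not name.endswith(".png"):
               continue
           rel = os.path.relpath(os.path.join(root, name), images_dir)
           json_path = os.path.join(annotations_dir,
                                    os.path.splitext(rel)[0] + ".json")
           if not os.path.exists(json_path):
               print("missing annotation for %s" % rel)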
**Again remember -- no spaces!**
After running the command, a GUI will open. Now you can start marking the ROI
in the images:
Annotation format
-----------------
1. Start from the top left of the image;
2. Click to add as many points as you find sufficient to match the wrist border
- as you draw the points, you'll see a polygon being formed indicating the
area you're selecting. It is better if the border appears outside of the
polygon (ROI);
3. Make sure you get the whole wrist ROI, as accurately as possible (30 seconds
   to 1 minute per image should be enough);
4. If you make a mistake, press the ``d`` key to remove the last point;
5. Use the arrow keys on your keyboard to fine-tune the point location, in case
   your mouse click was not precise enough (this only works for the last
   annotated point, shown in red);
6. Use the mouse scrollwheel (down) to move to the next image in the batch;
7. Once you're done with the batch (scrolling down does not change the image
   anymore), hit ``q`` to quit the program;
8. All of the above is explained in the annotator's help dialogue; hit ``?``
   to open that window.
Annotations are saved in JSON_ format, which is easily readable and loadable in
a variety of programming environments. The specific format used by the
annotator app may change, but it essentially just lists annotated points, in
the order objects are created.
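Since the files are plain JSON_, you can inspect a saved annotation with
nothing but the standard library. The sketch below is a minimal illustration;
the file path is a placeholder and the exact structure of the contents may
change between versions of the app.

.. code-block:: python

   # Minimal sketch: inspect a saved annotation file (plain JSON).
   import json

   with open("/path/to/annotations/subfolder1/image.json") as f:
       annotation = json.load(f)

   # The app essentially lists annotated points, in the order the objects
   # were created; pretty-print whatever structure is present.
   print(json.dumps(annotation, indent=2))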
Your annotations are saved whenever you scroll the scrollwheel, as well as
when you exit the GUI by pressing ``q``. You can go back (by scrolling the
scrollwheel or by opening the GUI again) and continue to add annotations
(points), delete one or more of them (by pressing ``d`` multiple times) or
delete the image's annotation completely by pressing the ``D`` key.
Zoom
----
The number of points in each annotation is arbitrary - we just need to get
the shape of the illuminated wrist area right.
You may control the size of the image being annotated by passing a *zoom*
parameter (a floating-point number in the range ``]0,+inf[``). A zoom of
``1.0`` (the default) displays images as they are. A zoom larger than ``1.0``
upscales the input image, making it look bigger than it originally is. A
zoom factor smaller than ``1.0`` does the inverse, scaling down the input
image. Annotations recorded on the image are *independent* of the zoom factor
and are compensated upon saving. You can start annotating an image with
a zoom factor of ``1.0``, quit the program and then resume with a different
zoom factor. To change the zoom factor, use the ``--zoom`` parameter when
starting the application. For example, to start the application with a zoom
factor of ``2.0`` do:
Example:
.. code-block:: sh
.. image:: img/ROI.png
(myenv) $ bob annotate -vv --zoom=2.0 /path/to/images /path/to/annotations
To see more examples, run this command in the ``terminal`` (it opens the
annotations for the first 2 batches in read-only mode)::
.. tip::
/idiap/project/biowave/biowave/bob.ip.annotator/bin/annotate.py -v -f -s --readonly --zoom=2 --output=/idiap/project/biowave/biowave/ROI_annotations/ --basedir=/idiap/project/biowave/biowave/annotation_images/ /idiap/project/biowave/biowave/annotation_images/{001/Right,001/Left}/*.png
You should try to use the largest zoom factor possible given your screen
resolution. The image should fit comfortably on the screen without resizing
the drawing window. The higher the zoom factor, the more precise your
annotation will be; conversely, the lower, the less precise.
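If you prefer a starting value over guessing, a quick calculation works: pick
the largest zoom that still fits the (zoomed) image inside your usable screen
area. The sketch below is only an illustration; the screen and image
dimensions are placeholders.

.. code-block:: python

   # Minimal sketch: largest zoom factor that keeps the zoomed image inside
   # the usable screen area.  All numbers below are placeholders.
   screen_width, screen_height = 1920.0, 1080.0   # usable screen area (pixels)
   image_width, image_height = 640.0, 480.0       # image to be annotated

   zoom = min(screen_width / image_width, screen_height / image_height)
   print("suggested --zoom value: %.2f" % zoom)    # 2.25 for these numbers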
Finalization of annotation
--------------------------
Further help
------------
When you have finished all annotations:
Use the flag ``--help`` to list all options and examples from the annotation
app:
1. Go back to the Google spreadsheet_ and mark that you are done with the ROI
   annotation;
2. Run this command in the console (this command gives write access to all
   people in the group, so I can move/edit the files if needed)::
.. code-block:: sh
find /idiap/project/biowave/biowave/ROI_annotations/ -user $USER -exec chmod g+rw {} \;
(myenv) $ bob annotate --help
That's it!
Annotating images
-----------------
.. _spreadsheet: https://docs.google.com/spreadsheets/d/1-YcOitDkGDL4T0eccdkAQ0RdfqplzPvVrX-WcL8dUS8/edit?usp=sharing
.. todo:: This section still needs to be created
.. include:: links.rst
......@@ -8,6 +8,11 @@
.. todolist::
This guide contains information about our custom annotation application. The
application is built in a modular way and can be modified to annotate different
object types or operate on videos.
Documentation
-------------
......
......@@ -5,4 +5,7 @@
.. _idiap: http://www.idiap.ch
.. _bob: http://www.idiap.ch/software/bob
.. _installation: https://www.idiap.ch/software/bob/install
.. _mailing list: https://www.idiap.ch/software/bob/discuss
\ No newline at end of file
.. _mailing list: https://www.idiap.ch/software/bob/discuss
.. _lenna: https://en.wikipedia.org/wiki/Lenna
.. _conda: https://conda.io/en/latest/
.. _json: https://en.wikipedia.org/wiki/JSON