Commit 1c1600a8 authored by André Anjos

[libs.segmentation.engine.evaluator] Fix doc strings

parent 40aa5aba
Merge request !46: Create common library
@@ -532,25 +532,33 @@ def run(
* ``counts``: dictionary where keys are thresholds, and values are
  sequences of integers containing the TP, FP, TN, FN (in this order).
* ``auc_score``: a float indicating the area under the ROC curve
for the split. It is calculated using a trapezoidal rule.
* ``average_precision_score``: a float indicating the area under the
precision-recall curve, calculated using a rectangle rule.
* ``curves``: dictionary with 2 keys:

  * ``roc``: dictionary with 3 keys:

    * ``fpr``: a list of floats with the false-positive rates
    * ``tpr``: a list of floats with the true-positive rates
    * ``thresholds``: a list of thresholds uniformly separated by
      ``steps``, at which both ``fpr`` and ``tpr`` are evaluated.

  * ``precision_recall``: a dictionary with 3 keys:

    * ``precision``: a list of floats with the precision values
    * ``recall``: a list of floats with the recall values
    * ``thresholds``: a list of thresholds uniformly separated by
      ``steps``, at which both ``precision`` and ``recall`` are
      evaluated.
* ``threshold_a_priori``: boolean indicating whether the threshold used
  for unary metrics was chosen a priori or a posteriori for this split.
* ``<metric-name>``: a float representing the given metric at the
  threshold that maximizes ``metric``. There will be one such entry for
  each value of :py:obj:`SUPPORTED_METRIC_TYPE`.
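The summary values above can be derived from the per-threshold counts alone. The sketch below is illustrative, not the library's actual implementation: ``summarize`` and its assumed ``counts`` layout (threshold to a ``(TP, FP, TN, FN)`` tuple) are hypothetical, but it shows how the trapezoidal rule yields ``auc_score`` and the rectangle rule yields ``average_precision_score``:

```python
# Hypothetical sketch: derive the documented summary dictionary from
# per-threshold TP/FP/TN/FN counts.  Names and layout are assumptions.

def summarize(counts: dict[float, tuple[int, int, int, int]]) -> dict:
    """Compute ROC/PR curves, AUC (trapezoidal) and AP (rectangle rule)."""
    thresholds = sorted(counts)
    fpr, tpr, precision, recall = [], [], [], []
    for t in thresholds:
        tp, fp, tn, fn = counts[t]
        fpr.append(fp / (fp + tn) if (fp + tn) else 0.0)
        tpr.append(tp / (tp + fn) if (tp + fn) else 0.0)
        precision.append(tp / (tp + fp) if (tp + fp) else 1.0)
        recall.append(tp / (tp + fn) if (tp + fn) else 0.0)

    # area under the ROC curve via the trapezoidal rule over (fpr, tpr)
    auc = sum(
        0.5 * (tpr[i] + tpr[i - 1]) * abs(fpr[i] - fpr[i - 1])
        for i in range(1, len(thresholds))
    )
    # area under the precision-recall curve via the rectangle rule
    ap = sum(
        precision[i] * abs(recall[i] - recall[i - 1])
        for i in range(1, len(thresholds))
    )
    return {
        "auc_score": auc,
        "average_precision_score": ap,
        "curves": {
            "roc": {"fpr": fpr, "tpr": tpr, "thresholds": thresholds},
            "precision_recall": {
                "precision": precision,
                "recall": recall,
                "thresholds": thresholds,
            },
        },
    }
```

For a perfect classifier (e.g. counts ``{0.0: (10, 10, 0, 0), 0.5: (10, 0, 10, 0), 1.0: (0, 0, 10, 10)}``) both areas evaluate to 1.0, as expected.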