|
1 | | -Predictor Evaluation Workflows |
2 | | -============================== |
| 1 | +Predictor Evaluations |
| 2 | +===================== |
3 | 3 |
|
4 | | -A :class:`~citrine.informatics.workflows.predictor_evaluation_workflow.PredictorEvaluationWorkflow` evaluates the performance of a :doc:`Predictor <predictors>`. |
5 | | -Each workflow is composed of one or more :class:`PredictorEvaluators <citrine.informatics.predictor_evaluator.PredictorEvaluator>`. |
| 4 | +A :class:`~citrine.informatics.executions.predictor_evaluation.PredictorEvaluation` evaluates the performance of a :doc:`Predictor <predictors>`. |
| 5 | +Each evaluation uses one or more :class:`PredictorEvaluators <citrine.informatics.predictor_evaluator.PredictorEvaluator>`.
6 | 6 |
|
7 | 7 | Predictor evaluators |
8 | 8 | -------------------- |
9 | 9 |
|
10 | 10 | A predictor evaluator defines a method to evaluate a predictor and any relevant configuration, e.g., k-fold cross-validation evaluation that specifies 3 folds. |
11 | 11 | Minimally, each predictor evaluator specifies a name, a set of predictor responses to evaluate, and a set of metrics to compute for each response.
12 | | -Evaluator names must be unique within a single workflow (more on that `below <#execution-and-results>`__). |
| 12 | +Evaluator names must be unique within a single evaluation (more on that `below <#execution-and-results>`__). |
13 | 13 | Responses are specified as a set of strings, where each string corresponds to a descriptor key of a predictor output. |
14 | 14 | Metrics are specified as a set of :class:`PredictorEvaluationMetrics <citrine.informatics.predictor_evaluation_metrics.PredictorEvaluationMetric>`. |
15 | 15 | The evaluator will only compute the subset of metrics valid for each response, so the top-level metrics defined by an evaluator should contain the union of all metrics computed across all responses. |
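As an illustration of what a k-fold evaluator computes (a toy sketch, not the platform's implementation), the snippet below partitions row indices into disjoint folds and scores held-out predictions with RMSE, a typical metric for numeric responses. The helper names are hypothetical.

```python
import math
import random

def k_fold_splits(n_points, n_folds, seed=0):
    """Partition row indices into n_folds disjoint held-out folds."""
    indices = list(range(n_points))
    random.Random(seed).shuffle(indices)
    return [indices[fold::n_folds] for fold in range(n_folds)]

def rmse(predicted, actual):
    """Root-mean-squared error between predicted and actual values."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

# Each point is held out exactly once across the folds.
folds = k_fold_splits(n_points=9, n_folds=3)
assert sorted(i for fold in folds for i in fold) == list(range(9))

# A perfect model yields an RMSE of zero on its held-out points.
assert rmse([1.0, 2.0], [1.0, 2.0]) == 0.0
```

During each trial, the evaluator trains on all folds but one and predicts the held-out fold, so every row receives exactly one out-of-sample prediction per trial.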
@@ -102,22 +102,21 @@ For categorical responses, performance metrics include the area under the receiv |
102 | 102 | Execution and results |
103 | 103 | --------------------- |
104 | 104 |
|
105 | | -Triggering a Predictor Evaluation Workflow produces a :class:`~citrine.resources.predictor_evaluation_execution.PredictorEvaluationExecution`. |
106 | | -This execution allows you to track the progress using its ``status`` and ``status_info`` properties. |
107 | | -The ``status`` can be one of ``INPROGRESS``, ``READY``, or ``FAILED``. |
108 | | -Information about the execution status, e.g., warnings or reasons for failure, can be accessed via ``status_info``. |
| 105 | +Once triggered, you can track the evaluation's progress using its ``status`` and ``status_detail`` properties. |
| 106 | +The ``status`` can be one of ``INPROGRESS``, ``SUCCEEDED``, or ``FAILED``. |
| 107 | +Information about the execution status, e.g., warnings or reasons for failure, can be accessed via ``status_detail``. |
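Citrine's ``wait_while_executing`` helper handles waiting for you, but the lifecycle amounts to polling ``status`` until it leaves ``INPROGRESS``. The sketch below uses a hypothetical ``wait_until_done`` helper to illustrate the idea; it is not part of the citrine API.

```python
import time

def wait_until_done(get_status, poll_interval=0.0, max_polls=100):
    """Poll a status callable until it leaves INPROGRESS; return the terminal status."""
    for _ in range(max_polls):
        status = get_status()
        if status != 'INPROGRESS':
            return status  # SUCCEEDED or FAILED
        time.sleep(poll_interval)
    raise TimeoutError('evaluation did not finish in time')

# Simulated status sequence ending in SUCCEEDED.
statuses = iter(['INPROGRESS', 'INPROGRESS', 'SUCCEEDED'])
assert wait_until_done(lambda: next(statuses)) == 'SUCCEEDED'
```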
109 | 108 |
|
110 | | -When the ``status`` is ``READY``, results for each evaluation defined as part of the workflow can be accessed using the ``results`` method: |
| 109 | +When the ``status`` is ``SUCCEEDED``, results for each evaluator defined as part of the evaluation can be accessed using the ``results`` method: |
111 | 110 |
|
112 | 111 | .. code:: python |
113 | 112 |
|
114 | | - results = execution.results('evaluator_name') |
| 113 | + results = evaluation.results('evaluator_name') |
115 | 114 |
|
116 | | -or by indexing into the execution object directly: |
| 115 | +or by indexing into the evaluation object directly: |
117 | 116 |
|
118 | 117 | .. code:: python |
119 | 118 |
|
120 | | - results = execution['evaluator_name'] |
| 119 | + results = evaluation['evaluator_name'] |
121 | 120 |
|
122 | 121 | Both methods return a :class:`~citrine.informatics.predictor_evaluation_result.PredictorEvaluationResult`. |
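Conceptually, results are keyed first by evaluator name (which is why names must be unique within an evaluation) and then by response. The nested dictionary below is a toy stand-in for that access pattern, not the actual ``PredictorEvaluationResult`` API; the names and values are hypothetical.

```python
# Toy stand-in mirroring evaluation[evaluator_name][response] lookups.
results = {
    'cv evaluator for y': {          # evaluator name, unique per evaluation
        'y': {'rmse': 0.12},         # response key -> metric -> computed value
    },
}

y_metrics = results['cv evaluator for y']['y']
assert y_metrics['rmse'] == 0.12
```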
123 | 122 |
|
@@ -153,7 +152,7 @@ Each data point defines properties ``uuid``, ``identifiers``, ``trial``, ``fold` |
153 | 152 | Example |
154 | 153 | ------- |
155 | 154 |
|
156 | | -The following demonstrates how to create a :class:`~citrine.informatics.predictor_evaluator.CrossValidationEvaluator`, add it to a :class:`~citrine.informatics.workflows.predictor_evaluation_workflow.PredictorEvaluationWorkflow`, and use it to evaluate a :class:`~citrine.informatics.predictors.predictor.Predictor`. |
| 155 | +The following demonstrates how to create a :class:`~citrine.informatics.predictor_evaluator.CrossValidationEvaluator` and use it to evaluate a :class:`~citrine.informatics.predictors.predictor.Predictor`. |
157 | 156 |
|
158 | 157 | The predictor we'll evaluate is defined below: |
159 | 158 |
|
@@ -215,36 +214,19 @@ In this example we'll create a cross-validation evaluator for the response ``y`` |
215 | 214 | metrics={RMSE(), PVA()} |
216 | 215 | ) |
217 | 216 |
|
218 | | -Then add the evaluator to a :class:`~citrine.informatics.workflows.predictor_evaluation_workflow.PredictorEvaluationWorkflow`, register it with your project, and wait for validation to finish: |
| 217 | +Then, trigger an evaluation and wait for the results to be ready: |
219 | 218 |
|
220 | 219 | .. code:: python |
221 | 220 |
|
222 | | - from citrine.informatics.workflows import PredictorEvaluationWorkflow |
223 | | -
|
224 | | - workflow = PredictorEvaluationWorkflow( |
225 | | - name='workflow that evaluates y', |
226 | | - evaluators=[evaluator] |
227 | | - ) |
228 | | -
|
229 | | - workflow = project.predictor_evaluation_workflows.register(workflow) |
230 | | - wait_while_validating(collection=project.predictor_evaluation_workflows, module=workflow) |
231 | | -
|
232 | | -Trigger the workflow against a predictor to start an execution. |
233 | | -Then wait for the results to be ready: |
234 | | - |
235 | | -.. code:: python |
236 | | -
|
237 | | - from citrine.jobs.waiting import wait_while_executing |
238 | | -
|
239 | | - execution = workflow.executions.trigger(predictor.uid, predictor_version=predictor.version) |
240 | | - wait_while_executing(collection=project.predictor_evaluation_executions, execution=execution, print_status_info=True) |
|     | +    from citrine.jobs.waiting import wait_while_executing
|     | +
| 221 | +    evaluation = project.predictor_evaluations.trigger(evaluators=[evaluator], predictor_id=predictor.uid)
| 222 | +    wait_while_executing(collection=project.predictor_evaluations, execution=evaluation, print_status_info=True)
241 | 223 |
|
242 | 224 | Finally, load the results and inspect the metrics and their computed values: |
243 | 225 |
|
244 | 226 | .. code:: python |
245 | 227 |
|
246 | 228 | # load the results computed by the CV evaluator defined above |
247 | | - cv_results = execution[evaluator.name] |
| 229 | + cv_results = evaluation[evaluator.name] |
248 | 230 |
|
249 | 231 | # load results for y |
250 | 232 | y_results = cv_results['y'] |
@@ -280,18 +262,17 @@ Finally, load the results and inspect the metrics and their computed values: |
280 | 262 |
|
281 | 263 | Archive and restore |
282 | 264 | ------------------- |
283 | | -Both :class:`PredictorEvaluationWorkflows <citrine.informatics.workflows.predictor_evaluation_workflow.PredictorEvaluationWorkflow>` and :class:`PredictorEvaluationExecutions <citrine.resources.predictor_evaluation_execution.PredictorEvaluationExecution>` can be archived and restored. |
284 | | -To archive a workflow: |
| 265 | +:class:`PredictorEvaluations <citrine.informatics.executions.predictor_evaluation.PredictorEvaluation>` can be archived and restored. To archive an evaluation:
285 | 266 |
|
286 | 267 | .. code:: python |
287 | 268 |
|
288 | | - project.predictor_evaluation_workflows.archive(workflow.uid) |
| 269 | +    project.predictor_evaluations.archive(evaluation.uid)
289 | 270 |
|
290 | | -and to archive all executions associated with a predictor evaluation workflow: |
| 271 | +and to archive all evaluations associated with a predictor: |
291 | 272 |
|
292 | 273 | .. code:: python |
293 | 274 |
|
294 | | - for execution in workflow.executions.list(): |
295 | | - project.predictor_evaluation_executions.archive(execution.uid) |
| 275 | + for evaluation in project.predictor_evaluations.list(predictor_id=predictor.uid): |
| 276 | +        project.predictor_evaluations.archive(evaluation.uid)
296 | 277 |
|
297 | | -To restore a workflow or execution, simply replace ``archive`` with ``restore`` in the code above. |
| 278 | +To restore an evaluation, simply replace ``archive`` with ``restore`` in the code above. |