==================================
Migrating to Predictor Evaluations
==================================

Summary
=======

In version 4.0, :py:class:`Predictor Evaluation Workflows <citrine.resources.predictor_evaluation_workflow.PredictorEvaluationWorkflowCollection>` and :py:class:`Predictor Evaluation Executions <citrine.resources.predictor_evaluation_execution.PredictorEvaluationExecutionCollection>` (collectively, PEWs) will be merged into a single entity called :py:class:`Predictor Evaluations <citrine.resources.predictor_evaluation.PredictorEvaluationCollection>`. The new entity will retain the functionality of its predecessors while simplifying interactions with it, and will support the continuing evolution of the platform.

Basic Usage
===========

The most common pattern for interacting with PEWs is executing the default evaluators and waiting for the result:

.. code:: python

    pew = project.predictor_evaluation_workflows.create_default(predictor_id=predictor.uid)
    execution = next(pew.executions.list(), None)
    execution = wait_while_executing(collection=project.predictor_evaluation_executions, execution=execution)

With Predictor Evaluations, it's more straightforward:

.. code:: python

    evaluation = project.predictor_evaluations.trigger_default(predictor_id=predictor.uid)
    evaluation = wait_while_executing(collection=project.predictor_evaluations, execution=evaluation)

The evaluators used are available with :py:meth:`~citrine.informatics.executions.predictor_evaluation.PredictorEvaluation.evaluators`.
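
For example, to see which evaluators were run (a minimal sketch, assuming the ``evaluation`` object from the snippet above has finished executing and that each evaluator exposes its ``name``):

.. code:: python

    # Inspect the evaluators that were run as part of this evaluation
    for evaluator in evaluation.evaluators:
        print(evaluator.name)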

Working With Evaluators
=======================

You can still construct evaluators (such as :py:class:`~citrine.informatics.predictor_evaluator.CrossValidationEvaluator`) the same way you always have, and run them against your predictor:

.. code:: python

    evaluation = project.predictor_evaluations.trigger(predictor_id=predictor.uid, evaluators=evaluators)
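
As a concrete illustration of the ``evaluators`` argument above, here is a minimal sketch of a single cross-validation evaluator. The response key, metrics, and fold/trial counts are purely illustrative and should match your own predictor's output descriptors; the metric classes are assumed to live in ``citrine.informatics.predictor_evaluation_metrics``, as in earlier releases:

.. code:: python

    from citrine.informatics.predictor_evaluator import CrossValidationEvaluator
    from citrine.informatics.predictor_evaluation_metrics import PVA, RMSE

    # One cross-validation evaluator over a hypothetical response descriptor
    evaluators = [
        CrossValidationEvaluator(
            name="Cross-validate shear modulus",
            description="5-fold, 3-trial cross-validation",
            responses={"Shear modulus"},
            n_folds=5,
            n_trials=3,
            metrics={RMSE(), PVA()},
        )
    ]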

If you don't wish to construct evaluators by hand, you can retrieve the default one(s):

.. code:: python

    evaluators = project.predictor_evaluations.default(predictor_id=predictor.uid)
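
These can be passed along unchanged, or tweaked first, when triggering an evaluation, reusing the ``trigger`` call shown earlier:

.. code:: python

    # Run the retrieved default evaluators against the predictor
    evaluation = project.predictor_evaluations.trigger(predictor_id=predictor.uid, evaluators=evaluators)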

You can also generate the default evaluator(s) for a predictor even if it hasn't been registered to the platform yet:

.. code:: python

    evaluators = project.predictor_evaluations.default_from_config(predictor)
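
Because the predictor isn't registered yet, there is no ``predictor_id`` to trigger against; once it is registered, the pre-generated defaults can be used like any other evaluators. A hypothetical follow-up, assuming the predictor is later registered via ``project.predictors.register``:

.. code:: python

    # Hypothetical: register the predictor, then evaluate it with the
    # defaults generated from its configuration above
    registered = project.predictors.register(predictor)
    evaluation = project.predictor_evaluations.trigger(
        predictor_id=registered.uid,
        evaluators=evaluators,
    )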

Once the evaluation is complete, the results will be available by calling :py:meth:`~citrine.informatics.executions.predictor_evaluation.PredictorEvaluation.results` with the name of the desired evaluator; all evaluator names are available through :py:meth:`~citrine.informatics.executions.predictor_evaluation.PredictorEvaluation.evaluator_names`.
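
For example (a minimal sketch, assuming the evaluation has finished executing):

.. code:: python

    # Collect the result produced by each evaluator that was run
    results = {name: evaluation.results(name) for name in evaluation.evaluator_names}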