144 changes: 0 additions & 144 deletions docs/source/FAQ/data_manager_migration.rst

This file was deleted.

3 changes: 1 addition & 2 deletions docs/source/FAQ/index.rst
@@ -6,5 +6,4 @@ FAQ
   :maxdepth: 2

   prohibited_data_patterns
-  v3_migration
-  data_manager_migration
+  predictor_evaluation_migration
51 changes: 51 additions & 0 deletions docs/source/FAQ/predictor_evaluation_migration.rst
@@ -0,0 +1,51 @@
==================================
Migrating to Predictor Evaluations
==================================

Summary
=======

In version 4.0, :py:class:`Predictor Evaluation Workflows <citrine.resources.predictor_evaluation_workflow.PredictorEvaluationWorkflowCollection>` and :py:class:`Predictor Evaluation Executions <citrine.resources.predictor_evaluation_execution.PredictorEvaluationExecutionCollection>` (collectively, PEWs) will be merged into a single entity called :py:class:`Predictor Evaluations <citrine.resources.predictor_evaluation.PredictorEvaluationCollection>`. The new entity will retain the functionality of its predecessors while simplifying interactions with them and supporting the continuing evolution of the platform.

Basic Usage
===========

The most common pattern for interacting with PEWs is executing the default evaluators and waiting for the result:

.. code:: python

    from citrine.jobs.waiting import wait_while_executing

    # Create the default workflow, grab its execution, and poll until it finishes.
    pew = project.predictor_evaluation_workflows.create_default(predictor_id=predictor.uid)
    execution = next(pew.executions.list(), None)
    execution = wait_while_executing(collection=project.predictor_evaluation_executions, execution=execution)

With Predictor Evaluations, it's more straightforward:

.. code:: python

    # Trigger the default evaluation and poll until it finishes.
    evaluation = project.predictor_evaluations.trigger_default(predictor_id=predictor.uid)
    evaluation = wait_while_executing(collection=project.predictor_evaluations, execution=evaluation)

The evaluators used are available with :py:meth:`~citrine.informatics.executions.predictor_evaluation.PredictorEvaluation.evaluators`.
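
For example, you can list them (a sketch, assuming ``evaluation`` is the object returned above and that each evaluator exposes its ``name``):

.. code:: python

    # Assumes each evaluator exposes a name attribute.
    for evaluator in evaluation.evaluators:
        print(evaluator.name)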

Working With Evaluators
=======================

You can still construct evaluators (such as :class:`~citrine.informatics.predictor_evaluator.CrossValidationEvaluator`) the same way as you always have, and run them against your predictor:

.. code:: python

    evaluation = project.predictor_evaluations.trigger(predictor_id=predictor.uid, evaluators=evaluators)
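
Here, ``evaluators`` is a list of evaluator objects. As a sketch (the response key and settings are hypothetical and assume the existing :class:`~citrine.informatics.predictor_evaluator.CrossValidationEvaluator` parameters), it might be built like this:

.. code:: python

    from citrine.informatics.predictor_evaluator import CrossValidationEvaluator

    # Hypothetical response key and settings; substitute values from your predictor.
    evaluators = [
        CrossValidationEvaluator(
            name="cross-validation",
            description="5-fold cross-validation over the predicted property",
            responses={"Predicted property"},
            n_folds=5,
            n_trials=3,
        )
    ]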

If you don't wish to construct evaluators by hand, you can retrieve the default one(s):

.. code:: python

    evaluators = project.predictor_evaluations.default(predictor_id=predictor.uid)

You can evaluate your predictor even if it hasn't been registered to the platform yet:

.. code:: python

    evaluators = project.predictor_evaluations.default_from_config(predictor)

Once the evaluation is complete, the results will be available by calling :py:meth:`~citrine.informatics.executions.predictor_evaluation.PredictorEvaluation.results` with the name of the desired evaluator (all evaluator names are available through :py:meth:`~citrine.informatics.executions.predictor_evaluation.PredictorEvaluation.evaluator_names`).
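
For example (a sketch; the evaluator name is hypothetical):

.. code:: python

    # Hypothetical evaluator name; use one from evaluation.evaluator_names.
    result = evaluation.results("cross-validation")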
158 changes: 0 additions & 158 deletions docs/source/FAQ/v3_migration.rst

This file was deleted.

2 changes: 1 addition & 1 deletion src/citrine/__version__.py
@@ -1 +1 @@
__version__ = "3.25.2"
__version__ = "3.26.0"