
Commit 518e98e

Update the FAQ docs.
Drop the data manager and v3.0 migration docs, as they shouldn't be needed any more. Add a FAQ to clearly demonstrate the usage of predictor evaluations.
1 parent 603dcdd commit 518e98e

5 files changed: +54 additions, -305 deletions

docs/source/FAQ/data_manager_migration.rst

Lines changed: 0 additions & 144 deletions
This file was deleted.

docs/source/FAQ/index.rst

Lines changed: 1 addition & 2 deletions
@@ -6,5 +6,4 @@ FAQ
    :maxdepth: 2

    prohibited_data_patterns
-   v3_migration
-   data_manager_migration
+   predictor_evaluation_migration
docs/source/FAQ/predictor_evaluation_migration.rst

Lines changed: 52 additions & 0 deletions
@@ -0,0 +1,52 @@
==================================
Migrating to Predictor Evaluations
==================================

Summary
=======

In version 4.0, :py:class:`Predictor Evaluation Workflows <citrine.resources.predictor_evaluation_workflow.PredictorEvaluationWorkflowCollection>` and :py:class:`Predictor Evaluation Executions <citrine.resources.predictor_evaluation_execution.PredictorEvaluationExecutionCollection>` (collectively, PEWs) will be merged into a single entity called :py:class:`Predictor Evaluations <citrine.resources.predictor_evaluation.PredictorEvaluationCollection>`. The new entity will retain the functionality of its predecessors while simplifying how you interact with it, and it will support the continuing evolution of the platform.

Basic Usage
===========

The most common pattern for interacting with PEWs is executing the default evaluators and waiting for the result:

.. code:: python

    pew = project.predictor_evaluation_workflows.create_default(predictor_id=predictor.uid)
    execution = next(pew.executions.list(), None)
    execution = wait_while_executing(collection=project.predictor_evaluation_executions, execution=execution)

With Predictor Evaluations, it's more straightforward:

.. code:: python

    evaluation = project.predictor_evaluations.trigger_default(predictor_id=predictor.uid)
    evaluation = wait_while_executing(collection=project.predictor_evaluations, execution=evaluation)

The evaluators used are available with :py:meth:`~citrine.informatics.executions.predictor_evaluation.PredictorEvaluation.evaluators`.
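
For example, a minimal sketch of listing them, assuming each evaluator exposes a ``name`` attribute:

.. code:: python

    # Illustrative only: inspect which evaluators were applied.
    # Assumes each evaluator exposes a `name` attribute.
    for evaluator in evaluation.evaluators:
        print(evaluator.name)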

Working With Evaluators
=======================

You can also run your predictor against a list of specific evaluators:

.. code:: python

    evaluation = project.predictor_evaluations.trigger(predictor_id=predictor.uid, evaluators=evaluators)
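
Here, ``evaluators`` is a list you construct yourself. The sketch below is illustrative only: it assumes a cross-validation evaluator built from ``CrossValidationEvaluator`` with the ``RMSE`` metric, and the module paths, parameter names, and the ``"Yield"`` response key are assumptions rather than part of this change:

.. code:: python

    # Illustrative only: one way `evaluators` might be constructed by hand.
    # Module paths, parameters, and the "Yield" response key are assumptions.
    from citrine.informatics.predictor_evaluator import CrossValidationEvaluator
    from citrine.informatics.predictor_evaluation_metrics import RMSE

    evaluators = [
        CrossValidationEvaluator(
            name="5-fold cross-validation",
            responses={"Yield"},   # descriptor key(s) to evaluate
            n_folds=5,
            n_trials=3,
            metrics={RMSE()},
        )
    ]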

If you don't wish to construct evaluators by hand, you can retrieve the default one(s):

.. code:: python

    evaluators = project.predictor_evaluations.default(predictor_id=predictor.uid)

Even if the predictor hasn't been registered to the platform yet:

.. code:: python

    evaluators = project.predictor_evaluations.default_from_config(predictor)

Once the evaluation is complete, the results will be available by calling :py:meth:`~citrine.informatics.executions.predictor_evaluation.PredictorEvaluation.results` with the name of the desired evaluator (all of which are available through :py:meth:`~citrine.informatics.executions.predictor_evaluation.PredictorEvaluation.evaluator_names`).
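
A minimal sketch of retrieving every result, assuming ``results()`` accepts the evaluator name as its argument:

.. code:: python

    # Illustrative only: fetch the results produced by each evaluator.
    # Assumes `results()` takes the evaluator's name and returns its results.
    for name in evaluation.evaluator_names:
        result = evaluation.results(name)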

docs/source/FAQ/v3_migration.rst

Lines changed: 0 additions & 158 deletions
This file was deleted.

src/citrine/__version__.py

Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
-__version__ = "3.25.0"
+__version__ = "3.25.1"
