
Commit 05c1af2

Update references to PEWs in the docs.
1 parent 8aca13c commit 05c1af2

5 files changed: +36 -72 lines changed

docs/source/getting_started/ai_modules.rst

Lines changed: 3 additions & 3 deletions
@@ -38,7 +38,7 @@ Design Workflow
 A Design Workflow combines a Design Space to define the materials of interest and a Predictor to predict material properties.
 They also include a :doc:`Score <../workflows/scores>` which codifies goals of the project.

-Predictor Evaluation Workflow
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Predictor Evaluation
+^^^^^^^^^^^^^^^^^^^^

-:doc:`Predictor Evaluation Workflows <../workflows/predictor_evaluation_workflows>` analyze the quality of a Predictor.
+:doc:`Predictor Evaluations <../workflows/predictor_evaluation_workflows>` analyze the quality of a Predictor.
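
As a usage note, a minimal sketch of kicking off such an evaluation from the Python client, based on the ``trigger_default`` call introduced elsewhere in this commit (``project`` and ``predictor`` are illustrative handles obtained beforehand):

.. code:: python

    # Trigger the default evaluation for a registered predictor.
    evaluation = project.predictor_evaluations.trigger_default(predictor_id=predictor.uid)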

docs/source/getting_started/basic_functionality.rst

Lines changed: 3 additions & 5 deletions
@@ -49,14 +49,12 @@ It is often useful to know when a resource has completed validating, especially
     sintering_model = sintering_project.predictors.register(sintering_model)
     wait_while_validating(collection=sintering_project.predictors, module=sintering_model)

-Similarly, the ``wait_while_executing`` function will wait for a design or performance evaluation workflow to complete executing.
+Similarly, the ``wait_while_executing`` function will wait for a design or predictor evaluation to complete executing.

 .. code-block:: python

-    pew_workflow = sintering_project.predictor_evaluation_workflows.register(pew_workflow)
-    pew_workflow = wait_while_validating(collection=sintering_project.predictor_evaluation_workflows, module=pew_workflow)
-    pew_ex = pew_workflow.trigger(sintering_model)
-    wait_while_executing(collection=sintering_project.predictor_evaluation_executions, execution=pew_ex, print_status_info=True)
+    predictor_evaluation = sintering_project.predictor_evaluations.trigger_default(predictor_id=sintering_model.uid)
+    wait_while_executing(collection=sintering_project.predictor_evaluations, execution=predictor_evaluation, print_status_info=True)

 Checking Status
 ---------------
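
Pulled together, the validate-then-evaluate flow might look like the following sketch (assuming both waiting helpers are importable from ``citrine.jobs.waiting``, as the surrounding docs suggest, and that ``sintering_project`` and ``sintering_model`` come from the earlier registration steps):

.. code-block:: python

    from citrine.jobs.waiting import wait_while_validating, wait_while_executing

    # Register the predictor and block until validation finishes.
    sintering_model = sintering_project.predictors.register(sintering_model)
    wait_while_validating(collection=sintering_project.predictors, module=sintering_model)

    # Trigger the default predictor evaluation and block until it finishes executing.
    predictor_evaluation = sintering_project.predictor_evaluations.trigger_default(
        predictor_id=sintering_model.uid
    )
    wait_while_executing(
        collection=sintering_project.predictor_evaluations,
        execution=predictor_evaluation,
        print_status_info=True,
    )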

docs/source/workflows/getting_started.rst

Lines changed: 6 additions & 21 deletions
@@ -11,9 +11,8 @@ These capabilities include generating candidates for Sequential Learning, identi
 Workflows Overview
 ------------------

-Currently, there are two workflows on the AI Engine: the :doc:`DesignWorkflow <design_workflows>` and the :doc:`PredictorEvaluationWorkflow <predictor_evaluation_workflows>`.
-Workflows employ reusable modules in order to execute.
-There are three different types of modules, and these are discussed in greater detail below.
+Currently, there are two workflows on the AI Engine: the :doc:`DesignWorkflow <design_workflows>` and the :doc:`PredictorEvaluation <predictor_evaluation_workflows>`.
+There are two different types of modules, and these are discussed in greater detail below.

 Design Workflow
 ***************
@@ -38,11 +37,11 @@ Branches
 A ``Branch`` is a named container which can contain any number of design workflows, and is purely a tool for organization.
 If you do not see branches in the Citrine Platform, you do not need to change how you work with design workflows. They will contain an additional field ``branch_id``, which you can ignore.

-Predictor Evaluation Workflow
-*****************************
+Predictor Evaluation
+********************

-The :doc:`PredictorEvaluationWorkflow <predictor_evaluation_workflows>` is used to analyze a :doc:`Predictor <predictors>`.
-This workflow helps users understand how well their predictor module works with their data: in essence, it describes the trustworthiness of their model.
+The :doc:`PredictorEvaluation <predictor_evaluation_workflows>` is used to analyze a :doc:`Predictor <predictors>`.
+It helps users understand how well their predictor module works with their data: in essence, it describes the trustworthiness of their model.
 These outcomes are captured in a series of response metrics.

 Modules Overview
@@ -80,17 +79,3 @@ Validation status can be one of the following states:
 - **Error:** Validation did not complete. An error was raised during the validation process that prevented an invalid or ready status to be determined.

 Validation of a workflow and all constituent modules must complete with ready status before the workflow can be executed.
-
-Experimental functionality
-**************************
-
-Both modules and workflows can be used to access experimental functionality on the platform.
-In some cases, the module or workflow type itself may be experimental.
-In other cases, whether a module or workflow represents experimental functionality may depend on the specific configuration of the module or workflow.
-For example, a module might have an experimental option that is turned off by default.
-Another example could be a workflow that contains an experimental module.
-Because the experimental status of a module or workflow may not be known at registration time, it is computed as part
-of the validation process and then returned via two fields:
-
-- `experimental` is a Boolean field that is true when the module or workflow is experimental
-- `experimental_reasons` is a list of strings that describe what about the module or workflow makes it experimental
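
Since the retained text stresses waiting for a ready status before execution, a brief sketch of the corresponding client-side pattern (assuming ``wait_while_validating`` lives in ``citrine.jobs.waiting`` as shown elsewhere in this commit; ``project`` and ``predictor`` are illustrative handles):

.. code-block:: python

    from citrine.jobs.waiting import wait_while_validating

    # Register the module, then block until validation reaches a terminal status.
    predictor = project.predictors.register(predictor)
    wait_while_validating(collection=project.predictors, module=predictor)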

docs/source/workflows/predictor_evaluation_workflows.rst

Lines changed: 23 additions & 42 deletions
@@ -1,15 +1,15 @@
-Predictor Evaluation Workflows
-==============================
+Predictor Evaluations
+=====================

-A :class:`~citrine.informatics.workflows.predictor_evaluation_workflow.PredictorEvaluationWorkflow` evaluates the performance of a :doc:`Predictor <predictors>`.
-Each workflow is composed of one or more :class:`PredictorEvaluators <citrine.informatics.predictor_evaluator.PredictorEvaluator>`.
+A :class:`~citrine.informatics.executions.predictor_evaluation.PredictorEvaluation` evaluates the performance of a :doc:`Predictor <predictors>`.
+Each evaluation utilizes one or more :class:`PredictorEvaluators <citrine.informatics.predictor_evaluator.PredictorEvaluator>`.

 Predictor evaluators
 --------------------

 A predictor evaluator defines a method to evaluate a predictor and any relevant configuration, e.g., k-fold cross-validation evaluation that specifies 3 folds.
 Minimally, each predictor evaluator specifies a name, a set of predictor responses to evaluate and a set of metrics to compute for each response.
-Evaluator names must be unique within a single workflow (more on that `below <#execution-and-results>`__).
+Evaluator names must be unique within a single evaluation (more on that `below <#execution-and-results>`__).
 Responses are specified as a set of strings, where each string corresponds to a descriptor key of a predictor output.
 Metrics are specified as a set of :class:`PredictorEvaluationMetrics <citrine.informatics.predictor_evaluation_metrics.PredictorEvaluationMetric>`.
 The evaluator will only compute the subset of metrics valid for each response, so the top-level metrics defined by an evaluator should contain the union of all metrics computed across all responses.
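
To make that configuration concrete, a hedged sketch of one such evaluator (the class and metric names come from the references above; the exact keyword arguments, such as ``n_folds``, are assumptions worth checking against the client's API reference):

.. code:: python

    from citrine.informatics.predictor_evaluator import CrossValidationEvaluator
    from citrine.informatics.predictor_evaluation_metrics import RMSE, PVA

    # A cross-validation evaluator for one response, computing two metrics.
    evaluator = CrossValidationEvaluator(
        name='cross-validation of y',
        description='3-fold cross-validation of the response y',
        responses={'y'},
        n_folds=3,
        metrics={RMSE(), PVA()}
    )
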
@@ -102,22 +102,21 @@ For categorical responses, performance metrics include the area under the receiv
 Execution and results
 ---------------------

-Triggering a Predictor Evaluation Workflow produces a :class:`~citrine.resources.predictor_evaluation_execution.PredictorEvaluationExecution`.
-This execution allows you to track the progress using its ``status`` and ``status_info`` properties.
-The ``status`` can be one of ``INPROGRESS``, ``READY``, or ``FAILED``.
-Information about the execution status, e.g., warnings or reasons for failure, can be accessed via ``status_info``.
+Once triggered, you can track the evaluation's progress using its ``status`` and ``status_detail`` properties.
+The ``status`` can be one of ``INPROGRESS``, ``SUCCEEDED``, or ``FAILED``.
+Information about the execution status, e.g., warnings or reasons for failure, can be accessed via ``status_detail``.

-When the ``status`` is ``READY``, results for each evaluation defined as part of the workflow can be accessed using the ``results`` method:
+When the ``status`` is ``SUCCEEDED``, results for each evaluator defined as part of the evaluation can be accessed using the ``results`` method:

 .. code:: python

-    results = execution.results('evaluator_name')
+    results = evaluation.results('evaluator_name')

-or by indexing into the execution object directly:
+or by indexing into the evaluation object directly:

 .. code:: python

-    results = execution['evaluator_name']
+    results = evaluation['evaluator_name']

 Both methods return a :class:`~citrine.informatics.predictor_evaluation_result.PredictorEvaluationResult`.

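
Putting those two access patterns together, a brief sketch (``evaluation`` is assumed to be a triggered ``PredictorEvaluation``, and ``'cross-validation of y'`` is an illustrative evaluator name):

.. code:: python

    # Only read results once the evaluation has finished successfully.
    if evaluation.status == 'SUCCEEDED':
        results = evaluation['cross-validation of y']  # same as evaluation.results('cross-validation of y')
    else:
        print(evaluation.status, evaluation.status_detail)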

@@ -153,7 +152,7 @@ Each data point defines properties ``uuid``, ``identifiers``, ``trial``, ``fold`
 Example
 -------

-The following demonstrates how to create a :class:`~citrine.informatics.predictor_evaluator.CrossValidationEvaluator`, add it to a :class:`~citrine.informatics.workflows.predictor_evaluation_workflow.PredictorEvaluationWorkflow`, and use it to evaluate a :class:`~citrine.informatics.predictors.predictor.Predictor`.
+The following demonstrates how to create a :class:`~citrine.informatics.predictor_evaluator.CrossValidationEvaluator` and use it to evaluate a :class:`~citrine.informatics.predictors.predictor.Predictor`.

 The predictor we'll evaluate is defined below:

@@ -215,36 +214,19 @@ In this example we'll create a cross-validation evaluator for the response ``y``
         metrics={RMSE(), PVA()}
     )

-Then add the evaluator to a :class:`~citrine.informatics.workflows.predictor_evaluation_workflow.PredictorEvaluationWorkflow`, register it with your project, and wait for validation to finish:
+Then, trigger an evaluation and wait for the results to be ready:

 .. code:: python

-    from citrine.informatics.workflows import PredictorEvaluationWorkflow
-
-    workflow = PredictorEvaluationWorkflow(
-        name='workflow that evaluates y',
-        evaluators=[evaluator]
-    )
-
-    workflow = project.predictor_evaluation_workflows.register(workflow)
-    wait_while_validating(collection=project.predictor_evaluation_workflows, module=workflow)
-
-Trigger the workflow against a predictor to start an execution.
-Then wait for the results to be ready:
-
-.. code:: python
-
-    from citrine.jobs.waiting import wait_while_executing
-
-    execution = workflow.executions.trigger(predictor.uid, predictor_version=predictor.version)
-    wait_while_executing(collection=project.predictor_evaluation_executions, execution=execution, print_status_info=True)
+    evaluation = project.predictor_evaluations.trigger(evaluators=[evaluator], predictor_id=predictor.uid)
+    wait_while_executing(collection=project.predictor_evaluations, execution=evaluation, print_status_info=True)

 Finally, load the results and inspect the metrics and their computed values:

 .. code:: python

     # load the results computed by the CV evaluator defined above
-    cv_results = execution[evaluator.name]
+    cv_results = evaluation[evaluator.name]

     # load results for y
     y_results = cv_results['y']
@@ -280,18 +262,17 @@ Finally, load the results and inspect the metrics and their computed values:

 Archive and restore
 -------------------
-Both :class:`PredictorEvaluationWorkflows <citrine.informatics.workflows.predictor_evaluation_workflow.PredictorEvaluationWorkflow>` and :class:`PredictorEvaluationExecutions <citrine.resources.predictor_evaluation_execution.PredictorEvaluationExecution>` can be archived and restored.
-To archive a workflow:
+A :class:`PredictorEvaluation <citrine.informatics.executions.predictor_evaluation.PredictorEvaluation>` can be archived and restored.

 .. code:: python

-    project.predictor_evaluation_workflows.archive(workflow.uid)
+    project.predictor_evaluations.archive(evaluation.uid)

-and to archive all executions associated with a predictor evaluation workflow:
+and to archive all evaluations associated with a predictor:

 .. code:: python

-    for execution in workflow.executions.list():
-        project.predictor_evaluation_executions.archive(execution.uid)
+    for evaluation in project.predictor_evaluations.list(predictor_id=predictor.uid):
+        project.predictor_evaluations.archive(evaluation.uid)

-To restore a workflow or execution, simply replace ``archive`` with ``restore`` in the code above.
+To restore an evaluation, simply replace ``archive`` with ``restore`` in the code above.
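
For completeness, the restore counterpart of the archive call above might look like this sketch (assuming the collection exposes ``restore`` alongside ``archive``, as the sentence above implies):

.. code:: python

    project.predictor_evaluations.restore(evaluation.uid)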

docs/source/workflows/predictors.rst

Lines changed: 1 addition & 1 deletion
@@ -694,7 +694,7 @@ Predictor reports

 A :doc:`predictor report <predictor_reports>` describes a machine-learned model, for example its settings and what features are important to the model.
 It does not include predictor evaluation metrics.
-To learn more about predictor evaluation metrics, please see :doc:`PredictorEvaluationWorkflow <predictor_evaluation_workflows>`.
+To learn more about predictor evaluation metrics, please see :doc:`PredictorEvaluation <predictor_evaluation_workflows>`.

 Training data
 -------------