docs/configure.rst (1 addition, 1 deletion)
@@ -194,7 +194,7 @@ You can view logger's log level as a general cut off.
For example, if we have set it to ``warning``, no debug or informational messages would ever be printed.
Finally, there is a special set of handlers for handling performance log messages.
- Performance log messages are generated *only* for `performance tests <tutorial_basics.html#writing-a-performance-test>`__, i.e., tests defining the :attr:`perf_variables <reframe.core.pipeline.RegressionTest.perf_variables>` attribute.
+ Performance log messages are generated *only* for `performance tests <tutorial_basics.html#writing-a-performance-test>`__, i.e., tests defining the :attr:`~reframe.core.pipeline.RegressionTest.perf_variables` or the :attr:`~reframe.core.pipeline.RegressionTest.perf_patterns` attributes.
The performance log handlers are stored in the ``handlers_perflog`` property.
The ``filelog`` handler used in this example will create a file per test and per system/partition combination (``./<system>/<partition>/<testname>.log``) and will append to it the obtained performance data every time a performance test is run.
Notice how the message to be logged is structured in the ``format`` property, so that it can be easily parsed by post-processing tools.
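As a rough illustration of the above, a ``handlers_perflog`` entry in the logging section of a site configuration could look like the sketch below. Treat the exact format placeholders as an assumption; the full set of available specifiers is listed in the configuration reference.

```python
site_configuration = {
    'logging': [
        {
            'handlers_perflog': [
                {
                    # One log file per test and per system/partition combination
                    'type': 'filelog',
                    'prefix': '%(check_system)s/%(check_partition)s',
                    'level': 'info',
                    # Structured record, easy to split on '|' in post-processing
                    'format': ('%(check_job_completion_time)s|'
                               'reframe %(version)s|%(check_info)s|'
                               '%(check_perf_var)s=%(check_perf_value)s|'
                               'ref=%(check_perf_ref)s|%(check_perf_unit)s'),
                    'append': True
                }
            ]
        }
    ]
}
```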
docs/tutorial_advanced.rst (4 additions, 4 deletions)
@@ -161,7 +161,7 @@ Let's inspect the build script generated by ReFrame:
trap _onerror ERR
-    make -j 1 CPPFLAGS="-DELEM_TYPE=float"
+    make -j 1 CPPFLAGS="-DELEM_TYPE=float" CC=cc CXX=CC
The compiler variables (``CC``, ``CXX`` etc.) are set based on the corresponding values specified in the `configuration <config_reference.html#environment-configuration>`__ of the current environment.
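For reference, a minimal environment entry in the configuration might define these compiler variables as below. The environment name, module, and Cray-style compiler wrappers are made-up placeholders, not values mandated by the tutorial.

```python
# Hypothetical 'environments' section of a ReFrame site configuration
environments = [
    {
        'name': 'gnu',              # hypothetical environment name
        'modules': ['PrgEnv-gnu'],  # hypothetical module
        'cc': 'cc',                 # emitted as CC=cc in the build script
        'cxx': 'CC',                # emitted as CXX=CC
        'ftn': 'ftn'                # Fortran compiler wrapper
    }
]
```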
@@ -175,7 +175,7 @@ In this case, ``make`` will be invoked as follows:
.. code::
-    make -j 1 CPPFLAGS="-DELEM_TYPE=float"
+    make -j 1 CPPFLAGS="-DELEM_TYPE=float" CC=cc CXX=CC
Notice that the ``-j 1`` option is always generated.
We can increase the build concurrency by setting the :attr:`~reframe.core.buildsystems.Make.max_concurrency` attribute.
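To make the generated command concrete, here is a small stand-in (not ReFrame's actual code) that assembles a ``make`` invocation in the shape shown above, with ``-j 1`` as the default concurrency:

```python
def make_command(cppflags=None, cc='cc', cxx='CC', max_concurrency=1):
    # Stand-in for how a Make build system could compose its command line:
    # concurrency first, then preprocessor flags, then the compiler variables
    cmd = ['make', f'-j {max_concurrency}']
    if cppflags:
        cmd.append(f'CPPFLAGS="{" ".join(cppflags)}"')
    cmd += [f'CC={cc}', f'CXX={cxx}']
    return ' '.join(cmd)
```

With the defaults, this reproduces the command shown in the build script; raising ``max_concurrency`` changes only the ``-j`` option.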
@@ -681,8 +681,8 @@ The test will verify that all the nodes print the expected host name:
:emphasize-lines: 10-
The first thing to notice in this test is that :attr:`~reframe.core.pipeline.RegressionTest.num_tasks` is set to zero as default, which is a requirement for flexible tests.
- However, this value is set to the actual number of tasks during the ``run`` pipeline stage.
- Lastly, the sanity check of this test counts the host names printed and verifies that the total count equals :attr:`~reframe.core.pipeline.RegressionTest.num_tasks`.
+ However, with flexible tests, this value is updated to the actual number of tasks that were used right after the job completes.
+ Consequently, the sanity function of the test can assert that the number of host names printed matches :attr:`~reframe.core.pipeline.RegressionTest.num_tasks`.
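In plain Python terms, that sanity logic amounts to the sketch below. The ``nid\d+`` hostname pattern is an assumption for a Cray-style system, not necessarily the pattern the tutorial test uses.

```python
import re

def hostnames_match_tasks(output, num_tasks):
    # Count the printed host names and compare with num_tasks,
    # which ReFrame has already updated to the actual task count
    hostnames = re.findall(r'^nid\d+$', output, re.MULTILINE)
    return len(hostnames) == num_tasks

# Made-up job output from three nodes
job_output = 'nid00001\nnid00002\nnid00003\n'
```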
docs/tutorial_basics.rst (54 additions, 36 deletions)
@@ -664,11 +664,13 @@ For running the benchmark, we need to set the OpenMP number of threads and pin t
You can set environment variables in a ReFrame test through the :attr:`~reframe.core.pipeline.RegressionTest.variables` dictionary.
What makes a ReFrame test a performance test is the definition of at least one :ref:`performance function<deferrable-performance-functions>`.
- Similarly to a test's :func:`@sanity_function<reframe.core.pipeline.RegressionMixin.sanity_function>`, a performance function is simply a member function decorated with the :attr:`@performance_function<reframe.core.pipeline.RegressionMixin.performance_function>` decorator, which is responsible for extracting a specified performance quantity from a regression test.
- The :attr:`@performance_function<reframe.core.pipeline.RegressionMixin.performance_function>` decorator must be passed the units of the quantity to be extracted, and it also takes the optional argument ``perf_key`` to customize the name of the extracted performance variable.
- If ``perf_key`` is not provided, the performance variable will take the name of the decorated performance function.
+ Similarly to a test's :func:`@sanity_function<reframe.core.pipeline.RegressionMixin.sanity_function>`, a performance function is a member function decorated with the :attr:`@performance_function<reframe.core.pipeline.RegressionMixin.performance_function>` decorator, which binds the decorated function to a given unit.
+ These functions can be used by the regression test to extract, measure or compute a given quantity of interest; in this context, the values returned by a performance function are referred to as performance variables.
+ Alternatively, performance functions can be thought of as `tools` available to the regression test for extracting performance variables.
+ By default, ReFrame will attempt to execute all the available performance functions during the test's ``performance`` stage, producing one performance variable out of each of them.
+ These default-generated performance variables are defined in the regression test's :attr:`~reframe.core.pipeline.RegressionTest.perf_variables` attribute during class instantiation, and their default name matches the name of the associated performance function.
+ However, you can customize the name of a default-generated performance variable by passing the ``perf_key`` argument to the :attr:`@performance_function<reframe.core.pipeline.RegressionMixin.performance_function>` decorator of the associated performance function.
- ReFrame identifies all member functions that use the :attr:`@performance_function<reframe.core.pipeline.RegressionMixin.performance_function>` decorator, and will automatically schedule them for execution during the ``performance`` pipeline stage of the test.
In this example, we extract four performance variables, namely the memory bandwidth values for each of the "Copy", "Scale", "Add" and "Triad" sub-benchmarks of STREAM, where each of the performance functions use the :func:`~reframe.utility.sanity.extractsingle` utility function.
For each of the sub-benchmarks we extract the "Best Rate MB/s" column of the output (see below) and we convert that to a float.
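Outside of ReFrame, the same extraction can be sketched with plain ``re``. The sample output below mimics STREAM's "Best Rate MB/s" column; the numbers are made up.

```python
import re

def best_rate(output, kind='Copy'):
    # Plain-Python analogue of extracting field 1 of the matching line
    # and converting it to float, as the performance functions do
    match = re.search(rf'{kind}:\s+(\S+)', output)
    return float(match.group(1))

# Made-up STREAM output fragment
stream_output = (
    'Function    Best Rate MB/s  Avg time     Min time     Max time\n'
    'Copy:           25200.1     0.011869     0.011717     0.012731\n'
    'Triad:          17310.9     0.017057     0.016949     0.017192\n'
)
```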
@@ -731,6 +733,45 @@ The :option:`--performance-report` will generate a short report at the end for e
In the above STREAM example, all four performance functions were almost identical except for a small part of the regex pattern, which led to some code repetition.
+ Even though the performance functions were rather simple and the code repetition limited in this case, this is still not good practice, and the approach would certainly not scale to more complex performance functions.
+ Hence, in this example, we show how to collapse these four performance functions into a single function and how to reuse it to create multiple performance variables.
As shown in the highlighted lines, this example collapses the four performance functions from the previous example into the :func:`extract_bw` function, which is also decorated with the :attr:`@performance_function<reframe.core.pipeline.RegressionMixin.performance_function>` decorator with the units set to ``'MB/s'``.
+ However, the :func:`extract_bw` function now takes the optional argument ``kind``, which selects the STREAM sub-benchmark to extract.
+ By default, this argument is set to ``'Copy'``, because functions decorated with :attr:`@performance_function<reframe.core.pipeline.RegressionMixin.performance_function>` are only allowed to have ``self`` as a non-default argument.
+ Thus, from this performance function definition, ReFrame will generate a single default performance variable named ``extract_bw`` during test instantiation, which reports the performance of the ``Copy`` sub-benchmark.
+ With no further action from our side, ReFrame would simply report the performance of the test based on this default-generated performance variable, but that is not what we are after here.
+ Therefore, we must redefine these default performance variables so that this version of the STREAM test produces the same results as the previous one.
+ As mentioned before, the performance variables (including the default-generated ones) are stored in the :attr:`~reframe.core.pipeline.RegressionTest.perf_variables` dictionary, so all we need to do is redefine this mapping with the desired performance variables, as done in the pre-performance pipeline hook :func:`set_perf_variables`.
+
+ .. tip::
+    Performance functions may also be generated inline using the :func:`~reframe.utility.sanity.make_performance_function` utility as shown below.
+
+    .. code-block:: python
+
+       @run_before('performance')
+       def set_perf_vars(self):
+           self.perf_variables = {
+               'Copy': sn.make_performance_function(
+                   sn.extractsingle(r'Copy:\s+(\S+)\s+.*',
+                                    self.stdout, 1, float),
+                   'MB/s'
+               )
+           }
-----------------------
Adding reference values
-----------------------
@@ -747,22 +788,23 @@ In the following example, we set the reference values for all the STREAM sub-ben
The performance reference tuple consists of the reference value, the lower and upper thresholds expressed as fractional numbers relative to the reference value, and the unit of measurement.
If any of the thresholds is not relevant, :class:`None` may be used instead.
+ Also, the units in the :attr:`~reframe.core.pipeline.RegressionTest.reference` variable are entirely optional, since they were already provided through the :attr:`@performance_function<reframe.core.pipeline.RegressionMixin.performance_function>` decorator.
If any obtained performance value is beyond its respective thresholds, the test will fail with a summary as shown below:
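The threshold semantics can be sketched as follows. This is only an illustration of the ``(ref, lower, upper, unit)`` convention described above, not ReFrame's actual checking code.

```python
def within_reference(value, ref, lower=None, upper=None):
    # lower/upper are fractions relative to ref (e.g. -0.05 and 0.05 mean
    # "within 5% of ref"); None disables the corresponding bound
    if lower is not None and value < ref * (1 + lower):
        return False
    if upper is not None and value > ref * (1 + upper):
        return False
    return True
```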
Also, note how the performance syntax for this example is far more compact in comparison to our first iteration of the STREAM test.
- In that first STREAM example, all four performance functions were almost identical, except for a small part of the regex pattern, which led to a lot of code repetition.
- Hence, this example collapses all four performance functions into a single performance function, which now takes an optional argument to select the quantity to extract.
- Then, the performance variables of the test can be defined by setting the respective entries in the :attr:`~reframe.core.pipeline.RegressionTest.perf_variables` dictionary.
docs/tutorial_tips_tricks.rst (1 addition, 1 deletion)
@@ -90,7 +90,7 @@ As suggested by the warning message, passing :option:`-v` will give you the stac
Debugging deferred expressions
==============================
- Although deferred expression that are used in sanity and performance functions behave similarly to normal Python expressions, you need to understand their `implicit evaluation rules <deferrable_functions_reference.html#implicit-evaluation-of-sanity-functions>`__.
+ Although deferred expressions that are used in sanity and performance functions behave similarly to normal Python expressions, you need to understand their `implicit evaluation rules <deferrable_functions_reference.html#implicit-evaluation-of-sanity-functions>`__.
One of these rules is that :func:`str` triggers the implicit evaluation, so if you try to use the standard :func:`print` function with a deferred expression, you might get unexpected results if that expression is not meant to be evaluated yet.
For this reason, ReFrame offers a sanity function counterpart of :func:`print`, which allows you to safely print deferred expressions.
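A toy model of this behaviour (not ReFrame's actual implementation) shows why :func:`str` is subtle on an unevaluated expression:

```python
class Deferred:
    """Toy deferred expression: evaluation is delayed until requested."""

    def __init__(self, fn):
        self._fn = fn

    def evaluate(self):
        # Explicit evaluation point
        return self._fn()

    def __str__(self):
        # str() implicitly evaluates, mirroring the rule described above;
        # this is what makes a plain print() on a deferred expression risky
        return str(self.evaluate())

expr = Deferred(lambda: 1 + 1)
```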