# Write how-to doc on Dataflow cost benchmarking #33702

jrmccluskey merged 8 commits into apache:master
## Conversation
assign set of reviewers

**github-actions (bot):** Assigning reviewers: R: @tvalentyn for label python.
> ### Choosing a Pipeline
> Pipelines that are worth benchmarking in terms of performance and cost have a few straightforward requirements.
>
> 1. The transforms used in the pipeline should be native to Beam *or* be lightweight and readily available in the given pipeline

**tvalentyn:** Regarding "lightweight and readily available": how do we know if this requirement is met?

**jrmccluskey:** In this case I mean "short, simple code contained in the pipeline's own source if it isn't a native Beam transform." This is a somewhat subjective criterion, but the idea is to minimize the performance impact of code that isn't Beam-provided, since custom code is more variable (and generally outside our control).
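To illustrate the kind of "lightweight" custom code meant in the answer above, here is a hypothetical formatting helper (not from the PR itself): a few lines of pure Python with no external dependencies, the sort of function you could wrap in a `Map` step without meaningfully affecting the benchmark.

```python
# Hypothetical example of a "lightweight" helper transform body: short,
# self-contained, and dependency-free, so it adds negligible overhead
# next to Beam's native transforms.
def format_count(word_and_count):
    """Format a (word, count) pair as a single output line."""
    word, count = word_and_count
    return f"{word}: {count}"

print(format_count(("beam", 3)))  # -> "beam: 3"
```

By contrast, a transform that pulls in a heavyweight dependency or performs its own I/O would fail the "lightweight" test, because its performance characteristics can shift independently of Beam.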
> 1. The pipeline itself should run on a consistent data set and have consistent internals (such as model versions for `RunInference` workloads.)

**tvalentyn:** Regarding "such as model versions for `RunInference` workloads": do you mean example benchmarks of RunInference workloads? Or what is "model versions"? Should this include a link?

**jrmccluskey:** This refers to keeping the same version of a model in a `RunInference` pipeline rather than doing something like automatically updating to the latest version. A fully specified benchmark should run on an identical configuration every time, from details like model version and framework all the way up to the GCP region the job runs in. I'll see if I can nail down better wording.
**tvalentyn:** Regarding "have consistent internals": how do we know if this requirement is met?

**jrmccluskey:** Same sentiment as above: every part of the environment that can be specified needs to be identical from run to run, or clearly marked if something changes (most commonly, Beam incrementing a dependency version that impacts the benchmark).
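A sketch of what "fully specified" could look like in practice. Everything here is a hypothetical illustration, not configuration from the PR: the point is that each knob is pinned explicitly rather than left to a default that can drift between runs.

```yaml
# Hypothetical illustration: pin every input the benchmark depends on,
# so each run executes against an identical configuration.
#
# Dependency pins (requirements.txt-style), held fixed between runs:
#   torch==2.1.2
#   transformers==4.36.0
#
# Workflow-level pins for the job environment:
env:
  GCP_REGION: us-central1        # fixed region, never "closest available"
  MACHINE_TYPE: n1-standard-2    # fixed worker shape
  MODEL_VERSION: "3"             # explicit model version for RunInference
```

If any of these values changes (for example, Beam bumping a pinned dependency), the change should be recorded alongside the benchmark results so a shift in cost can be attributed.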
> ```yaml
> - name: Run wordcount on Dataflow
>   uses: ./.github/actions/gradle-command-self-hosted-action
>   timeout-minutes: 30
> ```

**tvalentyn:** What happens to test runs that time out? Are the collected metrics ignored (because they are likely incorrect)? Will the failure surface somewhere?

**jrmccluskey:** If the run times out, the workflow itself will fail, so we'd get a surfaced error in GitHub. The metrics would likely never surface in that situation, since the workflow is more likely stuck in the pipeline step than in the metrics gathering/writing step.
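For context on what the collected metrics feed into once a run completes, here is a rough sketch of turning resource-usage totals into a single cost figure. The rates and function names are hypothetical placeholders, not Beam's actual benchmarking code or real Dataflow pricing.

```python
# Hypothetical cost estimate from Dataflow resource-usage metrics.
# Rates below are illustrative placeholders, not real pricing.
VCPU_HOUR_RATE = 0.056      # assumed $/vCPU-hour
MEM_GB_HOUR_RATE = 0.0035   # assumed $/GB-hour

def estimate_cost(vcpu_hours, mem_gb_hours):
    """Combine per-resource usage totals into one dollar estimate."""
    return vcpu_hours * VCPU_HOUR_RATE + mem_gb_hours * MEM_GB_HOUR_RATE

print(estimate_cost(10.0, 37.5))
```

A timed-out run never reaches this step, which is why its metrics simply don't appear rather than appearing as suspicious outliers.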
Co-authored-by: tvalentyn <tvalentyn@users.noreply.github.com>

**jrmccluskey:** Updated the doc with some more elaborate wording.
Commits:

* Write how-to doc on dataflow cost benchmarking
* trailing whitespace
* add streaming information, links
* add context for BQ stuff
* remove trailing whitespace
* Apply suggestions from code review (Co-authored-by: tvalentyn <tvalentyn@users.noreply.github.com>)
* Elaborate on requirements
* remove errant char

Co-authored-by: tvalentyn <tvalentyn@users.noreply.github.com>
Creates a quick overview of how to write cost benchmarks within the current framework.