Re-cost indexByteString and add cost model visualization #7700
Conversation
indexByteString denotation change: benchmark results

What changed

PR #7699 simplified the denotation. Old:

```haskell
indexByteStringDenotation xs n = do
    unless (n >= 0 && n < BS.length xs) $
        fail "Index out of bounds"
    pure $ BS.index xs n
```

New:

```haskell
indexByteStringDenotation xs n =
    maybe (fail "Index out of bounds") pure $ BS.indexMaybe xs n
```

GHC Core verification

effectfully suggested verifying that the intermediate Maybe compiles away; a GHC Core dump confirms that it does.

Costing benchmark comparison

I ran benchmarks on plutus-bench for both old and new code to separate the denotation change from infrastructure drift.
The 14x gap between the historical value and today's numbers comes from changes in the benchmarking environment since March 2024 (machine or Nix config, not the code). The denotation change itself is a ~3.5% reduction, consistent with dropping one redundant bounds check from a constant-time operation.
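For reference, the behavioral equivalence of the two denotations can be sketched outside the builtin machinery, using `Maybe` in place of the evaluator's failure monad (`oldIndex`/`newIndex` are illustrative names, not the actual Plutus code):

```haskell
import qualified Data.ByteString as BS
import Data.Word (Word8)

-- Old shape: explicit bounds check, then the unchecked BS.index.
oldIndex :: BS.ByteString -> Int -> Maybe Word8
oldIndex xs n
  | n >= 0 && n < BS.length xs = Just (BS.index xs n)
  | otherwise                  = Nothing

-- New shape: BS.indexMaybe performs the single bounds check itself.
newIndex :: BS.ByteString -> Int -> Maybe Word8
newIndex = BS.indexMaybe

main :: IO ()
main = do
  let bs = BS.pack [10, 20, 30]
  -- Both agree on in-range and out-of-range indices.
  print (and [oldIndex bs i == newIndex bs i | i <- [-2 .. 4]])
```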
Re-ran costing benchmarks on plutus-bench for IndexByteString after the denotation change in #7699. Updated CSV with 150 fresh data points and CPU cost parameters in all three model variants (A, B, C): 13,169 -> 183,300 picoseconds (constant_cost). The increase is due to benchmarking environment drift since the original March 2024 measurements, not the code change itself. The denotation change shows a ~3.5% improvement vs the old code on the same machine.
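A quick arithmetic check of the figures above (plain Haskell; the numbers are copied from the comment):

```haskell
main :: IO ()
main = do
  let oldPs = 13169  :: Double  -- constant_cost from March 2024, picoseconds
      newPs = 183300 :: Double  -- constant_cost re-measured now, picoseconds
  -- Drift factor between the two measurements: ~13.9x,
  -- matching the "14x gap" attributed to the benchmarking environment.
  print (newPs / oldPs)
```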
Summary
- Re-ran costing benchmarks for IndexByteString on plutus-bench after the denotation change in "Remove double range check in indexByteString" #7699
- Added an IndexByteString visualization to the cost-models site
- Relates to #7469
Context
PR #7699 switched indexByteString from a manual bounds check + BS.index (double check) to BS.indexMaybe (single check). I verified via GHC Core dump that the intermediate Maybe compiles away entirely, and ran benchmarks on both old and new code to attribute the parameter change. See the PR comment for full benchmark results.
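As a sketch of the verification step: compiling a small wrapper with -ddump-simpl makes it easy to check whether the Maybe from indexMaybe is erased. The module and function names here are illustrative, not the actual Plutus denotation:

```haskell
-- Compile with:  ghc -O2 -ddump-simpl -dsuppress-all IndexCheck.hs
-- In the dumped Core, the Just/Nothing constructors from indexMaybe should
-- not appear: GHC inlines `maybe` and `indexMaybe`, leaving one bounds test.
import qualified Data.ByteString as BS
import Data.Word (Word8)

indexOrFail :: BS.ByteString -> Int -> Either String Word8
indexOrFail xs n =
  maybe (Left "Index out of bounds") Right (BS.indexMaybe xs n)

main :: IO ()
main = do
  print (indexOrFail (BS.pack [10, 20, 30]) 1)
  print (indexOrFail (BS.pack [10, 20, 30]) 5)
```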