Test: Final SVG generation fix - end-to-end test #23
base: feature/sql-diagram-automation
Conversation
This reuses the logic that was added to handle SET. The only difference is that RESET always drops the StorageParam element. Release note: None
Release note: none. Epic: none.
Picks up cockroachdb/go#8 Release note: none. Epic: none.
159160: intentresolver: skip TestIntentResolutionUnavailableRange under duress r=wenyihu6 a=arulajmani Closes cockroachdb#159050 Release note: None Co-authored-by: Arul Ajmani <arulajmani@gmail.com>
There have been a couple of incidents recently where workloads running multi-day, million-statement transactions cause SQL stats to have unbounded memory growth. This happens because SQL stats are buffered by sessionID until a transaction commits, at which point the buffer is flushed and the statement stats are recorded in the SQL stats subsystem. When there is a long-running transaction with many statements, the buffer is never flushed and continues to grow. To fix this, we have introduced a new "flush" event that forces the SQL stats ingester to flush the SQL stats in the current session if a certain threshold is met. This threshold is currently set to the value of a new cluster setting: `sql.metrics.transaction_details.max_statement_stats`. If this threshold is met, the stats are automatically flushed. The side effect is that these stats will not have an associated transaction fingerprint ID, because the transaction fingerprint ID isn't finalized until a transaction is complete, and we don't know it at the time of this forced flush. There is also no way to "backfill" or set the transaction fingerprint ID once the statement stat has been recorded to the SQL stats subsystem. This commit also includes a change to the `SQLStatsIngester.flushBuffer` method to record transaction stats even when there are no statement stats in the buffer, which is the case when the buffer has been force-flushed in the scenario mentioned above. Fixes: cockroachdb#158800 Epic: None Release note (bug fix): Addresses a bug with unbounded memory growth when long-running transactions contain many statements. In this case, the SQL stats system will automatically flush these before the transaction commits. The side effect is that these statement statistics will no longer have an associated transaction fingerprint ID.
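A minimal sketch of the force-flush idea described above, assuming a per-session buffer and an illustrative threshold constant standing in for the `sql.metrics.transaction_details.max_statement_stats` cluster setting; the type and method names here are hypothetical, not CockroachDB's actual ones:

```go
package main

import "fmt"

// maxStatementStats stands in for the cluster setting
// sql.metrics.transaction_details.max_statement_stats.
const maxStatementStats = 3

// stmtStat is a stand-in for a recorded statement statistic.
type stmtStat struct{ fingerprint string }

// sessionBuffer buffers statement stats per session until the transaction
// commits, force-flushing once the threshold is reached.
type sessionBuffer struct {
	stats   []stmtStat
	flushed [][]stmtStat // batches that reached the SQL stats subsystem
}

func (b *sessionBuffer) record(s stmtStat) {
	b.stats = append(b.stats, s)
	if len(b.stats) >= maxStatementStats {
		// Forced flush: these stats lose their transaction fingerprint ID,
		// since the transaction has not finished yet.
		b.flush()
	}
}

func (b *sessionBuffer) flush() {
	if len(b.stats) == 0 {
		return
	}
	b.flushed = append(b.flushed, b.stats)
	b.stats = nil
}

func main() {
	var b sessionBuffer
	for i := 0; i < 7; i++ {
		b.record(stmtStat{fingerprint: fmt.Sprintf("stmt-%d", i)})
	}
	b.flush() // commit-time flush of the remainder
	fmt.Println(len(b.flushed)) // 3 batches: 3 + 3 + 1
}
```

The key trade-off the commit message names is visible here: the mid-transaction batches are recorded before commit, so they cannot carry the final transaction fingerprint ID.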
Reverting this commit for now because randomized decommission is failing due to timeouts. We could change the code to retry only on throttled stores, but I need to dig deeper. I’m uncertain why these errors don’t fail the test immediately and instead cause long waits without this commit. I’ll investigate separately. Given this behaviour has been on releases for a while, reverting seems okay. This reverts commit 5d90692. Fixes: cockroachdb#157973
The TestCheckRestartSafe_Criticality test now wraps its expectations in a SucceedsSoon. We allow the TestCheckRestartSafe_Integration test to use DRPC. Flakiness can no longer be reproduced on gceworker. Resolves: cockroachdb#158790 Resolves: cockroachdb#155111 Epic: None Release note: None
158931: obs, kv, sql: add workloadID to profile tags r=tbg,jasonlmfong a=dhartunian We introduce `pkg/obs/workloadid`, which defines a common workloadID definition, subject to extensive future modification, that we can use to tag CPU profiles and execution trace regions. Currently, the workloadID is either a static value or a statement fingerprint ID. Where appropriate, we've tagged CPU profiles with a workload identifier. When tracing is enabled on a request, we attempt to include the workloadID in the region tag in the execution trace. This path is a bit perf-sensitive, so we only want to incur the cost of allocating the extra string when we're already paying the cost of tracing. It is expected that the combination of statement diagnostic bundle capture and execution tracing can then be cross-referenced. Epic: CRDB-55080 Resolves: CRDB-55928 Resolves: CRDB-55924 Release note: None

159113: kvcoord: optimize txn write buffer for read-only txns r=yuzefovich a=yuzefovich In the txn write buffer we need to merge the ScanResponse coming from the server with any overlapping writes that we've buffered. This requires us to allocate `respMerger.batchResponses`, which is the "staging" area of the merged response. However, if we don't have any overlapping buffered writes, then we simply copy the server response into that staging area. This commit adds an optimization to avoid this extra copy in a special but common case when the buffer is empty (i.e. we're in a read-only txn). We could extend the optimization further, but that is left as a TODO. Epic: None Release note: None

Co-authored-by: David Hartunian <davidh@cockroachlabs.com> Co-authored-by: Yahor Yuzefovich <yahor@cockroachlabs.com>
Release note: none. Epic: none.
This adds the ability to dump the child metrics introduced in cockroachdb#158148. These child metrics are discovered in the tsdump through their unique resolution and a limited list of qualified metric names. Epic: CRDB-55079 Release note: None
This reverts commit 0b2c0ce.
This commit hides the `sql.stats.canary_fraction` cluster setting and `canary_stats_mode` session variable since the canary statistics feature is not yet complete. The table storage parameter `sql_stats_canary_window` remains unchanged because its default value of 0 prevents it from appearing in SHOW CREATE TABLE output, which is the primary way users would discover this parameter outside of documentation. Fixes cockroachdb#159080 Release note: None
Previously, restore operations could end up generating schema jobs without session data. While most schema changes do not require this, schema changes with refresh materialized view require this field to be set up. To address this, this patch sets up session data in the schema change job to prevent panics. Fixes: cockroachdb#156515 Release note (bug fix): Addressed a bug where schema changes could fail after a RESTORE due to missing session data.
This storage parameter is special, so this requires logic to use the pre-existing TableSchemaLocked element rather than the normal storage param element. As part of this, we partially revert bc3f228 and refactor the check to be a little simpler since we don't actually need to handle the mixed version case now. Release note: None
Fixes cockroachdb#159174 Release note (bug fix): The `ascii` built-in function now returns `0` when the input is the empty string instead of an error.
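The fixed behavior of the `ascii` built-in can be mirrored in a few lines of Go (a sketch of the semantics, not CockroachDB's actual implementation): the empty string yields 0 instead of an error, and otherwise the code point of the first character is returned.

```go
package main

import "fmt"

// asciiOf mirrors the fixed ascii built-in: empty input returns 0;
// otherwise the code point of the first character is returned.
func asciiOf(s string) int {
	if s == "" {
		return 0
	}
	for _, r := range s {
		return int(r) // first rune, decoded as UTF-8
	}
	return 0 // unreachable
}

func main() {
	fmt.Println(asciiOf(""))  // 0
	fmt.Println(asciiOf("A")) // 65
}
```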
Epic: REL-3909 Resolves: CRDB-57502 Release note: None
…achdb#159100 cockroachdb#159120 cockroachdb#159165

158390: sql: support table storage params change in the declarative schema changer r=rafiss a=shghasemi

### sql: support table storage params change in the declarative schema changer
Previously, storage parameters were set and reset using the legacy schema changer. With this change, setting most non-backfilling table storage params will use the declarative schema changer. Setting ttl-related params, schema_locked, and infer_rbr_region_col_using_constraint is not implemented yet.

### schemachanger: handle RESET storage param in declarative schema changer
This reuses the logic that was added to handle SET. The only difference is that RESET always drops the StorageParam element.

### schemachanger: handle manual schema_locked setting in declarative
This storage parameter is special, so this requires logic to use the pre-existing TableSchemaLocked element rather than the normal storage param element. As part of this, we partially revert bc3f228 and refactor the check to be a little simpler since we don't actually need to handle the mixed-version case now.

Epic: CRDB-31281 Fixes cockroachdb#155990 Release note: None

159075: sqlstats: skip TestSQLStatsDiscardStatsOnFingerprintLimit under race r=dhartunian a=dhartunian Resolves: cockroachdb#158978 Epic: None Release note: None

159096: cli: skip some zip tests under deadlock r=dhartunian a=dhartunian Resolves: cockroachdb#158973 Resolves: cockroachdb#158375 Resolves: cockroachdb#158467 Resolves: cockroachdb#158437 Epic: None Release note: None

159100: azure: set default of cloudstorage.azure.try.timeout to 0 r=jeffswenson a=msbutler This setting was previously set to 1m to deflake the TestAzureFaultInjection unit test, but unfortunately this also led to production backup flakes in the azure backup roachtest. Instead of reenabling this setting, we should teach the resuming reader to time out hanging reads (reference issue cockroachdb#159098). Epic: none Release note: none

159120: build: bump go fork SHA for runtime.Yield and goroutine profiles r=dt a=dt

159165: Revert "decommission: retry on errors for AllocatorCheckRange" r=wenyihu6 a=wenyihu6

**Revert "decommission: retry on errors for AllocatorCheckRange"**

Reverting this commit for now because randomized decommission is failing due to timeouts. We could change the code to retry only on throttled stores, but I need to dig deeper. I’m uncertain why these errors don’t fail the test immediately and instead cause long waits without this commit. I’ll investigate separately. Given this behaviour has been on releases for a while, reverting seems okay. This reverts commit 5d90692. Fixes: cockroachdb#157973

---

**Revert "kv: add TestDecommissionPreCheckRetryThrottledStores"**

This reverts commit 0b2c0ce. Fixes: cockroachdb#157973

Co-authored-by: Shadi Ghasemitaheri <shadi.ghasemitaheri@cockroachlabs.com> Co-authored-by: Rafi Shamim <rafi@cockroachlabs.com> Co-authored-by: David Hartunian <davidh@cockroachlabs.com> Co-authored-by: Michael Butler <butler@cockroachlabs.com> Co-authored-by: David Taylor <davidt@davidt.io> Co-authored-by: wenyihu6 <wenyi@cockroachlabs.com>
Recently we have been seeing the test "Wait_with_timeout" as flaky, so we will add it to the ignore list. Release note: None Fixes: cockroachdb#158495
…achdb#159169

158351: roachtest: kv with long-running writer r=stevendanna a=miraradeva This commit includes a new roachtest that measures the latency of read-only non-locking requests in the presence of a long-running write-only transaction. The test maximizes contention by using a zipfian distribution on a small set of keys. It runs two concurrent KV workloads: 1. Long-running low-priority write-only transactions (running one at a time). Each transaction writes 100 keys and commits. 2. Read-only requests at normal priority. When a read encounters an intent, it will push the transaction based on priority and resolve (rewrite) the intent. Measuring the latency of the reads will help expose their blocking behavior, which we expect to change with future improvements in intent resolution, latching and transaction pushing (all part of non-blocking reads work). Fixes: cockroachdb#158348 Release note: None

158716: pkg/cli: allow tsdump to also dump the labeled child metrics r=jasonlmfong a=jasonlmfong This adds the ability to dump the child metrics introduced in cockroachdb#158148. These child metrics are discovered in the tsdump through their unique resolution and a limited list of qualified metric names. Epic: CRDB-55079 Release note: None

158967: util: fix unique test r=aerfrei a=Lloyd-Pottiger Fix one flaky test. Release note: none. Epic: none.

159169: server: deflake TestCheckRestartSafe tests r=shubhamdhama a=dhartunian The TestCheckRestartSafe_Criticality test now wraps its expectations in a SucceedsSoon. We allow the TestCheckRestartSafe_Integration test to use DRPC. Flakiness can no longer be reproduced on gceworker. Resolves: cockroachdb#158790 Resolves: cockroachdb#155111 Epic: None Release note: None

Co-authored-by: Mira Radeva <mira@cockroachlabs.com> Co-authored-by: Jason Fong <jason.fong@cockroachlabs.com> Co-authored-by: Lloyd-Pottiger <yan1579196623@gmail.com> Co-authored-by: David Hartunian <davidh@cockroachlabs.com>
Use the fs.BatchEvalReadCategory in microbenchmarks. This category is used for the latency sensitive work we care about, and setting this category disables open iterator tracking. Epic: none Release note: none
Move this simple function to avoid a dependency cycle in an upcoming commit. Release note: None
Use the new (*testing.B).Loop helper where relevant to run b.N times, updating the timer to only measure the b.N iterations. Epic: none Release note: none
Use (*testing.B).Context instead of constructing a separate background context in the storage microbenchmarks. Epic: none Release note: none
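The two benchmark helpers mentioned in the commits above can be exercised outside a test binary via `testing.Benchmark`. A minimal sketch, assuming Go 1.24+ (where `(*testing.B).Loop` and `(*testing.B).Context` were added):

```go
package main

import (
	"fmt"
	"testing"
)

func main() {
	// testing.Benchmark runs a benchmark function directly, without the
	// `go test` harness, and reports the iteration count it settled on.
	res := testing.Benchmark(func(b *testing.B) {
		ctx := b.Context() // canceled automatically when the benchmark ends
		_ = ctx            // would be passed to the code under test
		sum := 0
		// b.Loop replaces the classic `for i := 0; i < b.N; i++` loop and
		// keeps the timer scoped to the measured iterations.
		for b.Loop() {
			sum++ // stand-in for the measured work
		}
	})
	fmt.Println(res.N > 0)
}
```

This mirrors the cleanup the commits describe: no manual timer resets, and no separately constructed background context.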
The schemachanger previously attempted to predict whether a DROP COLUMN operation would fail due to a foreign key (FK) dependency. This logic has proven to be unreliable and difficult to maintain correctly. Rather than investing more effort into fixing the latest edge cases, we’ve decided to remove the FK dependency detection logic altogether. This simplifies the codebase. Resolves: cockroachdb#159034 Release note: none
159083: sql: hide canary stats settings pending feature completion r=ZhouXing19 a=ZhouXing19 This commit hides the `sql.stats.canary_fraction` cluster setting and `canary_stats_mode` session variable since the canary statistics feature is not yet complete. The table storage parameter `sql_stats_canary_window` remains unchanged because its default value of 0 prevents it from appearing in SHOW CREATE TABLE output, which is the primary way users would discover this parameter outside of documentation. Fixes cockroachdb#159080 Release note: None

159167: roachtest: deflake npgsql r=fqazi a=fqazi Recently we have been seeing the test "Wait_with_timeout" as flaky, so we will add it to the ignore list. Release note: None Fixes: cockroachdb#158495

159178: sql: ascii built-in should not error with an empty string r=mgartner a=mgartner Fixes cockroachdb#159174 Release note (bug fix): The `ascii` built-in function now returns `0` when the input is the empty string instead of an error.

Co-authored-by: ZhouXing19 <zhouxing@uchicago.edu> Co-authored-by: Faizan Qazi <faizan@cockroachlabs.com> Co-authored-by: Marcus Gartner <marcus@cockroachlabs.com>
Generates a query ID for sub-statements executed in UDFs and SPs. This is used by the insights system to provide execution-specific insights. Epic: None Release note: None
159057: besteffort: add besteffort utility r=jeffswenson a=jeffswenson This adds a package that can be used to mark blocks of code as best effort. This is intended to replace most instances of error logging in the dr codebase. Part of: cockroachdb#152677 Release note: none Co-authored-by: Jeff Swenson <jeffswenson@betterthannull.com>
In v25.4.0 we introduced an optimization for some types of generic query plans which converts parameterized lookup joins into placeholder scans. Prior to this commit, the spans of these placeholder scans could contain scalar expressions that were neither constants nor placeholders, which execbuilder does not support. This has been fixed by being stricter in the application of this rule. Fixes cockroachdb#159124 Release note (bug fix): A bug has been fixed which could cause prepared statements to fail with the error message "non-const expression" when they contained filters with stable functions. This bug has been present since 25.4.0.
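The tightened condition can be sketched as a predicate over span expressions: conversion to a placeholder scan is allowed only if every expression is a constant or a placeholder. The types below are illustrative miniatures, not the optimizer's real ones:

```go
package main

import "fmt"

// expr is a hypothetical miniature of a scalar expression in a span.
type expr interface{ isExpr() }

type constExpr struct{ val int }
type placeholderExpr struct{ idx int }
type stableFuncExpr struct{ name string } // e.g. now(): stable, not foldable at plan time

func (constExpr) isExpr()       {}
func (placeholderExpr) isExpr() {}
func (stableFuncExpr) isExpr()  {}

// canConvertToPlaceholderScan reports whether every span expression is a
// constant or a placeholder — the stricter condition the fix enforces.
func canConvertToPlaceholderScan(span []expr) bool {
	for _, e := range span {
		switch e.(type) {
		case constExpr, placeholderExpr:
			// acceptable span element
		default:
			return false // e.g. a stable function: reject the conversion
		}
	}
	return true
}

func main() {
	ok := canConvertToPlaceholderScan([]expr{constExpr{1}, placeholderExpr{0}})
	bad := canConvertToPlaceholderScan([]expr{placeholderExpr{0}, stableFuncExpr{"now"}})
	fmt.Println(ok, bad) // true false
}
```

A plan whose span fails the predicate simply keeps the parameterized lookup join instead of converting, which is why the fix avoids the "non-const expression" error.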
159187: storage: small microbenchmark improvements r=jbowens a=jbowens **storage: use fs.BatchEvalReadCategory in benchmarks** Use the fs.BatchEvalReadCategory in microbenchmarks. This category is used for the latency sensitive work we care about, and setting this category disables open iterator tracking. **storage: use `(*testing.B).Loop` where relevant** Use the new (*testing.B).Loop helper where relevant to run b.N times, updating the timer to only measure the b.N iterations. **storage: use `(*testing.B).Context`** Use (*testing.B).Context instead of constructing a separate background context in the storage microbenchmarks. 159188: workload: remove FK dependency check in schemachanger drop column r=spilchen a=spilchen The schemachanger previously attempted to predict whether a DROP COLUMN operation would fail due to a foreign key (FK) dependency. This logic has proven to be unreliable and difficult to maintain correctly. Rather than investing more effort into fixing the latest edge cases, we’ve decided to remove the FK dependency detection logic altogether. This simplifies the codebase. Resolves: cockroachdb#159034 Release note: none Co-authored-by: Jackson Owens <jackson@cockroachlabs.com> Co-authored-by: Matt Spilchen <matt.spilchen@cockroachlabs.com>
This test is flaky and superfluous as our mixed version roachtests already test backup with offline nodes. Fixes: cockroachdb#158654 Release note: None
- Add Java 11 setup (required for the Railroad JAR)
- Download and configure the Railroad Diagram Generator JAR
- Set the COCKROACH_REQUIRE_RAILROAD environment variable
- Add a fallback build command with an explicit JAR location

This should enable HTML/SVG diagram generation on GitHub-hosted runners. Epic: None Release note: None
The unzip was conflicting with the existing LICENSE file in the repo. The archive is now extracted in a temporary directory to avoid conflicts. Epic: None Release note: None
Adding diagnostic output to understand why HTML files aren't being generated. Epic: None Release note: None
Testing the workflow with debug output to see why HTML files aren't generated. Epic: None Release note: None
The dev wrapper requires initialization, which doesn't work well on GitHub-hosted runners. Bazel is now used directly for:
- BNF generation
- SVG diagram building
- Validation tests

This should fix the HTML generation issue. Epic: None Release note: None
- Fix bazel output path detection using the find command
- Add Railroad JAR setup to the bazel output base
- Update artifact-copying logic with proper HTML directory detection

Should now correctly find and copy all 313 HTML files.
SQL Diagram Generation Report
✅ Validation Passed
✔️ No diagram changes detected
SKIP DOC Warnings: The following grammar rules are suppressed from documentation:
This comment was generated by the SQL Diagram CI workflow.
- Remove manual Railroad JAR download and setup
- Let Bazel fetch the JAR automatically via http_archive in WORKSPACE
- Add --verbose_failures to help debug any remaining issues

Should now properly generate HTML/SVG diagram files.
SQL Diagram Generation Report
✅ Validation Passed
✔️ No diagram changes detected
SKIP DOC Warnings: The following grammar rules are suppressed from documentation:
This comment was generated by the SQL Diagram CI workflow.
- Pass JAVA_HOME, PATH, and COCKROACH_REQUIRE_RAILROAD to bazel via --action_env
- Add debug output to verify the Java setup
- This fixes the 'Unable to locate a Java Runtime' error in docgen
- Ensures the Railroad JAR can be executed to generate HTML diagrams
SQL Diagram Generation Report
✔️ No diagram changes detected
This comment was generated by the SQL Diagram CI workflow.
- Build docgen with bazel but run it directly outside the sandbox
- Fetch the Railroad JAR from bazel's external directory
- Add a timeout to prevent hanging
- Generate HTML files to the artifacts/svg_output directory

This avoids the sandbox environment issues with the Java runtime.
SQL Diagram Generation Report
✅ Validation Passed
✔️ No diagram changes detected
SKIP DOC Warnings: The following grammar rules are suppressed from documentation:
This comment was generated by the SQL Diagram CI workflow.
- Remove partition_by and var_set_list from inline parameters
- Add alter_index_cmds to the unlink list

This simplifies the ALTER INDEX diagram visualization. Epic: None Release note: None
SQL Diagram Generation Report
✅ Validation Passed
✔️ No diagram changes detected
SKIP DOC Warnings: The following grammar rules are suppressed from documentation:
This comment was generated by the SQL Diagram CI workflow.
Adding more inline parameters to the restore_options spec to create actual diagram output changes that should trigger PR creation to the generated-diagrams repository.
Removed several inline elements and simplified the replace/unlink maps to create actual diagram output changes that will trigger PR creation to the generated-diagrams repository.

Changes:
- Removed inline: alter_unsplit_stmt, alter_zone_table_stmt, alter_table_locality_stmt, set_zone_config, var_set_list
- Simplified the replace map to only map relation_expr to table_name
- Reduced the unlink list to only table_name
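The inline/replace/unlink knobs referenced in the commits above can be pictured as a per-statement spec struct that the docgen tool consumes. This is a sketch with field names modeled loosely on cockroachdb's `pkg/cmd/docgen` specs; treat the exact shape as an assumption:

```go
package main

import "fmt"

// stmtSpec is an illustrative per-statement diagram spec.
type stmtSpec struct {
	name    string
	inline  []string          // grammar rules expanded in place in the diagram
	replace map[string]string // rule-text substitutions applied to the grammar
	unlink  []string          // rule names rendered without hyperlinks
}

func main() {
	// The simplified spec described in the commit: most inline elements
	// removed, one replacement kept, one unlinked name kept.
	spec := stmtSpec{
		name:    "alter_zone_stmt",
		inline:  nil, // alter_unsplit_stmt, set_zone_config, etc. removed
		replace: map[string]string{"relation_expr": "table_name"},
		unlink:  []string{"table_name"},
	}
	fmt.Println(len(spec.inline), spec.replace["relation_expr"], spec.unlink)
}
```

Shrinking these maps changes the rendered railroad diagram, which is exactly what the commit relies on to exercise the downstream PR-creation path.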
SQL Diagram Generation Report
✔️ No diagram changes detected
This comment was generated by the SQL Diagram CI workflow.
…nges The previous change detection using 'git diff HEAD' was broken because it compared against our own repository's HEAD, not the generated-diagrams repository's. This meant changes were never detected. Now we always proceed to the sync job and let the create-pull-request action handle the actual diff detection; it won't create a PR if there are no changes.
SQL Diagram Generation Report
✔️ No diagram changes detected
This comment was generated by the SQL Diagram CI workflow.
SQL Diagram Generation Report
✅ Validation Passed
🔄 Diagram changes detected
Changed Diagrams
SKIP DOC Warnings: The following grammar rules are suppressed from documentation:
This comment was generated by the SQL Diagram CI workflow.
The generated-diagrams repository doesn't have our feature branch, so we must always use 'master' as the target branch when checking out and creating PRs to that repository.
SQL Diagram Generation Report
✅ Validation Passed
🔄 Diagram changes detected
Changed Diagrams
SKIP DOC Warnings: The following grammar rules are suppressed from documentation:
This comment was generated by the SQL Diagram CI workflow.
The manifest.json file is not needed for the diagram generation pipeline. Removing it simplifies the artifacts and PR content.
Prevent accidentally committing downloaded diagram artifacts
SQL Diagram Generation Report
✅ Validation Passed
🔄 Diagram changes detected
Changed Diagrams
SKIP DOC Warnings: The following grammar rules are suppressed from documentation:
This comment was generated by the SQL Diagram CI workflow.
Summary
Testing the complete SQL diagram automation pipeline with fixes for HTML/SVG generation.
Changes
Expected Results
Testing
This PR triggers the docgen-diagrams-ci workflow to test the complete pipeline end-to-end.