.pr_agent_accepted_suggestions
| PR 17076 (2026-02-10) |
[possible issue] Fix HTML entity encoding order
✅ Fix HTML entity encoding order
Fix a double-encoding bug by reordering the Replace method calls. The ampersand character & should be replaced first.
third_party/dotnet/devtools/src/generator/ProtocolDefinition/ProtocolDefinitionItem.cs [14-17]
get => InitialDescription?
+ .Replace("&", "&amp;")
 .Replace("<", "&lt;")
- .Replace(">", "&gt;")
- .Replace("&", "&amp;");
+ .Replace(">", "&gt;");

Suggestion importance[1-10]: 8
__
Why: The suggestion correctly identifies a bug in the PR where replacing & after other entities causes double-encoding, and provides a correct fix by reordering the Replace calls.
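The ordering bug generalizes beyond this file. A minimal Python sketch (hypothetical helper names, not the Selenium generator code) showing why the ampersand must be escaped first:

```python
def escape_html(text: str) -> str:
    # "&" is handled before "<" and ">"; otherwise the "&" inside the
    # freshly inserted "&lt;"/"&gt;" entities would be re-escaped.
    return (text
            .replace("&", "&amp;")
            .replace("<", "&lt;")
            .replace(">", "&gt;"))

def escape_html_buggy(text: str) -> str:
    # Escaping "&" last corrupts entities emitted by the earlier replaces.
    return (text
            .replace("<", "&lt;")
            .replace(">", "&gt;")
            .replace("&", "&amp;"))
```

With the buggy ordering, `"a<b"` becomes `"a&amp;lt;b"` instead of `"a&lt;b"`.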
| PR 17074 (2026-02-10) |
[general] Use a distinct log prefix for failures
✅ Use a distinct log prefix for failures
Use a different log prefix for exceptions, such as !!, to distinguish them from successful response logs which use <<.
dotnet/src/webdriver/Remote/HttpCommandExecutor.cs [467-475]
catch (Exception ex)
{
if (_logger.IsEnabled(LogEventLevel.Trace))
{
- _logger.Trace($"<< [{requestId}] {ex}");
+ _logger.Trace($"!! [{requestId}] Request failed: {ex}");
}
throw;
}

Suggestion importance[1-10]: 4
__
Why: This is a good suggestion for improving log clarity by using a distinct prefix for exceptions versus successful responses, which makes debugging easier.
| PR 17068 (2026-02-09) |
[possible issue] Fix class name casing
✅ Fix class name casing
Correct the class name DownLoadsTests to DownloadsTests to match the file name and standard naming conventions.
dotnet/test/common/DownloadsTests.cs [32]
-public class DownLoadsTests : DriverTestFixture
+public class DownloadsTests : DriverTestFixture

Suggestion importance[1-10]: 6
__
Why: The PR introduces a typo in the class name DownLoadsTests, which should be DownloadsTests to match the file name and C# naming conventions.
| PR 17063 (2026-02-08) |
[possible issue] Validate truncation length is non-negative
✅ Validate truncation length is non-negative
In the WithTruncation method, add a check to throw an ArgumentOutOfRangeException if the provided length is negative to prevent downstream errors.
dotnet/src/webdriver/Internal/Logging/LogContext.cs [131-135]
public ILogContext WithTruncation(int length)
{
+ if (length < 0)
+ {
+ throw new ArgumentOutOfRangeException(nameof(length), "Truncation length must be non-negative.");
+ }
+
_truncationLength = length;
return this;
}

Suggestion importance[1-10]: 7
__
Why: The suggestion correctly identifies that a negative truncation length will cause a runtime exception and proposes adding a guard clause, which is a good practice for input validation.
[learned best practice] Remove magic-number marker math
✅ Remove magic-number marker math
Build the marker string first and use its .Length, and guard truncationLength <= 0 to avoid negative/zero substring lengths.
dotnet/src/webdriver/Internal/Logging/LogContext.cs [176-200]
private static string TruncateMessage(string message, int truncationLength)
{
+ if (truncationLength <= 0)
+ {
+ return string.Empty;
+ }
+
if (message.Length <= truncationLength)
{
return message;
}
- // Calculate marker length: " ...truncated N... " (14 chars + digit count)
int removedCount = message.Length - truncationLength;
- int markerLength = 14 + removedCount.ToString().Length + 4; // " ...truncated " + digits + "... "
+ string marker = $" ...truncated {removedCount}... ";
- if (markerLength >= truncationLength)
+ if (marker.Length >= truncationLength)
{
- // Fallback to simple truncation if marker won't fit
return truncationLength >= 3
? message.Substring(0, truncationLength - 3) + "..."
: message.Substring(0, truncationLength);
}
- int contentLength = truncationLength - markerLength;
+ int contentLength = truncationLength - marker.Length;
int prefixLength = contentLength / 2;
int suffixLength = contentLength - prefixLength;
- return message.Substring(0, prefixLength) + " ...truncated " + removedCount + "... " + message.Substring(message.Length - suffixLength);
+ return message.Substring(0, prefixLength) + marker + message.Substring(message.Length - suffixLength);
}

Suggestion importance[1-10]: 6
__
Why: Relevant best practice - Avoid brittle “magic number” calculations; derive lengths from the actual formatted marker to keep logic correct and maintainable.
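The suggested logic ports cleanly to other languages. A Python sketch (an assumed port of the C# method, not Selenium code) where the marker string is built first so its real length drives the arithmetic:

```python
def truncate_message(message: str, truncation_length: int) -> str:
    # Guard against zero/negative targets before any substring math.
    if truncation_length <= 0:
        return ""
    if len(message) <= truncation_length:
        return message
    removed = len(message) - truncation_length
    # Derive the length from the formatted marker itself: no magic numbers.
    marker = f" ...truncated {removed}... "
    if len(marker) >= truncation_length:
        # Marker does not fit: fall back to plain truncation.
        if truncation_length >= 3:
            return message[:truncation_length - 3] + "..."
        return message[:truncation_length]
    content = truncation_length - len(marker)
    prefix = content // 2
    suffix = content - prefix
    return message[:prefix] + marker + message[-suffix:]
```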
| PR 17058 (2026-02-07) |
[possible issue] Make inspector events instance-specific
✅ Make inspector events instance-specific
Revert the Event fields in BrowsingContextInspector from static to instance-level (non-static) members to prevent listener collisions between different inspector instances.
java/src/org/openqa/selenium/bidi/module/BrowsingContextInspector.java [83-84]
-private static final Event<BrowsingContextInfo> browsingContextCreated =
+private final Event<BrowsingContextInfo> browsingContextCreated =
new Event<>("browsingContext.contextCreated", browsingContextInfoMapper);Suggestion importance[1-10]: 9
__
Why: The suggestion correctly identifies a critical issue where making Event fields static causes all BrowsingContextInspector instances to share listeners, leading to incorrect behavior and state corruption. Reverting this change is crucial for correctness.
| PR 17035 (2026-01-31) |
[possible issue] Enforce mutually exclusive modes
✅ Enforce mutually exclusive modes
Add a check to ensure that the --pre-commit and --pre-push flags are mutually exclusive, exiting with an error if both are provided.
run_lint=false
mode="default"
+seen_mode=""
for arg in "$@"; do
case "$arg" in
--lint) run_lint=true ;;
- --pre-commit) mode="pre-commit" ;;
- --pre-push) mode="pre-push" ;;
+ --pre-commit|--pre-push)
+ if [[ -n "$seen_mode" && "$seen_mode" != "$arg" ]]; then
+ echo "ERROR: --pre-commit and --pre-push are mutually exclusive" >&2
+ exit 1
+ fi
+ seen_mode="$arg"
+ [[ "$arg" == "--pre-commit" ]] && mode="pre-commit" || mode="pre-push"
+ ;;
*)
echo "Unknown option: $arg" >&2
echo "Usage: $0 [--pre-commit] [--pre-push] [--lint]" >&2
exit 1
;;
esac
done

Suggestion importance[1-10]: 7
__
Why: The suggestion correctly identifies that --pre-commit and --pre-push should be mutually exclusive and adds a check to enforce this, preventing unexpected behavior from misconfiguration.
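The same constraint can be expressed declaratively where a richer argument parser is available. A minimal Python argparse sketch (hypothetical flag handling, not part of the Selenium scripts) in which the library enforces exclusivity and rejects the conflicting pair:

```python
import argparse

def parse(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("--lint", action="store_true")
    # add_mutually_exclusive_group makes --pre-commit/--pre-push conflict
    # an error automatically, with no hand-rolled seen_mode bookkeeping.
    group = parser.add_mutually_exclusive_group()
    group.add_argument("--pre-commit", dest="mode",
                       action="store_const", const="pre-commit")
    group.add_argument("--pre-push", dest="mode",
                       action="store_const", const="pre-push")
    return parser.parse_args(argv)
```

Passing both flags makes argparse exit with a usage error, mirroring the shell check above.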
[possible issue] Harden formatter file piping
✅ Harden formatter file piping
Improve the robustness of the find | xargs pipeline by using find -print0 and xargs -0 -r to correctly handle filenames containing spaces or newlines.
if changed_matches '^java/'; then
section "Java"
echo " google-java-format" >&2
GOOGLE_JAVA_FORMAT="$(bazel run --run_under=echo //scripts:google-java-format)"
- find "${WORKSPACE_ROOT}/java" -type f -name '*.java' | xargs "$GOOGLE_JAVA_FORMAT" --replace
+ find "${WORKSPACE_ROOT}/java" -type f -name '*.java' -print0 | xargs -0 -r "$GOOGLE_JAVA_FORMAT" --replace
fi

Suggestion importance[1-10]: 5
__
Why: The suggestion correctly applies shell scripting best practices to make the find | xargs pipeline more robust against filenames with special characters, which improves code quality.
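The failure mode is easy to demonstrate outside the shell. A small Python sketch (illustrative filenames) of why NUL delimiters are the only safe separator for arbitrary POSIX paths:

```python
def split_file_list(stream: bytes, sep: bytes) -> list:
    # Split a delimited stream of filenames, dropping the trailing empty
    # entry produced by find's final separator.
    return [p for p in stream.split(sep) if p]

# A filename containing a newline is legal on POSIX. It splits into two
# bogus entries under newline delimiting, but survives NUL delimiting.
names = [b"Plain.java", b"With Space.java", b"Weird\nName.java"]
newline_stream = b"\n".join(names) + b"\n"   # plain find | xargs
nul_stream = b"\0".join(names) + b"\0"       # find -print0 | xargs -0
```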
[possible issue] Detect untracked formatting outputs
✅ Detect untracked formatting outputs
Replace git diff --quiet with git status --porcelain to detect both tracked and untracked file changes, ensuring the script correctly identifies all modifications made by formatters.
-# Check if formatting made changes
-if ! git diff --quiet; then
+# Check if formatting made changes (including untracked files)
+if [[ -n "$(git status --porcelain)" ]]; then
echo "" >&2
echo "Formatters modified files:" >&2
- git diff --name-only >&2
+ git status --porcelain >&2
exit 1
fi
echo "Format check passed." >&2

Suggestion importance[1-10]: 7
__
Why: The suggestion correctly points out that git diff --quiet misses untracked files, and using git status --porcelain is a more robust way to check for a dirty working tree, improving the script's correctness.
[learned best practice] Reject unknown CLI arguments
✅ Reject unknown CLI arguments
Treat any unrecognized argument as an error and print a short usage message so CI/manual runs don’t silently ignore typos or unsupported flags.
run_lint=false
for arg in "$@"; do
case "$arg" in
--lint) run_lint=true ;;
+ -h|--help)
+ echo "Usage: $0 [--lint]" >&2
+ exit 0
+ ;;
+ *)
+ echo "ERROR: unknown argument: $arg" >&2
+ echo "Usage: $0 [--lint]" >&2
+ exit 2
+ ;;
esac
done

Suggestion importance[1-10]: 5
__
Why: Relevant best practice - Validate and sanitize CLI inputs in automation scripts; fail fast with a clear error on unknown/invalid arguments.
[high-level] Refine change detection logic
✅ Refine change detection logic
Modify the change detection logic to differentiate between "no changed files" and "unable to determine changes". This prevents the script from inefficiently running all formatters when no files have actually been modified.
scripts/format.sh [24-39]
if [[ -n "$trunk_ref" ]]; then
base="$(git merge-base HEAD "$trunk_ref" 2>/dev/null || echo "")"
if [[ -n "$base" ]]; then
changed="$(git diff --name-only "$base" HEAD)"
else
changed=""
fi
else
# No trunk ref found, format everything
changed=""
... (clipped 6 lines)

# ... find trunk ref ...
if [[ -n "$trunk_ref" ]]; then
# ... find merge-base ...
changed="$(git diff --name-only "$base" HEAD)" # This is empty if no files changed
else
changed="" # This is empty if trunk is not found
fi
changed_matches() {
# This is TRUE if `changed` is empty, causing all formatters to run
[[ -z "$changed" ]] || echo "$changed" | grep -qE "$1"
}
if changed_matches '^java/'; then
# run java formatter
fi
# ... and so on for all other formatters
# Use a flag to control behavior
run_all_formatters=false
if [[ -z "$trunk_ref" ]]; then
run_all_formatters=true
changed=""
else
# ... find merge-base ...
changed="$(git diff --name-only "$base" HEAD)"
fi
changed_matches() {
# Run all if flagged, otherwise only if there are changes that match
"$run_all_formatters" || ([[ -n "$changed" ]] && echo "$changed" | grep -qE "$1")
}
if changed_matches '^java/'; then
# run java formatter
fi
# ... and so on for all other formatters
Suggestion importance[1-10]: 9
__
Why: The suggestion correctly identifies a significant logical flaw where the script inefficiently runs all formatters when no files have changed, which contradicts the PR's primary goal of improving performance for pre-commit hooks.
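The flag-based fix can be sketched compactly. A Python model (assumed semantics of the proposed `changed_matches` rewrite, not the actual script) showing how an explicit flag separates "no trunk ref" from "no files changed":

```python
import re

def make_changed_matches(changed: str, run_all: bool):
    # run_all=True models "trunk ref unknown, format everything";
    # an empty `changed` with run_all=False means "nothing changed, skip".
    def changed_matches(pattern: str) -> bool:
        if run_all:
            return True
        return bool(changed) and re.search(pattern, changed,
                                           re.MULTILINE) is not None
    return changed_matches
```

With the original logic both cases collapsed to an empty string, so every formatter ran even on a clean tree.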
[general] Handle filenames safely in xargs
✅ Handle filenames safely in xargs
Modify the find and xargs command to use null delimiters (-print0 and -0) to handle filenames containing spaces or special characters correctly.
-find "$PWD/java" -type f -name '*.java' | xargs "$GOOGLE_JAVA_FORMAT" --replace
+find "$PWD/java" -type f -name '*.java' -print0 | xargs -0 "$GOOGLE_JAVA_FORMAT" --replace

Suggestion importance[1-10]: 6
__
Why: This suggestion improves the script's robustness by making the find | xargs pipe safe for filenames with spaces or special characters. It's a best practice that prevents potential errors with unusually named files.
[general] Simplify shellcheck path lookup
✅ Simplify shellcheck path lookup
Use bazel run --run_under=echo to reliably get the shellcheck binary path instead of parsing bazel build output.
-SHELLCHECK="$(bazel build @multitool//tools/shellcheck 2>&1 | grep -oE 'bazel-out/\S+/shellcheck$')"
+SHELLCHECK="$(bazel run --run_under=echo @multitool//tools/shellcheck)"
bazel run @multitool//tools/actionlint:cwd -- -shellcheck "$SHELLCHECK"

Suggestion importance[1-10]: 6
__
Why: The suggestion replaces a fragile method of finding an executable (parsing bazel build output) with a robust and idiomatic approach (bazel run --run_under=echo) that is already used elsewhere in the script. This improves maintainability and reliability.
| PR 17033 (2026-01-31) |
[possible issue] Index file deps correctly
✅ Index file deps correctly
In query_dep_srcs, handle Bazel file labels directly to ensure they are correctly indexed, as labels(srcs, ...) can fail for them.
rake_tasks/bazel.rake [118-127]
def query_dep_srcs(dep)
+ # If `dep` is already a file label, index it directly.
+ if dep.include?(':')
+ name = dep.split(':', 2).last
+ return [dep] if name.include?('.')
+ end
+
srcs = []
Bazel.execute('query', ['--output=label'], "labels(srcs, #{dep})") do |out|
srcs = out.lines.map(&:strip).select { |l| l.start_with?('//') && !l.start_with?('//:') }
end
srcs
rescue StandardError => e
puts " Warning: Failed to query srcs for #{dep}: #{e.message}"
[]
end

Suggestion importance[1-10]: 8
__
Why: The suggestion correctly identifies a flaw where file labels from deps(...) are not handled, causing labels(srcs, ...) to fail and preventing those files from being indexed. This fix is critical for ensuring that changes in such files correctly trigger dependent tests.
| PR 17028 (2026-01-30) |
[possible issue] Prevent NullPointerException on parameter creation
✅ Prevent NullPointerException on parameter creation
Add a null check for the 'height' and 'width' values retrieved from the screenArea map to prevent a NullPointerException during auto-unboxing to primitive int types.
java/src/org/openqa/selenium/bidi/emulation/SetScreenSettingsOverrideParameters.java [15-17]
-this.height = screenArea.get("height");
-this.width = screenArea.get("width");
+Integer heightValue = screenArea.get("height");
+Integer widthValue = screenArea.get("width");
+
+if (heightValue == null || widthValue == null) {
+ throw new IllegalArgumentException("'height' and 'width' in screenArea must not be null");
+}
+
+this.height = heightValue;
+this.width = widthValue;
map.put("screenArea", screenArea);

Suggestion importance[1-10]: 8
__
Why: This suggestion correctly identifies a potential NullPointerException if the input Map contains null values for 'height' or 'width', which would crash the program during auto-unboxing. Adding a null check significantly improves the robustness of the new class.
| PR 17024 (2026-01-29) |
[learned best practice] Harden external log parsing
✅ Harden external log parsing
Make the regex and switch consistent (e.g., WARN vs WARNING) and use DateTimeOffset.TryParse (or TryParseExact) to avoid exceptions on unexpected log lines.
dotnet/src/webdriver/SeleniumManager.cs [340-362]
const string LogMessageRegexPattern = @"^\[(.*) (INFO|WARN|ERROR|DEBUG|TRACE)\t?\] (.*)$";
...
-var dateTime = DateTimeOffset.Parse(match.Groups[1].Value);
-var logLevel = match.Groups[2].Value;
-var message = match.Groups[3].Value;
+if (DateTimeOffset.TryParse(match.Groups[1].Value, CultureInfo.InvariantCulture, DateTimeStyles.RoundtripKind, out var dateTime))
+{
+ var logLevel = match.Groups[2].Value;
+ var message = match.Groups[3].Value;
-switch (logLevel)
+ switch (logLevel)
+ {
+ case "INFO":
+ _logger.LogMessage(dateTime, LogEventLevel.Info, message);
+ break;
+ case "WARN":
+ _logger.LogMessage(dateTime, LogEventLevel.Warn, message);
+ break;
+ case "ERROR":
+ _logger.LogMessage(dateTime, LogEventLevel.Error, message);
+ break;
+ case "DEBUG":
+ _logger.LogMessage(dateTime, LogEventLevel.Debug, message);
+ break;
+ case "TRACE":
+ default:
+ _logger.LogMessage(dateTime, LogEventLevel.Trace, message);
+ break;
+ }
+}
+else
{
- case "INFO":
- _logger.LogMessage(dateTime, LogEventLevel.Info, message);
- break;
- case "WARNING":
- _logger.LogMessage(dateTime, LogEventLevel.Warn, message);
- break;
- case "ERROR":
- _logger.LogMessage(dateTime, LogEventLevel.Error, message);
- break;
- case "DEBUG":
- _logger.LogMessage(dateTime, LogEventLevel.Debug, message);
- break;
- case "TRACE":
- default:
- _logger.LogMessage(dateTime, LogEventLevel.Trace, message);
- break;
+ errOutputBuilder.AppendLine(e.Data);
}

Suggestion importance[1-10]: 5
__
Why: Relevant best practice - Validate and robustly parse external tool output; avoid brittle parsing assumptions and fail fast with clear errors when format is unexpected.
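The same hardening pattern applies in any language that parses external tool output. A Python sketch (the timestamp format is an assumption; helper names are illustrative) that returns None instead of raising on unexpected lines:

```python
import re
from datetime import datetime

# Mirrors the suggested pattern: levels in the regex match the switch arms.
LOG_LINE = re.compile(r"^\[(.*) (INFO|WARN|ERROR|DEBUG|TRACE)\t?\] (.*)$")

def parse_log_line(line: str):
    # TryParse-style: any malformed line yields None rather than an
    # exception, so one odd log line cannot abort processing.
    match = LOG_LINE.match(line)
    if match is None:
        return None
    try:
        ts = datetime.fromisoformat(match.group(1))
    except ValueError:
        return None
    return ts, match.group(2), match.group(3)
```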
[high-level] Rethink the Selenium Manager API
✅ Rethink the Selenium Manager API
Move the public DiscoveryOptions and DiscoveryResult records out of the SeleniumManager class to become top-level types. This change improves API discoverability and aligns with standard .NET design practices.
dotnet/src/webdriver/SeleniumManager.cs [375-414]
public record DiscoveryOptions
{
/// <summary>
/// Gets or sets the specific browser version to target (e.g., "120.0.6099.109").
/// If not specified, the installed browser version is detected automatically.
/// </summary>
public string? BrowserVersion { get; set; }
/// <summary>
/// Gets or sets the path to the browser executable.
... (clipped 30 lines)

// file: dotnet/src/webdriver/SeleniumManager.cs
namespace OpenQA.Selenium;
public static partial class SeleniumManager
{
public static DiscoveryResult DiscoverBrowser(...) { ... }
public record DiscoveryOptions
{
// ... properties
}
public record DiscoveryResult(string DriverPath, string BrowserPath);
}// file: dotnet/src/webdriver/SeleniumManager.cs
namespace OpenQA.Selenium;
public static partial class SeleniumManager
{
public static DiscoveryResult DiscoverBrowser(...) { ... }
}
public record DiscoveryOptions
{
// ... properties
}
public record DiscoveryResult(string DriverPath, string BrowserPath);

Suggestion importance[1-10]: 5
__
Why: This is a valid API design suggestion that improves discoverability and aligns with .NET conventions, but it is not a critical functional issue.
[general] Check empty browser name
✅ Check empty browser name
Use string.IsNullOrEmpty to validate options.BrowserName to handle both null and empty strings.
dotnet/src/webdriver/DriverFinder.cs [113]
-if (options.BrowserName is null)
+if (string.IsNullOrEmpty(options.BrowserName))

Suggestion importance[1-10]: 5
__
Why: The suggestion correctly points out that an empty string for BrowserName should also be handled, and using string.IsNullOrEmpty is a good practice for this validation.
| PR 17020 (2026-01-28) |
[learned best practice] Make conditional enforcement explicit
✅ Make conditional enforcement explicit
Replace the next early-exit with an explicit conditional so the task’s behavior is clear (and optionally log that extra diagnostics enforcement is currently skipped).
rake_tasks/dotnet.rake [136-141]
# TODO: Identify specific diagnostics that we want to enforce but can't be auto-corrected (e.g., 'IDE0060'):
enforced_diagnostics = []
-next if enforced_diagnostics.empty?
+if enforced_diagnostics.any?
+ arguments = %w[-- style --severity info --verify-no-changes --diagnostics] + enforced_diagnostics
+ Bazel.execute('run', arguments, '//dotnet:format')
+else
+ puts ' No additional enforced diagnostics configured'
+end
-arguments = %w[-- style --severity info --verify-no-changes --diagnostics] + enforced_diagnostics
-Bazel.execute('run', arguments, '//dotnet:format')
-

Suggestion importance[1-10]: 6
__
Why: Relevant best practice - Avoid silent early exits in automation; gate optional work explicitly and keep lint tasks deterministic and rerunnable.
[incremental [*]] Make formatter invocation robust
✅ Make formatter invocation robust
Refactor the java:format task to robustly handle file paths by using find -print0 | xargs -0 and a safer method to capture the formatter path.
rake_tasks/java.rake [396-399]
-formatter = `bazel run --run_under=echo //scripts:google-java-format 2>/dev/null`.strip
-raise 'Failed to get google-java-format path' if formatter.empty? || !$CHILD_STATUS.success?
+formatter = nil
+Bazel.execute('run', ['--run_under=echo'], '//scripts:google-java-format') do |output|
+ formatter = output.lines.last&.strip
+end
+raise 'Failed to get google-java-format path' if formatter.to_s.empty?
-sh "find #{Dir.pwd}/java -name '*.java' | xargs #{formatter} --replace"
+sh %(find "#{File.join(Dir.pwd, 'java')}" -name '*.java' -print0 | xargs -0 "#{formatter}" --replace)

Suggestion importance[1-10]: 7
__
Why: The suggestion correctly identifies a brittle shell command and proposes a more robust implementation, improving the script's reliability against paths with spaces or special characters.
[incremental [*]] Fix invalid control flow
✅ Fix invalid control flow
Replace next with a conditional guard (unless) to prevent a LocalJumpError in the Rake task, as next is only valid within an iterator.
rake_tasks/dotnet.rake [136-138]
# TODO: Identify specific diagnostics that we want to enforce but can't be auto-corrected (e.g., 'IDE0060'):
enforced_diagnostics = []
-next if enforced_diagnostics.empty?
+unless enforced_diagnostics.empty?
+ arguments = %w[-- style --severity info --verify-no-changes --diagnostics] + enforced_diagnostics
+ Bazel.execute('run', arguments, '//dotnet:format')
+end

Suggestion importance[1-10]: 9
__
Why: The suggestion correctly identifies that using next outside of a loop will cause a LocalJumpError, which is a bug that would break the dotnet:lint task.
[incremental [*]] Avoid command line length overflow
✅ Avoid command line length overflow
Batch the list of Java files passed to Bazel.execute to avoid exceeding command-line length limits, which can cause the format task to fail.
rake_tasks/java.rake [396-397]
files = Dir.glob("#{Dir.pwd}/java/**/*.java")
-Bazel.execute('run', ['--', '--replace'] + files, '//scripts:google-java-format')
+files.each_slice(200) do |batch|
+ Bazel.execute('run', ['--', '--replace'] + batch, '//scripts:google-java-format')
+end

Suggestion importance[1-10]: 7
__
Why: The suggestion correctly identifies a potential command-line length limit issue and provides a robust solution by batching file arguments, improving script reliability.
[possible issue] Prevent lint from auto-fixing
✅ Prevent lint from auto-fixing
Remove the --fix flag from the eslint command in the node:lint task and add --max-warnings 0 to make it a strict, non-mutating check.
rake_tasks/node.rake [167-172]
desc 'Run JavaScript linter (eslint, docs)'
task :lint do
puts ' Running eslint...'
- Bazel.execute('run', ['--', '--fix', '.'], '//javascript/selenium-webdriver:eslint')
+ Bazel.execute('run', ['--', '--max-warnings', '0', '.'], '//javascript/selenium-webdriver:eslint')
Rake::Task['node:docs_generate'].invoke
end

Suggestion importance[1-10]: 8
__
Why: The suggestion correctly points out that the node:lint task modifies files using --fix, which is inconsistent with the PR's goal of separating lint and format tasks.
[possible issue] Fix ineffective linter command arguments
✅ Fix ineffective linter command arguments
Conditionally add the --diagnostics argument to the Bazel.execute call in the .NET :lint task only if the enforced_diagnostics array is not empty.
rake_tasks/dotnet.rake [130-141]
desc 'Run .NET linter (dotnet format analyzers, docs)'
task :lint do
puts ' Running dotnet format analyzers...'
Bazel.execute('run', ['--', 'analyzers', '--verify-no-changes'], '//dotnet:format')
Rake::Task['dotnet:docs_generate'].invoke
# TODO: Can also identify specific diagnostics to elevate and add to this list
# TODO: Add IDE0060 after merging #17019
enforced_diagnostics = []
- arguments = %w[-- style --severity info --verify-no-changes --diagnostics] + enforced_diagnostics
+ arguments = %w[-- style --severity info --verify-no-changes]
+ arguments += %w[--diagnostics] + enforced_diagnostics if enforced_diagnostics.any?
Bazel.execute('run', arguments, '//dotnet:format')
end

Suggestion importance[1-10]: 5
__
Why: The suggestion correctly identifies that the enforced_diagnostics array is currently empty, making the --diagnostics argument ineffective. The proposed change improves code clarity and robustness by making the argument conditional.
| PR 17019 (2026-01-28) |
[possible issue] Restore driver assignment
✅ Restore driver assignment
In the Reset method, restore the assignment of wrapper.WrappedDriver to the driver field instead of discarding it.
dotnet/src/webdriver/Interactions/Actions.cs [665]
-_ = wrapper.WrappedDriver;
+driver = wrapper.WrappedDriver;

Suggestion importance[1-10]: 9
__
Why: The PR incorrectly changed a necessary assignment to a discard, which would break the logic of the Reset method by failing to update the wrapped driver.
| PR 17015 (2026-01-27) |
[general] Safely handle missing payload
✅ Safely handle missing payload
In fireSessionEvent, safely handle the payload from the request by checking its type before casting, and default to an empty map if it's missing or not a map to prevent a ClassCastException.
java/src/org/openqa/selenium/grid/node/local/LocalNode.java [1103]
-Map<String, Object> payload = (Map<String, Object>) incoming.get("payload");
+Object rawPayload = incoming.get("payload");
+Map<String, Object> payload =
+ rawPayload instanceof Map
+ ? (Map<String, Object>) rawPayload
+    : Collections.emptyMap();

Suggestion importance[1-10]: 8
__
Why: This suggestion correctly identifies a potential ClassCastException if the payload from the JSON is missing or not a map. The proposed change makes the code more robust by safely handling this case and preventing a runtime exception.
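The guard is the same in any dynamically-shaped payload handler. A one-function Python analogue (hypothetical helper name) of the instanceof check:

```python
def extract_payload(incoming: dict) -> dict:
    # Accept the payload only when it is actually a mapping; anything
    # else (missing key, string, list) falls back to an empty dict,
    # the equivalent of Collections.emptyMap() in the Java fix.
    raw = incoming.get("payload")
    return raw if isinstance(raw, dict) else {}
```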
| PR 17012 (2026-01-26) |
[possible issue] Make root package importable
✅ Make root package importable
Create a dedicated py_library for the root __init__.py files and add it as a dependency to each new module target to ensure they are importable.
+py_library(
+ name = "selenium_init",
+ srcs = [
+ "selenium/__init__.py",
+ "selenium/webdriver/__init__.py",
+ ],
+ data = ["selenium/py.typed"],
+ imports = ["."],
+ visibility = ["//visibility:public"],
+)
+
py_library(
name = "chrome",
srcs = glob(["selenium/webdriver/chrome/**/*.py"]),
imports = ["."],
visibility = ["//visibility:public"],
- deps = [":chromium"],
+ deps = [
+ ":selenium_init",
+ ":chromium",
+ ],
)
py_library(
name = "edge",
srcs = glob(["selenium/webdriver/edge/**/*.py"]),
imports = ["."],
visibility = ["//visibility:public"],
- deps = [":chromium"],
+ deps = [
+ ":selenium_init",
+ ":chromium",
+ ],
)
py_library(
name = "firefox",
srcs = glob(["selenium/webdriver/firefox/**/*.py"]),
data = [":firefox-driver-prefs"],
imports = ["."],
visibility = ["//visibility:public"],
deps = [
+ ":selenium_init",
":common",
":remote",
],
)
py_library(
name = "selenium",
srcs = [
"selenium/__init__.py",
"selenium/webdriver/__init__.py",
],
data = ["selenium/py.typed"],
imports = ["."],
visibility = ["//visibility:public"],
deps = [
":bidi",
":chrome",
":chromium",
":common",
":edge",
":exceptions",
":firefox",
":ie",
":remote",
":safari",
":support",
":webkitgtk",
":wpewebkit",
],
)

Suggestion importance[1-10]: 9
__
Why: The suggestion correctly identifies a critical flaw in the PR's modularization effort where individual modules would not be importable, and provides a sound solution to fix it.
| PR 17004 (2026-01-25) |
[general] Use explicit channel options
✅ Use explicit channel options
Improve readability by using an explicit UnboundedChannelOptions object when creating the Channel.
dotnet/src/webdriver/BiDi/Broker.cs [40]
-private readonly Channel<(string Method, EventArgs Params)> _pendingEvents = Channel.CreateUnbounded<(string Method, EventArgs Params)>(new(){ SingleReader = true, SingleWriter = true });
+private readonly Channel<(string Method, EventArgs Params)> _pendingEvents =
+ Channel.CreateUnbounded<(string Method, EventArgs Params)>(new UnboundedChannelOptions
+ {
+ SingleReader = true,
+ SingleWriter = true
+    });

Suggestion importance[1-10]: 3
__
Why: This is a valid code style suggestion that improves readability by explicitly naming UnboundedChannelOptions, making the configuration clearer at a glance.
| PR 17001 (2026-01-25) |
[possible issue] Check correct filesystem space
✅ Check correct filesystem space
Modify the disk space check to use df on $GITHUB_WORKSPACE instead of / to ensure the measurement is for the correct filesystem where the job workspace resides.
.github/workflows/bazel.yml [269-276]
- name: Check disk space
if: always()
shell: bash
id: disk
run: |
- avail=$(df -k / | awk 'NR==2 {printf "%.0f", $4/1024/1024}')
+ target="${GITHUB_WORKSPACE:-.}"
+ avail=$(df -k "$target" | awk 'NR==2 {printf "%.0f", $4/1024/1024}')
echo "Remaining disk space: ${avail}GB"
echo "avail=$avail" >> "$GITHUB_OUTPUT"

Suggestion importance[1-10]: 7
__
Why: The suggestion correctly points out that hardcoding / for the disk space check might be inaccurate on some runners; using $GITHUB_WORKSPACE makes the check more robust and reliable.
[general] Move color logic into bash
✅ Move color logic into bash
Refactor the workflow by moving the SLACK_COLOR conditional logic from the YAML into the Check disk space bash script and outputting the color value.
.github/workflows/bazel.yml [276-286]
echo "avail=$avail" >> "$GITHUB_OUTPUT"
+if [ "$avail" -lt 10 ]; then
+ color="danger"
+elif [ "$avail" -lt 30 ]; then
+ color="warning"
+else
+ color="good"
+fi
+echo "color=$color" >> "$GITHUB_OUTPUT"
...
-SLACK_COLOR: ${{ steps.disk.outputs.avail < 10 && 'danger' || (steps.disk.outputs.avail < 30 && 'warning' || 'good') }}
+SLACK_COLOR: ${{ steps.disk.outputs.color }}

Suggestion importance[1-10]: 5
__
Why: This suggestion improves code readability and maintainability by moving complex conditional logic from the YAML file into the bash script, which is a better place for it.
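The threshold mapping is small enough to state exactly. A Python sketch of the color logic (thresholds taken from the workflow expression; the function name is illustrative):

```python
def slack_color(avail_gb: int) -> str:
    # Mirrors: avail < 10 -> danger, avail < 30 -> warning, else good.
    if avail_gb < 10:
        return "danger"
    if avail_gb < 30:
        return "warning"
    return "good"
```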
[high-level] Only send alerts on low disk
✅ Only send alerts on low disk
The current implementation sends a Slack notification on every CI run. To reduce noise, it should be modified to only send alerts when disk space is low (e.g., below the 'warning' threshold).
.github/workflows/bazel.yml [277-287]
- name: Report disk space
if: always()
uses: rtCamp/action-slack-notify@v2
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_CHANNEL: ci-disk-alerts
SLACK_USERNAME: Disk Monitor
SLACK_TITLE: "${{ steps.disk.outputs.avail }}GB remaining"
SLACK_MESSAGE: "${{ inputs.name }} on ${{ inputs.os }}"
SLACK_COLOR: ${{ steps.disk.outputs.avail < 10 && 'danger' || (steps.disk.outputs.avail < 30 && 'warning' || 'good') }}
... (clipped 1 lines)

- name: Report disk space
if: always()
uses: rtCamp/action-slack-notify@v2
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_CHANNEL: ci-disk-alerts
SLACK_TITLE: "${{ steps.disk.outputs.avail }}GB remaining"
SLACK_MESSAGE: "${{ inputs.name }} on ${{ inputs.os }}"
SLACK_COLOR: ${{ steps.disk.outputs.avail < 10 && 'danger' || (steps.disk.outputs.avail < 30 && 'warning' || 'good') }}
...
- name: Report disk space
if: always() && steps.disk.outputs.avail < 30
uses: rtCamp/action-slack-notify@v2
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_CHANNEL: ci-disk-alerts
SLACK_TITLE: "${{ steps.disk.outputs.avail }}GB remaining"
SLACK_MESSAGE: "${{ inputs.name }} on ${{ inputs.os }}"
SLACK_COLOR: ${{ steps.disk.outputs.avail < 10 && 'danger' || 'warning' }}
...
Suggestion importance[1-10]: 7
__
Why: This is a valid design improvement that addresses the high potential for noise in the new alerting system, suggesting a shift from constant reporting to conditional alerting, which is a best practice.
[learned best practice] Gate Slack notify on inputs
✅ Gate Slack notify on inputs
Only run Slack reporting when the webhook secret exists and the disk step produced a valid output, otherwise this can fail the workflow or evaluate invalid comparisons.
.github/workflows/bazel.yml [277-287]
- name: Report disk space
- if: always()
+ if: always() && secrets.SLACK_WEBHOOK_URL != '' && steps.disk.outputs.avail != ''
uses: rtCamp/action-slack-notify@v2
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_CHANNEL: ci-disk-alerts
SLACK_USERNAME: Disk Monitor
SLACK_TITLE: "${{ steps.disk.outputs.avail }}GB remaining"
SLACK_MESSAGE: "${{ inputs.name }} on ${{ inputs.os }}"
SLACK_COLOR: ${{ steps.disk.outputs.avail < 10 && 'danger' || (steps.disk.outputs.avail < 30 && 'warning' || 'good') }}
SLACK_ICON_EMOJI: ":floppy_disk:"

Suggestion importance[1-10]: 6
__
Why: Relevant best practice - Add explicit validation/availability guards at integration boundaries (secrets/webhooks and dependent step outputs) before using them.
| PR 17000 (2026-01-25) |
[possible issue] Checkout an immutable PR SHA
✅ Checkout an immutable PR SHA
To prevent a race condition, check out the immutable PR head SHA (github.event.pull_request.head.sha) instead of the mutable branch ref (github.event.pull_request.head.ref).
.github/workflows/ci-lint.yml [75-78]
- name: Checkout
uses: actions/checkout@v4
with:
- ref: ${{ github.event.pull_request.head.ref }}
+ ref: ${{ github.event.pull_request.head.sha }}
Suggestion importance[1-10]: 8
__
Why: The suggestion correctly identifies a race condition where checking out the branch ref could lead to analyzing a different commit than the one that failed the format check, thus making the bot-commit check unreliable. Using the commit sha ensures the correct commit is analyzed.
[general] Use case-insensitive author name comparison
✅ Use case-insensitive author name comparison
To improve robustness, modify the shell script to use a case-insensitive comparison when checking the commit author's name.
.github/workflows/ci-lint.yml [81-88]
run: |
LAST_AUTHOR=$(git log -1 --format='%an')
- if [ "$LAST_AUTHOR" = "Selenium CI Bot" ]; then
+ if [ "$(echo "$LAST_AUTHOR" | tr '[:upper:]' '[:lower:]')" = "selenium ci bot" ]; then
echo "::notice::Last commit was from Selenium CI Bot - skipping commit-fixes"
echo "should-commit=false" >> "$GITHUB_OUTPUT"
else
echo "should-commit=true" >> "$GITHUB_OUTPUT"
fi
Suggestion importance[1-10]: 6
__
Why: The suggestion correctly identifies that a case-sensitive comparison for the author name is brittle and proposes a more robust, case-insensitive check, improving the reliability of the workflow.
[learned best practice] Harden author check validation
✅ Harden author check validation
Make the step more robust by enabling strict shell mode, trimming/validating the retrieved author, and preferring a stable identifier (e.g., author email) rather than a display name.
.github/workflows/ci-lint.yml [79-88]
- name: Check last commit author
id: check
+ shell: bash
run: |
- LAST_AUTHOR=$(git log -1 --format='%an')
- if [ "$LAST_AUTHOR" = "Selenium CI Bot" ]; then
+ set -euo pipefail
+
+ LAST_AUTHOR_EMAIL="$(git log -1 --format='%ae' | tr -d '\r' | xargs || true)"
+ if [ -z "$LAST_AUTHOR_EMAIL" ]; then
+ echo "::warning::Could not determine last commit author email; skipping auto-commit to avoid loops"
+ echo "should-commit=false" >> "$GITHUB_OUTPUT"
+ elif [ "$LAST_AUTHOR_EMAIL" = "selenium-ci-bot@example.com" ]; then
echo "::notice::Last commit was from Selenium CI Bot - skipping commit-fixes"
echo "should-commit=false" >> "$GITHUB_OUTPUT"
else
echo "should-commit=true" >> "$GITHUB_OUTPUT"
fi
Suggestion importance[1-10]: 6
__
Why: Relevant best practice - Add explicit validation/guards at integration boundaries (GitHub Actions + shell) by trimming/checking presence and avoiding brittle identity checks before using values.
| PR 16995 (2026-01-24) |
[possible issue] Prevent crash on lint skip argument
✅ Prevent crash on lint skip argument
In the lint task, filter out the -rust argument before invoking all:lint to prevent an "Unknown languages: rust" error.
task :lint do |_task, arguments|
failures = []
skip = arguments.to_a.select { |a| a.start_with?('-') }.map { |a| a.delete_prefix('-') }
begin
- Rake::Task['all:lint'].invoke(*arguments.to_a)
+ all_lint_args = arguments.to_a.reject { |arg| arg == '-rust' }
+ Rake::Task['all:lint'].invoke(*all_lint_args)
rescue StandardError => e
failures << e.message
end
unless skip.include?('rust')
puts 'Linting rust...'
begin
Rake::Task['rust:lint'].invoke
rescue StandardError => e
failures << "rust: #{e.message}"
end
end
...
Suggestion importance[1-10]: 8
__
Why: The suggestion correctly identifies a bug where passing -rust to the top-level lint task would cause a crash in the all:lint sub-task, and the proposed fix is accurate.
| PR 16993 (2026-01-24) |
[incremental [*]] Fix incomplete module introspection
✅ Fix incomplete module introspection
Update __dir__() to include existing module attributes from globals() in addition to the lazy-loadable submodules to ensure correct introspection.
py/selenium/webdriver/chrome/__init__.py [31-32]
def __dir__():
- return sorted(set(_LAZY_SUBMODULES))
+ return sorted(set(globals().keys()) | set(_LAZY_SUBMODULES))
Suggestion importance[1-10]: 6
__
Why: The suggestion correctly identifies that the __dir__ implementation is incomplete, as it omits attributes already present in the module's namespace, which harms introspection and developer tools.
[general] Stabilize and de-duplicate introspection
✅ Stabilize and de-duplicate introspection
Improve the __dir__ implementation to include lazily imported symbols, remove duplicates, and return a sorted list of public names for stable introspection.
py/selenium/webdriver/__init__.py [102-103]
def __dir__():
- return list(__all__) + list(globals().keys())
+ public = set(globals().get("__all__", ()))
+ public.update(_LAZY_IMPORTS.keys())
+ public.update(_LAZY_SUBMODULES.keys())
+ return sorted(public)
Suggestion importance[1-10]: 7
__
Why: The suggestion correctly identifies that the __dir__ implementation is incomplete and could produce duplicates. The proposed change correctly includes the lazily-imported names, which is crucial for introspection and autocompletion.
[possible issue] Cache lazy imports for performance
✅ Cache lazy imports for performance
Optimize the __getattr__ function by caching the lazy-loaded attributes in the module's globals(). This will prevent re-importing on every attribute access and improve performance.
py/selenium/webdriver/__init__.py [73-78]
def __getattr__(name):
if name in _LAZY_IMPORTS:
module_path, attr_name = _LAZY_IMPORTS[name]
module = importlib.import_module(module_path)
- return getattr(module, attr_name)
+ attr = getattr(module, attr_name)
+ globals()[name] = attr
+ return attr
raise AttributeError(f"module 'selenium.webdriver' has no attribute {name!r}")
Suggestion importance[1-10]: 7
__
Why: This is a valid and important performance optimization for the new lazy-loading mechanism. Caching the imported attribute in globals() is a standard practice that prevents re-importing on every access, making subsequent lookups much faster.
[general] Avoid exposing internal module details
✅ Avoid exposing internal module details
Modify the __dir__ function to return only the __all__ list. This will prevent exposing internal module details to introspection tools.
py/selenium/webdriver/__init__.py [81-82]
def __dir__():
- return list(__all__) + list(globals().keys())
+ return __all__
Suggestion importance[1-10]: 6
__
Why: The suggestion correctly identifies that including globals().keys() in __dir__ exposes internal implementation details. Returning only __all__ correctly lists the public API as intended, improving encapsulation and providing a cleaner interface for introspection.
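The three webdriver __init__.py suggestions above all touch the same PEP 562 lazy-import machinery. The sketch below is a hypothetical, self-contained illustration — the `lazydemo` module and the `json.dumps` target are stand-ins, not Selenium's actual tables — combining the accepted fixes: `__getattr__` caches after the first import, and `__dir__` returns the de-duplicated union of real and lazy names.

```python
import importlib
import sys
import types

# Hypothetical stand-in module; selenium's real __init__.py applies the same
# PEP 562 hooks with its own _LAZY_IMPORTS table.
mod = types.ModuleType("lazydemo")
_LAZY_IMPORTS = {"dumps": ("json", "dumps")}

def _module_getattr(name):
    if name in _LAZY_IMPORTS:
        module_path, attr_name = _LAZY_IMPORTS[name]
        attr = getattr(importlib.import_module(module_path), attr_name)
        setattr(mod, name, attr)  # cache: later lookups bypass __getattr__
        return attr
    raise AttributeError(f"module 'lazydemo' has no attribute {name!r}")

def _module_dir():
    # de-duplicated union of real globals and lazily loadable names
    return sorted(set(vars(mod)) | set(_LAZY_IMPORTS))

mod.__getattr__ = _module_getattr  # PEP 562: looked up in module globals
mod.__dir__ = _module_dir
sys.modules["lazydemo"] = mod

import lazydemo

print(lazydemo.dumps([1, 2]))     # lazy import happens here, then is cached
print("dumps" in dir(lazydemo))   # __dir__ advertises the lazy name
print("dumps" in vars(lazydemo))  # cached in the module dict after first use
```

Caching via `setattr` is what makes the second suggestion's optimization work: once the attribute lands in the module's `__dict__`, normal attribute lookup succeeds and `__getattr__` is never called again for that name.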
| PR 16990 (2026-01-23) |
[possible issue] Consolidate CTS creation and disposal
Consolidate CTS creation and disposal
Refactor the cancellation token source creation to use a single CancellationTokenSource, conditionally linking it based on whether the external token can be canceled, and ensure proper disposal.
dotnet/src/webdriver/BiDi/Broker.cs [140-145]
-using var timeoutCts = new CancellationTokenSource(timeout);
-using var linkedCts = cancellationToken != default ?
- CancellationTokenSource.CreateLinkedTokenSource(cancellationToken, timeoutCts.Token) :
- null;
+CancellationTokenSource cts = cancellationToken.CanBeCanceled
+ ? CancellationTokenSource.CreateLinkedTokenSource(cancellationToken, timeout)
+ : new CancellationTokenSource(timeout);
+using (cts)
-var cts = linkedCts ?? timeoutCts;
-
Suggestion importance[1-10]: 6
__
Why: The suggestion correctly identifies a way to simplify the cancellation token logic, improving code readability and robustness by avoiding a nullable using variable and consolidating the creation logic.
| PR 16987 (2026-01-23) |
[high-level] Centralize duplicated release tag parsing logic
✅ Centralize duplicated release tag parsing logic
The release tag parsing and validation logic is duplicated across pre-release.yml and release.yml. This logic should be extracted into a single reusable workflow to serve as a single source of truth, improving maintainability.
.github/workflows/pre-release.yml [117-166]
run: |
TAG="${{ inputs.tag }}"
TAG="${TAG//[[:space:]]/}"
# Validate tag format: selenium-X.Y.Z or selenium-X.Y.Z-lang
if [[ ! "$TAG" =~ ^selenium-[0-9]+\.[0-9]+\.[0-9]+(-[a-z]+)?$ ]]; then
echo "::error::Invalid tag format: '$TAG'. Expected selenium-X.Y.Z or selenium-X.Y.Z-lang"
exit 1
fi
... (clipped 40 lines)
.github/workflows/release.yml [38-79]
run: |
if [ "$EVENT_NAME" == "workflow_dispatch" ]; then
TAG="$INPUT_TAG"
else
# Extract tag from branch name: release-preparation-selenium-4.28.1-ruby -> selenium-4.28.1-ruby
TAG=$(echo "$PR_HEAD_REF" | sed 's/^release-preparation-//')
fi
# Extract version
VERSION=$(echo "$TAG" | grep -oE '[0-9]+\.[0-9]+\.[0-9]+')
... (clipped 32 lines)
# In .github/workflows/pre-release.yml
jobs:
parse-tag:
steps:
- name: Parse tag
run: |
TAG="${{ inputs.tag }}"
# ... complex parsing and validation logic ...
if [[ "$PATCH" -gt 0 && -z "$LANG_SUFFIX" ]]; then
echo "::error::Patch releases must specify a language"
exit 1
fi
# ... more validation ...
# In .github/workflows/release.yml
jobs:
prepare:
steps:
- name: Extract and parse tag
run: |
# ... logic to get TAG from branch or input ...
# ... complex parsing and validation logic (duplicated) ...
if [[ "$PATCH" -gt 0 && -z "$LANG_SUFFIX" ]]; then
echo "::error::Patch releases must specify a language"
exit 1
fi
# ... more validation (duplicated) ...
# In .github/workflows/parse-release-tag.yml (new file)
on:
workflow_call:
inputs:
tag_string:
type: string
outputs:
tag:
version:
language:
jobs:
parse:
steps:
- name: Parse tag
run: |
# ... single source of truth for parsing and validation ...
echo "tag=$TAG" >> "$GITHUB_OUTPUT"
echo "version=$VERSION" >> "$GITHUB_OUTPUT"
echo "language=$LANGUAGE" >> "$GITHUB_OUTPUT"
# In .github/workflows/pre-release.yml and release.yml
jobs:
parse-tag: # or 'prepare' job
uses: ./.github/workflows/parse-release-tag.yml
with:
tag_string: ${{ inputs.tag }} # or ${{ github.event.pull_request.head.ref }}
Suggestion importance[1-10]: 8
__
Why: The suggestion correctly identifies significant and complex logic duplication for tag parsing between pre-release.yml and release.yml, and proposing a reusable workflow is an excellent architectural improvement for maintainability.
[possible issue] Add missing tag validation logic
✅ Add missing tag validation logic
Add validation to the prepare job in release.yml to ensure full releases do not have a language suffix, aligning its logic with the pre-release.yml workflow.
.github/workflows/release.yml [32-79]
- name: Extract and parse tag
id: parse
env:
EVENT_NAME: ${{ github.event_name }}
INPUT_TAG: ${{ inputs.tag }}
PR_HEAD_REF: ${{ github.event.pull_request.head.ref }}
run: |
if [ "$EVENT_NAME" == "workflow_dispatch" ]; then
TAG="$INPUT_TAG"
else
# Extract tag from branch name: release-preparation-selenium-4.28.1-ruby -> selenium-4.28.1-ruby
TAG=$(echo "$PR_HEAD_REF" | sed 's/^release-preparation-//')
fi
# Extract version
VERSION=$(echo "$TAG" | grep -oE '[0-9]+\.[0-9]+\.[0-9]+')
PATCH=$(echo "$VERSION" | cut -d. -f3)
# Extract language suffix
if [[ "$TAG" =~ -([a-z]+)$ ]]; then
LANG_SUFFIX="${BASH_REMATCH[1]}"
else
LANG_SUFFIX=""
fi
# Patch releases must have a language suffix
if [[ "$PATCH" -gt 0 && -z "$LANG_SUFFIX" ]]; then
echo "::error::Patch releases must specify a language (e.g., selenium-${VERSION}-ruby)"
exit 1
fi
+ # Full releases (X.Y.0) must not have a language suffix
+ if [[ "$PATCH" -eq 0 && -n "$LANG_SUFFIX" ]]; then
+ echo "::error::Full releases (X.Y.0) cannot have a language suffix"
+ exit 1
+ fi
+
# Validate language suffix (rake namespace aliases allow full names)
case "$LANG_SUFFIX" in
ruby|python|javascript|java|dotnet)
LANGUAGE="$LANG_SUFFIX"
;;
"")
LANGUAGE="all"
;;
*)
echo "::error::Invalid language suffix: '$LANG_SUFFIX'. Expected ruby, python, javascript, java, or dotnet"
exit 1
;;
esac
echo "tag=$TAG" >> "$GITHUB_OUTPUT"
echo "version=$VERSION" >> "$GITHUB_OUTPUT"
echo "language=$LANGUAGE" >> "$GITHUB_OUTPUT"
Suggestion importance[1-10]: 8
__
Why: This suggestion correctly identifies missing validation logic in the release.yml workflow, which could lead to inconsistent behavior compared to pre-release.yml, and proposes adding it to improve robustness.
[learned best practice] Centralize tag parsing logic
✅ Centralize tag parsing logic
Move tag parsing/validation into a single shared script (or reusable workflow/composite action) and call it from all workflows to prevent future drift and inconsistent rules.
.github/workflows/pre-release.yml [114-166]
- name: Parse tag
id: parse
shell: bash
run: |
- TAG="${{ inputs.tag }}"
- TAG="${TAG//[[:space:]]/}"
+ ./scripts/parse-release-tag.sh "${{ inputs.tag }}" >> "$GITHUB_OUTPUT"
- # Validate tag format: selenium-X.Y.Z or selenium-X.Y.Z-lang
- if [[ ! "$TAG" =~ ^selenium-[0-9]+\.[0-9]+\.[0-9]+(-[a-z]+)?$ ]]; then
- echo "::error::Invalid tag format: '$TAG'. Expected selenium-X.Y.Z or selenium-X.Y.Z-lang"
- exit 1
- fi
-
- # Extract version (strip 'selenium-' prefix and optional language suffix)
- VERSION=$(echo "$TAG" | sed -E 's/^selenium-([0-9]+\.[0-9]+\.[0-9]+)(-[a-z]+)?$/\1/')
- PATCH=$(echo "$VERSION" | cut -d. -f3)
-
- # Extract language suffix (default to 'all' if no suffix)
- if [[ "$TAG" =~ -([a-z]+)$ ]]; then
- LANG_SUFFIX="${BASH_REMATCH[1]}"
- else
- LANG_SUFFIX=""
- fi
-
- # Patch releases must have a language suffix
- if [[ "$PATCH" -gt 0 && -z "$LANG_SUFFIX" ]]; then
- echo "::error::Patch releases must specify a language (e.g., selenium-${VERSION}-ruby)"
- exit 1
- fi
-
- # Full releases (X.Y.0) must not have a language suffix
- if [[ "$PATCH" -eq 0 && -n "$LANG_SUFFIX" ]]; then
- echo "::error::Full releases (X.Y.0) cannot have a language suffix"
- exit 1
- fi
-
- # Validate language suffix (rake namespace aliases allow full names)
- case "$LANG_SUFFIX" in
- ruby|python|javascript|java|dotnet)
- LANGUAGE="$LANG_SUFFIX"
- ;;
- "")
- LANGUAGE="all"
- ;;
- *)
- echo "::error::Invalid language suffix: '$LANG_SUFFIX'. Expected ruby, python, javascript, java, or dotnet"
- exit 1
- ;;
- esac
-
- echo "tag=$TAG" >> "$GITHUB_OUTPUT"
- echo "version=$VERSION" >> "$GITHUB_OUTPUT"
- echo "language=$LANGUAGE" >> "$GITHUB_OUTPUT"
-
Suggestion importance[1-10]: 6
__
Why: Relevant best practice - Reduce duplication by centralizing shared behavior (single-source parsing/validation logic) instead of copy/pasting similar logic across workflows.
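Independent of where the shared logic should live, the tag grammar the three suggestions converge on is easy to state in one place. The following is a hypothetical Python transliteration of the bash rules quoted above (selenium-X.Y.Z with an optional -lang suffix; patch releases require a language, full releases forbid one); it is illustrative only and not part of either workflow.

```python
import re

# selenium-X.Y.Z optionally followed by -lang; group 2 isolates the patch.
TAG_RE = re.compile(r"^selenium-(\d+\.\d+\.(\d+))(?:-([a-z]+))?$")
LANGUAGES = {"ruby", "python", "javascript", "java", "dotnet"}

def parse_tag(tag):
    m = TAG_RE.match("".join(tag.split()))  # strip all whitespace first
    if not m:
        raise ValueError(f"Invalid tag format: {tag!r}")
    version, patch, suffix = m.group(1), int(m.group(2)), m.group(3)
    if patch > 0 and suffix is None:
        raise ValueError("Patch releases must specify a language")
    if patch == 0 and suffix is not None:
        raise ValueError("Full releases (X.Y.0) cannot have a language suffix")
    if suffix is not None and suffix not in LANGUAGES:
        raise ValueError(f"Invalid language suffix: {suffix!r}")
    return version, suffix or "all"

print(parse_tag("selenium-4.28.1-ruby"))  # patch release targeting one language
print(parse_tag("selenium-4.29.0"))       # full release, language defaults to 'all'
```

Keeping the regex and the two patch/suffix invariants in one function is the single-source-of-truth property the reusable-workflow suggestion is after.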
| PR 16986 (2026-01-23) |
[learned best practice] Fail fast on missing inputs
✅ Fail fast on missing inputs
Validate that $DOTNET_DIR exists and that at least one .csproj was found before running, so mis-invocations fail fast with a clear message.
dotnet/private/dotnet_format.bzl [49-59]
# Find the workspace root
WORKSPACE_ROOT="${{BUILD_WORKSPACE_DIRECTORY:-$RUNFILES_DIR/_main}}"
DOTNET_DIR="$WORKSPACE_ROOT/dotnet"
+if [[ ! -d "$DOTNET_DIR" ]]; then
+ echo "ERROR: Could not find dotnet directory at $DOTNET_DIR" >&2
+ exit 1
+fi
+
cd "$DOTNET_DIR"
echo "Running dotnet format on all projects..."
-find "$DOTNET_DIR/src" "$DOTNET_DIR/test" -name "*.csproj" 2>/dev/null | while read -r proj; do
+mapfile -t projects < <(find "$DOTNET_DIR/src" "$DOTNET_DIR/test" -name "*.csproj" 2>/dev/null)
+if [[ "${#projects[@]}" -eq 0 ]]; then
+ echo "ERROR: No .csproj files found under $DOTNET_DIR/src or $DOTNET_DIR/test" >&2
+ exit 1
+fi
+for proj in "${projects[@]}"; do
echo " Formatting $proj..."
"$DOTNET" format "$proj" || exit 1
-done || exit 1
+done
Suggestion importance[1-10]: 6
__
Why: Relevant best practice - Add explicit validation/guards at integration boundaries (derived paths/env vars) before use, and fail with clear errors.
[possible issue] Fix workspace root resolution
✅ Fix workspace root resolution
Replace the brittle workspace root discovery logic with a more robust method using the Bazel runfiles tree to prevent unexpected failures.
dotnet/private/dotnet_format.bzl [49-51]
# Find the workspace root
-WORKSPACE_ROOT="${{BUILD_WORKSPACE_DIRECTORY:-$(cd "$(dirname "$0")/../.." && pwd)}}"
+WORKSPACE_ROOT="${BUILD_WORKSPACE_DIRECTORY:-$RUNFILES_DIR/{workspace}}"
DOTNET_DIR="$WORKSPACE_ROOT/dotnet"
+if [[ ! -d "$DOTNET_DIR" ]]; then
+ echo "ERROR: Could not find dotnet/ directory at $DOTNET_DIR" >&2
+ exit 1
+fi
+
Suggestion importance[1-10]: 7
__
Why: The suggestion correctly identifies a brittle implementation for finding the workspace root, which can fail in certain Bazel execution environments, and proposes a robust, standard fix using the runfiles tree.
[possible issue] Resolve repo root via runfiles
✅ Resolve repo root via runfiles
Replace the brittle workspace root discovery logic with a more robust method using the Bazel runfiles tree to prevent unexpected failures.
dotnet/private/paket_deps.bzl [50-52]
# Find the workspace root (where dotnet/.config/dotnet-tools.json lives)
-WORKSPACE_ROOT="${{BUILD_WORKSPACE_DIRECTORY:-$(cd "$(dirname "$0")/../.." && pwd)}}"
+WORKSPACE_ROOT="${BUILD_WORKSPACE_DIRECTORY:-$RUNFILES_DIR/{workspace}}"
DOTNET_DIR="$WORKSPACE_ROOT/dotnet"
+if [[ ! -d "$DOTNET_DIR" ]]; then
+ echo "ERROR: Could not find dotnet/ directory at $DOTNET_DIR" >&2
+ exit 1
+fi
+
Suggestion importance[1-10]: 7
__
Why: The suggestion correctly identifies a brittle implementation for finding the workspace root, which can fail in certain Bazel execution environments, and proposes a robust, standard fix using the runfiles tree.
[learned best practice] Add path existence validations
✅ Add path existence validations
Validate that DOTNET_DIR exists and each project file is present before cd/formatting, and fail with a clear error message to avoid confusing runtime failures when env/runfiles assumptions don't hold.
dotnet/private/dotnet_format.bzl [50-59]
WORKSPACE_ROOT="${{BUILD_WORKSPACE_DIRECTORY:-$(cd "$(dirname "$0")/../.." && pwd)}}"
DOTNET_DIR="$WORKSPACE_ROOT/dotnet"
+
+if [[ ! -d "$DOTNET_DIR" ]]; then
+ echo "ERROR: Could not find dotnet directory at: $DOTNET_DIR" >&2
+ exit 1
+fi
cd "$DOTNET_DIR"
echo "Running dotnet format on src projects..."
for proj in src/webdriver/Selenium.WebDriver.csproj src/support/Selenium.WebDriver.Support.csproj; do
+ if [[ ! -f "$DOTNET_DIR/$proj" ]]; then
+ echo "ERROR: Missing project file: $DOTNET_DIR/$proj" >&2
+ exit 1
+ fi
echo " Formatting $proj..."
"$DOTNET" format "$DOTNET_DIR/$proj"
done
Suggestion importance[1-10]: 5
__
Why: Relevant best practice - Add explicit validation/guards at integration boundaries (e.g., environment variables, filesystem paths) before use.
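All four suggestions in this PR apply the same rule: validate a derived path before using it, and fail with a clear message rather than silently doing nothing. A hypothetical Python rendering of that guarded flow — the helper name and paths are illustrative, not from the PR:

```python
import os
from glob import glob

def find_csproj_files(workspace_root):
    """Fail fast with explicit errors instead of formatting zero projects."""
    dotnet_dir = os.path.join(workspace_root, "dotnet")
    if not os.path.isdir(dotnet_dir):
        raise FileNotFoundError(
            f"ERROR: Could not find dotnet directory at {dotnet_dir}")
    projects = []
    for sub in ("src", "test"):
        projects.extend(glob(os.path.join(dotnet_dir, sub, "**", "*.csproj"),
                             recursive=True))
    if not projects:
        raise ValueError(f"ERROR: No .csproj files found under {dotnet_dir}")
    return sorted(projects)

# Demo: a bad workspace root fails immediately with a clear message.
try:
    find_csproj_files("/nonexistent/workspace")
except FileNotFoundError as e:
    print(e)
```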
| PR 16985 (2026-01-23) |
[learned best practice] Make generated BUILD file well-formed
✅ Make generated BUILD file well-formed
When generating a placeholder BUILD.bazel, write a well-formed file (include a trailing newline and a clearer comment) to avoid odd formatting/tooling edge cases and make the skip reason explicit.
common/private/pkg_archive.bzl [2-5]
pkgutil = repository_ctx.which("pkgutil")
if not pkgutil:
- repository_ctx.file("BUILD.bazel", "# pkg_archive: skipped (pkgutil not available on this platform)")
+ repository_ctx.file(
+ "BUILD.bazel",
+ "# pkg_archive: skipped because pkgutil is not available on this platform\n",
+ )
return
Suggestion importance[1-10]: 5
__
Why: Relevant best practice - Add explicit validation/guards at integration boundaries by ensuring generated files are well-formed and unambiguous when skipping due to missing external tools.
| PR 16981 (2026-01-22) |
[general] Fix typo in public method parameter
✅ Fix typo in public method parameter
Correct the typo in the parameter name desriptor to descriptor in the public SetPermissionAsync method.
dotnet/src/webdriver/BiDi/Permissions/PermissionsModule.cs [31]
-public async Task<SetPermissionResult> SetPermissionAsync(PermissionDescriptor desriptor, PermissionState state, string origin, SetPermissionOptions? options = null)
+public async Task<SetPermissionResult> SetPermissionAsync(PermissionDescriptor descriptor, PermissionState state, string origin, SetPermissionOptions? options = null)
Suggestion importance[1-10]: 6
__
Why: The suggestion correctly identifies a typo in the desriptor parameter of the public SetPermissionAsync method, which improves API clarity and code quality.
| PR 16980 (2026-01-22) |
[possible issue] Fix version selection to respect prerelease flag
✅ Fix version selection to respect prerelease flag
In the choose_version function, remove the fallback logic that ignores the allow_prerelease flag to prevent unintentional selection of prerelease versions. Instead, raise an error if no suitable versions are found.
scripts/update_docfx.py [26-51]
def choose_version(versions, allow_prerelease, explicit_version=None):
if explicit_version:
return explicit_version
parsed = []
for v in versions:
try:
pv = Version(v)
except InvalidVersion:
continue
if not allow_prerelease and pv.is_prerelease:
continue
parsed.append((pv, v))
if not parsed:
- # Fall back to any parseable version.
- for v in versions:
- try:
- parsed.append((Version(v), v))
- except InvalidVersion:
- continue
-
- if not parsed:
- raise ValueError("No parseable DocFX versions found in NuGet index")
+ if allow_prerelease:
+ raise ValueError("No parseable DocFX versions found in NuGet index")
+ else:
+ raise ValueError("No stable DocFX versions found in NuGet index. Use --allow-prerelease to include them.")
return max(parsed, key=lambda item: item[0])[1]
Suggestion importance[1-10]: 7
__
Why: The suggestion correctly identifies a logic flaw where the allow_prerelease flag is ignored in the fallback path, potentially leading to an unintended prerelease version being selected.
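A minimal runnable sketch of the corrected behavior — no fallback path, so the allow_prerelease flag is always honored. A toy parser stands in for packaging.version.Version (the toy treats any -suffix as a prerelease and ignores it when ordering, which the real library does not):

```python
import re

def parse(v):
    # Returns (sort_key, is_prerelease) or None for unparseable versions.
    m = re.match(r"^(\d+)\.(\d+)\.(\d+)(-.+)?$", v)
    if not m:
        return None
    key = (int(m.group(1)), int(m.group(2)), int(m.group(3)))
    return key, m.group(4) is not None

def choose_version(versions, allow_prerelease=False):
    parsed = []
    for v in versions:
        p = parse(v)
        if p is None:
            continue
        key, prerelease = p
        if prerelease and not allow_prerelease:
            continue
        parsed.append((key, v))
    if not parsed:
        # Fail loudly instead of silently falling back to a prerelease.
        raise ValueError("No suitable versions found")
    return max(parsed)[1]

print(choose_version(["2.77.0", "2.78.0-beta", "2.76.1"]))               # 2.77.0
print(choose_version(["2.78.0-beta", "2.77.0"], allow_prerelease=True))  # 2.78.0-beta
```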
[possible issue] Check explicit version validity
✅ Check explicit version validity
Before returning an explicit_version, validate that it exists in the list of available versions from the NuGet index to fail early with a clear error.
scripts/update_docfx.py [27-28]
if explicit_version:
- return explicit_version
+ if explicit_version in versions:
+ return explicit_version
+ else:
+ raise ValueError(f"Explicit version {explicit_version!r} not found in NuGet index")
Suggestion importance[1-10]: 7
__
Why: This is a valuable suggestion for improving robustness by validating user input early, which provides a clearer error message than letting the script fail later during the download phase.
[learned best practice] Validate CLI/env inputs before use
✅ Validate CLI/env inputs before use
Trim and validate --version/--output and require/validate BUILD_WORKSPACE_DIRECTORY (or explicitly define a fallback) so the script doesn't write to an unexpected relative path or accept invalid versions.
scripts/update_docfx.py [126-135]
-version = choose_version(versions, args.allow_prerelease, args.version)
+explicit_version = args.version.strip() if args.version else None
+if explicit_version:
+ try:
+ Version(explicit_version)
+ except InvalidVersion as e:
+ raise ValueError(f"Invalid --version: {explicit_version}") from e
+
+version = choose_version(versions, args.allow_prerelease, explicit_version)
nupkg_url = NUGET_NUPKG_URL.format(version=version)
sha256 = sha256_of_url(nupkg_url)
-output_path = Path(args.output)
+output_arg = (args.output or "").strip()
+if not output_arg:
+ raise ValueError("--output must be a non-empty path")
+output_path = Path(output_arg)
if not output_path.is_absolute():
- workspace_dir = os.environ.get("BUILD_WORKSPACE_DIRECTORY")
- if workspace_dir:
- output_path = Path(workspace_dir) / output_path
+ workspace_dir = (os.environ.get("BUILD_WORKSPACE_DIRECTORY") or "").strip()
+ if not workspace_dir:
+ raise EnvironmentError("BUILD_WORKSPACE_DIRECTORY is required when --output is a relative path")
+ output_path = Path(workspace_dir) / output_path
output_path.write_text(render_docfx_repo(version, sha256))
Suggestion importance[1-10]: 6
__
Why: Relevant best practice - Add explicit validation and availability guards at integration boundaries (e.g., environment variables, CLI inputs, network calls) before use.
| PR 16979 (2026-01-22) |
[possible issue] Include nested task files
✅ Include nested task files
Update the glob in the rakefile filegroup to be recursive ("rake_tasks/**/*.rake") to prevent future build failures if task files are moved into subdirectories.
filegroup(
name = "rakefile",
srcs = [
"Rakefile",
- ] + glob(["rake_tasks/*.rake", "rake_tasks/*.rb"]),
+ ] + glob(["rake_tasks/**/*.rake", "rake_tasks/**/*.rb"]),
visibility = ["//rb:__subpackages__"],
)
Suggestion importance[1-10]: 6
__
Why: The suggestion correctly identifies that the glob pattern is not recursive and proposes a change to rake_tasks/**/*.rake to make the build more robust against future file organization changes.
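A quick way to see the difference between the two glob patterns; Python's glob has broadly the same single-level `*` vs recursive `**` semantics as Bazel's glob, so this hypothetical layout illustrates why the non-recursive pattern misses nested task files:

```python
import glob
import os
import tempfile

# Illustrative layout only: one top-level .rake file and one nested one.
with tempfile.TemporaryDirectory() as tmp:
    nested = os.path.join(tmp, "rake_tasks", "ci")
    os.makedirs(nested)
    open(os.path.join(tmp, "rake_tasks", "dotnet.rake"), "w").close()
    open(os.path.join(nested, "deploy.rake"), "w").close()

    flat = glob.glob(os.path.join(tmp, "rake_tasks", "*.rake"))
    deep = glob.glob(os.path.join(tmp, "rake_tasks", "**", "*.rake"),
                     recursive=True)
    print(len(flat))  # 1 — only the top-level dotnet.rake
    print(len(deep))  # 2 — ** also matches rake_tasks/ci/deploy.rake
```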
[general] Forward linter task arguments
✅ Forward linter task arguments
Modify the all:lint task to forward any command-line arguments (except for the language skip flags) to each individual language's lint task.
desc 'Run linters for all languages (skip with: ./go all:lint -rb -rust)'
task :lint do |_task, arguments|
all_langs = %w[java py rb node rust]
- skip = arguments.to_a.select { |a| a.start_with?('-') }.map { |a| a.delete_prefix('-') }
+ raw_args = arguments.to_a
+ skip_args = raw_args.select { |a| a.start_with?('-') }
+ skip = skip_args.map { |a| a.delete_prefix('-') }
+
invalid = skip - all_langs
raise "Unknown languages: #{invalid.join(', ')}. Valid: #{all_langs.join(', ')}" if invalid.any?
+ forward_args = raw_args - skip_args
langs = all_langs - skip
+
failures = []
langs.each do |lang|
puts "Linting #{lang}..."
- Rake::Task["#{lang}:lint"].invoke
+ Rake::Task["#{lang}:lint"].invoke(*forward_args)
rescue StandardError => e
failures << "#{lang}: #{e.message}"
end
+
raise "Lint failed:\n#{failures.join("\n")}" unless failures.empty?
end
Suggestion importance[1-10]: 6
__
Why: The suggestion correctly identifies that arguments are not being passed to the individual linter tasks and provides a correct implementation to forward them, improving the task's functionality.
[security] Tighten artifact file permissions
✅ Tighten artifact file permissions
Change the file permissions for the generated .zip artifacts from world-writable 0o666 to a more secure 0o644 to prevent tampering.
rake_tasks/dotnet.rake [21-24]
FileUtils.copy('bazel-bin/dotnet/release.zip', "build/dist/selenium-dotnet-#{dotnet_version}.zip")
-FileUtils.chmod(0o666, "build/dist/selenium-dotnet-#{dotnet_version}.zip")
+FileUtils.chmod(0o644, "build/dist/selenium-dotnet-#{dotnet_version}.zip")
FileUtils.copy('bazel-bin/dotnet/strongnamed.zip', "build/dist/selenium-dotnet-strongnamed-#{dotnet_version}.zip")
-FileUtils.chmod(0o666, "build/dist/selenium-dotnet-strongnamed-#{dotnet_version}.zip")
+FileUtils.chmod(0o644, "build/dist/selenium-dotnet-strongnamed-#{dotnet_version}.zip")
Suggestion importance[1-10]: 8
__
Why: The suggestion correctly identifies overly permissive file permissions (0o666) on release artifacts, which is a security risk, and proposes a more secure alternative (0o644).
[possible issue] Add HTTP timeouts for downloads
✅ Add HTTP timeouts for downloads
Add open_timeout and read_timeout to the Net::HTTP request when downloading gems to prevent the task from hanging indefinitely on network issues.
rake_tasks/ruby.rake [168-181]
+response = nil
5.times do
- response = Net::HTTP.get_response(uri)
+ response = Net::HTTP.start(uri.hostname, uri.port,
+ use_ssl: uri.scheme == 'https',
+ open_timeout: 10,
+ read_timeout: 30) do |http|
+ http.request(Net::HTTP::Get.new(uri))
+ end
break unless response.is_a?(Net::HTTPRedirection)
uri = URI(response['location'])
end
unless response.is_a?(Net::HTTPSuccess)
puts " #{key}: failed (HTTP #{response.code})"
failed << key
next
end
sha = Digest::SHA256.hexdigest(response.body)
Suggestion importance[1-10]: 7
__
Why: The suggestion correctly points out that the HTTP request lacks timeouts, which can cause the task to hang indefinitely, and proposes a robust solution using Net::HTTP.start with timeouts.
[possible issue] Validate required release inputs
✅ Validate required release inputs
Add validation to the prep_release task to ensure the required version argument is provided, preventing silent failures and providing a clear error message to the user if it is missing.
desc 'Update everything in preparation for a release'
task :prep_release, [:version, :channel] do |_task, arguments|
version = arguments[:version]
+ raise 'Missing required version: ./go prep_release[4.31.0,early-stable]' if version.nil? || version.empty?
- Rake::Task['update_browsers'].invoke(arguments[:channel])
- Rake::Task['update_cdp'].invoke(arguments[:channel])
+ channel = arguments[:channel] || 'stable'
+
+ Rake::Task['update_browsers'].invoke(channel)
+ Rake::Task['update_cdp'].invoke(channel)
Rake::Task['update_manager'].invoke
Rake::Task['java:update'].invoke
Rake::Task['authors'].invoke
Rake::Task['all:version'].invoke(version)
Rake::Task['all:changelogs'].invoke
end
Suggestion importance[1-10]: 8
__
Why: This suggestion correctly identifies that the new prep_release task lacks validation for the required version argument, which could lead to silent failures, and proposes adding an explicit check to improve robustness.
[possible issue] Handle missing settings file safely
✅ Handle missing settings file safely
Check if ~/.m2/settings.xml exists before attempting to read it to prevent the program from crashing and provide a clear warning if the file is missing.
def read_m2_user_pass
puts 'Maven environment variables not set, inspecting ~/.m2/settings.xml.'
- settings = File.read("#{Dir.home}/.m2/settings.xml")
+ settings_path = File.join(Dir.home, '.m2', 'settings.xml')
+ unless File.exist?(settings_path)
+ warn "Maven settings file not found at #{settings_path}"
+ return
+ end
+
+ settings = File.read(settings_path)
found_section = false
settings.each_line do |line|
if !found_section
found_section = line.include? '<id>central</id>'
elsif line.include?('<username>')
ENV['MAVEN_USER'] = line[%r{<username>(.*?)</username>}, 1]
elsif line.include?('<password>')
ENV['MAVEN_PASSWORD'] = line[%r{<password>(.*?)</password>}, 1]
end
break if ENV['MAVEN_PASSWORD'] && ENV['MAVEN_USER']
end
end
Suggestion importance[1-10]: 7
__
Why: The suggestion improves error handling by proactively checking for the existence of the settings.xml file, preventing a crash and providing a more user-friendly warning message.
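The same guard translates to any language. Here is a hypothetical Python version that also swaps the line-by-line scan for a real XML parse (assuming a settings.xml without the Maven XML namespace; the demo file content is made up):

```python
import os
import tempfile
import xml.etree.ElementTree as ET

def read_m2_user_pass(settings_path):
    # Guard the integration boundary: a missing file yields a warning,
    # not a crash.
    if not os.path.exists(settings_path):
        print(f"Maven settings file not found at {settings_path}")
        return None
    root = ET.parse(settings_path).getroot()
    for server in root.iter("server"):
        if server.findtext("id") == "central":
            return server.findtext("username"), server.findtext("password")
    return None

with tempfile.TemporaryDirectory() as tmp:
    demo = os.path.join(tmp, "settings.xml")
    with open(demo, "w") as f:
        f.write("<settings><servers><server><id>central</id>"
                "<username>alice</username><password>s3cret</password>"
                "</server></servers></settings>")
    print(read_m2_user_pass(demo))                       # credentials found
    print(read_m2_user_pass(os.path.join(tmp, "missing.xml")))  # guarded: None
```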
| PR 16978 (2026-01-22) |
[learned best practice] Require dependencies for OS detection
✅ Require dependencies for OS detection
Add an explicit require 'rbconfig' (or otherwise ensure RbConfig is loaded) before using it, so OS detection does not fail depending on load order.
require 'English'
require 'open3'
require 'rake'
require 'io/wait'
+require 'rbconfig'
module Bazel
def self.windows?
(RbConfig::CONFIG['host_os'] =~ /mswin|msys|mingw32/) != nil
end
Suggestion importance[1-10]: 5
__
Why: Relevant best practice - Add explicit availability guards at integration boundaries (OS/env detection) and avoid relying on undeclared dependencies.
| PR 16977 (2026-01-22) |
[possible issue] Fix documentation artifact path
✅ Fix documentation artifact path
Update the artifact-path in the GitHub workflow from docs/api/**/* to build/docs/api/**/* to correctly capture the generated documentation.
.github/workflows/update-documentation.yml [80-82]
run: ./go ${{ needs.parse.outputs.language }}:docs
-artifact-path: docs/api/**/*
+artifact-path: build/docs/api/**/*
Suggestion importance[1-10]: 9
__
Why: This is a critical fix. The PR removes the step that moved documentation from build/docs/api to docs/api, but it fails to update the artifact-path in the workflow, which would cause the job to fail to capture any artifacts.
| PR 16976 (2026-01-22) |
[possible issue] Fix execution root path resolution
✅ Fix execution root path resolution
Fix the logic for calculating the EXEC_ROOT path in the Unix shell script template. The current implementation is incorrect and will fail to find the required executables.
dotnet/private/docfx.bzl [38-41]
-# Resolve execution root from bazel-bin symlink (bin -> config -> bazel-out -> exec_root)
-EXEC_ROOT=$(cd "$BUILD_WORKSPACE_DIRECTORY/bazel-bin/../../.." && pwd -P)
+# Resolve Bazel execution root from the real bazel-bin path.
+BIN_REAL="$(cd "$BUILD_WORKSPACE_DIRECTORY/bazel-bin" && pwd -P)"
+OUTPUT_BASE="${BIN_REAL%%/bazel-out/*}"
+WS_NAME="$(basename "$BUILD_WORKSPACE_DIRECTORY")"
+EXEC_ROOT="${OUTPUT_BASE}/execroot/${WS_NAME}"
exec "$EXEC_ROOT/{dotnet}" exec \
"$EXEC_ROOT/{docfx}" {config} "$@"Suggestion importance[1-10]: 9
__
Why: The suggestion correctly identifies a critical flaw in the PR's logic for resolving the Bazel execution root, which would lead to incorrect paths and script failure, and provides a robust and correct fix.
[learned best practice] Validate external command output
✅ Validate external command output
Guard against missing/failed bazel or empty output_base so you don’t accidentally probe /external or a malformed path; only form EXTERNAL_DIR when bazel info succeeds and returns a non-empty path.
-EXTERNAL_DIR=$(cd "$REPO_ROOT" && bazel info output_base 2>/dev/null)/external
-if [[ -d "$EXTERNAL_DIR" ]]; then
- DOTNET_DIR=$(find "$EXTERNAL_DIR" -maxdepth 1 -name "rules_dotnet++dotnet+dotnet_*" -type d 2>/dev/null | head -1)
- if [[ -n "$DOTNET_DIR" && -x "$DOTNET_DIR/dotnet" ]]; then
- DOTNET="$DOTNET_DIR/dotnet"
- echo "Using bazel-managed dotnet: $DOTNET"
+OUTPUT_BASE="$(cd "$REPO_ROOT" && command -v bazel >/dev/null 2>&1 && bazel info output_base 2>/dev/null || true)"
+if [[ -n "${OUTPUT_BASE}" ]]; then
+ EXTERNAL_DIR="${OUTPUT_BASE}/external"
+ if [[ -d "$EXTERNAL_DIR" ]]; then
+ DOTNET_DIR=$(find "$EXTERNAL_DIR" -maxdepth 1 -name "rules_dotnet++dotnet+dotnet_*" -type d 2>/dev/null | head -1)
+ if [[ -n "$DOTNET_DIR" && -x "$DOTNET_DIR/dotnet" ]]; then
+ DOTNET="$DOTNET_DIR/dotnet"
+ echo "Using bazel-managed dotnet: $DOTNET"
+ fi
fi
fi
Suggestion importance[1-10]: 6
__
Why: Relevant best practice - Add validation/guards for integration-boundary inputs (e.g., environment/tool outputs) before constructing paths from them.
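The guard pattern above can be exercised locally without Bazel; `faketool-xyz` below is a hypothetical command assumed not to be installed, standing in for a missing `bazel`:

```shell
#!/usr/bin/env bash
# Sketch: only derive a path from a tool's output when the tool exists
# and produced something non-empty. "faketool-xyz" is hypothetical.
OUTPUT_BASE="$(command -v faketool-xyz >/dev/null 2>&1 && faketool-xyz info 2>/dev/null || true)"
EXTERNAL_DIR=""
if [ -n "${OUTPUT_BASE}" ]; then
  EXTERNAL_DIR="${OUTPUT_BASE}/external"
  echo "probing ${EXTERNAL_DIR}"
else
  echo "tool unavailable; skipping path derivation"
fi
```

With the tool missing, `OUTPUT_BASE` stays empty, so the script never probes a malformed path like `/external`.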
[learned best practice] Add robust path existence checks
✅ Add robust path existence checks
Add explicit existence checks and a clear error message if bazel-bin isn’t present or the derived EXEC_ROOT is empty/invalid, to avoid confusing failures when run outside Bazel or before a build.
dotnet/private/docfx.bzl [35-53]
_UNIX_TEMPLATE = """#!/usr/bin/env bash
set -euo pipefail
cd "$BUILD_WORKSPACE_DIRECTORY"
+if [ ! -e "$BUILD_WORKSPACE_DIRECTORY/bazel-bin" ]; then
+ echo "bazel-bin not found; run a Bazel build first." >&2
+ exit 1
+fi
# Resolve execution root from bazel-bin symlink (bin -> config -> bazel-out -> exec_root)
EXEC_ROOT=$(cd "$BUILD_WORKSPACE_DIRECTORY/bazel-bin/../../.." && pwd -P)
+if [ -z "${EXEC_ROOT}" ] || [ ! -d "${EXEC_ROOT}" ]; then
+ echo "Failed to resolve Bazel execution root." >&2
+ exit 1
+fi
exec "$EXEC_ROOT/{dotnet}" exec \
"$EXEC_ROOT/{docfx}" {config} "$@"
"""
_WINDOWS_TEMPLATE = """@echo off
setlocal
cd /d "%BUILD_WORKSPACE_DIRECTORY%"
+if not exist "%BUILD_WORKSPACE_DIRECTORY%\\bazel-bin" (
+ echo bazel-bin not found; run a Bazel build first. 1>&2
+ exit /b 1
+)
rem Resolve execution root from bazel-bin junction (bin -> config -> bazel-out -> exec_root)
cd /d "%BUILD_WORKSPACE_DIRECTORY%\\bazel-bin\\..\\..\\.."
set EXEC_ROOT=%CD%
+if not exist "%EXEC_ROOT%" (
+ echo Failed to resolve Bazel execution root. 1>&2
+ exit /b 1
+)
cd /d "%BUILD_WORKSPACE_DIRECTORY%"
"%EXEC_ROOT%\\{dotnet}" exec ^
"%EXEC_ROOT%\\{docfx}" {config} %*
"""Suggestion importance[1-10]: 5
__
Why: Relevant best practice - Add validation/guards at integration boundaries when deriving filesystem paths from Bazel/workspace state.
[general] Improve path resolution in Windows script
✅ Improve path resolution in Windows script
Refactor the Windows batch script to use a for loop for more robust path resolution and add endlocal to properly manage environment variables.
dotnet/private/docfx.bzl [44-53]
_WINDOWS_TEMPLATE = """@echo off
setlocal
cd /d "%BUILD_WORKSPACE_DIRECTORY%"
rem Resolve execution root from bazel-bin junction (bin -> config -> bazel-out -> exec_root)
-cd /d "%BUILD_WORKSPACE_DIRECTORY%\\bazel-bin\\..\\..\\.."
-set EXEC_ROOT=%CD%
-cd /d "%BUILD_WORKSPACE_DIRECTORY%"
+for /f "delims=" %%i in ("%BUILD_WORKSPACE_DIRECTORY%\\bazel-bin\\..\\..\\..") do set "EXEC_ROOT=%%~fi"
"%EXEC_ROOT%\\{dotnet}" exec ^
"%EXEC_ROOT%\\{docfx}" {config} %*
+endlocal
"""Suggestion importance[1-10]: 6
__
Why: This is a good suggestion that improves the Windows batch script by using a more idiomatic and robust method for path resolution and by correctly scoping environment variable changes with endlocal.
[learned best practice] Guard and validate exec-root resolution
✅ Guard and validate exec-root resolution
Avoid assuming the bazel-bin/../../.. traversal is valid; resolve via bazel info execution_root and validate it’s non-empty and exists before use.
dotnet/private/docfx.bzl [38-41]
-# Resolve execution root from bazel-bin symlink (bin -> config -> bazel-out -> exec_root)
-EXEC_ROOT=$(cd "$BUILD_WORKSPACE_DIRECTORY/bazel-bin/../../.." && pwd -P)
+EXEC_ROOT="$(cd "$BUILD_WORKSPACE_DIRECTORY" && bazel info execution_root 2>/dev/null || true)"
+if [[ -z "$EXEC_ROOT" || ! -d "$EXEC_ROOT" ]]; then
+ echo "Failed to resolve Bazel execution root" >&2
+ exit 1
+fi
exec "$EXEC_ROOT/{dotnet}" exec \
"$EXEC_ROOT/{docfx}" {config} "$@"Suggestion importance[1-10]: 6
__
Why: Relevant best practice - Add explicit validation/guards at integration boundaries (e.g., filesystem/exec-root resolution) instead of assuming paths exist.
| PR 16965 (2026-01-20) |
[possible issue] Allow artifact downloads permission
✅ Allow artifact downloads permission
Add actions: read permission to the commit job to ensure the download-artifact step can reliably fetch artifacts in restricted environments.
.github/workflows/commit-changes.yml [38-39]
permissions:
contents: write
+ actions: read
Suggestion importance[1-10]: 6
__
Why: The suggestion correctly identifies that actions: read permission might be necessary for actions/download-artifact@v4 in repositories with restricted permissions, improving the workflow's robustness and preventing silent failures.
[possible issue] Fail on missing artifact
✅ Fail on missing artifact
Remove continue-on-error: true from the artifact download step and add an explicit check for the patch file's existence to avoid masking failures and provide clearer error messages.
.github/workflows/commit-changes.yml [46-88]
- name: Download patch
uses: actions/download-artifact@v4
with:
name: ${{ inputs.artifact-name }}
- continue-on-error: true
- name: Apply and commit
id: commit
run: |
- if [ -f changes.patch ] && [ -s changes.patch ]; then
+ if [ ! -f changes.patch ]; then
+ echo "::error::Expected changes.patch artifact but it was not downloaded"
+ echo "committed=false" >> "$GITHUB_OUTPUT"
+ exit 1
+ fi
+
+ if [ -s changes.patch ]; then
if ! git apply --index changes.patch; then
echo "::error::Failed to apply patch"
echo "committed=false" >> "$GITHUB_OUTPUT"
exit 1
fi
git config --local user.email "selenium-ci@users.noreply.github.com"
git config --local user.name "Selenium CI Bot"
if ! git commit -m "$COMMIT_MESSAGE"; then
echo "::error::Failed to commit changes"
echo "committed=false" >> "$GITHUB_OUTPUT"
exit 1
fi
if [ -n "$PUSH_BRANCH" ]; then
if ! git push origin HEAD:"$PUSH_BRANCH" --force; then
echo "::error::Failed to push to $PUSH_BRANCH"
echo "committed=false" >> "$GITHUB_OUTPUT"
exit 1
fi
else
if ! git push; then
echo "::error::Failed to push"
echo "committed=false" >> "$GITHUB_OUTPUT"
exit 1
fi
fi
echo "::notice::Changes committed and pushed"
echo "committed=true" >> "$GITHUB_OUTPUT"
else
echo "::notice::No changes to commit"
echo "committed=false" >> "$GITHUB_OUTPUT"
fi
env:
COMMIT_MESSAGE: ${{ inputs.commit-message }}
PUSH_BRANCH: ${{ inputs.push-branch }}
Suggestion importance[1-10]: 7
__
Why: The suggestion correctly identifies that continue-on-error: true can mask failures, and improving the robustness of this new reusable workflow by providing clearer failure states is a valuable improvement.
[learned best practice] Validate inputs and fail fast
✅ Validate inputs and fail fast
Validate that COMMIT_MESSAGE is non-empty (and optionally trim it) and enable strict shell options to prevent unexpected behavior from empty inputs or failing commands.
.github/workflows/commit-changes.yml [51-85]
- name: Apply and commit
id: commit
run: |
+ set -euo pipefail
+
if [ -f changes.patch ] && [ -s changes.patch ]; then
+ if [ -z "${COMMIT_MESSAGE//[[:space:]]/}" ]; then
+ echo "::error::Commit message is required"
+ echo "committed=false" >> "$GITHUB_OUTPUT"
+ exit 1
+ fi
+
if ! git apply --index changes.patch; then
echo "::error::Failed to apply patch"
echo "committed=false" >> "$GITHUB_OUTPUT"
exit 1
fi
git config --local user.email "selenium-ci@users.noreply.github.com"
git config --local user.name "Selenium CI Bot"
if ! git commit -m "$COMMIT_MESSAGE"; then
echo "::error::Failed to commit changes"
echo "committed=false" >> "$GITHUB_OUTPUT"
exit 1
fi
- if [ -n "$PUSH_BRANCH" ]; then
+ if [ -n "${PUSH_BRANCH:-}" ]; then
if ! git push origin HEAD:"$PUSH_BRANCH" --force; then
echo "::error::Failed to push to $PUSH_BRANCH"
echo "committed=false" >> "$GITHUB_OUTPUT"
exit 1
fi
else
if ! git push; then
echo "::error::Failed to push"
echo "committed=false" >> "$GITHUB_OUTPUT"
exit 1
fi
fi
echo "::notice::Changes committed and pushed"
echo "committed=true" >> "$GITHUB_OUTPUT"
else
echo "::notice::No changes to commit"
echo "committed=false" >> "$GITHUB_OUTPUT"
fi
Suggestion importance[1-10]: 6
__
Why: Relevant best practice - Add explicit validation/guards at integration boundaries (workflow inputs/env) by trimming/checking presence before use.
[learned best practice] Declare required artifact permissions
✅ Declare required artifact permissions
Add actions: read to the job permissions because the workflow downloads artifacts; this avoids relying on implicit/default permissions that may differ across repos/settings.
.github/workflows/commit-changes.yml [38-39]
permissions:
contents: write
+ actions: read
Suggestion importance[1-10]: 6
__
Why: Relevant best practice - Request least privileges explicitly; ensure required permissions (like artifact read) are declared for jobs that download artifacts.
[learned best practice] Avoid brittle multiline CLI quoting
✅ Avoid brittle multiline CLI quoting
Avoid embedding a multi-line --body string directly in the command; write the body via a heredoc and pass it using --body-file to prevent quoting/indentation issues.
.github/workflows/pin-browsers.yml [42-54]
run: |
existing=$(gh pr list --head pinned-browser-updates --json number --jq '.[0].number // empty')
if [ -n "$existing" ]; then
echo "::notice::PR #$existing already exists"
else
+ cat > pr-body.txt <<'EOF'
+This is an automated pull request to update pinned browsers and drivers
+
+Merge after verify the new browser versions properly passing the tests and no bugs need to be filed
+EOF
gh pr create \
--head pinned-browser-updates \
--base trunk \
--title "[dotnet][rb][java][js][py] Automated Browser Version Update" \
- --body "This is an automated pull request to update pinned browsers and drivers
-
- Merge after verify the new browser versions properly passing the tests and no bugs need to be filed"
+ --body-file pr-body.txt
fi
Suggestion importance[1-10]: 5
__
Why: Relevant best practice - Prefer robust input handling at integration boundaries; avoid brittle multi-line shell quoting when building API/CLI payloads.
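The heredoc-to-file pattern recommended above can be sketched in isolation (hypothetical file name and body text, not the real workflow's):

```shell
#!/usr/bin/env bash
# Sketch: write a multi-line CLI payload to a file instead of embedding
# it in a quoted argument, so indentation and quotes survive verbatim.
body_file=$(mktemp)
cat > "$body_file" <<'EOF'
This is an automated pull request to update pinned browsers and drivers

Merge after verifying the new browser versions pass the tests
EOF
lines=$(wc -l < "$body_file")
echo "body has $lines lines"
# A real invocation would then pass: --body-file "$body_file"
```

The quoted `'EOF'` delimiter also disables variable expansion inside the body, which removes a whole class of quoting surprises.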
[possible issue] Fail on patch application errors
✅ Fail on patch application errors
Add set -e to the beginning of the run block to ensure the script exits immediately if any command, such as git apply, fails.
.github/workflows/commit-changes.yml [54-69]
-if [ -f changes.patch ] && [ -s changes.patch ]; then
- git apply --index changes.patch
- git config --local user.email "selenium-ci@users.noreply.github.com"
- git config --local user.name "Selenium CI Bot"
- git commit -m "$COMMIT_MESSAGE"
- […]
-else
- echo "::notice::No changes to commit"
- echo "committed=false" >> "$GITHUB_OUTPUT"
-fi
+run: |
+ set -e
+ if [ -f changes.patch ] && [ -s changes.patch ]; then
+ git apply --index changes.patch
+ […]
+ else
+ echo "::notice::No changes to commit"
+ echo "committed=false" >> "$GITHUB_OUTPUT"
fi
Suggestion importance[1-10]: 8
__
Why: The suggestion correctly identifies that the script would silently continue if git apply fails; adding set -e is a crucial change to ensure the job fails immediately on error.
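The effect of `set -e` described above is easy to demonstrate with two throwaway subshells (illustrative only):

```shell
#!/usr/bin/env bash
# Sketch: without set -e a script marches past a failed command;
# with it, execution stops at the first failure.
out1=$(bash -c 'false; echo "kept going"')
out2=$(bash -c 'set -e; false; echo "never printed"' || echo "stopped early")
echo "$out1 / $out2"
```

The first subshell prints its message despite `false` failing; the second exits at `false`, so the caller sees the fallback text instead.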
[learned best practice] Validate workflow inputs before use
✅ Validate workflow inputs before use
Validate/sanitize COMMIT_MESSAGE and PUSH_BRANCH (non-empty, trimmed, and restricted charset) before using them, to avoid unexpected behavior or refspec injection.
.github/workflows/commit-changes.yml [51-72]
- name: Apply and commit
id: commit
run: |
+ set -euo pipefail
+
+ COMMIT_MESSAGE="${COMMIT_MESSAGE#"${COMMIT_MESSAGE%%[![:space:]]*}"}"
+ COMMIT_MESSAGE="${COMMIT_MESSAGE%"${COMMIT_MESSAGE##*[![:space:]]}"}"
+ if [ -z "$COMMIT_MESSAGE" ]; then
+ echo "::error::commit-message must be non-empty"
+ exit 1
+ fi
+
+ if [ -n "${PUSH_BRANCH:-}" ] && ! [[ "$PUSH_BRANCH" =~ ^[A-Za-z0-9._/-]+$ ]]; then
+ echo "::error::push-branch contains invalid characters"
+ exit 1
+ fi
+
if [ -f changes.patch ] && [ -s changes.patch ]; then
git apply --index changes.patch
git config --local user.email "selenium-ci@users.noreply.github.com"
git config --local user.name "Selenium CI Bot"
git commit -m "$COMMIT_MESSAGE"
- if [ -n "$PUSH_BRANCH" ]; then
+ if [ -n "${PUSH_BRANCH:-}" ]; then
git push origin HEAD:"$PUSH_BRANCH" --force
else
git push
fi
echo "::notice::Changes committed and pushed"
echo "committed=true" >> "$GITHUB_OUTPUT"
else
echo "::notice::No changes to commit"
echo "committed=false" >> "$GITHUB_OUTPUT"
fi
env:
COMMIT_MESSAGE: ${{ inputs.commit-message }}
PUSH_BRANCH: ${{ inputs.push-branch }}
Suggestion importance[1-10]: 6
__
Why: Relevant best practice - Add explicit validation/guards at integration boundaries (workflow inputs/env) before using them in shell commands.
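The pure-bash whitespace check used in these validation suggestions can be sketched on its own (hypothetical variable values):

```shell
#!/usr/bin/env bash
# Sketch: ${VAR//[[:space:]]/} deletes all whitespace, so a -z test on
# the result rejects inputs that are empty or whitespace-only.
COMMIT_MESSAGE="   "
if [ -z "${COMMIT_MESSAGE//[[:space:]]/}" ]; then
  verdict="rejected: whitespace-only commit message"
else
  verdict="accepted"
fi
echo "$verdict"
```

Note this parameter expansion is a bashism; a POSIX `sh` runner would need `tr -d '[:space:]'` instead.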
[learned best practice] Guard against null CLI output
✅ Guard against null CLI output
Make the gh pr list parsing robust by returning an empty string when no PR exists (instead of null) before checking -n.
.github/workflows/pin-browsers.yml [39-54]
- name: Create Pull Request
env:
GH_TOKEN: ${{ secrets.SELENIUM_CI_TOKEN }}
run: |
- existing=$(gh pr list --head pinned-browser-updates --json number --jq '.[0].number')
+ existing=$(gh pr list --head pinned-browser-updates --json number --jq '.[0].number // empty')
if [ -n "$existing" ]; then
echo "::notice::PR #$existing already exists"
else
gh pr create \
--head pinned-browser-updates \
--base trunk \
--title "[dotnet][rb][java][js][py] Automated Browser Version Update" \
--body "This is an automated pull request to update pinned browsers and drivers
Merge after verify the new browser versions properly passing the tests and no bugs need to be filed"
fi
Suggestion importance[1-10]: 5
__
Why: Relevant best practice - Add explicit validation/guards at integration boundaries (CLI/JSON outputs) before using values.
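The `// empty` fix matters because `jq` prints the literal string `null` for a missing array element, and `[ -n ... ]` happily accepts it; a minimal simulation without `jq` installed:

```shell
#!/usr/bin/env bash
# Sketch: the string "null" is non-empty, so a bare -n test misreads it
# as an existing PR; "// empty" yields a true empty string instead.
check1="" check2=""
existing="null"        # what `.[0].number` yields when no PR matches
if [ -n "$existing" ]; then
  check1="bug: the literal string null passes the -n test"
fi
existing=""            # what `.[0].number // empty` yields instead
if [ -z "$existing" ]; then
  check2="ok: empty output is treated as no existing PR"
fi
echo "$check1"
echo "$check2"
```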
| PR 16960 (2026-01-20) |
[possible issue] Use correct release commit SHA
✅ Use correct release commit SHA
Use the pull request's merge_commit_sha for PR-triggered runs to ensure the release tag points to the correct commit, falling back to github.sha for other event types.
.github/workflows/release.yml [109]
-commit: "${{ github.sha }}"
+commit: "${{ github.event_name == 'pull_request' && github.event.pull_request.merge_commit_sha || github.sha }}"Suggestion importance[1-10]: 9
__
Why: The suggestion correctly identifies that github.sha points to the PR's head commit, not the merge commit, which is critical for creating a release tag that accurately reflects the state of the default branch after the merge.
[incremental [*]] Make release creation idempotent
✅ Make release creation idempotent
Set allowUpdates: true to prevent workflow failures on re-runs if a release already exists, making the job idempotent.
.github/workflows/release.yml [103]
-allowUpdates: ${{ github.run_attempt > 1 }}
+allowUpdates: true
Suggestion importance[1-10]: 8
__
Why: The suggestion correctly points out a potential failure scenario on re-runs and proposes a change to make the release creation step more robust and idempotent.
[incremental [*]] Avoid persisting Git credentials
✅ Avoid persisting Git credentials
Set persist-credentials: false in the checkout action and only authenticate when needed to reduce the exposure of secrets.
.github/workflows/release.yml [82-84]
with:
token: ${{ secrets.SELENIUM_CI_TOKEN }}
- persist-credentials: true
+ persist-credentials: false
Suggestion importance[1-10]: 7
__
Why: The suggestion correctly identifies a security best practice to limit credential exposure, which is relevant as a later step performs a git push.
[possible issue] Use consistent credentials for tag deletion
✅ Use consistent credentials for tag deletion
Configure the actions/checkout step to use the SELENIUM_CI_TOKEN and persist credentials. This ensures both gh and git commands have consistent and sufficient permissions to prevent failures when deleting tags.
.github/workflows/release.yml [80-97]
- name: Checkout repo
uses: actions/checkout@v4
+ with:
+ token: ${{ secrets.SELENIUM_CI_TOKEN }}
+ persist-credentials: true
- name: Delete nightly release and tag
env:
GH_TOKEN: ${{ secrets.SELENIUM_CI_TOKEN }}
run: |
if gh release view nightly >/dev/null 2>&1; then
gh release delete nightly --yes
fi
if git ls-remote --tags origin refs/tags/nightly | grep -q nightly; then
git push origin --delete refs/tags/nightly
fi
Suggestion importance[1-10]: 9
__
Why: The suggestion correctly identifies a potential permissions issue where git push --delete might fail because it uses the default GITHUB_TOKEN, unlike the gh commands which use a privileged SELENIUM_CI_TOKEN. This change is critical for the release workflow's reliability.
[incremental [*]] Restrict release updates to reruns
✅ Restrict release updates to reruns
Change allowUpdates: true to allowUpdates: ${{ github.run_attempt > 1 }} to restrict release updates to only occur on workflow reruns.
.github/workflows/release.yml [101]
-allowUpdates: true
+allowUpdates: ${{ github.run_attempt > 1 }}
Suggestion importance[1-10]: 8
__
Why: This is a valuable security enhancement that prevents accidental overwrites of a release, only allowing updates on job reruns, which is a safer operational practice.
[possible issue] Download all packages for release
✅ Download all packages for release
Modify the github-release job to download all release packages into the build/dist directory by using a single download-artifact step without a specific artifact name.
.github/workflows/release.yml [82-90]
-- name: Download Java packages
+- name: Download all release packages
uses: actions/download-artifact@v4
with:
- name: release-packages-java
-- name: Download .NET packages
- uses: actions/download-artifact@v4
- with:
- name: release-packages-dotnet
path: build/dist
Suggestion importance[1-10]: 9
__
Why: The suggestion correctly identifies a critical bug where the release would be incomplete, as it only downloads a subset of packages and places one in the wrong directory. The proposed fix is correct and efficient.
[learned best practice] Make release creation rerunnable
✅ Make release creation rerunnable
Make the release step safe to re-run by allowing updates or skipping when the release already exists, to avoid failures when the tag/release is present from a partial run.
.github/workflows/release.yml [101-109]
- name: Create GitHub release
uses: ncipollo/release-action@v1
with:
artifacts: "build/dist/*.*"
bodyFile: "scripts/github-actions/release_header.md"
generateReleaseNotes: true
name: "Selenium ${{ needs.prepare.outputs.version }}"
tag: "${{ needs.prepare.outputs.tag }}"
commit: "${{ github.sha }}"
+ allowUpdates: true
+ omitBodyDuringUpdate: true
+ omitNameDuringUpdate: true
Suggestion importance[1-10]: 5
__
Why: Relevant best practice - Make lifecycle-sensitive automation robust and idempotent so reruns/retries do not fail due to already-created resources.
| PR 16957 (2026-01-19) |
[learned best practice] Validate external data before use
✅ Validate external data before use
Guard against missing platform entries from the external API by providing a default and raising a descriptive error instead of letting StopIteration propagate.
scripts/pinned_browsers.py [53]
-url = next(d["url"] for d in drivers if d["platform"] == "linux64")
+url = next((d["url"] for d in drivers if d.get("platform") == "linux64"), None)
+if not url:
+ raise ValueError("Missing chromedriver download URL for platform 'linux64'")
Suggestion importance[1-10]: 5
__
Why: Relevant best practice - Add explicit validation and availability guards at integration boundaries by checking presence/type and failing with clear errors before using external inputs.
| PR 16955 (2026-01-19) |
[possible issue] Fix artifact path nesting
✅ Fix artifact path nesting
In .github/workflows/update-documentation.yml, change the download path for the documentation artifact from docs/api to . to prevent creating a nested and incorrect directory structure.
.github/workflows/update-documentation.yml [94-98]
- name: Download documentation
uses: actions/download-artifact@v4
with:
name: documentation
- path: docs/api
+ path: .
Suggestion importance[1-10]: 9
__
Why: This is a critical bug fix. The upload-artifact action preserves the full path, so downloading to a subdirectory creates a nested, incorrect path structure (docs/api/docs/api/...), which would break the documentation update logic.
[possible issue] Prevent nested dist directories
✅ Prevent nested dist directories
In .github/workflows/nightly.yml, change the download path for the nightly-grid artifact from build/dist to . to avoid creating a nested directory structure and ensure the release step finds the files.
.github/workflows/nightly.yml [191-195]
- name: Download grid packages
uses: actions/download-artifact@v4
with:
name: nightly-grid
- path: build/dist
+ path: .
Suggestion importance[1-10]: 9
__
Why: This is a critical bug fix. The upload-artifact action preserves the full path, so downloading to build/dist would create a nested build/dist/build/dist/ path. This would cause the subsequent release step to fail as it would not find any files to upload.
[possible issue] Avoid doubled output directories
✅ Avoid doubled output directories
In .github/workflows/release.yml, change the download path for the release-packages artifact from build/dist to . to prevent a nested directory issue that would cause the release creation to fail.
.github/workflows/release.yml [72-76]
- name: Download release packages
uses: actions/download-artifact@v4
with:
name: release-packages
- path: build/dist
+ path: .
Suggestion importance[1-10]: 9
__
Why: This is a critical bug fix. The upload-artifact action preserves the full path, so downloading to build/dist would create a nested build/dist/build/dist/ path. This would cause the ncipollo/release-action to fail as it would not find the packages for the draft release.
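The doubled-path failure mode behind these three fixes can be simulated locally (hypothetical directory and file names; `cp -r` stands in for the artifact download re-rooting stored paths):

```shell
#!/usr/bin/env bash
# Sketch: an artifact whose entries already begin with "build/dist/",
# extracted into "build/dist", ends up nested twice.
demo=$(mktemp -d)
mkdir -p "$demo/artifact/build/dist"
echo pkg > "$demo/artifact/build/dist/selenium.jar"
mkdir -p "$demo/workspace/build/dist"
cp -r "$demo/artifact/build" "$demo/workspace/build/dist/"
nested=$(cd "$demo/workspace" && find . -name selenium.jar)
echo "$nested"
```

The file lands at `build/dist/build/dist/selenium.jar`, exactly where a release step looking in `build/dist/*.*` will never find it.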
[incremental [*]] Capture complete patch diffs
✅ Capture complete patch diffs
Modify the Save git diff step to capture both staged and unstaged changes, handle binary files, and avoid creating empty patch files. Update the run command to git diff --binary > changes.patch; git diff --cached --binary >> changes.patch; [ -s changes.patch ] || rm -f changes.patch.
.github/workflows/bazel.yml [254-258]
- name: Save git diff
if: always() && inputs.artifact-name != '' && inputs.artifact-path == ''
- run: git diff > changes.patch
+ run: |
+ git diff --binary > changes.patch
+ git diff --cached --binary >> changes.patch
+ [ -s changes.patch ] || rm -f changes.patch
- name: Upload artifact
if: always() && inputs.artifact-name != ''
Suggestion importance[1-10]: 8
__
Why: The suggestion correctly identifies that git diff alone is insufficient as it misses staged changes, which are used by downstream jobs (git apply --index). The proposed change to include git diff --cached and --binary is a crucial fix for the correctness of the new reusable workflow.
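The staged-vs-unstaged distinction the fix relies on can be verified in a throwaway repository, assuming `git` is available (hypothetical file name):

```shell
#!/usr/bin/env bash
# Sketch: a staged change is invisible to plain `git diff` (worktree vs
# index) but appears in `git diff --cached` (index vs HEAD).
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=ci@example.com -c user.name="Selenium CI Bot" \
  commit -q --allow-empty -m init
echo change > file.txt
git add file.txt                      # change is now staged only
plain=$(git diff)                     # empty: misses the staged file
staged=$(git diff --cached --binary)  # non-empty: captures it
[ -z "$plain" ] && echo "plain diff: empty"
[ -n "$staged" ] && echo "cached diff: contains the staged file"
```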
[possible issue] Upload artifacts even on failure
✅ Upload artifacts even on failure
Add always() to the conditions for saving and uploading artifacts in bazel.yml to ensure they run even if the main run step fails.
.github/workflows/bazel.yml [254-264]
- name: Save git diff
- if: inputs.artifact-name != '' && inputs.artifact-path == ''
+ if: always() && inputs.artifact-name != '' && inputs.artifact-path == ''
run: git diff > changes.patch
- name: Upload artifact
- if: inputs.artifact-name != ''
+ if: always() && inputs.artifact-name != ''
uses: actions/upload-artifact@v4
with:
name: ${{ inputs.artifact-name }}
path: ${{ inputs.artifact-path || 'changes.patch' }}
retention-days: 6
if-no-files-found: ${{ inputs.artifact-path != '' && 'error' || 'warn' }}
Suggestion importance[1-10]: 9
__
Why: The suggestion correctly identifies a critical flaw where artifacts are not generated if the main run step fails, which breaks several dependent jobs like auto-formatting. Adding always() is essential for the new workflow design to function as intended.
[possible issue] Handle empty bazel targets file correctly
✅ Handle empty bazel targets file correctly
In the read-targets job, check if bazel-targets.txt is empty. If so, set the targets output to an empty string to avoid it containing a newline, which could cause issues in downstream jobs.
.github/workflows/ci.yml [33-51]
read-targets:
name: Read Targets
needs: check
runs-on: ubuntu-latest
outputs:
targets: ${{ steps.read.outputs.targets }}
steps:
- name: Download targets
uses: actions/download-artifact@v4
with:
name: check-targets
- name: Read targets
id: read
run: |
- {
- echo "targets<<EOF"
- cat bazel-targets.txt
- echo "EOF"
- } >> "$GITHUB_OUTPUT"
+ if [ -s bazel-targets.txt ]; then
+ {
+ echo "targets<<EOF"
+ cat bazel-targets.txt
+ echo "EOF"
+ } >> "$GITHUB_OUTPUT"
+ else
+ echo "targets=" >> "$GITHUB_OUTPUT"
fi
Suggestion importance[1-10]: 8
__
Why: The suggestion correctly identifies that an empty bazel-targets.txt file leads to a job output with a newline, which can cause issues in downstream jobs. The proposed fix ensures the output is a truly empty string, which is a critical improvement for the correctness of the workflow logic.
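The stray-newline effect of `cat`-ing an empty file into a heredoc-delimited output can be reproduced with `mktemp` standing in for `bazel-targets.txt`:

```shell
#!/usr/bin/env bash
# Sketch: without a -s guard, the emitted value contains a blank line;
# with it, the output is a genuinely empty string.
targets_file=$(mktemp)   # empty stand-in for bazel-targets.txt
unguarded=$(printf 'targets<<EOF\n%s\nEOF\n' "$(cat "$targets_file")")
if [ -s "$targets_file" ]; then
  guarded=$(printf 'targets<<EOF\n%s\nEOF\n' "$(cat "$targets_file")")
else
  guarded="targets="
fi
echo "$guarded"
```

Downstream consumers that test the output with `-n` or iterate over its lines behave differently for the two forms.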
[possible issue] Prevent job failure if no changes
✅ Prevent job failure if no changes
In the push-rust-version job, add a condition to check if changes.patch is non-empty before attempting to apply it and commit, preventing job failure when there are no changes.
.github/workflows/pre-release.yml [43-66]
push-rust-version:
name: Push Rust Version
needs: generate-rust-version
runs-on: ubuntu-latest
steps:
- name: Checkout repo
uses: actions/checkout@v4
with:
token: ${{ secrets.SELENIUM_CI_TOKEN }}
fetch-depth: 0
- name: Download rust version patch
uses: actions/download-artifact@v4
with:
name: rust-version
- name: Apply patch
- run: git apply --index changes.patch
+ run: |
+ if [ -s changes.patch ]; then
+ git apply --index changes.patch
+ echo "CHANGES_APPLIED=true" >> "$GITHUB_ENV"
+ fi
- name: Prep git
+ if: env.CHANGES_APPLIED == 'true'
run: |
git config --local user.email "selenium-ci@users.noreply.github.com"
git config --local user.name "Selenium CI Bot"
- name: Push changes
+ if: env.CHANGES_APPLIED == 'true'
run: |
git commit -m "update selenium manager version and rust changelog"
git push origin HEAD:rust-release-${{ inputs.version }} --force
Suggestion importance[1-10]: 7
__
Why: This is a valid improvement for workflow robustness, preventing a potential failure if no version changes are generated. This pattern of checking for a non-empty patch is used elsewhere in the PR, making this a consistent and valuable addition.
[general] Fetch full git history
✅ Fetch full git history
Add fetch-depth: 0 to the actions/checkout step in the bazel.yml reusable workflow to ensure the full git history is available for subsequent git operations.
.github/workflows/bazel.yml [98-101]
- name: Checkout source tree
uses: actions/checkout@v4
with:
ref: ${{ inputs.ref || github.ref }}
+ fetch-depth: 0
Suggestion importance[1-10]: 8
__
Why: This is a critical fix for the reusable workflow, as several calling workflows rely on git diff or commit ranges which require the full git history. Without fetch-depth: 0, these operations would fail or produce incorrect results.
[learned best practice] Avoid swallowing patch-apply errors
✅ Avoid swallowing patch-apply errors
Replace the unconditional || echo ... with an explicit file existence/size check so real git apply failures fail the job instead of being silently ignored.
.github/workflows/release.yml [189-190]
- name: Apply patch
- run: git apply --index changes.patch || echo "No changes to apply"
+ shell: bash
+ run: |
+ if [ ! -f changes.patch ] || [ ! -s changes.patch ]; then
+ echo "No changes to apply"
+ exit 0
+ fi
git apply --index changes.patch
Suggestion importance[1-10]: 5
__
Why: Relevant best practice - Add explicit validation/availability guards at integration boundaries (e.g., downloaded artifacts/files) and avoid swallowing errors from critical steps.
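The masking behavior of `cmd || echo ...` can be shown with a stand-in function (`apply_patch` below is hypothetical, representing a failing `git apply`):

```shell
#!/usr/bin/env bash
# Sketch: "||" converts any failure into the fallback branch, so the
# step's exit status is the echo's 0, not the command's 1.
apply_patch() { return 1; }   # hypothetical stand-in for `git apply`
apply_patch || echo "No changes to apply"
masked_status=$?
echo "status seen by the job: $masked_status"
```

A CI runner checking the step's exit code sees success, which is exactly why the accepted fix tests for the patch file explicitly instead.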
| PR 16954 (2026-01-19) |
[general] Use standard braces for class definition
✅ Use standard braces for class definition
Replace the semicolon with curly braces {} for the empty class definition of ContextSetCacheBehaviorOptions to improve readability and align with conventional C# style.
dotnet/src/webdriver/BiDi/Network/SetCacheBehaviorCommand.cs [41]
-public sealed class ContextSetCacheBehaviorOptions : CommandOptions;
+public sealed class ContextSetCacheBehaviorOptions : CommandOptions
+{
+}
Suggestion importance[1-10]: 3
__
Why: This is a valid stylistic suggestion that improves code consistency and readability by adhering to common C# conventions for empty class definitions.
| PR 16952 (2026-01-19) |
[possible issue] Prevent undefined state on timeout
✅ Prevent undefined state on timeout
Prevent a potential NameError in the java:publish_deployment task by initializing the state variable to nil before the polling loop.
encoded_id = URI.encode_www_form_component(deployment_id.strip)
status = {}
+state = nil
max_attempts = 60
delay = 5
max_attempts.times do |attempt|
status = sonatype_api_post("https://central.sonatype.com/api/v1/publisher/status?id=#{encoded_id}", token)
state = status['deploymentState']
puts "Deployment state: #{state}"
case state
when 'VALIDATED', 'PUBLISHED' then break
when 'FAILED' then raise "Deployment failed: #{status['errors']}"
end
sleep(delay)
rescue StandardError => e
warn "API error (attempt #{attempt + 1}/#{max_attempts}): #{e.message}"
sleep(delay) unless attempt == max_attempts - 1
end
return if status['deploymentState'] == 'PUBLISHED'
-raise "Timed out after #{(max_attempts * delay) / 60} minutes waiting for validation" unless state == 'VALIDATED'
+final_state = status['deploymentState'] || state
+raise "Timed out after #{(max_attempts * delay) / 60} minutes waiting for validation" unless final_state == 'VALIDATED'Suggestion importance[1-10]: 7
__
Why: This suggestion correctly identifies a potential NameError if all API calls in the polling loop fail, as the state variable would be uninitialized. The fix is correct and prevents a crash in a plausible error scenario.
[possible issue] Initialize variable to prevent NameError
✅ Initialize variable to prevent NameError
Initialize the status variable to an empty hash before the 60.times loop to prevent a NameError if the sonatype_api_post call fails.
+status = {}
60.times do
status = sonatype_api_post("https://central.sonatype.com/api/v1/publisher/status?id=#{deployment_id}", token)
state = status['deploymentState']
puts "Deployment state: #{state}"
case state
when 'VALIDATED' then break
when 'PUBLISHED' then exit(0)
when 'FAILED' then raise "Deployment failed: #{status['errors']}"
end
sleep(5)
end
raise 'Timed out waiting for validation' unless status['deploymentState'] == 'VALIDATED'
expected = java_release_targets.size
actual = status['purls']&.size || 0
raise "Expected #{expected} packages but found #{actual}" if actual != expected
Suggestion importance[1-10]: 6
__
Why: This suggestion correctly identifies a potential NameError if the sonatype_api_post call fails within the loop, preventing status from being initialized. Initializing status before the loop is a valid fix for this edge case.
[learned best practice] Validate auth and harden HTTP calls
✅ Validate auth and harden HTTP calls
Guard against blank tokens and add timeouts when calling the external API; also ensure the authorization scheme matches the credential you’re passing (e.g., don’t label a Base64 basic token as Bearer).
def sonatype_api_post(url, token)
+ token = token&.strip
+ raise 'Sonatype token required' if token.nil? || token.empty?
+
uri = URI(url)
req = Net::HTTP::Post.new(uri)
- req['Authorization'] = "Bearer #{token}"
+ req['Authorization'] = "Basic #{token}"
- res = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) { |http| http.request(req) }
+ res = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true,
+ open_timeout: 10, read_timeout: 30) { |http| http.request(req) }
raise "Sonatype API error (#{res.code}): #{res.body}" unless res.is_a?(Net::HTTPSuccess)
- res.body.empty? ? {} : JSON.parse(res.body)
+ res.body.to_s.empty? ? {} : JSON.parse(res.body)
end
Suggestion importance[1-10]: 6
__
Why: Relevant best practice - Add explicit validation/guards for environment-based credentials and external API calls (auth scheme correctness and request robustness).
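The same guard-and-timeout pattern can be sketched in Python; `urllib` stands in for Ruby's Net::HTTP, and the Basic scheme follows the suggestion's note that the credential is a base64 token, not a bearer token:

```python
import json
import urllib.request

def sonatype_post(url, token, timeout=30):
    # Reject blank credentials up front rather than sending a malformed header.
    token = (token or "").strip()
    if not token:
        raise ValueError("Sonatype token required")
    req = urllib.request.Request(
        url, method="POST",
        headers={"Authorization": f"Basic {token}"},
    )
    # timeout bounds connect + read, roughly the Ruby open_timeout/read_timeout pair.
    with urllib.request.urlopen(req, timeout=timeout) as res:
        body = res.read().decode()
    return json.loads(body) if body else {}
```

Without an explicit timeout, both Net::HTTP and urllib can hang far longer than a CI job should tolerate, so bounding the call is as important as validating the token.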
[learned best practice] Encode and validate URL parameters
✅ Encode and validate URL parameters
Strip and validate deployment_id, and URL-encode it (or use URI.encode_www_form) when building the Sonatype status URL to avoid malformed requests or injection via special characters.
-deployment_id = arguments[:deployment_id] || ENV.fetch('DEPLOYMENT_ID', nil)
+deployment_id = (arguments[:deployment_id] || ENV.fetch('DEPLOYMENT_ID', nil))&.strip
raise 'Deployment ID required' if deployment_id.nil? || deployment_id.empty?
...
60.times do
- status = sonatype_api_post("https://central.sonatype.com/api/v1/publisher/status?id=#{deployment_id}", token)
+ query = URI.encode_www_form(id: deployment_id)
+ status = sonatype_api_post("https://central.sonatype.com/api/v1/publisher/status?#{query}", token)
Suggestion importance[1-10]: 5
__
Why: Relevant best practice - Add explicit validation and sanitization/encoding at integration boundaries (e.g., URL parameters) before use.
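Python's `urllib.parse.urlencode` behaves like Ruby's `URI.encode_www_form` and shows why the encoding matters when an ID contains reserved characters (the deployment ID below is hypothetical):

```python
from urllib.parse import urlencode

deployment_id = "abc 123&x=1"  # hypothetical ID containing reserved characters

# Raw interpolation would smuggle an extra query parameter into the URL:
raw = f"https://central.sonatype.com/api/v1/publisher/status?id={deployment_id}"

# Encoding keeps the whole value inside the single `id` parameter:
query = urlencode({"id": deployment_id.strip()})
safe = f"https://central.sonatype.com/api/v1/publisher/status?{query}"
# query -> "id=abc+123%26x%3D1"
```

The `&` and `=` are percent-encoded, so the server sees one `id` parameter instead of a second, attacker-controlled `x` parameter.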
| PR 16951 (2026-01-19) |
[security] Secure .npmrc file permissions
✅ Secure .npmrc file permissions
Set the file permissions of .npmrc to 0o600 after writing to it to protect the contained credentials.
if File.exist?(npmrc)
File.open(npmrc, 'a') { |f| f.puts(auth_line) }
else
File.write(npmrc, "#{auth_line}\n")
end
+File.chmod(0o600, npmrc)
Suggestion importance[1-10]: 8
__
Why: This is a critical security suggestion, as the .npmrc file contains a secret token and its permissions should be restricted to the owner, which is a standard practice demonstrated elsewhere in the same file for other credential files.
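The equivalent tightening in Python, using a temp file so the sketch is self-contained (the auth line is a placeholder, not a real token):

```python
import os
import stat
import tempfile

# Write a credential line, then restrict the file to its owner (0o600),
# mirroring the suggestion for .npmrc.
auth_line = "//registry.npmjs.org/:_authToken=<token>"  # placeholder
fd, npmrc = tempfile.mkstemp()
os.close(fd)
with open(npmrc, "a") as f:
    f.write(auth_line + "\n")
os.chmod(npmrc, 0o600)

mode = stat.S_IMODE(os.stat(npmrc).st_mode)  # 0o600: read/write for owner only
os.remove(npmrc)
```

Restricting the mode after writing matters most in the append branch, where a pre-existing `.npmrc` may carry looser permissions than a freshly created file.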
| PR 16950 (2026-01-19) |
[possible issue] Correctly resolve package runfile paths
✅ Correctly resolve package runfile paths
Instead of hardcoding the _main repository prefix for package paths, dynamically determine the correct runfiles path by checking if the package is from the main workspace.
dotnet/private/nuget_push.bzl [29-32]
for nupkg in nupkg_files:
+ # The runfiles path for a file from the main workspace is _main/path/to/file,
+ # but for external workspaces it is other_repo/path/to/file.
+ if nupkg.owner.workspace_root == "":
+ nupkg_runfiles_path = "_main/" + nupkg.short_path
+ else:
+ nupkg_runfiles_path = nupkg.short_path
+
push_commands.append(
- '"$DOTNET" nuget push "$RUNFILES_DIR/_main/{nupkg}" --api-key "$NUGET_API_KEY" --source "$NUGET_SOURCE" --skip-duplicate --no-symbols'.format(nupkg = nupkg.short_path),
+ '"$DOTNET" nuget push "$RUNFILES_DIR/{nupkg}" --api-key "$NUGET_API_KEY" --source "$NUGET_SOURCE" --skip-duplicate --no-symbols'.format(nupkg = nupkg_runfiles_path),
)
Suggestion importance[1-10]: 9
__
Why: The suggestion fixes a significant bug where hardcoding the _main repository prefix would cause failures for packages from external repositories, and it provides a robust solution.
[possible issue] Correct prefix stripping
✅ Correct prefix stripping
Replace lstrip("../") with a safer check using startswith("../") and slicing to remove only a single leading ../ prefix from the path.
dotnet/private/nuget_push.bzl [35]
-dotnet_runfiles_path = dotnet.short_path.lstrip("../")
+if dotnet.short_path.startswith("../"):
+ dotnet_runfiles_path = dotnet.short_path[3:]
+else:
+ dotnet_runfiles_path = dotnet.short_path
Suggestion importance[1-10]: 7
__
Why: The suggestion correctly points out that lstrip("../") can over-strip characters and replaces it with a safer, more explicit method to remove a single ../ prefix, improving code robustness.
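Starlark's `lstrip` follows Python's semantics, so the pitfall is easy to demonstrate in Python: the argument is a *set of characters* to strip, not a literal prefix.

```python
# lstrip("../") strips any leading run of '.' and '/' characters, not "../".
assert "../external/dotnet".lstrip("../") == "external/dotnet"  # happens to work
assert "../.dotnet/sdk".lstrip("../") == "dotnet/sdk"           # over-strips ".dotnet"

def strip_one_updir(path):
    # Safe version: remove exactly one leading "../", as the suggestion proposes.
    return path[3:] if path.startswith("../") else path

assert strip_one_updir("../.dotnet/sdk") == ".dotnet/sdk"
assert strip_one_updir("dotnet/sdk") == "dotnet/sdk"
```

Any repository or directory name that begins with a dot would be silently mangled by the `lstrip` form, which is why the explicit `startswith` check is the robust choice.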
[general] Fix Windows runfiles path
✅ Fix Windows runfiles path
Apply the same ../ prefix stripping logic from the Unix script to the dotnet.short_path in the Windows script to ensure the executable is correctly located in external repositories.
dotnet/private/nuget_push.bzl [74]
-dotnet_path = dotnet.short_path.replace("/", "\\")
+external_dotnet = dotnet.short_path
+if external_dotnet.startswith("../"):
+ external_dotnet = external_dotnet[3:]
+dotnet_path = external_dotnet.replace("/", "\\")
Suggestion importance[1-10]: 8
__
Why: The suggestion correctly identifies that the path fix for external repositories was only applied for Unix and not for Windows, and provides a necessary correction for feature parity and correctness.