Conversation
…nardobelchior/mui-x into bernardobelchior/perf-benchmark-tool
Deploy preview: https://deploy-preview-21222--material-ui-x.netlify.app/
Bundle size report
…n/gh-action-master-performance
This reverts commit 6ad6eae.
This pull request has conflicts, please resolve those before we can evaluate the pull request.
Pull request overview
This pull request adds a comprehensive benchmark tool and GitHub Actions workflows for performance testing of MUI X components. The implementation includes a Next.js-based benchmark application using Playwright for automated testing, along with CI/CD workflows to track and compare performance metrics over time.
Changes:
- Added benchmark tool infrastructure with Next.js app, Playwright tests, and React Profiler integration
- Implemented GitHub Actions workflows for running benchmarks on PRs and storing baseline data
- Created CI scripts for comparing benchmark results and generating performance reports
Reviewed changes
Copilot reviewed 37 out of 38 changed files in this pull request and generated 16 comments.
| File | Description |
|---|---|
| .github/workflows/benchmark-comparison.yml | Workflow for comparing PR benchmarks against master baseline |
| .github/workflows/benchmark-baseline.yml | Workflow for storing baseline benchmark data on master branch |
| test/benchmark-tool/utils/* | Utility functions for profiling, reporting, and page navigation |
| test/benchmark-tool/ci-scripts/* | Scripts for fetching, comparing, and reporting benchmark data |
| test/benchmark-tool/app/**/* | Benchmark test pages and tests for various components |
| test/benchmark-tool/*.{json,ts} | Configuration files for Next.js, Playwright, and TypeScript |
| pnpm-lock.yaml | Dependency updates including Playwright version bump |
Files not reviewed (1)
- pnpm-lock.yaml: Language not supported
```yaml
pull_request_target:
  types:
    - opened
    - synchronize
    - reopened
```
The workflow uses pull_request_target which runs in the context of the base repository with write permissions. This is a security risk because it could allow untrusted code from a PR to access secrets and write to the repository. Consider using only pull_request or implement proper security measures.
This worries me. I think we should move to the `pull_request` + `workflow_run` pattern: `pull_request` executes the benchmark and stores an artifact, with low permissions; `workflow_run` gathers the artifacts and posts the PR comment.
Shouldn't it be `pull_request_target` if we want it to have write access to a branch in the main repo? Otherwise it will only have access to the user's branch, which is not how our contribution model works.
Why does it need write access to a branch in the main repo? It should execute the benchmark and store the results as an artefact.
I got confused; this is for master, so it shouldn't need `pull_request` nor `pull_request_target`, as it should run on push to master only.
I'm inclined to store master's results in an orphan branch on this repo
This comment wouldn't work when using `pull_request` only.
It's only for pull requests; the master branch is handled by the push action.
- `push` (only on `master`): run benchmarks, store in orphan branch
- `pull_request` (only on PRs): run benchmark, store in artefact
- `workflow_run` (when PR workflow finishes): download artefact + master result for merge-base, compare, post PR comment
```js
const contentsResponse = await fetch(
  'https://api.github.com/repos/mnajdova/performance-benchmark-data/contents/mui-x',
);
```
The URL is hardcoded to a personal GitHub repository 'mnajdova/performance-benchmark-data'. For production use, this should be configurable via environment variables or point to an official organization repository to avoid dependency on a personal account.
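Making the data repository configurable could look roughly like this; the `BENCHMARK_DATA_REPO` variable name and the `benchmarkDataUrl` helper are assumptions for illustration, not the PR's actual code:

```typescript
// Sketch: resolve the benchmark-data repo from the environment, falling back
// to the currently hardcoded personal repository.
function benchmarkDataUrl(env: Record<string, string | undefined>): string {
  const repo = env.BENCHMARK_DATA_REPO ?? 'mnajdova/performance-benchmark-data';
  return `https://api.github.com/repos/${repo}/contents/mui-x`;
}
```

In CI, the variable could then be set once at the organization level instead of editing the script.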
To avoid external repos and keep the process self-contained, I'm inclined to store master's results in an orphan branch on this repo. PR results could live as GHA artefacts with 90 day retention.
```ts
function onRender(id: string, phase: 'mount' | 'update' | 'nested-update', actualDuration: number) {
  const startTime = performance.now() - actualDuration;
  (window as any)[CAPTURE_RENDER_FN]?.({
```
Using `(window as any)` bypasses TypeScript's type safety. Consider defining a proper type for the window object with the custom property, or at least check if the function exists before calling it to avoid potential runtime errors if the function isn't exposed yet.
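One way the suggested typed lookup could be sketched; the event shape and the `getCaptureRenderFn` helper are assumptions for illustration, not the PR's actual API:

```typescript
// Assumed key under which the capture callback is exposed on window.
const CAPTURE_RENDER_FN = '__captureRenderEvent';

// Assumed shape of the React Profiler data being captured.
interface RenderEvent {
  id: string;
  phase: 'mount' | 'update' | 'nested-update';
  startTime: number;
  actualDuration: number;
}

type CaptureRenderFn = (event: RenderEvent) => void;

// Narrow the global object at runtime instead of casting to `any`;
// returns undefined when the callback hasn't been exposed yet.
function getCaptureRenderFn(globalObj: object): CaptureRenderFn | undefined {
  const fn = (globalObj as Record<string, unknown>)[CAPTURE_RENDER_FN];
  return typeof fn === 'function' ? (fn as CaptureRenderFn) : undefined;
}
```

The `onRender` callback would then call `getCaptureRenderFn(window)?.(…)`, keeping the optional-chaining guard while regaining type checking on the event payload.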
```ts
const pageState = new WeakMap<Page, PageState>();

export function getRouteFromFilename(filename: string): string {
  return path.dirname(filename).split('/app').pop()!;
}
```
The non-null assertion operator (!) is used on line 15 with .pop()!. If the filename doesn't contain '/app', `split` returns a single-element array and the function silently returns the full directory path instead of a route. Consider adding validation or a more robust path extraction method.
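A more defensive variant could look like this; it is a sketch of the validation the comment asks for, not the PR's actual fix, and it fails loudly when `/app` is absent:

```typescript
import path from 'node:path';

// Extract the route segment that follows the last '/app' directory.
// Throws instead of silently returning a wrong path when '/app' is missing.
function getRouteFromFilename(filename: string): string {
  const dir = path.dirname(filename);
  const marker = '/app';
  const idx = dir.lastIndexOf(marker);
  if (idx === -1) {
    throw new Error(`Expected "${filename}" to live under an /app directory`);
  }
  // Everything after the marker is the route; an empty remainder means the root.
  return dir.slice(idx + marker.length) || '/';
}
```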
```ts
// Enable React profiling in production builds
reactProductionProfiling: true,
productionBrowserSourceMaps: true,
experimental: {
```
The configuration disables Turbopack minification with turbopackMinify: false. While this may be intentional for benchmarking purposes to get consistent results, consider adding a comment explaining why minification is disabled, as it's unusual for production configurations.
Suggested change:

```ts
experimental: {
  // Disable Turbopack minification to keep benchmark results consistent and easier to analyze.
```
```yaml
pull_request_target:
  types:
    - opened
    - synchronize
    - reopened
```
The workflow uses pull_request_target which runs in the context of the base repository with write permissions. This is a security risk because it could allow untrusted code from a PR to access secrets and write to the repository. The workflow should either use only pull_request (which runs in a restricted context) or implement proper security measures to prevent malicious PRs from exploiting the write permissions.
```ts
console.warn(
  `High coefficient of variation (${(coefficientOfVariation * 100).toFixed(1)}%) for event "${getEventKey(event)}". ` +
    `Mean: ${meanDuration.toFixed(2)}ms, StdDev: ${stdDev.toFixed(2)}ms. Results may be unreliable.`,
);
```
The warning uses console.warn which will be visible in the output, but on line 19 console.log is disabled via eslint comment while console.warn is not. For consistency, either allow both or suppress both console methods depending on the intended behavior.
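For context, the statistic behind this warning can be computed as follows; this is a standalone sketch with an illustrative function name, not the PR's actual helper:

```typescript
// Coefficient of variation: population standard deviation divided by the mean.
// Values near 0 mean stable timings; high values mean noisy, unreliable runs.
function coefficientOfVariation(durations: number[]): number {
  const mean = durations.reduce((a, b) => a + b, 0) / durations.length;
  const variance =
    durations.reduce((acc, d) => acc + (d - mean) ** 2, 0) / durations.length;
  return Math.sqrt(variance) / mean;
}
```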
```sh
# Extract just the repository name (after the /)
REPO_DIR=$(echo "$REPO_NAME" | cut -d'/' -f2)
```
The comment on line 91 says "Extract just the repository name (after the /)" but then uses it to create a directory. This assumes the repository name doesn't contain characters that could cause issues in directory names. Consider sanitizing the repository name to ensure it's safe for use in file paths.
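One way the sanitization could be handled, shown in TypeScript rather than shell and with an illustrative function name; the allow-list of characters is an assumption:

```typescript
// Take the part after the '/' of an "owner/name" slug and replace anything
// that is not filesystem-safe with a hyphen before using it as a directory.
function repoDirFromFullName(fullName: string): string {
  const name = fullName.split('/').pop() ?? fullName;
  return name.replace(/[^A-Za-z0-9._-]/g, '-');
}
```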
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> Signed-off-by: Jan Potoms <2109932+Janpot@users.noreply.github.com>
```ts
const latestFolder = folders[0];

console.warn(`Using latest benchmark folder: ${latestFolder}`);
```
Using the latest benchmark from master is incorrect: it's a moving target, so you may be comparing with different commits at different points in time. Instead, you should calculate the merge-base with the master branch and use that commit for comparison.
Therefore you must store data by commit. I'm changing the data layout to:

```
# store results by commit
benchmarks/commits/{SHA}.json

# append only, store the same results in JSONL, batched per month
benchmarks/monthly/{YYYY-MM}.jsonl
```
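Helpers for that layout could be sketched like this; the function names are assumptions for illustration:

```typescript
// Per-commit result file, keyed by the commit SHA.
function commitResultPath(sha: string): string {
  return `benchmarks/commits/${sha}.json`;
}

// Append-only monthly JSONL batch, keyed by UTC year and month.
function monthlyResultPath(date: Date): string {
  const month = String(date.getUTCMonth() + 1).padStart(2, '0');
  return `benchmarks/monthly/${date.getUTCFullYear()}-${month}.jsonl`;
}
```

The comparison job would then look up `commitResultPath(mergeBaseSha)` directly instead of scanning for the latest folder.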
Just a note that we deliberately cut corners, so this is not really in a review state.

I was asked to take this to the finish line. Just documenting the changes I'm making.

Nvm then. From what you wrote it seemed you were just reviewing it.

This pull request has been inactive for 30 days. Please remove the stale label or leave a comment to keep it open. Otherwise, it will be closed in 15 days.
Based on top of #21220