upcoming: [DPS-42065] - Add virtualization in CloudPulseResourcesSelect dropdown, Loading indicator in CloudPulse Metrics #13575
Conversation
Cloud Manager UI test results: 🔺 1 failing test on test run #8
Troubleshooting: Use this command to re-run the failing tests: pnpm cy:run -s "cypress/e2e/core/objectStorage/object-storage.e2e.spec.ts"
@bnussman-akamai / @dwiley-akamai / @pmakode-akamai, can you please take a look?
abailly-akamai left a comment:
Needing to virtualize a list usually points to wrong data-fetching decisions. Why fetch everything instead of paginated records? Those endpoints also support X-Filter for search, so you can fetch more on scroll and run a search API call on the label.
// example: packages/manager/src/features/IAM/Delegations/UpdateDelegationForm.tsx
<Autocomplete
  {...}
  slotProps={{
    listbox: {
      onScroll: (event: React.SyntheticEvent) => {
        const listboxNode = event.currentTarget;
        if (
          listboxNode.scrollTop + listboxNode.clientHeight >=
            listboxNode.scrollHeight &&
          hasNextPage
        ) {
          fetchNextPage();
        }
      },
    },
  }}
/>
This will remove all this "await loading" logic and the need for virtualization (and avoid adding a new library to CM). We don't have to make the user wait for all the data.
Besides, what happens when you select 500 Linodes in this Autocomplete? The UI is awkward because on focus you get a very large number of entity Chips in the Input.
This is solved in various places in the UI by using the pattern shown below.
Can you please check with @tzmiivsk-akamai if this UI could adopt this pattern as well?
@abailly-akamai, thanks for the feedback. I understand the concern about adding a new library and the general preference for server-side pagination. However, CloudPulse has specific requirements that make server-side pagination impractical. Here's why:
CloudPulse implements a comprehensive user-preferences system that stores and restores user selections across sessions. This is a core feature that cannot be compromised. How it breaks with server-side pagination:
Resources are filtered based on multiple dependent filters (region, tags, node types, etc.). These filters work client-side across the entire dataset, where we have full control. Server-side pagination would require:
Also, regarding the virtualization library addition:
Happy to discuss further or explore alternative approaches that don't compromise the preferences system! @tzmiivsk-akamai, please take a look as well. cc @kmuddapo
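The preferences constraint described above can be illustrated with a minimal sketch: restoring saved selections means matching stored IDs against whatever resources survive the dependent filters, which is straightforward when the full dataset is on the client. All names here (`Resource`, `savedIds`, the filter fields) are hypothetical, not CloudPulse's actual types.

```typescript
// Minimal sketch of the client-side flow described above.
// Types and field names are illustrative only.
interface Resource {
  id: string;
  region: string;
  tags: string[];
}

// Dependent filters applied across the entire client-side dataset.
function applyDependentFilters(
  all: Resource[],
  region?: string,
  tag?: string
): Resource[] {
  return all.filter(
    (r) =>
      (region === undefined || r.region === region) &&
      (tag === undefined || r.tags.includes(tag))
  );
}

// Restoring preferences: keep only the saved selections that are still
// visible under the currently applied filters.
function restoreSelections(visible: Resource[], savedIds: string[]): Resource[] {
  const saved = new Set(savedIds);
  return visible.filter((r) => saved.has(r.id));
}
```

With server-side pagination, the same restoration step would need the server to resolve saved IDs against the active filters, which is part of what makes the migration non-trivial.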
@venkymano-akamai thanks for the explanation. Virtualization is only one of the problems, regardless of the library size. This is regrettable overall. I understand the constraints, but you will likely keep building on top of bad paradigms and keep adding similar band-aids to work around design decisions. Large accounts are likely going to see their user experience degraded and their UI lag. It would be worthwhile to at least do some benchmarking of the time it takes to render graphs (from page load, to selection, to rendering graphs) on an account with 2000+ Linodes. This may inform the decision to consider refactors down the line, since one does not seem realistic now.
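The suggested benchmark (page load, to selection, to graphs rendered) could be approximated with the standard Performance API, available in both browsers and Node. The mark names below are invented for illustration; this is a sketch, not an existing Cloud Manager utility.

```typescript
// Rough sketch of the benchmark suggested above, using the standard
// Performance API. Mark names are hypothetical.
function markStart(name: string): void {
  performance.mark(`${name}:start`);
}

function markEnd(name: string): number {
  performance.mark(`${name}:end`);
  const measure = performance.measure(name, `${name}:start`, `${name}:end`);
  return measure.duration; // elapsed milliseconds between the two marks
}

// Usage: call markStart('resource-select-to-graph') when the user picks
// a resource, and markEnd('resource-select-to-graph') once the graphs
// finish rendering, then log or report the duration.
```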
@abailly-akamai, thank you for the thoughtful feedback and for raising these important architectural considerations. I want to clarify a few key points and address the concerns about the overall design.
📊 The Actual Problem We're Solving
Server-Side Pagination Creates the Same Issue
Prior Internal Discussion & MUI Recommendation
Slack thread: https://linode.slack.com/archives/C07DF5MQX16/p1774510547626749
MUI officially recommends react-window for Autocomplete virtualization: https://mui.com/material-ui/react-autocomplete/#virtualization
Please let me know if there is anything else that needs to be considered. Happy to schedule a sync to discuss the broader architectural direction if that would be helpful. cc @tzmiivsk-akamai, @kmuddapo, @bnussman-akamai
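For readers following along, what react-window buys here reduces to a small calculation: only the rows that intersect the scroll viewport (plus a small overscan) are rendered, regardless of how many items the list holds. A minimal fixed-height sketch of that core idea, not MUI's or react-window's actual implementation:

```typescript
// Core idea behind list virtualization (what react-window implements):
// render only the items visible in the scroll viewport, plus overscan.
// Fixed item height is assumed for simplicity.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  itemHeight: number,
  itemCount: number,
  overscan = 3
): { start: number; end: number } {
  const start = Math.max(0, Math.floor(scrollTop / itemHeight) - overscan);
  const end = Math.min(
    itemCount,
    Math.ceil((scrollTop + viewportHeight) / itemHeight) + overscan
  );
  // Render items[start..end) instead of all itemCount rows.
  return { start, end };
}
```

With a 300px listbox and 30px rows, a 2000-item list renders on the order of a dozen DOM nodes at a time instead of 2000, which is where the rendering win comes from.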
@venkymano-akamai as you mentioned, I am raising them. If they are not addressed, the debt may be passed to the user; that is all. I have no doubt this decision was agreed by your team, but that is beside the point.
Not sure I follow. It is an issue with overfetching and client-side handling of large data sets. Period. Your assessment that server-side pagination has the same issue is incorrect in practice. No one will scroll through 2000 records in an Autocomplete to find an item; they will search for it. These are optimistic patterns that end up solving the problems we're discussing. I appreciate the argument, but it would be best to curate actual optimization tickets and decide later whether they are worth it for your team; shutting down the feedback this way isn't really helpful.
@abailly-akamai - Thank you for the feedback and for raising these concerns. Virtualization is a valid and widely used approach for handling large lists (MUI recommends it, and it is used by Slack, Airbnb, etc.), and it solves the immediate rendering performance issue we're facing. For this PR, let's move forward with the current implementation. For the future, I'm happy to create optimization tickets to explore server-side pagination/search as an alternative approach, including how to handle preferences, search, and other edge cases. We can plan those together and evaluate what works best for CloudPulse.
@abailly-akamai Thank you for the detailed feedback and for highlighting these architectural considerations. As @venkymano-akamai noted, we previously engaged the CM team in the ACLP channel to explore various patterns for this specific use case. In the absence of a better alternative that addressed all our requirements at that time, we collectively decided to move forward with virtualization to resolve our immediate rendering bottlenecks. We agree that server-side pagination is an ideal long-term standard for large datasets. However, implementing it while maintaining support for ACLP-specific features like Preferences is not straightforward, won't solve the ACLP use case, and would significantly impact our current delivery timelines for CloudPulse to integrate the compute (Linode) service. We have noted your concerns and will ensure you are included in the optimization tickets Venkat mentioned. This will allow us to evaluate a permanent server-side solution in a future iteration that fully accounts for ACLP's unique needs. @venkymano-akamai Please raise the ticket now with details, tag @abailly-akamai, and share it here; I will try to prioritize it in upcoming sprints so that you can coordinate with the CM team and finalize a better solution if one is available. Let's proceed with this merge so we can stay on track with our release schedule. Thanks a lot in advance!
@abailly-akamai - Created a ticket for analysis and implementation of the server-side fetch, tackling all the challenges discussed: https://track.akamai.com/jira/browse/DPS-42409. I will schedule a call with you to go through the problems so we can find an appropriate solution. As suggested by @kmuddapo, can we proceed and merge this PR?
Description 📝
Adds virtualized list rendering for large datasets and delayed loading indicators to improve performance and provide better user feedback during long-running operations.
These changes address performance issues when loading large numbers of CloudPulse resources (100+ items) and improve the overall user experience by providing clearer feedback during extended loading times.
Changes 🔄
Scope 🚢
Upon production release, changes in this PR will be visible to:
Target release date 🗓️ Next Release Date
Preview 📷
Virtualisation_Video.webm
How to test 🧪
Author Checklists
As an Author, to speed up the review process, I considered 🤔
👀 Doing a self review
❔ Our contribution guidelines
🤏 Splitting feature into small PRs
➕ Adding a changeset
🧪 Providing/improving test coverage
🔐 Removing all sensitive information from the code and PR description
🚩 Using a feature flag to protect the release
👣 Providing comprehensive reproduction steps
📑 Providing or updating our documentation
🕛 Scheduling a pair reviewing session
📱 Providing mobile support
♿ Providing accessibility support
As an Author, before moving this PR from Draft to Open, I confirmed ✅