Resolves: MTV-4551 | Add virtual machines to existing plan #2261
avivtur wants to merge 2 commits into kubev2v:main
Conversation
Codecov Report
❌ Patch coverage is
Additional details and impacted files:

@@ Coverage Diff @@
##             main    #2261       +/-   ##
===========================================
- Coverage   36.81%   14.84%   -21.97%
===========================================
  Files         158     1135      +977
  Lines        2548    20284    +17736
  Branches      599     3997     +3398
===========================================
+ Hits          938     3012     +2074
- Misses       1428    16611    +15183
- Partials      182      661      +479
// Read the name of a VM that is currently in the plan for exclusion checks
const plannedVmName = await planDetailsPage.virtualMachinesTab.getFirstVMName();
expect(plannedVmName).toBeTruthy();
Make sure all test logic is within step blocks.
});
await test.step('4. Select a VM and verify confirm button becomes enabled', async () => {
  const modal = await planDetailsPage.virtualMachinesTab.clickAddVirtualMachines();
The test could possibly be optimized by reducing the number of times the modal is opened and closed.
await planDetailsPage.virtualMachinesTab.navigateToVirtualMachinesTab();
// Wait for the status to reflect the archived state
await page.waitForTimeout(2000);
Is waitForTimeout avoidable here?
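The fixed sleep can usually be replaced by polling for the condition the test actually cares about; Playwright's `expect.poll` and locator auto-waiting are the idiomatic fix. The plain-TypeScript helper below is a hedged sketch of the same idea (the names `pollUntil` and `demo` are illustrative, not part of this repo):

```typescript
// Poll a condition until it holds or a deadline passes, instead of
// sleeping for a fixed 2000ms regardless of when the state settles.
const pollUntil = async (
  condition: () => Promise<boolean> | boolean,
  { timeoutMs = 5000, intervalMs = 100 } = {},
): Promise<void> => {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await condition()) return; // resolve as soon as the condition is met
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Condition not met within ${timeoutMs}ms`);
};

// Example: the wait ends ~50ms after the flag flips, not after a full sleep.
const demo = async (): Promise<void> => {
  let archived = false;
  setTimeout(() => { archived = true; }, 50);
  await pollUntil(() => archived, { timeoutMs: 1000 });
  console.log('archived state observed');
};
demo();
```

In a Playwright test the equivalent would be waiting on the status cell's locator rather than on wall-clock time.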
Please consider reusing the common/VirtualMachinesTable.ts or common/Table.ts page objects instead, to avoid redefining common actions. The getRowCount method could be moved to common/Table.ts, since it's generic enough.
Force-pushed: 65c685d to b342c5c
const tableProps: ProviderVirtualMachinesListProps = useMemo(
  () => ({
    hasCriticalConcernFilter: true,
selectedIds is in the useMemo deps for tableProps, so every selection rebuilds the object and re-renders the entire VM list. I think initialSelectedIds should be set once (maybe via a ref) rather than updating on every selection change.
A ref cannot replace state here. The useEffect in usePageSelection (lines 33-35) is the only mechanism that re-synchronizes internalSelectedIds when the component re-renders from external causes (inventory polling, K8s watch events). It fires only when the selectedIds prop reference changes. A ref never triggers that change, so selection silently resets on the next poll-driven re-render. The create-plan wizard's VirtualMachinesTable has the same pattern: it uses react-hook-form's value (which is state) as initialSelectedIds and includes it in the useMemo deps (line 83). One re-render per user click is the expected cost and matches the existing working pattern.
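The reference-identity point above can be sketched without React. A useMemo-style cache recomputes only when a dependency changes by reference, which is exactly why a ref (whose identity never changes) cannot drive the re-sync. The `memoOne` helper below is illustrative, not the project's code:

```typescript
// Minimal useMemo-style cache: recompute only when a dep differs by reference.
const memoOne = <T>(compute: () => T) => {
  let deps: unknown[] | undefined;
  let value!: T;
  return (nextDeps: unknown[]): T => {
    const changed =
      !deps ||
      deps.length !== nextDeps.length ||
      deps.some((dep, i) => dep !== nextDeps[i]);
    if (changed) {
      value = compute();
      deps = nextDeps;
    }
    return value;
  };
};

const selected = ['vm-1'];
const getProps = memoOne(() => ({ hasCriticalConcernFilter: true }));

const a = getProps([selected]);   // first call: computes
const b = getProps([selected]);   // same array reference: cached object reused
const c = getProps([['vm-1']]);   // equal contents, new reference: rebuilt

console.log(a === b); // true
console.log(a === c); // false
```

The same identity rule governs when the `useEffect` keyed on the `selectedIds` prop fires, so state (fresh reference per update) works where a ref (stable reference) would not.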
const selectedVmsRef = useRef<VmData[]>([]);
const [hasSelection, setHasSelection] = useState(false);
const handleSelect = useCallback((vms: VmData[]): void => {
usePlanSourceProvider(plan) is called here and again in AddVirtualMachinesTable. That's two K8s watches for the same provider. Could the modal fetch it once and pass it down?
launcher<AddVirtualMachineProps>(AddVirtualMachinesModal, { plan });
};
const reason = useMemo((): string | null => {
This uses isPlanEditable while the existing DeleteVirtualMachinesButton uses isPlanArchived. Add is more restrictive (disables during execution, succeeded, etc.) but delete only disables for archived. Is that inconsistency intentional, or should delete be updated too?
Changed the delete button to also be disabled if the plan is not editable.
case PROVIDER_TYPES.ova:
  return <OvaVirtualMachinesList {...tableProps} />;
case PROVIDER_TYPES.hyperv:
  return <HypervVirtualMachinesList {...tableProps} />;
The default case returns <></> — if the provider type is still loading, the user sees an empty modal with no feedback. I think a loading indicator would be better here.
const existingVmIds = useMemo((): Set<string> => {
  const planVms = getPlanVirtualMachines(plan);
  return new Set((planVms ?? []).map((vm) => vm.id).filter(Boolean) as string[]);
}, [plan]);
The filtering uses vm.id to exclude existing VMs. For OpenShift providers, is id guaranteed unique across namespaces from the inventory?
id is based on uid, which should be globally unique.
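Given that assumption, the exclusion check reduces to set membership on the id. A hedged sketch with simplified types (`filterOutExisting` and the sample data are illustrative, not the repo's code):

```typescript
// Simplified shape of an inventory VM entry.
type VmData = { vm: { id: string; name: string } };

// ids already present in the plan (ids derive from the K8s uid,
// assumed globally unique, so a flat Set suffices).
const existingVmIds = new Set(['uid-a', 'uid-b']);

// Keep only candidates whose id is not already in the plan.
const filterOutExisting = (candidates: VmData[], existing: Set<string>): VmData[] =>
  candidates.filter(({ vm }) => !existing.has(vm.id));

const inventory: VmData[] = [
  { vm: { id: 'uid-a', name: 'already-planned' } },
  { vm: { id: 'uid-c', name: 'addable' } },
];

console.log(filterOutExisting(inventory, existingVmIds).map(({ vm }) => vm.name));
// [ 'addable' ]
```

If ids were only namespace-unique, the Set key would need to include the namespace as well; the uid-based id avoids that.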
const op = currentVms ? REPLACE : ADD;

const newVmEntries: V1beta1PlanSpecVms[] = selectedVmsRef.current.map((vmData) => ({
  id: vmData.vm.id,
handleSave reads getPlanVirtualMachines(plan) from the plan prop. If someone else modifies the plan concurrently, this could add duplicate VMs. Is the plan object kept fresh via the K8s watch, or is this a stale closure risk?
K8s resources have a resourceVersion in their metadata to avoid this, so there should be no issue here.
await page.waitForLoadState('networkidle');
await planDetailsPage.virtualMachinesTab.verifyTableLoaded();

const afterAddRowCount = await planDetailsPage.virtualMachinesTab.getRowCount();
networkidle can be flaky with K8s watches polling in the background. Would waiting for a specific element or API response be more reliable?
Force-pushed: 9b3f1b6 to 99194ad
jpuzz0 left a comment:
1 remaining comment. Otherwise looks good!
const handleSave = useCallback(async () => {
  const currentVms = getPlanVirtualMachines(plan);
  const op = currentVms ? REPLACE : ADD;
getPlanVirtualMachines(plan) returns plan?.spec?.vms ?? [], which is always truthy (empty array). So op will always be REPLACE and the ADD branch is dead code. Should this just use REPLACE directly, or should the check be !isEmpty(currentVms) if the ADD path is actually needed?
There's always at least one VM in the plan, so it should always be REPLACE; changed.
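The reviewer's point is easy to reproduce: an empty array is truthy in JavaScript, so a truthiness check on the getter's result can never select the ADD branch. A hedged sketch with simplified types (`buggyOp` / `fixedOp` are illustrative names):

```typescript
// Simplified plan shape and getter mirroring getPlanVirtualMachines:
// it always returns an array, never undefined.
type PlanVm = { id: string };
type Plan = { spec?: { vms?: PlanVm[] } };

const getPlanVirtualMachines = (plan?: Plan): PlanVm[] => plan?.spec?.vms ?? [];

const REPLACE = 'replace';
const ADD = 'add';

// Buggy check: [] is truthy, so this always yields REPLACE.
const buggyOp = (plan?: Plan): string =>
  getPlanVirtualMachines(plan) ? REPLACE : ADD;

// Length-aware check, if the ADD path were ever actually needed.
const fixedOp = (plan?: Plan): string =>
  getPlanVirtualMachines(plan).length > 0 ? REPLACE : ADD;

console.log(buggyOp(undefined)); // replace -- the ADD branch is unreachable
console.log(fixedOp(undefined)); // add
console.log(fixedOp({ spec: { vms: [{ id: 'vm-1' }] } })); // replace
```

Since this plan always holds at least one VM, dropping the branch and using REPLACE unconditionally (as the author did) is the simplest resolution.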
Signed-off-by: Aviv Turgeman <aturgema@redhat.com>
Signed-off-by: Aviv Turgeman <aturgema@redhat.com>
Force-pushed: 99194ad to 1b9a69b
📝 Links
https://issues.redhat.com/browse/MTV-4551
📝 Description
Expose a dialog in the Virtual Machines tab to add virtual machines to the plan.
🎥 Demo
📝 CC://