fix(amazonq): fix enterprise users not able to sign in correctly if they have 2+ vscode instances open #7151
Conversation
…ll require users to reselect profile
…se it will require users to reselect profile" This reverts commit 3e50145.
@justinmk3 i will add a test as a followup
```typescript
if (!this.isConnected()) {
    await this.regionProfileManager.invalidateProfile(this.regionProfileManager.activeRegionProfile?.arn)
    await this.regionProfileManager.resetCache()
```
should invalidateProfile do this internally?
this is kinda redundant i think, and also its name is confusing for sure (should've named it releaseLock), but i will remove it.
invalidateProfile doesn't need this i think, as the source of truth for the profile is also this resourceCache. Originally i added this line for debugging purposes and to avoid a deadlock scenario when the code was problematic. However, releaseLock will be called:

1. when getResource goes to pull the real response and executes successfully
2. when getResource goes to pull the real response and executes exceptionally
3. when getResource goes to return the cached value
4. if one process/ide instance waits too long (currently it's set to 15s), it will go through either (1) or (2) and eventually release the lock again

so I think it's redundant
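The release-on-every-exit-path behavior described above can be sketched roughly like this. This is a minimal in-memory sketch, not the PR's code: the record shape (`locked`, `timestamp`) and the 15s wait limit come from this thread, everything else is assumed.

```typescript
// Hypothetical sketch of the lock lifecycle discussed above.
interface CachedResource {
    locked: boolean
    timestamp: number
    result?: string
}

const lockWaitLimitMs = 15_000 // a waiter gives up after this long (15s in the thread)

function tryAcquire(cache: CachedResource, nowMs: number): boolean {
    // Take the lock if it is free, or steal it if the holder appears stuck
    // (it has been held longer than the wait limit).
    if (!cache.locked || nowMs - cache.timestamp > lockWaitLimitMs) {
        cache.locked = true
        cache.timestamp = nowMs
        return true
    }
    return false
}

function release(cache: CachedResource, result?: string): void {
    // Called on success, on failure, and on a cache hit, so the lock is
    // returned on every exit path of getResource().
    if (result !== undefined) {
        cache.result = result
    }
    cache.locked = false
}
```

Because every exit path releases, an extra release in the caller (the resetCache above) would indeed be redundant.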
just reminds me that i need to add one clearCache on connectionChanged. Cache expiration is only 60s atm tho; worth adding one in case ppl change connection and use the wrong result set.
```typescript
const _acquireLock = async () => {
    const cachedValue = this.readCacheOrDefault()

    if (!cachedValue.resource.locked) {
```
shouldn't this (also) check resource.timestamp and skip lock acquisition if the cached value is new enough? what will prevent multiple vscode instances from sequentially waiting and then (redundantly) making the service call?
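One possible shape of the suggested fast path (an assumption for illustration, not the PR's actual code): check the cached timestamp first, and skip lock acquisition entirely when the value is fresh. Only the 60s expiration and the `timestamp`/`result` fields come from this thread.

```typescript
// Hypothetical freshness check before lock acquisition.
interface Cached {
    locked: boolean
    timestamp: number
    result?: string
}

const expirationInMilli = 60_000 // 60s expiration mentioned in the thread

function readFastPath(cache: Cached, nowMs: number): string | undefined {
    // If the cached value is still fresh, return it without touching the lock.
    if (cache.result !== undefined && nowMs - cache.timestamp < expirationInMilli) {
        return cache.result
    }
    return undefined // stale or empty: caller falls through to lock acquisition
}
```

Instances that wake up after the first fetch completes would then return immediately instead of queueing on the lock.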
```typescript
const cachedValue = await this.tryLoadResourceAndLock()
const resource = cachedValue?.resource

// If cache is still fresh, return cached result, otherwise pull latest from the service
if (cachedValue && resource && resource.result) {
    const duration = now() - resource.timestamp
    if (duration < this.expirationInMilli) {
        logger.info(`cache hit, duration(%sms) is less than expiration(%sms)`, duration, this.expirationInMilli)
        // release the lock
        await this.releaseLock(resource, cachedValue)
        return resource.result
```
oh, I think I get it: multiple vscode instances will all acquire the lock, but the N+1 instances will each return here.
I guess that's fine. But it means they all must acquire a lock just to do the read. Maybe that's necessary, to avoid reading during an update.
yea exactly, i should have mentioned it more explicitly in the doc string; it's like a read-write lock, if i understand it correctly.
But it's not that multiple vscode instances will all acquire the lock at once: only 1 can hold the lock, and the rest of them need to wait until the first one finishes pulling the real response and releases the lock. Then the rest of them acquire the lock 1 by 1 and return early here using the cached value.
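A minimal single-process analogue of that behavior (the real implementation coordinates across processes through a shared cache file; the names here are illustrative, not the extension's API):

```typescript
// One caller performs the real fetch; everyone else waits, then reuses it.
let cachedResult: string | undefined
let inFlight: Promise<string> | undefined
let serviceCalls = 0

async function fetchFromService(): Promise<string> {
    serviceCalls++ // counts real calls, to show only one happens
    return 'profiles' // stands in for the throttled list-profile call
}

async function getResource(): Promise<string> {
    if (cachedResult !== undefined) {
        return cachedResult // fresh cache: return early, no waiting
    }
    if (inFlight === undefined) {
        // first caller "holds the lock" and performs the real service call
        inFlight = fetchFromService().then((r) => {
            cachedResult = r
            return r
        })
    }
    return inFlight // later callers wait on the same fetch
}
```

With N concurrent callers, `serviceCalls` stays at 1; the other N-1 resolve from the shared result, mirroring the "N+1 instances return here" observation above.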
there's definitely room to improve in the future, i think. I didn't have enough time to consider all the potential use cases and requirements, so currently it only meets the needs of listProfile / listCustomization.
justinmk3
left a comment
LGTM, thanks for followup
* remove regionProfileManager#resetCache
```typescript
        logger.debug(
            `cache is stale, duration(%sms) is older than expiration(%sms), pulling latest resource %s`,
            duration,
            this.expirationInMilli,
            this.key
        )
    }
} else {
    logger.info(`cache miss, pulling latest resource %s`, this.key)
```
When I open 2 vscode instances locally, I see "cache is stale" in one, and "cache miss" in the other. That means the fetch (service call) happens in both?
yea, one of my commits today broke it... #7173
Problem
Revision of #7134: this PR aims to address the comment from the previous PR, #7134 (comment), by extracting the logic to a shared module, so the diff against #7134 is expected to be large.
Since the 4/21 deployment, the service has a strict 1 TPS throttling policy, and there was previously no caching for the API call.
It will impact users as long as they have multiple ide instances open, as all of them will make the list profile call while the user attempts to sign in.
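The caching idea behind the fix can be sketched as a shared on-disk cache that every instance consults before calling the throttled service. This is a hypothetical sketch, not the PR's code: the file name, record shape, and helper names are assumptions; only the 60s expiration comes from this thread.

```typescript
import * as fs from 'fs'
import * as os from 'os'
import * as path from 'path'

// Hypothetical cross-process cache: all vscode instances read/write one file,
// so only the instance that finds it stale makes the throttled service call.
const cacheFile = path.join(os.tmpdir(), 'amazonq-listprofile-cache.json')
const expirationMs = 60_000 // 60s expiration mentioned in the thread

interface CacheRecord {
    timestamp: number
    result: string
}

function readCache(): CacheRecord | undefined {
    try {
        return JSON.parse(fs.readFileSync(cacheFile, 'utf8')) as CacheRecord
    } catch {
        return undefined // missing or corrupt file counts as a cache miss
    }
}

function getProfiles(fetch: () => string): string {
    const cached = readCache()
    if (cached && Date.now() - cached.timestamp < expirationMs) {
        return cached.result // fresh: no service call from this instance
    }
    const result = fetch() // stale or miss: one call against the 1 TPS limit
    fs.writeFileSync(cacheFile, JSON.stringify({ timestamp: Date.now(), result }))
    return result
}
```

The lock discussed earlier in the thread sits on top of this, so that concurrent instances that all see a stale cache still serialize into a single fetch.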
Solution
feature/x branches will not be squash-merged at release time.