IGNORE #1729
Conversation
✅ Deploy Preview for olmv1 ready!
Force-pushed from 3ada648 to 386f295
Codecov Report

@@            Coverage Diff             @@
##             main    #1729      +/-   ##
==========================================
+ Coverage   68.02%   68.08%   +0.05%
==========================================
  Files          59       59
  Lines        5004     5016      +12
==========================================
+ Hits         3404     3415      +11
- Misses       1358     1359       +1
  Partials      242      242
  	cachePath:              cachePath,
  	mutex:                  sync.RWMutex{},
- 	cacheDataByCatalogName: map[string]cacheData{},
+ 	cacheDataByCatalogName: sync.Map{},
sync.Map's lack of type safety is a concern. Could we hold off on this change until the sync/v2 package lands with a generic sync.Map?
Also, it looks like the existing mutex is still in place (for other reasons?), so with that extra locking the sync.Map doesn't buy us anything.
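One way to address the type-safety concern without waiting for a generic sync.Map would be a thin typed wrapper. This is a minimal sketch under assumptions: `cacheData` here is a stand-in for the repository's actual entry type, and the `typedMap` wrapper is hypothetical, not part of the PR.

```go
package main

import (
	"fmt"
	"sync"
)

// cacheData is a stand-in for the repository's cache entry type (assumed).
type cacheData struct {
	ref string
}

// typedMap is a hypothetical wrapper restoring compile-time type safety
// over sync.Map: the interface{} conversions live in one place.
type typedMap struct {
	m sync.Map
}

func (t *typedMap) Load(key string) (cacheData, bool) {
	v, ok := t.m.Load(key)
	if !ok {
		return cacheData{}, false
	}
	return v.(cacheData), true
}

func (t *typedMap) Store(key string, value cacheData) {
	t.m.Store(key, value)
}

func main() {
	var byCatalog typedMap
	byCatalog.Store("operatorhubio", cacheData{ref: "sha256:abc"})
	if d, ok := byCatalog.Load("operatorhubio"); ok {
		fmt.Println(d.ref)
	}
}
```

Callers then never see `interface{}`, and a later move to a generic map would only change the wrapper.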
  	if err := os.RemoveAll(cacheDir); err != nil {
  		return nil, fmt.Errorf("error removing old cache directory: %v", err)
+ 	if _, err := os.Stat(cacheDir); err == nil {
This seems unnecessary since os.RemoveAll already tolerates the case that the cache dir doesn't exist.
It seems that os.RemoveAll costs more than os.Stat. That was the motivation.
Is there a benchmark that can show the difference? Even if os.RemoveAll is significantly slower:
- How often will we be running this function?
- When this function runs, how often will cacheDir not exist?
You are totally right, it will depend on how many times we call writeFS.
I have no numbers for now; I am just looking at how we can improve the cache.
It is just experimental.
  	})
  	if errToCache != nil {
+ 		fsc.logger.Error(errToCache, "Cache update failed", "catalog", catalogName, "ref", resolvedRef)
Would this error ultimately propagate back up to the reconciler and be reported to the user? If so, it seems duplicative to log it, since it will show up in our logs anyway. Maybe add context to the returned error instead?
  	}
  	return cacheFS, errToCache
+ 	fsc.logger.Info("Cache updated successfully", "catalog", catalogName, "ref", resolvedRef)
Is there a particular need for more visibility into the inner workings of this cache that motivates the extra logging?
This feels like it could make our logs pretty noisy. I'd suggest logging this at level 4 to keep our standard logging a little leaner.
Force-pushed from 386f295 to ae41bfc
It seems that not much of this will still be valid.