
Conversation

@camilamacedo86
Contributor

@camilamacedo86 camilamacedo86 commented Feb 7, 2025

IGNORE

@camilamacedo86 camilamacedo86 requested a review from a team as a code owner February 7, 2025 13:31
@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Feb 7, 2025
@netlify

netlify bot commented Feb 7, 2025

Deploy Preview for olmv1 ready!

🔨 Latest commit: ae41bfc
🔍 Latest deploy log: https://app.netlify.com/sites/olmv1/deploys/67a676ce9cf77c0007f3def1
😎 Deploy Preview: https://deploy-preview-1729--olmv1.netlify.app

@camilamacedo86 camilamacedo86 changed the title WIP: Enhance catalogd cache performance by using sync.Map and adding struc… WIP: Enhance catalogd cache by using sync.Map and adding struc… Feb 7, 2025
@camilamacedo86 camilamacedo86 changed the title WIP: Enhance catalogd cache by using sync.Map and adding struc… Enhance catalogd cache by using sync.Map and adds logger Feb 7, 2025
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Feb 7, 2025
@camilamacedo86 camilamacedo86 changed the title Enhance catalogd cache by using sync.Map and adds logger WIP: Enhance catalogd cache by using sync.Map and adds logger Feb 7, 2025
@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Feb 7, 2025
@codecov

codecov bot commented Feb 7, 2025

Codecov Report

Attention: Patch coverage is 88.88889% with 3 lines in your changes missing coverage. Please review.

Project coverage is 68.08%. Comparing base (ae41bfc) to head (386f295).

Files with missing lines Patch % Lines
internal/catalogmetadata/cache/cache.go 88.46% 2 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1729      +/-   ##
==========================================
+ Coverage   68.02%   68.08%   +0.05%     
==========================================
  Files          59       59              
  Lines        5004     5016      +12     
==========================================
+ Hits         3404     3415      +11     
- Misses       1358     1359       +1     
  Partials      242      242              
Flag Coverage Δ
e2e 52.36% <59.25%> (-0.57%) ⬇️
unit 55.36% <85.18%> (+0.08%) ⬆️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

  cachePath:              cachePath,
  mutex:                  sync.RWMutex{},
- cacheDataByCatalogName: map[string]cacheData{},
+ cacheDataByCatalogName: sync.Map{},
Member

sync.Map's lack of type safety is a concern. Could we hold off on this change until sync/v2 package lands with a generic sync.Map?

Member

Also it looks like the existing mutex is still in place (for other reasons?), so the extra locking of the sync.Map doesn't buy us anything.
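For context, a minimal sketch (not part of this PR) of what typed access over sync.Map could look like today; the cacheData fields and the wrapper name are assumptions that mirror the discussion. Until a generic sync.Map exists, every read goes through a type assertion, which is the safety gap raised above:

```go
package cache

import "sync"

// cacheData stands in for the real struct in cache.go; fields are elided here.
type cacheData struct{}

// typedCatalogCache is a hypothetical wrapper that hides the untyped
// Load/Store round-trip a bare sync.Map forces on callers.
type typedCatalogCache struct {
	m sync.Map // effectively map[string]cacheData
}

func (c *typedCatalogCache) Load(catalogName string) (cacheData, bool) {
	v, ok := c.m.Load(catalogName)
	if !ok {
		return cacheData{}, false
	}
	return v.(cacheData), true // this type assertion is the safety gap being discussed
}

func (c *typedCatalogCache) Store(catalogName string, d cacheData) {
	c.m.Store(catalogName, d)
}
```

If the surrounding code keeps the existing sync.RWMutex anyway, a plain map[string]cacheData guarded by that mutex stays fully typed and avoids paying for two layers of synchronization, which is the point of the second comment.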


- if err := os.RemoveAll(cacheDir); err != nil {
-     return nil, fmt.Errorf("error removing old cache directory: %v", err)
+ if _, err := os.Stat(cacheDir); err == nil {
Member

This seems unnecessary since os.RemoveAll already tolerates the case that the cache dir doesn't exist.

Contributor Author

It seems that os.RemoveAll costs more than os.Stat; that was the motivation.

Member

Is there a benchmark that can show the difference? Even if os.RemoveAll is significantly slower:

  1. How often will we be running this function?
  2. When this function runs, how often will cacheDir not exist?
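No benchmark is included in the PR; a rough sketch of how one could be written (the package name and benchmark names are assumptions) to compare a bare os.RemoveAll against the proposed os.Stat guard when the directory is absent:

```go
package cache

import (
	"os"
	"path/filepath"
	"testing"
)

// BenchmarkRemoveAllMissingDir: os.RemoveAll on a path that is already gone.
// Note that os.RemoveAll returns nil in this case, which is the point of the
// first comment above.
func BenchmarkRemoveAllMissingDir(b *testing.B) {
	dir := filepath.Join(b.TempDir(), "does-not-exist")
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if err := os.RemoveAll(dir); err != nil {
			b.Fatal(err)
		}
	}
}

// BenchmarkStatThenRemoveAll: the guard proposed in the diff, skipping the
// removal when the directory is absent.
func BenchmarkStatThenRemoveAll(b *testing.B) {
	dir := filepath.Join(b.TempDir(), "does-not-exist")
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if _, err := os.Stat(dir); err == nil {
			if err := os.RemoveAll(dir); err != nil {
				b.Fatal(err)
			}
		}
	}
}
```

Run with something like `go test -bench RemoveAll ./internal/catalogmetadata/cache/`; even so, the two questions above about how often the function runs matter more than the raw per-call numbers.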

Contributor Author

You are totally right, it will depend on how many times we call writeFS.
I have no numbers on that for now; I am just looking at how we can improve the cache.
It is just experimental stuff.

  })

  if errToCache != nil {
+     fsc.logger.Error(errToCache, "Cache update failed", "catalog", catalogName, "ref", resolvedRef)
Member

Would this error ultimately propagate back up to the reconciler and be reported back to the user? If so, it seems duplicative to log it since it will show up in our logs anyway. Maybe add context to the returned error instead?
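A minimal sketch of the alternative being suggested, wrapping the error with catalog context and letting the reconciler report it once; the helper name and signature are hypothetical:

```go
package cache

import (
	"fmt"
	"io/fs"
)

// finishCacheUpdate is a hypothetical helper illustrating the suggestion:
// attach context via %w instead of logging, so the error surfaces once
// through the reconciler rather than appearing twice in the logs.
func finishCacheUpdate(cacheFS fs.FS, catalogName, resolvedRef string, errToCache error) (fs.FS, error) {
	if errToCache != nil {
		return nil, fmt.Errorf("updating cache for catalog %q (ref %q): %w", catalogName, resolvedRef, errToCache)
	}
	return cacheFS, nil
}
```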

  }

- return cacheFS, errToCache
+ fsc.logger.Info("Cache updated successfully", "catalog", catalogName, "ref", resolvedRef)
Member

Do we have a particular need for more visibility into the inner workings of this cache that motivates the extra logging?

This feels like it could start to make our logs pretty noisy. I'd suggest setting this to log at level 4 to keep our standard logging a little bit leaner.
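A sketch of the suggested change, assuming the logr-style logger the snippet already uses; the helper is hypothetical, and the relevant part is V(4), which keeps the success message out of the default verbosity:

```go
package cache

import "github.com/go-logr/logr"

// logCacheUpdated is a hypothetical helper; logger.V(4) hides this message
// unless verbose logging is enabled, keeping standard logs leaner.
func logCacheUpdated(logger logr.Logger, catalogName, resolvedRef string) {
	logger.V(4).Info("Cache updated successfully", "catalog", catalogName, "ref", resolvedRef)
}
```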

@camilamacedo86 camilamacedo86 changed the title WIP: Enhance catalogd cache by using sync.Map and adds logger IGNORE Feb 7, 2025
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Feb 7, 2025
@camilamacedo86
Contributor Author

It seems that not much of this will end up being valid, so I am closing it out.
