41 changes: 40 additions & 1 deletion README.md
@@ -10,7 +10,7 @@ Use `go get` to retrieve the SDK to add it to your `GOPATH` workspace, or projec
$go get github.com/ciscoecosystem/mso-go-client
```

-There are no additional dependancies needed to be installed.
+There are no additional dependencies needed to be installed.

## Overview ##

@@ -44,3 +44,42 @@ Example,
client.Save("api/v1/tenants", models.NewTenant(TenantAttributes))
# TenantAttributes is a struct present in models/tenant.go
```

## Caching Support ##

The client supports optional caching for schema requests to improve performance in scenarios with frequent schema lookups.

### Enabling Caching ###

Caching is **disabled by default** for safety. Enable it using the `CacheEnabled` option:

```golang
import "github.com/ciscoecosystem/mso-go-client/client"

// Enable caching
msoClient := client.GetClient("URL", "Username",
client.Password("Password"),
client.Insecure(true),
client.CacheEnabled(true))
```

### Cache Operations ###

Once caching is enabled, you can use the following methods:

```golang
// Fetch schema with caching support (automatically handles cache hits/misses)
schema, err := msoClient.GetSchemaWithCache("schema-id")
// Review thread:
// Contributor: why not adjust the current GetViaURL, so that when caching is
// enabled it leverages caching for everything?
// Author: for now I wanted to enable it on selected resources only that we
// know are problematic, to see the behaviour.
// Contributor (akinross, Dec 19, 2025): I personally think we should have
// consistent behaviour on all the resources, but this is not a resource
// (schema) discussion in my opinion but a generic caching discussion. In this
// case I would say we should have something like a GetViaURLWithCache
// function that can be used for schema (but also for other endpoints) where
// it would access the caching mechanism. In the provider we can then
// gradually change those resources.

// Invalidate a specific schema from cache (e.g., after updates)
msoClient.InvalidateSchemaCache("schema-id")

// Clear all cached schemas (useful for bulk operations or cleanup)
msoClient.InvalidateAllSchemaCache()

// Get cache statistics for monitoring
hits, misses, invalidations, hitRatio := msoClient.GetCacheStats()

// Log current cache statistics
msoClient.LogCacheStats()
// Review thread:
// Contributor: I think these functions/methods should be part of a cache
// struct and not the client.
```
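The generic wrapper the reviewer proposes could look roughly like the following standalone sketch. Everything here (the `getWithCache` name, the string-based cache, the `fetch` callback) is illustrative only, not part of the SDK:

```golang
package main

import "fmt"

// getWithCache sketches the reviewer's idea of a generic caching wrapper:
// any GET endpoint becomes cacheable by keying the cache on its URL.
func getWithCache(cache map[string]string, url string, fetch func(string) (string, error)) (string, error) {
	if v, ok := cache[url]; ok {
		return v, nil // cache hit: serve from memory
	}
	v, err := fetch(url) // cache miss: go to the API
	if err != nil {
		return "", err
	}
	cache[url] = v
	return v, nil
}

func main() {
	calls := 0
	fetch := func(url string) (string, error) {
		calls++
		return "body-of-" + url, nil
	}
	cache := map[string]string{}
	getWithCache(cache, "api/v1/schemas/s1", fetch)
	getWithCache(cache, "api/v1/schemas/s1", fetch) // second call served from cache
	fmt.Println(calls) // fetch ran once
}
```

With this shape, `GetSchemaWithCache` would reduce to one call with a schema URL, and other endpoints could opt in the same way.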
204 changes: 204 additions & 0 deletions client/client.go
@@ -57,6 +57,8 @@ type Client struct {
backoffMinDelay int
backoffMaxDelay int
backoffDelayFactor float64
Cache *ThreadSafeCache
cacheEnabled bool
}

type CallbackRetryFunc func(*container.Container) bool
@@ -138,6 +140,12 @@ func BackoffDelayFactor(backoffDelayFactor float64) Option {
}
}

func CacheEnabled(enabled bool) Option {
return func(client *Client) {
client.cacheEnabled = enabled
}
}

> **Contributor (on lines +143 to +148):** Is this required? Can't we init the client with a cache when we provide that option? That way, when a cache exists on the client it is used, and otherwise it is not.

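The reviewer's alternative (dropping the separate `cacheEnabled` flag and treating a non-nil cache as "enabled") might be sketched like this; `miniClient`, `option`, and `withCache` are hypothetical names, not the SDK's API:

```golang
package main

import "fmt"

// miniClient models the reviewer's suggestion: no boolean flag,
// the presence of the cache itself is the switch.
type miniClient struct {
	cache map[string]string
}

type option func(*miniClient)

// withCache installs a cache; clients built without it keep cache == nil.
func withCache() option {
	return func(mc *miniClient) {
		mc.cache = make(map[string]string)
	}
}

func newClient(opts ...option) *miniClient {
	mc := &miniClient{} // cache stays nil unless the option is given
	for _, o := range opts {
		o(mc)
	}
	return mc
}

func (mc *miniClient) cachingEnabled() bool { return mc.cache != nil }

func main() {
	fmt.Println(newClient().cachingEnabled())            // false
	fmt.Println(newClient(withCache()).cachingEnabled()) // true
}
```

This removes the `cacheEnabled` field and the always-allocated `NewThreadSafeCache()` in `initClient`.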
func initClient(clientUrl, username string, options ...Option) *Client {
var transport *http.Transport
bUrl, err := url.Parse(clientUrl)
@@ -150,6 +158,7 @@ func initClient(clientUrl, username string, options ...Option) *Client {
username: username,
httpClient: http.DefaultClient,
maxReAuthRetries: 3,
Cache: NewThreadSafeCache(),
}

for _, option := range options {
@@ -377,6 +386,119 @@ func (c *Client) GetVersion() (string, error) {
return version, nil
}

// deepCloneContainer creates a true deep copy of a Container to prevent shared data races
func (c *Client) deepCloneContainer(original *container.Container) (*container.Container, error) {
> **Contributor:** Why expose this in client and not in gabs.go as a method/function on the container itself?

if original == nil {
return nil, nil
}

// Use gabs-compatible deep cloning via JSON bytes but with proper container creation
jsonBytes, err := json.Marshal(original.Data())
if err != nil {
log.Printf("[WARN] Failed to marshal container for cloning: %v", err)
return original, nil // Return original as fallback
}

cloned, err := container.ParseJSON(jsonBytes)
if err != nil {
log.Printf("[WARN] Failed to parse JSON for cloning: %v", err)
return original, nil // Return original as fallback
}

return cloned, nil
}
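To illustrate the round-trip cloning strategy this function relies on, here is a minimal stdlib-only sketch (no gabs) showing that a marshal/unmarshal copy is fully independent of the original:

```golang
package main

import (
	"encoding/json"
	"fmt"
)

// deepClone makes an independent copy of arbitrary JSON-like data by
// marshalling to bytes and unmarshalling into a fresh value — the same
// round-trip strategy the diff applies via gabs' ParseJSON.
func deepClone(original map[string]interface{}) (map[string]interface{}, error) {
	b, err := json.Marshal(original)
	if err != nil {
		return nil, err
	}
	var clone map[string]interface{}
	if err := json.Unmarshal(b, &clone); err != nil {
		return nil, err
	}
	return clone, nil
}

func main() {
	orig := map[string]interface{}{"schema": map[string]interface{}{"id": "s1"}}
	clone, err := deepClone(orig)
	if err != nil {
		panic(err)
	}
	// Mutating the clone does not leak into the original.
	clone["schema"].(map[string]interface{})["id"] = "changed"
	fmt.Println(orig["schema"].(map[string]interface{})["id"]) // still "s1"
}
```

The trade-off is cost: every cache hit pays a full marshal + parse, which is why the fallback paths above return the original container when cloning fails.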

// GetSchemaWithCache retrieves schema with caching support
func (c *Client) GetSchemaWithCache(schemaId string) (*container.Container, error) {
> **Contributor:** Should we make the cache mechanism generic to allow for anything to be cached with some sort of cache-id?
>
> **Author:** We could, but I'm just not sure it makes sense. I don't see any other object in NDO that could grow as much as schema to need caching; the new tenant templates have pretty limited configuration options, so we wouldn't gain much from caching them.
>
> **Contributor:** I would argue two points to this:
>
> 1. schema originally was not this big but it grew over time; in some template structures like l3out templates there are already large configurations possible
> 2. from a consistency point of view I think the caching should behave similarly for every GET request

// Skip cache if disabled - fall back to direct API call
if !c.cacheEnabled {
log.Printf("[DEBUG] SCHEMA_CACHE_DISABLED for %s, fetching from API", schemaId)
return c.GetViaURL(fmt.Sprintf("api/v1/schemas/%s", schemaId))
}

cacheKey := fmt.Sprintf("schema_%s", schemaId)

// Check cache first - use atomic get+clone
cloneFunc := func(item interface{}) (interface{}, error) {
return c.deepCloneContainer(item.(*container.Container))
}

if cached, found, cloneErr := c.Cache.Get(cacheKey, cloneFunc); found {
hits, misses, invalidations, hitRatio := c.Cache.GetStats()
log.Printf("[DEBUG] SCHEMA_CACHE_HIT for %s | Stats: Hits=%d, Misses=%d, Invalidations=%d, HitRatio=%.1f%%",
schemaId, hits, misses, invalidations, hitRatio)

if cloneErr != nil {
log.Printf("[WARN] Failed to clone cached container for %s, fetching fresh: %v", schemaId, cloneErr)
return c.GetViaURL(fmt.Sprintf("api/v1/schemas/%s", schemaId))
}
return cached.(*container.Container), nil
}

hits, misses, invalidations, hitRatio := c.Cache.GetStats()
log.Printf("[DEBUG] SCHEMA_CACHE_MISS for %s, fetching from API | Stats: Hits=%d, Misses=%d, Invalidations=%d, HitRatio=%.1f%%",
schemaId, hits, misses, invalidations, hitRatio)

// Cache miss - fetch from API
cont, err := c.GetViaURL(fmt.Sprintf("api/v1/schemas/%s", schemaId))
if err != nil {
return nil, err
}

// Store in cache
c.Cache.Set(cacheKey, cont)
log.Printf("[DEBUG] SCHEMA_CACHED for %s | Size: %d items in cache", schemaId, len(c.Cache.items))

// CRITICAL: Return deep clone even for fresh data to maintain consistency
cloned, err := c.deepCloneContainer(cont)
if err != nil {
log.Printf("[WARN] Failed to clone fresh container for %s, returning original: %v", schemaId, err)
return cont, nil // Return original as fallback
}
return cloned, nil
}

// InvalidateSchemaCache removes a schema from cache
func (c *Client) InvalidateSchemaCache(schemaId string) {
// Skip cache operations if caching is disabled
if !c.cacheEnabled {
log.Printf("[DEBUG] SCHEMA_CACHE_DISABLED, skipping invalidation for %s", schemaId)
return
}

cacheKey := fmt.Sprintf("schema_%s", schemaId)
c.Cache.Delete(cacheKey)
hits, misses, invalidations, hitRatio := c.Cache.GetStats()
log.Printf("[DEBUG] SCHEMA_CACHE_INVALIDATED for %s | Stats: Hits=%d, Misses=%d, Invalidations=%d, HitRatio=%.1f%%",
schemaId, hits, misses, invalidations, hitRatio)
> **Contributor:** Why not leverage the LogCacheStats function, or some other wrapper function, to prevent all the copied code?

}

// InvalidateAllSchemaCache removes all schema caches (for safety)
func (c *Client) InvalidateAllSchemaCache() {
// Skip cache operations if caching is disabled
if !c.cacheEnabled {
log.Printf("[DEBUG] SCHEMA_CACHE_DISABLED, skipping all cache invalidation")
return
}

c.Cache.DeletePattern("schema_")
hits, misses, invalidations, hitRatio := c.Cache.GetStats()
log.Printf("[DEBUG] SCHEMA_CACHE_ALL_INVALIDATED | Stats: Hits=%d, Misses=%d, Invalidations=%d, HitRatio=%.1f%%",
hits, misses, invalidations, hitRatio)
}

// GetCacheStats returns current cache statistics
func (c *Client) GetCacheStats() (hits, misses, invalidations int64, hitRatio float64) {
return c.Cache.GetStats()
}

// LogCacheStats logs current cache statistics
func (c *Client) LogCacheStats() {
hits, misses, invalidations, hitRatio := c.Cache.GetStats()
log.Printf("[DEBUG] SCHEMA_CACHE_STATS | Hits=%d, Misses=%d, Invalidations=%d, HitRatio=%.1f%%, Size=%d",
hits, misses, invalidations, hitRatio, len(c.Cache.items))
}
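The logging duplication the review flags could be collapsed into one shared formatter. A standalone sketch of that idea, with hypothetical names (`cacheStats`, `logLine`) rather than the SDK's:

```golang
package main

import "fmt"

// cacheStats mirrors the counters the diff tracks on ThreadSafeCache.
type cacheStats struct {
	hits, misses, invalidations int64
}

// hitRatio reproduces the GetStats computation: hits / (hits + misses) * 100.
func (s cacheStats) hitRatio() float64 {
	total := s.hits + s.misses
	if total == 0 {
		return 0
	}
	return float64(s.hits) / float64(total) * 100
}

// logLine builds one formatted stats line so every cache event
// (hit, miss, invalidation) shares a single formatter instead of
// repeating the Printf arguments at each call site.
func (s cacheStats) logLine(event string) string {
	return fmt.Sprintf("[DEBUG] %s | Hits=%d, Misses=%d, Invalidations=%d, HitRatio=%.1f%%",
		event, s.hits, s.misses, s.invalidations, s.hitRatio())
}

func main() {
	s := cacheStats{hits: 3, misses: 1}
	fmt.Println(s.logLine("SCHEMA_CACHE_HIT"))
}
```

Each `log.Printf` call in `GetSchemaWithCache` and the invalidation helpers would then shrink to one `logLine(...)` call.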

// Compares the version to the retrieved version.
// This returns -1, 0, or 1 if this version is smaller, equal, or larger than the retrieved version, respectively.
func (c *Client) CompareVersion(v string) (int, error) {
@@ -594,3 +716,85 @@ func stripQuotes(word string) string {
}
return word
}

type ThreadSafeCache struct {
> **Contributor:** Why add this to the client file instead of as a separate cache file? Do we have a non-thread-safe cache? Should we just name it `Cache` and have a thread-safe implementation for the methods/functions?

mu sync.RWMutex
items map[string]interface{}
hits int64
misses int64
invalidations int64
> **Contributor:** What is your exact goal of this statistics part? Should we also include statistics per cache item? If we want to expose statistics, I think we should create a statistics struct which is part of each item, with the necessary data:
>
>     type ThreadSafeCache struct {
>     	mu    sync.RWMutex
>     	items map[string]CacheItem
>     }
>
>     type CacheItem struct {
>     	item          interface{}
>     	hits          int64
>     	misses        int64
>     	invalidations int64
>     }
>
> Perhaps even go further and put the statistics themselves in their own type. Then have a cache statistics method/function that provides a report based on which statistics you would like to present.
}

// NewThreadSafeCache creates and returns a new initialized ThreadSafeCache.
func NewThreadSafeCache() *ThreadSafeCache {
return &ThreadSafeCache{
items: make(map[string]interface{}),
}
}

// Set adds or updates an item in the cache.
func (c *ThreadSafeCache) Set(key string, value interface{}) {
> **Contributor:** The `c` receiver is kind of confusing with the `c` used for the client; shall we make this more specific?

c.mu.Lock()
defer c.mu.Unlock()
c.items[key] = value
}

// Get atomically gets and clones an item to prevent race conditions
func (c *ThreadSafeCache) Get(key string, cloneFunc func(interface{}) (interface{}, error)) (interface{}, bool, error) {
c.mu.RLock()
item, found := c.items[key]

var result interface{}
var cloneErr error

if found {
// Clone while holding read lock - prevents race conditions
result, cloneErr = cloneFunc(item)
}
c.mu.RUnlock()

// Update statistics
c.mu.Lock()
if found {
c.hits++
} else {
c.misses++
}
c.mu.Unlock()

return result, found, cloneErr
}

// Delete removes an item from the cache.
func (c *ThreadSafeCache) Delete(key string) {
c.mu.Lock()
defer c.mu.Unlock()
delete(c.items, key)
c.invalidations++
}

// DeletePattern removes all items from the cache matching the pattern.
func (c *ThreadSafeCache) DeletePattern(pattern string) {
> **Contributor:** Do we also require a delete-all or clear-cache method/function?
>
> **Author:** There is InvalidateAllSchemaCache that can be used to clear all cache in case it's needed (will change that to Clear so it covers all possible cache). We also have InvalidateSchemaCache, which is used after every create/update operation on schema resources. Not sure there is a need for anything more; closing the Terraform sessions should clear the cache on its own.

c.mu.Lock()
defer c.mu.Unlock()
deletedCount := 0
for key := range c.items {
if strings.Contains(key, pattern) {
delete(c.items, key)
deletedCount++
}
}
}

func (c *ThreadSafeCache) GetStats() (hits, misses, invalidations int64, hitRatio float64) {
c.mu.RLock()
defer c.mu.RUnlock()
hits = c.hits
misses = c.misses
invalidations = c.invalidations
total := hits + misses
if total > 0 {
hitRatio = float64(hits) / float64(total) * 100
}
return
}
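A stripped-down model of the RWMutex pattern used throughout this cache, showing that concurrent writers and readers are safe under the lock (`miniCache` is illustrative only, and uses the `mc` receiver name the review suggests instead of `c`):

```golang
package main

import (
	"fmt"
	"sync"
)

// miniCache is a reduced sketch of ThreadSafeCache: a map guarded by an
// RWMutex, so concurrent readers do not block each other while writers
// get exclusive access.
type miniCache struct {
	mu    sync.RWMutex
	items map[string]string
}

func newMiniCache() *miniCache {
	return &miniCache{items: make(map[string]string)}
}

func (mc *miniCache) Set(key, value string) {
	mc.mu.Lock()
	defer mc.mu.Unlock()
	mc.items[key] = value
}

func (mc *miniCache) Get(key string) (string, bool) {
	mc.mu.RLock()
	defer mc.mu.RUnlock()
	v, ok := mc.items[key]
	return v, ok
}

func main() {
	mc := newMiniCache()
	var wg sync.WaitGroup
	// Many goroutines writing and reading concurrently is safe under the mutex.
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			mc.Set(fmt.Sprintf("schema_%d", n), "cached")
			mc.Get("schema_0")
		}(i)
	}
	wg.Wait()
	fmt.Println(len(mc.items))
}
```

Running this under `go run -race` would confirm the lock discipline; removing the mutex calls makes the race detector fire.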