
Conversation

@SuperOleg39
Contributor

Rationale

A custom cache for the DNS interceptor allows:

  • adding custom metrics for DNS cache hit-rate monitoring
  • using any cache library (lru-cache, etc.)

Reference - https://github.com/szmarczak/cacheable-lookup?tab=readme-ov-file#cache
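As a sketch of the monitoring use case (assuming the proposed `cache` option accepts any Map-like object, as this PR suggests; `InstrumentedCache` is a hypothetical name, not undici API):

```javascript
// Hypothetical sketch: a Map subclass that counts hits and misses, which
// could be passed as the proposed `cache` option for hit-rate monitoring.
class InstrumentedCache extends Map {
  hits = 0
  misses = 0

  get (key) {
    const value = super.get(key)
    if (value === undefined) {
      this.misses++
    } else {
      this.hits++
    }
    return value
  }
}

const cache = new InstrumentedCache()
cache.set('example.com', { records: [] })
cache.get('example.com') // hit
cache.get('unknown.host') // miss
console.log(`hits=${cache.hits} misses=${cache.misses}`) // hits=1 misses=1
```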

If these changes are acceptable, I will add unit tests and validation for this parameter.

Changes

Added cache parameter for DNSInstance

Status

Contributor

@Uzlopak Uzlopak left a comment


So you just renamed the attribute records to cache, made the attribute public, and lastly made the cache an option that can be passed via the options parameter of the constructor.

I have questions:
Why is cache now publicly accessible? Did you do this on purpose? Please explain.

What happens if I pass an LRU cache which handles TTL and max items by itself? How does full come into play if the LRU cache is configured with other values?

In runLookup we just ignore new entries if the Map is full. Isn't the point of having an LRU to keep the most recently used elements in the cache, rather than just the first x elements inserted before hitting the maxItems limit?

There is no definition of the shape of the cache. What if a .get() call returns undefined and not null?
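For context, a plain Map returns undefined (not null) on a miss, so a consumer written against a null-returning cache would misread every miss:

```javascript
// A plain Map returns undefined, not null, for missing keys. A consumer
// that checks for null would treat every miss as a hit.
const cache = new Map()
const entry = cache.get('example.com')

console.log(entry === undefined) // true
console.log(entry === null) // false

// A null check like this silently misbehaves with a Map-backed cache:
if (entry !== null) {
  // reached on a miss too, even though nothing is cached
}
```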

@Uzlopak Uzlopak requested a review from Copilot September 22, 2025 13:18
Contributor

Copilot AI left a comment


Pull Request Overview

This PR adds a configurable cache option to the DNS interceptor, allowing users to provide custom cache implementations instead of being limited to the default Map. This enables custom metrics for DNS cache hit rate monitoring and support for specialized cache libraries like lru-cache.

Key changes:

  • Added cache parameter to DNSInstance constructor with Map as default fallback
  • Replaced all internal #records Map usage with the configurable cache property


affinity = null
lookup = null
pick = null
cache = null

Copilot AI Sep 22, 2025


The cache property should not be initialized to null since it's expected to be a Map-like object. Consider initializing it to new Map() or removing the initialization entirely since it's set in the constructor.

Suggested change
- cache = null

Comment on lines 24 to 25
this.cache = opts.cache ?? new Map()
}

Copilot AI Sep 22, 2025


Missing input validation for the cache parameter. The cache object should be validated to ensure it has the required Map-like interface (get, set, delete, size properties/methods) before assignment to prevent runtime errors.

Suggested change
-     this.cache = opts.cache ?? new Map()
-   }
+     if (opts.cache !== undefined && opts.cache !== null) {
+       if (
+         typeof opts.cache.get !== 'function' ||
+         typeof opts.cache.set !== 'function' ||
+         typeof opts.cache.delete !== 'function' ||
+         (typeof opts.cache.size !== 'number' && typeof opts.cache.size !== 'undefined')
+       ) {
+         throw new InvalidArgumentError('cache must implement the Map interface (get, set, delete methods and size property)')
+       }
+       this.cache = opts.cache
+     } else {
+       this.cache = new Map()
+     }
+   }

@SuperOleg39
Contributor Author

Hi! Thanks for the quick response.

I have questions: Why is cache now publicly accessible? Did you do this on purpose? Please explain.

I made cache public for a few reasons, but it's not that important:

  • simpler debugging and monkeypatching if necessary
  • fast access to the cache when we need to call dump/load or other lru-cache methods - useful if the DNS interceptor instance is registered as a DI provider, for example

What happens if I pass an LRU cache which handles TTL and max items by itself? How does full come into play if the LRU cache is configured with other values?

First, I think cache TTL and size are a bit out of scope for the interceptor, and in the future this logic could be encapsulated in a default cache implementation.

But for now, if lru-cache is used and the cache property is added, I think the cache needs to be configured with Infinity for both TTL and max items.

In runLookup we just ignore new entries if the Map is full. Isn't the point of having an LRU to keep the most recently used elements in the cache, rather than just the first x elements inserted before hitting the maxItems limit?

Yes, and that is why an LRU cache can be more effective. But it is just an example; I'm not sure applications resolve so many hosts that you need to worry about cache size.

A DNS cache is more about the balance between request speed and the failure rate caused by outdated IPs, so custom cache metrics can be really useful here for fine-tuning.

There is no definition of the shape of the cache. What if a .get() call returns undefined and not null?

Can we do something here without TypeScript? Maybe I can define the interface with JSDoc.
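For illustration, the contract could be documented with a JSDoc typedef; the names DNSCacheLike and lookupRecords below are hypothetical, not undici API:

```javascript
/**
 * Hypothetical JSDoc contract for the cache option.
 *
 * @typedef {Object} DNSCacheLike
 * @property {(key: string) => (any | undefined)} get - must return undefined on a miss
 * @property {(key: string, value: any) => void} set
 * @property {(key: string) => boolean} delete
 * @property {number} size
 */

/**
 * Example consumer that normalizes a miss to null.
 * @param {DNSCacheLike} cache
 * @param {string} origin
 */
function lookupRecords (cache, origin) {
  const records = cache.get(origin)
  return records === undefined ? null : records
}

console.log(lookupRecords(new Map(), 'example.com')) // null
```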

@Uzlopak
Contributor

Uzlopak commented Sep 22, 2025

I think the interceptor was planned with an integrated cache. Infinity as a default value is not acceptable, because it would be a source of memory leaks.

I really think you need to extract the caching logic properly; otherwise it is imho not acceptable.

@metcoder95
Copy link
Member

Agree with @Uzlopak on extracting the cache implementation.

We need to set an explicit contract the passed cache should follow, and the cache should account for the TTL given in the DNS record (at least pass that information, and let the cache decide what to do with it).

Abstracting the current cache implementation will help you shape the contract and the interaction between the interceptor and the cache semantics.

Note: what I mean by extracting is to create a Cache abstraction and use it as the default cache if none is passed.
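A sketch of what such an extraction could look like (the class and option names here are hypothetical, not undici API): the interceptor depends only on a small get/set/delete/size contract, the default implementation keeps TTL handling internal, and the DNS record's TTL is passed to set() so a custom cache can decide what to do with it.

```javascript
// Hypothetical DefaultDNSCache: a default implementation of the proposed
// cache contract that accounts for the TTL given in the DNS record.
class DefaultDNSCache {
  #records = new Map()

  get (origin) {
    const entry = this.#records.get(origin)
    if (entry === undefined) return undefined
    if (entry.expires <= Date.now()) {
      // Expired record: drop it and report a miss
      this.#records.delete(origin)
      return undefined
    }
    return entry.value
  }

  set (origin, value, { ttl }) {
    // The DNS record's TTL is passed in; the cache decides what to do with it
    this.#records.set(origin, { value, expires: Date.now() + ttl })
  }

  delete (origin) {
    return this.#records.delete(origin)
  }

  get size () {
    return this.#records.size
  }
}

const cache = new DefaultDNSCache()
cache.set('example.com', ['93.184.216.34'], { ttl: 10_000 })
console.log(cache.get('example.com')) // the records, until the TTL elapses
```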

@Uzlopak
Contributor

Uzlopak commented Sep 23, 2025

Maybe we should also avoid the word Cache. Maybe we should use the word Store? Or cache-store? :/

@SuperOleg39
Contributor Author

100% agreed on the cache extraction, will be back soon

@SuperOleg39
Contributor Author

And another idea: we could just integrate a diagnostics channel for monitoring purposes

@SuperOleg39
Contributor Author

Demo with extracted cache - 6605718

The extracted cache has a lot of logic for handling TTL at the level of individual IP records, and looking at it, I no longer think this decision is such a good idea.
A possible lru-cache adapter, if we want the same behaviour, would have to duplicate all of this logic.

Maybe we just need a simple Map-like API for the cache and should use it exclusively as storage.

But this brings us back to deciding how that combines with TTL at the lru-cache level - I think cacheable-lookup has exactly the same problem, and their example with the QuickLRU library uses a default maxAge of Infinity.

Also, cacheable-lookup can't prevent a lookup when the cache is full, and I can't solve that case while staying with a Map-like API.

}

// TODO: it will require to write adapter for different caches, look for a better ideas
get full () {
Contributor Author


Incompatible with the JS Map interface

Member

@metcoder95 metcoder95 left a comment


Maybe we need just a simple Map-like API for cache and use this cache exclusively as storage.

This was the exact suggestion from @Uzlopak, so let's follow that path; and if seeking compatibility with lru-cache-like solutions, we can hint at recommendations for making them compatible with the DNSStorage interface.
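As a hint of what that compatibility could look like, here is a minimal LRU store exposing the Map-like surface (get/set/delete/size) under discussion; LRUStore is an illustrative name, not part of undici or of the lru-cache package.

```javascript
// Minimal LRU store satisfying a Map-like contract. It relies on Map
// preserving insertion order: the first key is always the least recently used.
class LRUStore {
  #map = new Map()
  #max

  constructor (max = 100) {
    this.#max = max
  }

  get (key) {
    if (!this.#map.has(key)) return undefined
    const value = this.#map.get(key)
    // Re-insert to mark the entry as most recently used
    this.#map.delete(key)
    this.#map.set(key, value)
    return value
  }

  set (key, value) {
    if (this.#map.has(key)) this.#map.delete(key)
    this.#map.set(key, value)
    if (this.#map.size > this.#max) {
      // Evict the least recently used entry (first key in insertion order)
      const oldest = this.#map.keys().next().value
      this.#map.delete(oldest)
    }
    return this
  }

  delete (key) {
    return this.#map.delete(key)
  }

  get size () {
    return this.#map.size
  }
}
```

Unlike the runLookup behaviour questioned above, this evicts the least recently used entry instead of ignoring new ones when full.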

const maxInt = Math.pow(2, 31) - 1

class DNSInstance {
export class DNSCache {
Member


Suggested change
- export class DNSCache {
+ export class DNSStore {

As per @Uzlopak's suggestion 😛

@SuperOleg39
Contributor Author

Maybe we just need a simple Map-like API for the cache and should use it exclusively as storage.

This was the exact suggestion from @Uzlopak, so let's follow that path; and if seeking compatibility with lru-cache-like solutions, we can hint at recommendations for making them compatible with the DNSStorage interface.

Thanks, I got it too late :)

Would this solution be more appropriate?

SuperOleg39@2c156e1

Member

@metcoder95 metcoder95 left a comment


Can we add some tests to cover the new DNSStore feature?

return this.#records.size
}

// TODO: it will require to write adapter for different caches, look for a better ideas
Member


Shall we remove the TODO?

Contributor Author

@SuperOleg39 SuperOleg39 Sep 26, 2025


Before making changes, can you please help me choose between the DNSStorage implementations:

Member


simple implementation, only storage logic extracted - #4589 (files)

I'd prefer this one; it's simpler, and with the example you shared it should be enough for implementers to adapt it to their needs

Contributor Author


Thanks, #4589 is ready for review!

}
}

// TODO: deduplicate logic
Member


Shall we abstract it then?

}
}

// TODO: deduplicate logic
Member


ditto

@SuperOleg39 SuperOleg39 mentioned this pull request Sep 26, 2025
@SuperOleg39
Contributor Author

Closed in favour of #4589
