This adds the necessary logic to deal with caching auth tokens. As this
is intended to be a Rails plugin, we leverage `Rails.cache` instead of
injecting a cache. We already use the Rails cache directly in other
parts of the code, so this is consistent.
We use the auth token as the cache key and a hash containing the
authenticated user and team ids as the value. This allows clients to
bypass the expensive network call while still being able to resolve the
critical user and team related ids. It could also save a DB lookup if
the endpoints using this no longer need to perform the full user and/or
team lookups and instance creations.
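A minimal sketch of that lookup flow, where a plain Hash stands in for `Rails.cache` and `resolve_token_over_network` is a hypothetical name for the expensive network call (neither is the plugin's real API):

```ruby
TOKEN_CACHE = {} # stand-in for Rails.cache

def resolve_token_over_network(token)
  # pretend this is the expensive API round trip
  { user_id: 42, team_ids: [7, 9] }
end

def auth_data_for(token)
  # a cache hit bypasses the network call entirely
  TOKEN_CACHE[token] ||= resolve_token_over_network(token)
end

first  = auth_data_for("abc123") # network call
second = auth_data_for("abc123") # served from the cache
```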
However, we still provide the common `current_user` helper as it is
likely many existing endpoints still rely on it. But since we are mainly
caching ids, this helper is lazily loaded, so endpoints which do not
require the user lookup and instantiation never pay for them. Since we
are now caching the team ids as well, we expose an additional helper to
access them. For symmetry, we expose a helper for the user id.
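A hypothetical shape for those helpers. `@auth_data` stands in for the cached hash and `find_user` for the expensive `User` lookup; the class and method internals are illustrative, not the plugin's real code:

```ruby
class AuthHelpers
  def initialize(auth_data)
    @auth_data = auth_data
    @user_lookups = 0 # instrumentation for this sketch only
  end

  attr_reader :user_lookups

  # exposed for symmetry with current_team_ids
  def current_user_id
    @auth_data[:user_id]
  end

  # the cached ids let most endpoints skip the user lookup entirely
  def current_team_ids
    @auth_data[:team_ids]
  end

  # lazily loaded: the lookup only happens if an endpoint calls this
  def current_user
    @current_user ||= find_user(current_user_id)
  end

  private

  def find_user(id)
    @user_lookups += 1
    { id: id, name: "someone" } # stand-in for User.find(id)
  end
end

helpers = AuthHelpers.new(user_id: 42, team_ids: [7, 9])
helpers.current_team_ids # no user lookup triggered
```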
This is also the first step in supporting team level tokens. With this
hash format, such a token would simply have a `nil` user id and a
single team id (still in an array).
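Under the same hash format, a hypothetical team-level token entry would look like this:

```ruby
# no user, a single team id, still wrapped in an array
team_token_entry = { user_id: nil, team_ids: [7] }
```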
Lastly, we need to be aware that different caching strategies may treat
hashes differently. In particular, the in-memory cache has an odd quirk:
it returns **the same hash object** (the object ids match) when you read
it. However, if the hash was frozen before being cached, it will **not**
be frozen when read. And since reading the cache hands back a reference
to the same object (just with a different frozen state), it can be
mutated, and subsequent reads will see the modified state.
To prevent this, and to encourage clients to be defensive, we actively
re-freeze the cached value when we read it. This must also include
freezing the internal team id array. On a cache miss, this means we need
to be sure we freeze the value before setting our internal state.
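Ruby's `freeze` is shallow, which is why the re-freeze must also cover the nested team id array. A quick demonstration:

```ruby
entry = { user_id: 42, team_ids: [7, 9] }.freeze

entry.frozen?             # => true
entry[:team_ids].frozen?  # => false: freeze is shallow
entry[:team_ids] << 11    # the nested array can still be mutated!

# the defensive re-freeze on read: the hash and the id array
entry[:team_ids].freeze
entry.freeze
```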
In the interest of reconfigurability without code deploys, we read the
cache lifetime from the environment. These are meant to be API auth
tokens, which tend to have relatively long lifetimes, but they are still
sensitive enough that we may need to kill them quickly, so we default to
a short lifetime: 1 minute. This was chosen arbitrarily and is not based
on any usage patterns.
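Reading the lifetime from the environment might look like the following sketch; the variable name `AUTH_TOKEN_CACHE_TTL` and the helper are assumptions for illustration:

```ruby
# Fall back to 1 minute when the environment does not override it.
DEFAULT_AUTH_CACHE_TTL = 60 # seconds

def auth_cache_ttl
  Integer(ENV.fetch("AUTH_TOKEN_CACHE_TTL", DEFAULT_AUTH_CACHE_TTL))
end
```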
Since this gem may be used on a variety of different servers (i.e.
threaded and multi-process), we need to be aware of potential dog pile
effects. To give the refreshing network request a chance to complete
without holding everything else up, we keep the `race_condition_ttl` at
the minimum: 1 second. From what we've seen, in the vast majority of
cases this is more than enough time for any request to return.
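Put together, the cache options would look roughly like this. `expires_in` and `race_condition_ttl` are real `ActiveSupport::Cache` option names; the constant and the commented usage are illustrative:

```ruby
AUTH_CACHE_OPTIONS = {
  expires_in: 60,        # 1 minute lifetime (env-configurable in the real code)
  race_condition_ttl: 1  # 1 second: while one request refreshes an expired
                         # entry, others briefly serve the stale value,
                         # avoiding a dog pile on the network call
}.freeze

# Usage would be along the lines of:
#   Rails.cache.fetch(token, **AUTH_CACHE_OPTIONS) { resolve_token(token) }
```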
We are not caching invalid tokens. The problem here is malicious attack
vectors: the API endpoints are vulnerable to both brute force and DoS
attacks via large numbers of requests with invalid tokens. To be fair,
valid tokens could also be used for a DoS, but we generally expect large
traffic volumes to carry valid tokens.
So why prevent invalid tokens from being cached?
The token auth can be likened to a login operation. It is generally wise
to slow this process down as much as possible so that malicious requests
take longer to process. This makes brute force attacks prohibitively
expensive. It may also (and I heavily stress the may) slow down DoS
vectors, but it's very likely attackers run more than one process /
thread, so this slowdown won't hinder them and will only make the DoS
worse by greatly increasing request times. Still, security wisdom states
login/auth processes should be as slow as your system can reasonably
allow.
If we allowed invalid tokens to be cached, a DoS with the same token
would get processed much faster. This may allow more valid requests
through, but those are unlikely to be a significant portion of the
traffic. Caching invalid tokens would have no effect on the time for a
brute force attempt since, by design, a BF attack changes the token on
every request.
However, a BF attack would destroy any viable valid cache, since it's
extremely likely the invalid tokens would consume all of the available
cache space, dropping valid cache hits to roughly zero. Additionally,
once the cache is maxed out, every request would trigger a
pruning/clean-up pass. And since it's likely all the invalid cached
tokens would still have long expiry times, the store resorts to its
alternative eviction algorithm.
So by not caching invalid tokens we greatly reduce our attack surface.
We'd simply have to deal with standard DoS attack responses, which for
us would mostly be handled by Heroku at the router level, not the app
level.
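The policy above can be sketched as a guard on the cache write; the names and the Hash stand-in for `Rails.cache` are hypothetical:

```ruby
AUTH_CACHE = {}

# Stand-in for the slow network call: returns the id hash for a known
# token, nil otherwise.
def resolve_token(token)
  token == "valid-token" ? { user_id: 42, team_ids: [7] } : nil
end

def authenticate(token)
  cached = AUTH_CACHE[token]
  return cached if cached

  data = resolve_token(token)      # invalid tokens always pay the slow path
  AUTH_CACHE[token] = data if data # only valid results are cached
  data
end

authenticate("bogus")       # slow, and leaves the cache untouched
authenticate("valid-token") # slow once, cached afterwards
```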
Finally, we've dropped the `current_user` check. It was mainly meant as
a shim for the test plugin to override who the currently authenticated
user/token is. We will replace this logic by simply setting the internal
ivar state/cache value appropriately.