| B-4 | MI cache looks up `hash_B` → **hit** (token already fresh). |
| B-5 | MI returns **HTTP 200** + **`access_token_B`** — _no extra AAD round-trip_. |
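
A minimal Python sketch of the lookup in steps B-4/B-5 above, under stated assumptions: the in-memory cache layout, the `revoked_token_hash` parameter, and the `request_token_from_aad` helper are illustrative stand-ins, not the actual MI/MSAL implementation.

```python
import hashlib
import threading

# Illustrative in-process cache: resource -> currently cached access token.
_cache: dict[str, str] = {}
_lock = threading.Lock()


def _sha256(token: str) -> str:
    return hashlib.sha256(token.encode("utf-8")).hexdigest()


def request_token_from_aad(resource: str) -> str:
    # Placeholder for the single outbound ESTS/AAD round-trip.
    return f"fresh-token-for-{resource}"


def get_token(resource: str, revoked_token_hash: str | None = None) -> str:
    """Return a token for `resource`, refreshing only when the cached token
    is the one the caller reports as revoked (matched by SHA-256 hash)."""
    with _lock:
        cached = _cache.get(resource)

        # Steps B-4/B-5: another node already refreshed, so the cached token's
        # hash no longer matches the revoked hash -> hit, returned with no
        # extra AAD round-trip.
        if cached is not None and (
            revoked_token_hash is None or _sha256(cached) != revoked_token_hash
        ):
            return cached

        # First caller to report this revoked token: refresh once and cache.
        fresh = request_token_from_aad(resource)
        _cache[resource] = fresh
        return fresh
```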
---
##### Cluster settles
* **Only one** outbound call to AAD per unique revoked token, no matter how many nodes receive the claims challenge.
* Dramatically reduces pressure on the MI proxy and on ESTS in large Service Fabric (or AKS) deployments.
---
##### Why a simple `bypass_cache=true` flag isn’t enough
* `bypass_cache=true` forces **every** node to refresh → scales **O(N)** with cluster size.
  Large clusters could issue thousands of token requests within seconds, triggering throttling (`429`) or high latency.
* The **hash check** turns the problem into **O(1)**:
  The first node refreshes; the hash acts as an idempotency key, so all other nodes immediately reuse the fresh token already in the MI cache (see the sketch below).
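
The scaling difference can be illustrated with a small Python simulation. This is a sketch only; `mi_get_token`, the hard-coded token strings, and the node count are hypothetical rather than actual MI proxy code.

```python
import hashlib

def sha256(token: str) -> str:
    return hashlib.sha256(token.encode("utf-8")).hexdigest()

aad_calls = 0                     # outbound AAD/ESTS round-trips
cached_token = "revoked-token"    # every node starts with the same revoked token

def mi_get_token(revoked_hash: str) -> str:
    """Hash check: refresh only if the cached token is still the revoked one."""
    global cached_token, aad_calls
    if sha256(cached_token) == revoked_hash:   # first caller wins the refresh
        aad_calls += 1
        cached_token = "fresh-token"
    return cached_token                        # every later caller reuses it

revoked_hash = sha256("revoked-token")
for _node in range(1000):                      # 1000 nodes receive the claims challenge
    mi_get_token(revoked_hash)

print(aad_calls)  # 1 with the hash check; a bypass_cache=true style refresh would make 1000
```

Under these assumptions the hash behaves as the idempotency key described above: only the first request triggers a refresh, and the remaining 999 callers are served from the cache.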
#### Motivation
The *internal protocol* between the client and the RP (i.e., calling the MITS endpoint in the case of Service Fabric) is a simplified version of CAE. Full CAE is claims-driven and involves JSON operations such as JSON document merges, but the RP doesn't need the actual claims to perform revocation; it just needs a signal to bypass the cache. As such, it was decided not to use the full claims value internally.