modules/authentication/proc-enabling-authentication-with-rhbk.adoc (7 additions, 5 deletions)
@@ -23,11 +23,6 @@ Save the value for the next step:
 * **Client ID**
 * **Client Secret**
-
-.. Configure your {rhbk} realm for performance and security:
-... Navigate to **Configure** > **Realm Settings**.
-... Set the **Access Token Lifespan** to a value greater than five minutes (preferably 10 or 15 minutes) to prevent performance issues caused by frequent refresh token requests on every API call.
-... Enable the **Revoke Refresh Token** option to improve security by enabling the refresh token rotation strategy.
 .. To prepare for the verification steps, in the same realm, get the credential information for an existing user or link:https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html-single/getting_started_guide/index#getting-started-zip-create-a-user[create a user]. Save the user credential information for the verification steps.
 . To add your {rhsso} credentials to your {product-short}, add the following key/value pairs to link:{plugins-configure-book-url}#provisioning-your-custom-configuration[your {product-short} secrets]:
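In a Kubernetes-based installation, key/value pairs like these are typically carried in a Secret mounted into {product-short}. The following is a minimal sketch only; the Secret name and key names here are illustrative assumptions, so use the exact keys listed in your {product-short} configuration documentation:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-rhdh-secrets            # illustrative name
type: Opaque
stringData:
  AUTH_OIDC_CLIENT_ID: <client-id>          # value saved from the {rhbk} client
  AUTH_OIDC_CLIENT_SECRET: <client-secret>  # value saved from the {rhbk} client
```

Using `stringData` lets you supply the values in plain text; the cluster stores them base64-encoded under `data`.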
@@ -182,6 +177,13 @@ auth:
 --
+
+.Security consideration
+If multiple valid refresh tokens are issued due to frequent refresh token requests, older tokens remain valid until they expire. To enhance security and prevent potential misuse of older tokens, enable a refresh token rotation strategy in your {rhbk} realm:
+
+. From the *Configure* section of the navigation menu, click *Realm Settings*.
+. From the *Realm Settings* page, click the *Tokens* tab.
+. From the *Refresh tokens* section of the *Tokens* tab, toggle the *Revoke Refresh Token* switch to the *Enabled* position.
 
 .Verification
 . Go to the {product-short} login page.
 . Your {product-short} sign-in page displays *Sign in using OIDC*, and Guest user sign-in is disabled.
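The same realm settings can also be applied without the Admin Console, for example with the Keycloak admin CLI shipped with {rhbk}. This is a sketch under assumptions: the server URL, realm name, and credentials are placeholders, and it requires a running {rhbk} server to execute:

```shell
# Authenticate the admin CLI against the server (placeholders: URL, user)
kcadm.sh config credentials --server https://keycloak.example.com \
  --realm master --user admin

# Enable refresh token rotation and raise the access token lifespan
# to 15 minutes (900 seconds) on the realm used by {product-short}
kcadm.sh update realms/myrealm \
  -s revokeRefreshToken=true \
  -s accessTokenLifespan=900
```

`revokeRefreshToken` and `accessTokenLifespan` are attributes of the Keycloak realm representation; updating them via `kcadm.sh update` has the same effect as toggling the corresponding Admin Console controls.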
modules/release-notes/ref-release-notes-known-issues.adoc (28 additions, 1 deletion)
@@ -9,12 +9,39 @@ This section lists known issues in {product} {product-version}.
 Currently, when deploying {product-short} using the Helm Chart, two replicas cannot run on different cluster nodes. This might also affect the upgrade from 1.3 to 1.4.0 if the new pod is scheduled on a different node.
 
-A possible workaround for the upgrade is to manually scale down the number of replicas to 0 before upgrading your Helm release. Or manually remove the old {product-short} pod after upgrading the Helm release. However, this would imply some application downtime. You can also leverage a Pod Affinity rule to force the cluster scheduler to run your {product-short} pods on the same node.
+Possible workarounds for the upgrade include the following actions:
+
+* Manually scale down the number of replicas to 0 before upgrading your Helm release.
+* Manually remove the old {product-short} pod after upgrading the Helm release. However, this implies some application downtime.
+* Leverage a Pod Affinity rule to force the cluster scheduler to run your {product-short} pods on the same node.
+
+== [Helm] Cannot run two RHDH replicas on different nodes due to Multi-Attach errors on the dynamic plugins root PVC
+
+If you are deploying {product-short} using the Helm Chart, it is currently impossible to have two replicas running on different cluster nodes. This might also affect the upgrade from 1.3 to 1.4.0 if the new pod is scheduled on a different node.
+
+A possible workaround for the upgrade is to manually scale down the number of replicas to 0 before upgrading your Helm release, or to manually remove the old {product-short} pod after upgrading the Helm release. However, this implies some application downtime.
+
+You can also leverage a Pod Affinity rule to force the cluster scheduler to run your {product-short} pods on the same node.
+
+When using {rhsso-brand-name} or {rhbk-brand-name} as an OIDC provider, the default access token lifespan is 5 minutes, which matches the token refresh grace period set in {product-short}. Because this 5-minute grace period is the threshold that triggers a new refresh token call, the token is always near expiration, and frequent refresh token requests cause performance issues.
+
+This issue will be resolved in the 1.5 release. To prevent the performance issues, increase the lifespan in the {rhsso-brand-name} or {rhbk-brand-name} server by setting *Configure > Realm Settings > Access Token Lifespan* to a value greater than five minutes (preferably 10 or 15 minutes).
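The Pod Affinity workaround above can be expressed with a standard Kubernetes affinity stanza. This is a sketch, not the chart's documented configuration: the label selector and the exact location of the `affinity` field in your Helm values depend on your chart version, so check your chart's values reference before applying it:

```yaml
# Co-locate all {product-short} replicas on one node so they can share
# the dynamic plugins PVC (avoids the Multi-Attach error).
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: backstage   # assumed pod label; verify with `oc get pods --show-labels`
        topologyKey: kubernetes.io/hostname
```

`topologyKey: kubernetes.io/hostname` makes "same topology domain" mean "same node", which is what prevents the second replica from being scheduled where the PVC cannot attach.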