diff --git a/src/assets/images/changelog/ai-gateway/guardrails-social-preview.png b/src/assets/images/changelog/ai-gateway/guardrails-social-preview.png
new file mode 100644
index 000000000000000..3206baf0c772c96
Binary files /dev/null and b/src/assets/images/changelog/ai-gateway/guardrails-social-preview.png differ
diff --git a/src/content/changelog/ai-gateway/2025-02-26-guardrails.mdx b/src/content/changelog/ai-gateway/2025-02-26-guardrails.mdx
index 232b5f44008348d..e82cf9ce2bf400b 100644
--- a/src/content/changelog/ai-gateway/2025-02-26-guardrails.mdx
+++ b/src/content/changelog/ai-gateway/2025-02-26-guardrails.mdx
@@ -1,14 +1,14 @@
 ---
 title: Introducing Guardrails in AI Gateway
 description: Keep AI interactions secure and risk-free with Guardrails in AI Gateway
-products:
-  - ai-gateway
 date: 2025-02-26T6:00:00Z
+preview_image: ~/assets/images/changelog/ai-gateway/guardrails-social-preview.png
 ---

-[AI Gateway](/ai-gateway/) now includes [Guardrails](/ai-gateway/guardrails/), to help you monitor your AI apps for harmful or inappropriate content and deploy safely.
+[AI Gateway](/ai-gateway/) now includes [Guardrails](/ai-gateway/guardrails/), to help you monitor your AI apps for harmful or inappropriate content and deploy safely.

 Within the AI Gateway settings, you can configure:
+
 - **Guardrails**: Enable or disable content moderation as needed.
 - **Evaluation scope**: Select whether to moderate user prompts, model responses, or both.
 - **Hazard categories**: Specify which categories to monitor and determine whether detected inappropriate content should be blocked or flagged.
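For context on the settings the changelog entry describes, here is a minimal TypeScript sketch of what a client calling a provider through AI Gateway might look like once Guardrails is configured to block flagged content. The gateway URL follows Cloudflare's documented `https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/{provider}/...` pattern; the placeholder IDs, the model name, and the exact shape of the error returned for blocked content are assumptions for illustration, not part of this PR.

```ts
// Minimal sketch: calling OpenAI through an AI Gateway endpoint and
// handling a non-OK response, e.g. when Guardrails blocks a prompt.
// ACCOUNT_ID and GATEWAY_ID are placeholders; the exact error body
// returned for blocked content is an assumption, not a documented shape.
const GATEWAY_URL =
  "https://gateway.ai.cloudflare.com/v1/ACCOUNT_ID/GATEWAY_ID/openai/chat/completions";

async function chat(prompt: string): Promise<string> {
  const res = await fetch(GATEWAY_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: prompt }],
    }),
  });

  if (!res.ok) {
    // With Guardrails set to block, a flagged prompt or response is
    // expected to surface as an error rather than a completion.
    throw new Error(`Gateway refused request: ${res.status} ${await res.text()}`);
  }

  const data = await res.json();
  return data.choices[0].message.content;
}
```

The flag setting described in the entry, by contrast, lets responses through and surfaces detections in the gateway's logs, so the error path above only applies when blocking is enabled.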