
Commit e65b701 ("update") by fochan
1 parent 482a560

File tree

1 file changed (+68, -1 lines)

docs/class5/class5.rst

For details, please refer to the official documentation. Here is a brief description.
**Routes** - The routes section defines the endpoints that the AI Gateway listens to and the policy that applies to each of them.

.. NOTE::
   In the example, AIGW listens on the **/simply-chat** endpoint and uses the **ai-deliver-optimize-pol** policy, which uses the OpenAI schema.

.. code-block:: yaml

   routes:
     - path: /simply-chat
       policy: ai-deliver-optimize-pol
       schema: openai
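Because the route declares ``schema: openai``, a client talks to it with an OpenAI-style chat completion payload. The following is a minimal Python sketch of how such a request could be assembled; the gateway address and model name are placeholders, not values from this lab.

```python
import json

# Placeholder gateway address -- substitute the address of your AIGW deployment.
AIGW_BASE = "http://aigw.example.local"


def build_chat_request(prompt: str) -> dict:
    """Assemble the URL and an OpenAI-schema chat body for the /simply-chat route."""
    return {
        "url": f"{AIGW_BASE}/simply-chat",
        "body": {
            "model": "llama3.2",  # illustrative model name
            "messages": [{"role": "user", "content": prompt}],
        },
    }


req = build_chat_request("What is F5 AI Gateway?")
print(json.dumps(req["body"], indent=2))
```

Sending `req["body"]` as JSON to `req["url"]` (for example with `curl` or `requests.post`) exercises the route above.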
**Policies** - The policies section allows you to use different profiles based on different selectors.

.. NOTE::
   The example uses the **rag-ai-chatbot-prompt-pol** policy, which maps to the **rag-ai-chatbot-prompt** profile.

.. code-block:: yaml

   policies:
     - name: rag-ai-chatbot-prompt-pol
       profiles:
         - name: rag-ai-chatbot-prompt
**Profiles** - The profiles section defines the different sets of processors and services that apply to the input and output of the AI model based on a set of rules.

.. NOTE::
   The example uses the **rag-ai-chatbot-prompt** profile, which defines the **prompt-injection** processor in its **inputStages** and uses the **ollama/llama3.2** service.

.. code-block:: yaml

   profiles:
     - name: rag-ai-chatbot-prompt
       inputStages:
         - name: prompt-injection
           steps:
             - name: prompt-injection
               services:
                 - name: ollama/llama3.2
**Processors** - The processors section defines the processing services that can be applied to the input or output of the AI model.

.. NOTE::
   Processor definition for the **prompt-injection** processor.

.. code-block:: yaml

   processors:
     - name: prompt-injection
       type: external
       config:
         endpoint: "http://ai-gateway-processors-f5.trust.apps.ai"
         namespace: "f5"
         version: 1
       params:
         reject: true
         threshold: 0.8
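One plausible reading of the ``reject`` and ``threshold`` parameters is that, when rejection is enabled, a prompt-injection confidence score at or above the threshold causes the request to be blocked. This is a hedged sketch of that interpretation, not the gateway's actual implementation; consult the official documentation for the exact semantics.

```python
def should_reject(score: float, reject: bool = True, threshold: float = 0.8) -> bool:
    """Hedged interpretation of the processor params above: with reject
    enabled, a detection score at or above the threshold blocks the request.
    The real gateway's comparison and scoring may differ."""
    return reject and score >= threshold


# A high-confidence detection is blocked; a low score passes through.
print(should_reject(0.95))  # True
print(should_reject(0.30))  # False
```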
**Services** - The services section defines the upstream LLM services that the AI Gateway can send traffic to.

.. NOTE::
   The example shows the service definition for **ollama/llama3.2**, the upstream LLM that AIGW sends traffic to. The options for **executor** are ollama, openai, anthropic, or http. The endpoint URL points to the upstream LLM API.

.. code-block:: yaml

   services:
     - name: ollama/llama3.2
       type: llama3.2
       executor: openai
       config:
         endpoint: 'http://ollama-service.open-webui:11434/v1/chat/completions'
       secrets:
         - source: EnvVar
           targets:
             apiKey: OPENAI_PUBLIC_API_KEY
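Taken together, the sections above form a chain: a route selects a policy, the policy selects a profile, and the profile invokes the processors and services defined in the remaining sections. A condensed sketch of one such chain follows; the names come from the fragments in this section, but the route-to-policy pairing shown here is illustrative rather than the exact lab file.

```yaml
# Illustrative wiring of the config sections (not the exact lab file).
routes:
  - path: /simply-chat                  # endpoint AIGW listens on
    policy: rag-ai-chatbot-prompt-pol   # pairing shown for illustration
    schema: openai

policies:
  - name: rag-ai-chatbot-prompt-pol
    profiles:
      - name: rag-ai-chatbot-prompt

profiles:
  - name: rag-ai-chatbot-prompt
    inputStages:
      - name: prompt-injection          # runs the processor defined above
        steps:
          - name: prompt-injection
            services:
              - name: ollama/llama3.2   # upstream defined in services
```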
2 - Deploy F5 AI Gateway
------------------------
