For details, please refer to the official documentation. Here is a brief description.
**Routes** - The routes section defines the endpoints that the AI Gateway listens to and the policy that applies to each of them.

.. NOTE::
   In this example, AIGW listens on the **/simply-chat** endpoint and applies the policy **ai-deliver-optimize-pol**, which uses the OpenAI schema.

.. code-block:: yaml

   routes:
     - path: /simply-chat
       policy: ai-deliver-optimize-pol
       schema: openai

**Policies** - The policies section allows you to use different profiles based on different selectors.

.. NOTE::
   This example uses the **rag-ai-chatbot-prompt-pol** policy, which maps to the **rag-ai-chatbot-prompt** profile.

.. code-block:: yaml

   policies:
     - name: rag-ai-chatbot-prompt-pol
       profiles:
         - name: rag-ai-chatbot-prompt

**Profiles** - The profiles section defines the different sets of processors and services that apply to the input and output of the AI model based on a set of rules.

.. NOTE::
   This example uses the **rag-ai-chatbot-prompt** profile, which defines the **prompt-injection** processor in the **inputStages** and uses the **ollama/llama3.2** service.

.. code-block:: yaml

   profiles:
     - name: rag-ai-chatbot-prompt
       inputStages:
         - name: prompt-injection
           steps:
             - name: prompt-injection
               services:
                 - name: ollama/llama3.2

**Processors** - The processors section defines the processing services that can be applied to the input or output of the AI model.
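
.. NOTE::
   The following is a minimal sketch of what a processors section might look like; the processor type, endpoint URL, namespace, and version shown here are illustrative assumptions, not values taken from this lab.

.. code-block:: yaml

   # Illustrative sketch only: type, endpoint, namespace, and version are assumptions.
   processors:
     - name: prompt-injection                          # referenced from a profile's inputStages
       type: external                                  # assumed: processor runs as an external service
       config:
         endpoint: "http://aigw-processors:8000"       # hypothetical processor service URL
         namespace: f5
         version: 1
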
**Services** - The services section defines the upstream LLM services that the AI Gateway can send traffic to.
.. NOTE::
   The example shows the service for **ollama/llama3.2**, the upstream LLM that AIGW sends traffic to. The options for **executor** are ollama, openai, anthropic, or http, and the **endpoint** is the URL of the upstream LLM API.
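
A services section along those lines might look like the following sketch; the endpoint URL is an assumption for a local Ollama deployment, not a value taken from this lab.

.. code-block:: yaml

   # Illustrative sketch only: the endpoint URL is an assumption.
   services:
     - name: ollama/llama3.2
       executor: ollama                      # one of: ollama, openai, anthropic, http
       config:
         endpoint: "http://ollama:11434"     # hypothetical upstream LLM API URL
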