Commit 1c71062

Add multitabbed code examples
1 parent 2a263bf commit 1c71062

content/develop/ai/langcache/api-examples.md

Lines changed: 267 additions & 7 deletions

This example uses `cURL` and Linux shell scripts to demonstrate the API; you can use any standard REST client or library.

{{% /info %}}

If your app is written in Python or Javascript, you can also use the LangCache Software Development Kits (SDKs) to access the API:

- [LangCache SDK for Python](https://pypi.org/project/langcache/)
- [LangCache SDK for Javascript](https://www.npmjs.com/package/@redis-ai/langcache)

## Examples

### Search LangCache for similar responses

Use [`POST /v1/caches/{cacheId}/entries/search`]({{< relref "/develop/ai/langcache/api-reference#tag/Cache-Entries/operation/search" >}}) to search the cache for matching responses to a user prompt.

{{< multitabs id="search-basic"
    tab1="REST API"
    tab2="Python"
    tab3="Javascript" >}}
```sh
POST https://[host]/v1/caches/{cacheId}/entries/search
{
    "prompt": "User prompt text"
}
```
-tab-sep-
```python
from langcache import LangCache
import os

with LangCache(
    server_url="https://<host>",
    cache_id="<cacheId>",
    service_key=os.getenv("LANGCACHE_SERVICE_KEY", ""),
) as lang_cache:

    res = lang_cache.search(prompt="User prompt text", similarity_threshold=0.9)

    print(res)
```
-tab-sep-
```js
import { LangCache } from "@redis-ai/langcache";

const langCache = new LangCache({
  serverURL: "https://<host>",
  cacheId: "<cacheId>",
  serviceKey: "<LANGCACHE_SERVICE_KEY>",
});

async function run() {
  const result = await langCache.search({
    prompt: "User prompt text",
    similarityThreshold: 0.9
  });

  console.log(result);
}

run();
```
{{< /multitabs >}}

Place this call in your client app right before you call your LLM's REST API. If LangCache returns a response, you can send that response back to the user instead of calling the LLM.

If LangCache does not return a response, you should call your LLM's REST API to generate a new response. After you get a response from the LLM, you can [store it in LangCache](#store-a-new-response-in-langcache) for future use.

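For example, a minimal cache-aside flow with the Python SDK might look like the sketch below. The `call_llm` helper is a hypothetical stand-in for your LLM client, and the way the search result is unpacked is an assumption; check the SDK reference for the exact return type.

```python
from langcache import LangCache
import os


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to your LLM provider."""
    return "LLM response text"


with LangCache(
    server_url="https://<host>",
    cache_id="<cacheId>",
    service_key=os.getenv("LANGCACHE_SERVICE_KEY", ""),
) as lang_cache:
    prompt = "User prompt text"

    # Check the cache before calling the LLM.
    cached = lang_cache.search(prompt=prompt, similarity_threshold=0.9)

    # Assumption: the search result exposes a list of matching entries,
    # each with a `response` field; adjust to the SDK's actual return type.
    matches = getattr(cached, "data", None) or []

    if matches:
        # Cache hit: reuse the stored response instead of calling the LLM.
        response = matches[0].response
    else:
        # Cache miss: generate a new response, then store it for next time.
        response = call_llm(prompt)
        lang_cache.set(prompt=prompt, response=response)

    print(response)
```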
You can also scope the responses returned from LangCache by adding an `attributes` object to the request. LangCache will only return responses that match the attributes you specify.

{{< multitabs id="search-attributes"
    tab1="REST API"
    tab2="Python"
    tab3="Javascript" >}}
```sh
POST https://[host]/v1/caches/{cacheId}/entries/search
{
    "prompt": "User prompt text",
    "attributes": {
        "customAttributeName": "customAttributeValue"
    }
}
```
-tab-sep-
```python
from langcache import LangCache
import os

with LangCache(
    server_url="https://<host>",
    cache_id="<cacheId>",
    service_key=os.getenv("LANGCACHE_SERVICE_KEY", ""),
) as lang_cache:

    res = lang_cache.search(
        prompt="User prompt text",
        attributes={"customAttributeName": "customAttributeValue"},
        similarity_threshold=0.9,
    )

    print(res)
```
-tab-sep-
```js
import { LangCache } from "@redis-ai/langcache";

const langCache = new LangCache({
  serverURL: "https://<host>",
  cacheId: "<cacheId>",
  serviceKey: "<LANGCACHE_SERVICE_KEY>",
});

async function run() {
  const result = await langCache.search({
    prompt: "User prompt text",
    similarityThreshold: 0.9,
    attributes: {
      "customAttributeName": "customAttributeValue",
    },
  });

  console.log(result);
}

run();
```
{{< /multitabs >}}

### Store a new response in LangCache

Use [`POST /v1/caches/{cacheId}/entries`]({{< relref "/develop/ai/langcache/api-reference#tag/Cache-Entries/operation/set" >}}) to store a new response in the cache.

{{< multitabs id="store-basic"
    tab1="REST API"
    tab2="Python"
    tab3="Javascript" >}}
```sh
POST https://[host]/v1/caches/{cacheId}/entries
{
    "prompt": "User prompt text",
    "response": "LLM response text"
}
```
-tab-sep-
```python
from langcache import LangCache
import os

with LangCache(
    server_url="https://<host>",
    cache_id="<cacheId>",
    service_key=os.getenv("LANGCACHE_SERVICE_KEY", ""),
) as lang_cache:

    res = lang_cache.set(
        prompt="User prompt text",
        response="LLM response text",
    )

    print(res)
```
-tab-sep-
```js
import { LangCache } from "@redis-ai/langcache";

const langCache = new LangCache({
  serverURL: "https://<host>",
  cacheId: "<cacheId>",
  serviceKey: "<LANGCACHE_SERVICE_KEY>",
});

async function run() {
  const result = await langCache.set({
    prompt: "User prompt text",
    response: "LLM response text",
  });

  console.log(result);
}

run();
```
{{< /multitabs >}}

Place this call in your client app after you get a response from the LLM. This will store the response in the cache for future use.

You can also store the responses with custom attributes by adding an `attributes` object to the request.

{{< multitabs id="store-attributes"
    tab1="REST API"
    tab2="Python"
    tab3="Javascript" >}}

```sh
POST https://[host]/v1/caches/{cacheId}/entries
{
    "prompt": "User prompt text",
    "response": "LLM response text",
    "attributes": {
        "customAttributeName": "customAttributeValue"
    }
}
```
-tab-sep-
```python
from langcache import LangCache
import os

with LangCache(
    server_url="https://<host>",
    cache_id="<cacheId>",
    service_key=os.getenv("LANGCACHE_SERVICE_KEY", ""),
) as lang_cache:

    res = lang_cache.set(
        prompt="User prompt text",
        response="LLM response text",
        attributes={"customAttributeName": "customAttributeValue"},
    )

    print(res)
```
-tab-sep-
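A sketch of the equivalent Javascript call, assuming the SDK's `set` method accepts an `attributes` object in the same way as `search`:

```js
import { LangCache } from "@redis-ai/langcache";

const langCache = new LangCache({
  serverURL: "https://<host>",
  cacheId: "<cacheId>",
  serviceKey: "<LANGCACHE_SERVICE_KEY>",
});

async function run() {
  // Assumption: `set` accepts `attributes` here, mirroring the Python
  // example in the previous tab and the `search` call with attributes.
  const result = await langCache.set({
    prompt: "User prompt text",
    response: "LLM response text",
    attributes: {
      "customAttributeName": "customAttributeValue",
    },
  });

  console.log(result);
}

run();
```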
{{< /multitabs >}}

### Delete cached responses

Use [`DELETE /v1/caches/{cacheId}/entries/{entryId}`]({{< relref "/develop/ai/langcache/api-reference#tag/Cache-Entries/operation/delete" >}}) to delete a cached response from the cache.

{{< multitabs id="delete-entry"
    tab1="REST API"
    tab2="Python"
    tab3="Javascript" >}}

```sh
DELETE https://[host]/v1/caches/{cacheId}/entries/{entryId}
```
-tab-sep-
```python
from langcache import LangCache
import os

with LangCache(
    server_url="https://<host>",
    cache_id="<cacheId>",
    service_key=os.getenv("LANGCACHE_SERVICE_KEY", ""),
) as lang_cache:

    res = lang_cache.delete_by_id(entry_id="<entryId>")

    print(res)
```
-tab-sep-
```js
import { LangCache } from "@redis-ai/langcache";

const langCache = new LangCache({
  serverURL: "https://<host>",
  cacheId: "<cacheId>",
  serviceKey: "<LANGCACHE_SERVICE_KEY>",
});

async function run() {
  const result = await langCache.deleteById({
    entryId: "<entryId>",
  });

  console.log(result);
}

run();
```
{{< /multitabs >}}

You can also use [`DELETE /v1/caches/{cacheId}/entries`]({{< relref "/develop/ai/langcache/api-reference#tag/Cache-Entries/operation/deleteQuery" >}}) to delete multiple cached responses at once, based on the `attributes` you specify. If you specify multiple `attributes`, LangCache deletes only the entries that contain all of the given attributes.

{{< warning >}}
If you do not specify any `attributes`, all responses in the cache will be deleted. This cannot be undone.
{{< /warning >}}

{{< multitabs id="delete-attributes"
    tab1="REST API"
    tab2="Python"
    tab3="Javascript" >}}

```sh
DELETE https://[host]/v1/caches/{cacheId}/entries
{
    "attributes": {
        "customAttributeName": "customAttributeValue"
    }
}
```
-tab-sep-
```python
from langcache import LangCache
import os

with LangCache(
    server_url="https://<host>",
    cache_id="<cacheId>",
    service_key=os.getenv("LANGCACHE_SERVICE_KEY", ""),
) as lang_cache:

    res = lang_cache.delete_query(
        attributes={"customAttributeName": "customAttributeValue"},
    )

    print(res)
```
-tab-sep-
```js
import { LangCache } from "@redis-ai/langcache";

const langCache = new LangCache({
  serverURL: "https://<host>",
  cacheId: "<cacheId>",
  serviceKey: "<LANGCACHE_SERVICE_KEY>",
});

async function run() {
  const result = await langCache.deleteQuery({
    attributes: {
      "customAttributeName": "customAttributeValue",
    },
  });

  console.log(result);
}

run();
```
{{< /multitabs >}}