
Commit cf96cdb

docs: add async to tutorial
1 parent cd24641 commit cf96cdb

1 file changed: +50 -0 lines changed

docs/tutorial.md

Lines changed: 50 additions & 0 deletions
@@ -21,6 +21,7 @@
- [Chapter 10: Prompt Engineering for Mellea](#chapter-10-prompt-engineering-for-m)
- [Custom Templates](#custom-templates)
- [Chapter 11: Tool Calling](#chapter-11-tool-calling)
- [Chapter 12: Asynchronicity](#chapter-12-asynchronicity)
- [Appendix: Contributing to Mellea](#appendix-contributing-to-mellea)

## Chapter 1: What Is Generative Programming
@@ -1317,6 +1318,55 @@ assert "web_search" in output.tool_calls
result = output.tool_calls["web_search"].call_func()
```

## Chapter 12: Asynchronicity

Mellea supports asynchronous behavior in two ways: asynchronous session functions, and an asynchronous event loop used inside synchronous functions.

### Asynchronous Functions

`MelleaSession`s provide asynchronous functions that work just like regular async functions in Python. These async session functions mirror their synchronous counterparts:
```
# Inside an async function (or an async-aware environment such as a notebook):
m = start_session()
result = await m.ainstruct("Write your instruction here!")
```
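
To run this from ordinary synchronous code (for example, a script's entry point), you can drive the coroutine with `asyncio.run`. A minimal sketch, assuming `start_session` is imported as in the earlier chapters:

```
import asyncio

async def main():
    m = start_session()
    result = await m.ainstruct("Write your instruction here!")
    print(result)

asyncio.run(main())
```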

However, if you want to run multiple async requests at the same time, you need to be careful with your context. By default, `MelleaSession`s use a `SimpleContext`, which keeps no history, so running multiple async requests at once works just fine:

```
import asyncio

m = start_session()
coroutines = []

for i in range(5):
    coroutines.append(m.ainstruct(f"Write a math problem using {i}"))

results = await asyncio.gather(*coroutines)
```

If you use a `ChatContext`, you will need to await each request before starting the next one so that the context is properly updated:

```
m = start_session(ctx=ChatContext())

result = await m.ainstruct("Write a short fairy tale.")
print(result)

main_character = await m.ainstruct("Who is the main character of the previous fairy tale?")
print(main_character)
```

Otherwise, your requests will use outdated contexts that don't contain the messages you expect. For example:

```
import asyncio

m = start_session(ctx=ChatContext())

co1 = m.ainstruct("Write a very long math problem.")  # Start first request.
co2 = m.ainstruct("Solve the math problem.")  # Start second request with an empty context.

results = await asyncio.gather(co1, co2)
for result in results:
    print(result)  # Neither request had anything in its context.
```

### Asynchronicity in Synchronous Functions

Mellea also uses asynchronicity internally. When you call `m.instruct`, you are running synchronous code that dispatches an asynchronous request to an LLM to generate the result. For a single request, this makes no difference in execution speed.

When using `SamplingStrategy`s or during validation, Mellea can speed up your program by generating multiple results and validating them against multiple requirements simultaneously. Whether you call the synchronous `m.instruct` or the asynchronous `m.ainstruct`, Mellea dispatches these requests as quickly as possible and awaits the results asynchronously.
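
For example, a single synchronous call that combines a sampling strategy with several requirements already benefits from this internal concurrency. The sketch below is illustrative only: the import path, the argument names, and the `RejectionSamplingStrategy(loop_budget=...)` strategy are assumptions standing in for the requirement and sampling APIs covered earlier, not a definitive reference.

```
# Illustrative sketch: the import path, `requirements`, `strategy`, and
# `RejectionSamplingStrategy(loop_budget=...)` are assumed names; adapt them
# to the requirement and sampling APIs shown earlier in the tutorial.
from mellea.stdlib.sampling import RejectionSamplingStrategy  # assumed path

m = start_session()

result = m.instruct(
    "Write a limerick about event loops.",
    requirements=["The poem has exactly five lines.", "The poem rhymes."],
    strategy=RejectionSamplingStrategy(loop_budget=3),
)
# The call itself is synchronous, but the candidate generations and requirement
# validations behind it are dispatched and awaited concurrently by Mellea.
print(result)
```
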
## Appendix: Contributing to Mellea

### Contributor Guide: Requirements and Verifiers
