
Commit 47ef9a2

Mishal Dholakia authored and GitHub Enterprise committed

Fix broken link and correct spelling error

1 parent 3326c18 commit 47ef9a2

File tree

1 file changed: +10 -10 lines changed


docs/tutorial.md

Lines changed: 10 additions & 10 deletions
@@ -19,7 +19,7 @@
 - [The Guarded Nondeterminism Pattern](#guarded-nondeterminism)
 - [Chapter 9: Interoperability with Other Frameworks](#chapter-9-interoperability-with-other-frameworks)
 - [Chapter 10: Prompt Engineering for Mellea](#chapter-10-prompt-engineering-for-m)
-- [Custom Templates](#custom--templates)
+- [Custom Templates](#custom-templates)
 - [Appendix: Contributing to Melles](#appendix-contributing-to-mellea)
 
 ## Chapter 1: What Is Generative Programming
@@ -311,7 +311,7 @@ final_options = {
 
 ### Conclusion
 
-We have now worked up from a simple "Hello, World" example to our first generative programming design pattern: **Instruct - Validate - Reapir (IVR)**.
+We have now worked up from a simple "Hello, World" example to our first generative programming design pattern: **Instruct - Validate - Repair (IVR)**.
 
 When LLMs work well, the software developer experiences the LLM as a sort of oracle that can handle most any input and produce a sufficiently desirable output. When LLMs do not work at all, the software developer experiences the LLM as a naive markov chain that produces junk. In both cases, the LLM is just sampling from a distribution.
 
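The pattern named in the corrected line above can be sketched in a few lines of plain Python. This is an illustration only: the `instruct`, `validate`, and `repair` callables are hypothetical stand-ins, not Mellea's actual API.

```python
# A minimal sketch of Instruct - Validate - Repair in plain Python; the
# instruct/validate/repair callables are hypothetical stand-ins for
# Mellea's own machinery, not its actual API.
def instruct_validate_repair(instruct, validate, repair, max_attempts=3):
    """Sample an output, check it, and re-prompt with feedback on failure."""
    result = instruct()
    for _ in range(max_attempts):
        failures = validate(result)  # unmet requirements, empty when valid
        if not failures:
            return result            # the output passed every check
        result = repair(result, failures)  # re-sample with failure feedback
    raise RuntimeError("no valid output after repeated repair attempts")
```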
@@ -419,7 +419,7 @@ def summarize_contract(contract_text: str) -> str:
 
 @generative
 def summarize_short_story(story: str) -> str:
-    """Summarize a short story, with one paragraph on plot and one paragraph on braod themes."""
+    """Summarize a short story, with one paragraph on plot and one paragraph on broad themes."""
 ```
 
 ```python
@@ -542,7 +542,7 @@ else:
     print("Summary lacks a structured conclusion.")
 ```
 
-Without these Hoare-style contracts, the only way to ensure composition is to couple the libraries, either by rewriting `summarize_meeting` to conform to `propose_business_decision`, or adding Requirements to `propose_business_decision` that may silently fail if unmet. These approahces can work, but require tight coupling between these two otherwise loosely couple libraries.
+Without these Hoare-style contracts, the only way to ensure composition is to couple the libraries, either by rewriting `summarize_meeting` to conform to `propose_business_decision`, or adding Requirements to `propose_business_decision` that may silently fail if unmet. These approaches can work, but require tight coupling between these two otherwise loosely coupled libraries.
 
 With contracts, we **decouple** the libraries without sacrificing safe dynamic composition, by moving the coupling logic into pre- and post-condition checks. This is another LLM-native software engineering pattern: **guarded nondeterminism**.
 
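The guarded nondeterminism idea from this hunk can be illustrated with an ordinary decorator. A minimal sketch, assuming plain callables for the checks; the names are hypothetical, not Mellea's contract API:

```python
# A rough sketch of guarded nondeterminism using a plain decorator; these
# names are hypothetical illustrations, not Mellea's contract API.
import functools

def guarded(precondition, postcondition):
    """Wrap a nondeterministic (LLM-backed) function in Hoare-style checks."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if not precondition(*args, **kwargs):
                raise ValueError(f"precondition failed for {fn.__name__}")
            result = fn(*args, **kwargs)
            if not postcondition(result):
                raise ValueError(f"postcondition failed for {fn.__name__}")
            return result
        return wrapper
    return decorate
```

Such a decorator could, for instance, guard `summarize_meeting` with the postcondition that `propose_business_decision` expects, without either library importing the other.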
@@ -743,7 +743,7 @@ Let's see how Stembolt MFG Corporation can use tuned LoRAs to implement the Auto
 
 ### Training the aLoRA Adapter
 
-Mellea provides a command-line interface for training [LoRA](https://arxiv.org/abs/2106.09685) or [aLoRA](https://github.com/IBM/alora) adapters. Classical LoRAs must re-process our entire context, which can get experience for quick checks happening within an inner loop (such as requirement checking). The aLoRA method allows us adapt a base LLM to new tasks, and then run the adapter with minimal compute overhead. The adapters are fast to train and fast to switch between.
+Mellea provides a command-line interface for training [LoRA](https://arxiv.org/abs/2106.09685) or [aLoRA](https://github.com/IBM/activated-lora) adapters. Classical LoRAs must re-process our entire context, which can get expensive for quick checks happening within an inner loop (such as requirement checking). The aLoRA method allows us to adapt a base LLM to new tasks, and then run the adapter with minimal compute overhead. The adapters are fast to train and fast to switch between.
 
 We will train a lightweight adapter with the `m alora train` command on this small dataset:
 
@@ -781,7 +781,7 @@ While training adapters, you can easily tuning the hyper-parameters as below:
 
 ### Upload to Hugging Face (Optional)
 
-To share or reuse the trained adapter by using the `m alora upload` command to publish your trained adapter:
+To share or reuse the trained adapter, use the `m alora upload` command to publish your trained adapter:
 
 ```bash
 m alora upload ./checkpoints/alora_adapter \
@@ -823,7 +823,7 @@ backend.add_alora(
 )
 ```
 
-In the above arguments, `path_or_model_id` refers to the model checkpoint which got from last step, i.e., `m alora train` process.
+In the above arguments, `path_or_model_id` refers to the model checkpoint from last step, i.e., the `m alora train` process.
 
 > [!NOTE]
 > The `generation_prompt` passed to your `backend.add_alora` call should exactly match the prompt used for training.
@@ -908,7 +908,7 @@ m = mellea.MelleaSession(
 )
 ```
 
-The `SimpleContext` -- which is the only context we have used so far -- is a context manager that resets the chat message history on each model call. That is, the model's context is entirely determined by the current Component. Mellea also provides a `LinearContext`, which behaves like a chat history. We can use the LinearContext to interact with cat hmodels:
+The `SimpleContext` -- which is the only context we have used so far -- is a context manager that resets the chat message history on each model call. That is, the model's context is entirely determined by the current Component. Mellea also provides a `LinearContext`, which behaves like a chat history. We can use the LinearContext to interact with chat models:
 
 ```python
 # file: https://github.com/generative-computing/mellea/blob/main/docs/examples/tutorial/context_example.py#L1-L5
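A minimal sketch of a multi-turn exchange under a `LinearContext` follows; the import path and session arguments are assumptions, and the canonical version lives in the `context_example.py` file linked in the hunk above.

```python
# Sketch of a multi-turn chat with LinearContext; the import path, the
# session's constructor arguments, and the chat() return shape are
# assumptions -- see context_example.py above for the canonical version.
import mellea
from mellea.stdlib.base import LinearContext

m = mellea.MelleaSession(ctx=LinearContext())
m.chat("Invent a new word and define it.")
second = m.chat("Now use that word in a sentence.")  # sees the first turn
print(second.content)
```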
@@ -1216,7 +1216,7 @@ def serve(
 )
 ```
 
-the `m serve` command then subsequently takes this funcition and runs a server that is openai compatible. For more information, please have a look at [this file](./examples/tutorial/m_serve_example.py) for how to write an `m serve` compatible program. To run the example:
+the `m serve` command then subsequently takes this function and runs a server that is openai compatible. For more information, please have a look at [this file](./examples/tutorial/m_serve_example.py) for how to write an `m serve` compatible program. To run the example:
 
 ```shell
 m serve docs/examples/tutorial/m_serve_example.py
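Since the server is OpenAI-compatible, any standard OpenAI client can call it. A sketch, where the base URL, API key, and model name are assumptions rather than documented defaults:

```python
# Sketch of querying the m serve endpoint with the standard OpenAI client.
# The base_url, api_key, and model name are assumptions; substitute the
# actual values for your running server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
response = client.chat.completions.create(
    model="mellea",  # placeholder model name
    messages=[{"role": "user", "content": "Write one sentence about stembolts."}],
)
print(response.choices[0].message.content)
```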
@@ -1254,7 +1254,7 @@ It also contains either of the following fields
 By writing a new template and/or changing the TemplateRepresentation of a component you can customize the textual representation. You can also customize based on the model.
 
 #### Choosing a Template
-Assuming a component's TemplateRepresentation contains a `template_order` field, the default TemplateFormatter grabs the relevant template by looing at the following places in order for each template in the `template_order`:
+Assuming a component's TemplateRepresentation contains a `template_order` field, the default TemplateFormatter grabs the relevant template by looking at the following places in order for each template in the `template_order`:
 1. the formatter's cached templates if the template has been looked up recently
 2. the formatter's specified template path
 3. the package that the object getting formatted is from (either 'mellea' or some third party package)
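The lookup amounts to an ordered search over those three locations. A self-contained sketch of the logic, with every name hypothetical rather than TemplateFormatter's real internals:

```python
# Illustrative sketch of the three-step template lookup described above;
# all names here are hypothetical, not TemplateFormatter's real internals.
def find_template(name, cache, path_templates, package_templates):
    """Resolve a template name by checking each location in order."""
    if name in cache:                      # 1. recently looked-up templates
        return cache[name]
    for source in (path_templates,         # 2. the formatter's template path
                   package_templates):     # 3. the component's source package
        if name in source:
            cache[name] = source[name]     # remember for the next lookup
            return cache[name]
    return None                            # no template found for this name
```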
