
Commit a1d51de

Merge pull request meta-llama#12 from meta-llama/upstream_merge
Upstream merge into alpha main
2 parents 36253b6 + 27c6adb commit a1d51de

5 files changed: 9 additions, 12 deletions


recipes/quickstart/README.md

Lines changed: 5 additions & 5 deletions

```diff
@@ -2,11 +2,11 @@
 
 If you are new to developing with Meta Llama models, this is where you should start. This folder contains introductory-level notebooks across different techniques relating to Meta Llama.
 
-* The [](./Running_Llama3_Anywhere/) notebooks demonstrate how to run Llama inference across Linux, Mac and Windows platforms using the appropriate tooling.
-* The [](./Prompt_Engineering_with_Llama_3.ipynb) notebook showcases the various ways to elicit appropriate outputs from Llama. Take this notebook for a spin to get a feel for how Llama responds to different inputs and generation parameters.
-* The [](./inference/) folder contains scripts to deploy Llama for inference on server and mobile. See also [](../3p_integrations/vllm/) and [](../3p_integrations/tgi/) for hosting Llama on open-source model servers.
-* The [](./RAG/) folder contains a simple Retrieval-Augmented Generation application using Llama 3.
-* The [](./finetuning/) folder contains resources to help you finetune Llama 3 on your custom datasets, for both single- and multi-GPU setups. The scripts use the native llama-recipes finetuning code found in [](../../src/llama_recipes/finetuning.py) which supports these features:
+* The [Running_Llama3_Anywhere](./Running_Llama3_Anywhere/) notebooks demonstrate how to run Llama inference across Linux, Mac and Windows platforms using the appropriate tooling.
+* The [Prompt_Engineering_with_Llama_3](./Prompt_Engineering_with_Llama_3.ipynb) notebook showcases the various ways to elicit appropriate outputs from Llama. Take this notebook for a spin to get a feel for how Llama responds to different inputs and generation parameters.
+* The [inference](./inference/) folder contains scripts to deploy Llama for inference on server and mobile. See also [3p_integrations/vllm](../3p_integrations/vllm/) and [3p_integrations/tgi](../3p_integrations/tgi/) for hosting Llama on open-source model servers.
+* The [RAG](./RAG/) folder contains a simple Retrieval-Augmented Generation application using Llama 3.
+* The [finetuning](./finetuning/) folder contains resources to help you finetune Llama 3 on your custom datasets, for both single- and multi-GPU setups. The scripts use the native llama-recipes finetuning code found in [finetuning.py](../../src/llama_recipes/finetuning.py) which supports these features:
 
 | Feature | |
 | ---------------------------------------------- | - |
```
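The diff above replaces links of the form `[](./path)` with links that have visible text; an empty-text markdown link renders with nothing to click. As an illustration only (a hypothetical lint helper, not part of the repo), a regex sketch that flags such links:

```python
import re

# Matches markdown links whose visible text is empty, i.e. `[](target)`,
# and captures the link target.
EMPTY_LINK = re.compile(r"\[\]\(([^)]+)\)")

def find_empty_links(markdown: str) -> list[str]:
    """Return the targets of all empty-text links in the given markdown."""
    return EMPTY_LINK.findall(markdown)

line = "* The [](./RAG/) folder contains a simple RAG application."
print(find_empty_links(line))  # ['./RAG/']
```

Running this over each bullet in the old README would surface all five links fixed by this hunk, while well-formed links like `[RAG](./RAG/)` pass untouched.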

recipes/quickstart/finetuning/LLM_finetuning_overview.md

Lines changed: 3 additions & 3 deletions

```diff
@@ -33,9 +33,9 @@ Full parameter fine-tuning has its own advantages, in this method there are mult
 You can also keep most of the layers frozen and only fine-tune a few layers. There are many different techniques to choose from to freeze/unfreeze layers based on different criteria.
 
 <div style="display: flex;">
-<img src="../../docs/img/feature_based_fn.png" alt="Image 1" width="250" />
-<img src="../../docs/img/feature_based_fn_2.png" alt="Image 2" width="250" />
-<img src="../../docs/img/full_param_fn.png" alt="Image 3" width="250" />
+<img src="../../../docs/img/feature_based_fn.png" alt="Image 1" width="250" />
+<img src="../../../docs/img/feature_based_fn_2.png" alt="Image 2" width="250" />
+<img src="../../../docs/img/full_param_fn.png" alt="Image 3" width="250" />
 </div>
 
```
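The image-path changes in this commit each add one more `../` level, presumably because the files now sit one directory deeper than the old paths assumed. A minimal sketch (hypothetical helper, standard library only) for checking that a relative `src` resolves to a real file from a markdown file's location:

```python
from pathlib import Path

def image_resolves(md_file: str, rel_src: str) -> bool:
    """Resolve rel_src from the directory containing md_file and
    return True if the target file exists."""
    target = (Path(md_file).parent / rel_src).resolve()
    return target.is_file()
```

Run against a checkout, this kind of check would have caught both the `../../docs/img/...` depth error and the `docs/images` vs `docs/img` mismatch fixed below before they shipped as broken images.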
recipes/quickstart/finetuning/README.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -105,7 +105,7 @@ python -m llama_recipes.finetuning --use_peft --peft_method lora --quantization
 ```
 You'll be able to access a dedicated project or run link on [wandb.ai](https://wandb.ai) and see your dashboard like the one below.
 <div style="display: flex;">
-<img src="../../../docs/images/wandb_screenshot.png" alt="wandb screenshot" width="500" />
+<img src="../../../docs/img/wandb_screenshot.png" alt="wandb screenshot" width="500" />
 </div>
 
 ## FLOPS Counting and Pytorch Profiling
````

recipes/use_cases/README.md

Lines changed: 0 additions & 3 deletions

```diff
@@ -18,6 +18,3 @@ A complete example of how to build a Llama 3 chatbot hosted on your browser that
 
 ## [Sales Bot](./customerservice_chatbots/ai_agent_chatbot/SalesBot.ipynb): Sales Bot with Llama3 - A Summarization and RAG Use Case
 An summarization + RAG use case built around the Amazon product review Kaggle dataset to build a helpful Music Store Sales Bot. The summarization and RAG are built on top of Llama models hosted on OctoAI, and the vector database is hosted on Weaviate Cloud Services.
-
-## [Media Generation](./MediaGen.ipynb): Building a Video Generation Pipeline with Llama3
-This step-by-step tutorial shows how to use leverage Llama 3 to drive the generation of animated videos using SDXL and SVD. More specifically it relies on JSON formatting to produce a scene-by-scene story board of a recipe video. The user provides the name of a dish, then Llama 3 describes a step by step guide to reproduce the said dish. This step by step guide is brought to life with models like SDXL and SVD.
```
