Commit d8b0eba

Save the preprocessor_config.json and chat_template.json for mllama model after conversion (meta-llama#741)
2 parents 799e90e + 2845421 commit d8b0eba
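The commit title suggests the mllama conversion script previously wrote the model weights but not the processor configs, so converted checkpoints were missing `preprocessor_config.json` and `chat_template.json`. A minimal sketch of the idea follows; `DummyProcessor` and `convert_and_save` are hypothetical stand-ins for illustration (in `transformers`, this role is played by the processor's `save_pretrained`), not the actual code in this commit:

```python
import json
import os
import tempfile

class DummyProcessor:
    """Hypothetical stand-in for an image+text processor (e.g. mllama's)."""
    def __init__(self, preprocessor_config, chat_template):
        self.preprocessor_config = preprocessor_config
        self.chat_template = chat_template

    def save_pretrained(self, output_dir):
        # Mimics what a processor writes on save: the image preprocessing
        # settings and, when one is set, the chat template.
        os.makedirs(output_dir, exist_ok=True)
        with open(os.path.join(output_dir, "preprocessor_config.json"), "w") as f:
            json.dump(self.preprocessor_config, f, indent=2)
        with open(os.path.join(output_dir, "chat_template.json"), "w") as f:
            json.dump({"chat_template": self.chat_template}, f, indent=2)

def convert_and_save(processor, output_dir):
    # After converting and saving the model weights (omitted here), also save
    # the processor so inference code can load it from the converted checkpoint.
    processor.save_pretrained(output_dir)

out = tempfile.mkdtemp()
proc = DummyProcessor({"image_size": 560, "do_resize": True}, "{{ messages }}")
convert_and_save(proc, out)
print(sorted(os.listdir(out)))  # both config files are now present
```

Without the extra `save_pretrained` call, only the weight files would exist in `output_dir`, and loading the processor from the converted checkpoint would fail.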

File tree: 5 files changed (+197, -101 lines)


.github/scripts/spellcheck_conf/wordlist.txt

Lines changed: 1 addition & 0 deletions

@@ -1484,3 +1484,4 @@ uv
 8xL40S
 xL
 EDA
+DeepLearningai

recipes/3p_integrations/llamaindex/dlai_agentic_rag/README.md

Lines changed: 2 additions & 2 deletions

@@ -2,10 +2,10 @@
 
 The folder here contains the Llama 3 ported notebooks of the DLAI short course [Building Agentic RAG with Llamaindex](https://www.deeplearning.ai/short-courses/building-agentic-rag-with-llamaindex/).
 
-1. [Building Agentic RAG with Llamaindex L1 Router Engine](../../../quickstart/agents/dlai/Building_Agentic_RAG_with_Llamaindex_L1_Router_Engine.ipynb) shows how to implement a simple agentic RAG, a router that will pick one of several query tools (question answering or summarization) to execute a query on a single document. Note this notebook is located in the `quickstart` folder.
+1. [Building Agentic RAG with Llamaindex L1 Router Engine](../../../quickstart/agents/DeepLearningai_Course_Notebooks/Building_Agentic_RAG_with_Llamaindex_L1_Router_Engine.ipynb) shows how to implement a simple agentic RAG, a router that will pick one of several query tools (question answering or summarization) to execute a query on a single document. Note this notebook is located in the `quickstart` folder.
 
 2. [Building Agentic RAG with Llamaindex L2 Tool Calling](Building_Agentic_RAG_with_Llamaindex_L2_Tool_Calling.ipynb) shows how to use Llama 3 to not only pick a function to execute, but also infer an argument to pass to the function.
 
 3. [Building Agentic RAG with Llamaindex L3 Building an Agent Reasoning Loop](Building_Agentic_RAG_with_Llamaindex_L3_Building_an_Agent_Reasoning_Loop.ipynb) shows how to define a complete agent reasoning loop to reason over tools and multiple steps on a complex question the user asks about a single document while maintaining memory.
 
-3. [Building Agentic RAG with Llamaindex L4 Building a Multi-Document Agent](Building_Agentic_RAG_with_Llamaindex_L4_Building_a_Multi-Document_Agent.ipynb) shows how to use an agent to handle multiple documents and increasing degrees of complexity.
+3. [Building Agentic RAG with Llamaindex L4 Building a Multi-Document Agent](Building_Agentic_RAG_with_Llamaindex_L4_Building_a_Multi-Document_Agent.ipynb) shows how to use an agent to handle multiple documents and increasing degrees of complexity.

recipes/experimental/long_context/H2O/README.md

Lines changed: 2 additions & 2 deletions

@@ -8,7 +8,7 @@ Besides, LLMs usually have poor generation to long sequence during inference. H2
 
 The current implementation supports llama-1/2/3, from 7B to 70B. Since H2O only maintains the most important KV pairs, it might miss some important information in the middle of the content for some knowledge-intensive tasks.
 
-For more details, please refer to the paper: **https://arxiv.org/pdf/2306.14048**; Blog: **https://allenz.work/?p=11**.
+For more details, please refer to the paper: **https://arxiv.org/pdf/2306.14048**.
 
 **Note: this implementation is tested with transformers == 4.39.0**
 
@@ -21,7 +21,7 @@ python run_summarization.py \
 --input-path data/summarization/xsum.jsonl \
 --output-path summarization_output/xsum_h2o.jsonl \
 --model-name meta-llama/Meta-Llama-3-8B \
---enable_h2o_generation
+--enable_h2o_generation
 ```
 
 ##### **Results**
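The eviction policy the H2O README above describes, keeping the KV pairs that have received the most accumulated attention ("heavy hitters") plus a window of recent tokens, can be sketched as below. This is an illustrative reimplementation under stated assumptions, not the code in `run_summarization.py`:

```python
import numpy as np

def h2o_keep_mask(attn_weights, num_heavy, num_recent):
    """Select which cached KV positions to keep, H2O-style.

    attn_weights: array of shape (num_decoding_steps, seq_len) giving the
    attention each cached position received at each step.
    Assumes num_recent >= 1. Illustrative sketch only.
    """
    seq_len = attn_weights.shape[-1]
    keep = np.zeros(seq_len, dtype=bool)
    keep[-num_recent:] = True  # always keep the local (recent) window
    # "Heavy hitters": non-recent positions with the largest accumulated attention.
    acc = attn_weights.sum(axis=0)[:-num_recent]
    heavy = np.argsort(acc)[::-1][:num_heavy]
    keep[heavy] = True
    return keep

# Position 1 receives the most attention overall, so it survives eviction
# along with the two most recent positions.
attn = np.array([[0.1, 0.7, 0.1, 0.1, 0.0, 0.0],
                 [0.1, 0.6, 0.1, 0.2, 0.0, 0.0],
                 [0.0, 0.5, 0.2, 0.1, 0.1, 0.1]])
print(h2o_keep_mask(attn, num_heavy=1, num_recent=2))
```

Dropping everything outside this mask is what lets H2O bound KV-cache memory, and is also why the README warns that information "in the middle" of long inputs can be lost on knowledge-intensive tasks.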

0 commit comments
