
Commit 81e8a13

Merge branch 'main' into raft

2 parents: 7439b9d + efbcb05

232 files changed: +19249 additions, -97968 deletions


.github/scripts/check_copyright_header.py

Lines changed: 1 addition & 1 deletion

@@ -11,7 +11,7 @@
 # This software may be used and distributed according to the terms of the Llama 2 Community License Agreement.\n\n"""
 
 #Files in black list must be relative to main repo folder
-BLACKLIST = ["eval/open_llm_leaderboard/hellaswag_utils.py"]
+BLACKLIST = ["tools/benchmarks/llm_eval_harness/open_llm_leaderboard/hellaswag_utils.py"]
 
 if __name__ == "__main__":
     for ext in ["*.py", "*.sh"]:
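The updated `BLACKLIST` entry takes effect in a check along these lines. This is a minimal sketch of how such a header checker might skip blacklisted files; the `HEADER` text and the `missing_header` helper are illustrative assumptions, not the script's actual contents beyond the diffed lines:

```python
import glob
import os

# Illustrative stand-in for the expected copyright header text.
HEADER = "# Copyright (c) Meta Platforms, Inc. and affiliates."

# Files in the blacklist must be relative to the main repo folder.
BLACKLIST = ["tools/benchmarks/llm_eval_harness/open_llm_leaderboard/hellaswag_utils.py"]

def missing_header(path: str) -> bool:
    """Return True if the file is not blacklisted and does not start with HEADER."""
    if path.replace(os.sep, "/") in BLACKLIST:
        return False  # blacklisted files are exempt from the check
    with open(path, encoding="utf-8") as f:
        return not f.read().startswith(HEADER)

if __name__ == "__main__":
    for ext in ["*.py", "*.sh"]:
        for path in glob.glob(f"**/{ext}", recursive=True):
            if missing_header(path):
                print(f"Missing copyright header: {path}")
```

Because the blacklist is matched as a relative path, moving `hellaswag_utils.py` under `tools/benchmarks/` without updating the entry would have re-enabled the check for it, which is what this one-line change prevents.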

.github/scripts/spellcheck_conf/wordlist.txt

Lines changed: 43 additions & 0 deletions

@@ -1351,6 +1351,49 @@ Weaviate
 MediaGen
 SDXL
 SVD
+KV
+KVs
+XSUM
+contrains
+knowlege
+kv
+prefilling
+DataFrame
+DuckDB
+Groq
+GroqCloud
+Replit
+Teslas
+duckdb
+teslas
+Groqs
+groq
+schemas
+Pinecone
+Pinecone's
+Repl
+docsearch
+presidental
+CrewAI
+kickstart
+DataFrames
+Groqing
+Langchain
+Plotly
+dfs
+yfinance
+Groq's
+LlamaChat
+chatbot's
+ConversationBufferWindowMemory
+chatbot's
+Lamini
+lamini
+nba
+sqlite
+customerservice
+fn
+ExecuTorch
 LLMScore
 RecursiveCharacterTextSplitter
 TPD

README.md

Lines changed: 1 addition & 6 deletions

@@ -136,14 +136,9 @@ Contains examples are organized in folders by topic:
 | Subfolder | Description |
 |---|---|
 [quickstart](./recipes/quickstart) | The "Hello World" of using Llama, start here if you are new to using Llama.
-[finetuning](./recipes/finetuning)|Scripts to finetune Llama on single-GPU and multi-GPU setups
-[inference](./recipes/inference)|Scripts to deploy Llama for inference locally and using model servers
 [use_cases](./recipes/use_cases)|Scripts showing common applications of Meta Llama3
+[3p_integrations](./recipes/3p_integrations)|Partner owned folder showing common applications of Meta Llama3
 [responsible_ai](./recipes/responsible_ai)|Scripts to use PurpleLlama for safeguarding model outputs
-[llama_api_providers](./recipes/llama_api_providers)|Scripts to run inference on Llama via hosted endpoints
-[benchmarks](./recipes/benchmarks)|Scripts to benchmark Llama models inference on various backends
-[code_llama](./recipes/code_llama)|Scripts to run inference with the Code Llama models
-[evaluation](./recipes/evaluation)|Scripts to evaluate fine-tuned Llama models using `lm-evaluation-harness` from `EleutherAI`
 
 ### `src/`
 

UPDATES.md

Lines changed: 6 additions & 6 deletions

@@ -1,19 +1,19 @@
 ## System Prompt Update
 
 ### Observed Issue
-We received feedback from the community on our prompt template and we are providing an update to reduce the false refusal rates seen. False refusals occur when the model incorrectly refuses to answer a question that it should, for example due to overly broad instructions to be cautious in how it provides responses.
+We received feedback from the community on our prompt template and we are providing an update to reduce the false refusal rates seen. False refusals occur when the model incorrectly refuses to answer a question that it should, for example due to overly broad instructions to be cautious in how it provides responses.
 
 ### Updated approach
-Based on evaluation and analysis, we recommend the removal of the system prompt as the default setting. Pull request [#626](https://github.com/facebookresearch/llama/pull/626) removes the system prompt as the default option, but still provides an example to help enable experimentation for those using it.
+Based on evaluation and analysis, we recommend the removal of the system prompt as the default setting. Pull request [#626](https://github.com/facebookresearch/llama/pull/626) removes the system prompt as the default option, but still provides an example to help enable experimentation for those using it.
 
 ## Token Sanitization Update
 
 ### Observed Issue
-The PyTorch scripts currently provided for tokenization and model inference allow for direct prompt injection via string concatenation. Prompt injections allow for the addition of special system and instruction prompt strings from user-provided prompts.
+The PyTorch scripts currently provided for tokenization and model inference allow for direct prompt injection via string concatenation. Prompt injections allow for the addition of special system and instruction prompt strings from user-provided prompts.
 
-As noted in the documentation, these strings are required to use the fine-tuned chat models. However, prompt injections have also been used for manipulating or abusing models by bypassing their safeguards, allowing for the creation of content or behaviors otherwise outside the bounds of acceptable use.
+As noted in the documentation, these strings are required to use the fine-tuned chat models. However, prompt injections have also been used for manipulating or abusing models by bypassing their safeguards, allowing for the creation of content or behaviors otherwise outside the bounds of acceptable use.
 
 ### Updated approach
-We recommend sanitizing [these strings](https://github.com/meta-llama/llama?tab=readme-ov-file#fine-tuned-chat-models) from any user provided prompts. Sanitization of user prompts mitigates malicious or accidental abuse of these strings. The provided scripts have been updated to do this.
+We recommend sanitizing [these strings](https://github.com/meta-llama/llama?tab=readme-ov-file#fine-tuned-chat-models) from any user provided prompts. Sanitization of user prompts mitigates malicious or accidental abuse of these strings. The provided scripts have been updated to do this.
 
-Note: even with this update safety classifiers should still be applied to catch unsafe behaviors or content produced by the model. An [example](./recipes/inference/local_inference/inference.py) of how to deploy such a classifier can be found in the llama-recipes repository.
+Note: even with this update safety classifiers should still be applied to catch unsafe behaviors or content produced by the model. An [example](./recipes/quickstart/inference/local_inference/inference.py) of how to deploy such a classifier can be found in the llama-recipes repository.
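The sanitization that UPDATES.md recommends can be sketched as follows. The special strings below are the instruction/system markers used by the Llama 2 fine-tuned chat template; the function name and removal-by-replacement approach are illustrative assumptions, not the repository's actual implementation:

```python
# Special template strings used by the fine-tuned chat models.
# Stripping them from user input blocks prompt injection via string concatenation.
SPECIAL_STRINGS = ["[INST]", "[/INST]", "<<SYS>>", "<</SYS>>"]

def sanitize_prompt(user_prompt: str) -> str:
    """Remove special chat-template strings from a user-provided prompt."""
    for s in SPECIAL_STRINGS:
        user_prompt = user_prompt.replace(s, "")
    return user_prompt
```

As the note above says, sanitization is a mitigation, not a substitute for running a safety classifier on model output.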

docs/FAQ.md

Lines changed: 1 addition & 1 deletion

@@ -16,7 +16,7 @@ Here we discuss frequently asked questions that may occur and we found useful al
 
 4. Can I add custom datasets?
 
-Yes, you can find more information on how to do that [here](../recipes/finetuning/datasets/README.md).
+Yes, you can find more information on how to do that [here](../recipes/quickstart/finetuning/datasets/README.md).
 
 5. What are the hardware SKU requirements for deploying these models?
 

docs/LLM_finetuning.md

Lines changed: 3 additions & 3 deletions

@@ -35,9 +35,9 @@ Full parameter fine-tuning has its own advantages, in this method there are mult
 You can also keep most of the layers frozen and only fine-tune a few layers. There are many different techniques to choose from to freeze/unfreeze layers based on different criteria.
 
 <div style="display: flex;">
-<img src="./images/feature-based_FN.png" alt="Image 1" width="250" />
-<img src="./images/feature-based_FN_2.png" alt="Image 2" width="250" />
-<img src="./images/full-param-FN.png" alt="Image 3" width="250" />
+<img src="./img/feature_based_fn.png" alt="Image 1" width="250" />
+<img src="./img/feature_based_fn_2.png" alt="Image 2" width="250" />
+<img src="./img/full_param_fn.png" alt="Image 3" width="250" />
 </div>
 
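The technique mentioned in this hunk's context line ("keep most of the layers frozen and only fine-tune a few layers") can be sketched in PyTorch. The toy model below is an illustrative assumption, not something from the docs; in practice you would apply the same `requires_grad` pattern to a real transformer's layers:

```python
import torch
import torch.nn as nn

# Toy stand-in for a deep network: stacked layers plus a small head.
model = nn.Sequential(
    nn.Linear(16, 16), nn.ReLU(),
    nn.Linear(16, 16), nn.ReLU(),
    nn.Linear(16, 4),  # only this final layer will be fine-tuned
)

# Freeze every parameter, then unfreeze just the last layer.
for param in model.parameters():
    param.requires_grad = False
for param in model[-1].parameters():
    param.requires_grad = True

# Hand only the unfrozen parameters to the optimizer, so gradients
# are neither computed for nor applied to the frozen layers.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
```

The choice of which layers to unfreeze is exactly the "different criteria" the doc alludes to (e.g. last N blocks, layer norms only, or a task head).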

4 files renamed without changes.
