If your issue is with model generation quality, then please at least scan the following links and papers to understand the limitations of LLaMA models. This is especially important when choosing an appropriate model size and appreciating both the significant and subtle differences between LLaMA models and ChatGPT:
- LLaMA:
    - [Introducing LLaMA: A foundational, 65-billion-parameter large language model](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/)
    - [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971)
- GPT-3:
    - [Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165)
- GPT-3.5 / InstructGPT / ChatGPT:
    - [Aligning language models to follow instructions](https://openai.com/research/instruction-following)
    - [Training language models to follow instructions with human feedback](https://arxiv.org/abs/2203.02155)
## Completions
Command-line completion is available for some environments.
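As a rough illustration of how shell completion like this is typically wired up in bash, the sketch below registers a completion function for a command. The command name `llama-cli` and the option list are hypothetical placeholders, not taken from this project; consult your environment's actual completion script for the real names.

```shell
# Hypothetical sketch: bash tab completion for a command named "llama-cli".
# The option names below are illustrative placeholders only.
_llama_cli_completions() {
  # Word currently being typed at the cursor
  local cur="${COMP_WORDS[COMP_CWORD]}"
  # Offer matching options from a fixed word list
  COMPREPLY=( $(compgen -W "--model --prompt --threads --help" -- "$cur") )
}

# Tell bash to call the function when completing "llama-cli" arguments
complete -F _llama_cli_completions llama-cli
```

In practice, projects ship a script like this and users `source` it from their shell profile so the `complete` registration runs in every new session.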