4 changes: 2 additions & 2 deletions interview_prep/60_gen_ai_questions.md
@@ -350,7 +350,7 @@ Owner: Aishwarya Nr
---

8. What pre-training mechanisms are used for LLMs, explain a few
-- Answer**:**
+- Answer:

Large Language Models utilize several pre-training mechanisms to learn from vast amounts of text data before being fine-tuned on specific tasks. Key mechanisms include:

@@ -1011,4 +1011,4 @@ Owner: Aishwarya Nr
Over-reliance on perplexity can be problematic because it primarily measures how well a model predicts the next word in a sequence, potentially overlooking aspects such as coherence, factual accuracy, and the ability to capture nuanced meanings or implications. It may not fully reflect the model's performance on tasks requiring deep understanding or creative language use.
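The critique above can be made concrete: perplexity is simply the exponential of the average negative log-likelihood the model assigns to the reference tokens, so it rewards confident next-token prediction without measuring coherence or factuality. A minimal sketch (the function name and the sample log-probabilities are illustrative, not from any specific library):

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp of the average negative log-likelihood
    over the predicted tokens (natural-log probabilities)."""
    avg_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_nll)

# Hypothetical per-token log-probabilities: a model that spreads
# probability uniformly over 4 candidate tokens assigns log(1/4)
# to each, so its perplexity is exactly 4.
uniform = [math.log(0.25)] * 10
print(perplexity(uniform))  # → 4.0
```

Note that a model could lower this number by memorizing frequent n-grams while still producing incoherent or factually wrong text, which is why perplexity is best paired with task-specific evaluations.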


----
+---