From 8c2b32bc8eebe5e7b6fbc02bfb56fa1662126dfd Mon Sep 17 00:00:00 2001
From: Manish Singh <35570930+manishsingh7163@users.noreply.github.com>
Date: Sun, 2 Nov 2025 10:40:48 +0530
Subject: [PATCH] Fix answer formatting for LLM question

Clarified the answer format for question 8 about LLM pre-training mechanisms.
---
 interview_prep/60_gen_ai_questions.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/interview_prep/60_gen_ai_questions.md b/interview_prep/60_gen_ai_questions.md
index e59bd6c..58651ab 100644
--- a/interview_prep/60_gen_ai_questions.md
+++ b/interview_prep/60_gen_ai_questions.md
@@ -350,7 +350,7 @@ Owner: Aishwarya Nr
 
 ---
 
 8. What pre-training mechanisms are used for LLMs, explain a few
-- Answer**:**
+- Answer:
 
 Large Language Models utilize several pre-training mechanisms to learn from vast amounts of text data before being fine-tuned on specific tasks. Key mechanisms include:
@@ -1011,4 +1011,4 @@ Owner: Aishwarya Nr
 
 Over-reliance on perplexity can be problematic because it primarily measures how well a model predicts the next word in a sequence, potentially overlooking aspects such as coherence, factual accuracy, and the ability to capture nuanced meanings or implications. It may not fully reflect the model's performance on tasks requiring deep understanding or creative language use.
 
----
\ No newline at end of file
+---
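
As a side note on the perplexity point quoted in the second hunk: perplexity is the exponentiated average negative log-likelihood of the next token, which is why it rewards good next-word prediction without saying anything about coherence or factual accuracy. Below is a minimal sketch of measuring it, assuming the Hugging Face `transformers` library and the public `gpt2` checkpoint; neither is referenced in the patch itself, and any causal LM would work the same way.

```python
# Sketch: perplexity = exp(mean next-token negative log-likelihood).
# Assumes `transformers` and `torch` are installed and "gpt2" is used
# purely for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "Large Language Models learn by predicting the next token."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing labels makes the model return the mean cross-entropy
    # (average negative log-likelihood per predicted token).
    outputs = model(**inputs, labels=inputs["input_ids"])

perplexity = torch.exp(outputs.loss)
print(f"Perplexity: {perplexity.item():.2f}")
```

A low value only means the model assigns high probability to the reference continuation; it does not guarantee the text is coherent or factually correct, which is the limitation the answer in the diff describes.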