
Commit ca8086e

Update src/pages/[platform]/ai/concepts/prompting/index.mdx
Co-authored-by: Ian Saultz <[email protected]>
1 parent 3adc3c9 commit ca8086e

File tree

1 file changed, +1 −1 lines changed

  • src/pages/[platform]/ai/concepts/prompting

src/pages/[platform]/ai/concepts/prompting/index.mdx

Lines changed: 1 addition & 1 deletion
@@ -33,7 +33,7 @@ export function getStaticProps(context) {
  LLM prompting refers to the process of providing a language model, such as Claude or Amazon Titan, with a specific input or "prompt" in order to generate a desired output. The prompt can be a sentence, a paragraph, or even a more complex sequence of instructions that guides the model to produce content that aligns with the user's intent.

- The key idea behind prompting is that the way the prompt is structured and worded can significantly influence the model's response. By crafting the prompt carefully, users can leverage the LLM's extensive knowledge and language understanding capabilities to generate high-quality and relevant text, code, or other types of output.
+ The way the prompt is structured and worded can significantly influence the model's response. By crafting the prompt carefully, users can leverage the LLM's extensive knowledge and language understanding capabilities to generate high-quality and relevant text, code, or other types of output.

  Effective prompting involves understanding the model's strengths and limitations, as well as experimenting with different prompt formats, styles, and techniques to elicit the desired responses. This can include using specific keywords, providing context, breaking down tasks into steps, and incorporating formatting elements like bullet points or code blocks.
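
As a rough illustration of the techniques the last paragraph describes (providing context, breaking a task into steps, and requesting a specific output format), here is a minimal TypeScript sketch. The `buildPrompt` and `invokeModel` names are hypothetical placeholders, not part of the Amplify AI API or the docs page being changed; `invokeModel` is stubbed out so the example stays self-contained.

```ts
// Hypothetical sketch: assembling a structured prompt from context,
// explicit steps, and a requested output format.
type PromptParts = {
  context: string;      // background the model should assume
  task: string;         // what the user actually wants
  steps: string[];      // the task broken into explicit steps
  outputFormat: string; // ask for a specific structure, e.g. bullet points
};

// Join the parts into a single, clearly delimited prompt string.
function buildPrompt({ context, task, steps, outputFormat }: PromptParts): string {
  return [
    `Context: ${context}`,
    `Task: ${task}`,
    "Follow these steps:",
    ...steps.map((step, i) => `${i + 1}. ${step}`),
    `Format the answer as: ${outputFormat}`,
  ].join("\n");
}

// Placeholder for whatever client actually calls the model
// (for example, a Bedrock or Amplify AI route) — not a real API here.
async function invokeModel(prompt: string): Promise<string> {
  console.log("Prompt sent to the model:\n" + prompt);
  return "<model response>";
}

async function main() {
  const prompt = buildPrompt({
    context: "You are reviewing a TypeScript utility library for a docs site.",
    task: "Summarize the main exported functions.",
    steps: [
      "List each exported function by name.",
      "Describe its purpose in one sentence.",
      "Note any required parameters.",
    ],
    outputFormat: "a bulleted list with one bullet per function",
  });

  console.log(await invokeModel(prompt));
}

main();
```

Reordering, rewording, or tightening any of these parts is exactly the kind of experimentation the paragraph above refers to.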
