Commit 2ba703a

Update context
1 parent cd882a3 commit 2ba703a

File tree

1 file changed: +1 −1 lines changed


content/modules/ROOT/pages/module-prompt.adoc

Lines changed: 1 addition & 1 deletion
@@ -38,7 +38,7 @@ You might choose to use prompt engineering over other techniques if you're looki
 
 There are many tools and approaches available to you as the AI application developer to interface with an LLM. We will briefly review a few of these and then make a recommendation for you to follow in the subsequent steps of this module.
 
-Before jumping into specific tools, let's review the basics of interfacing with an LLM through a chat or agentic experience. Since LLMs can often support a wide variety of use cases and personas, it is important that the LLM receive clear, upfront guidance to define its objectives, constraints, persona, and tone. These instructions are provided in natural language form and are form the "System Prompt". Once a System Prompt is defined and a chat session begins, the System Prompt cannot be changed.
+Before jumping into specific tools, let's review the basics of interfacing with an LLM through a chat or agentic experience. Since LLMs can often support a wide variety of use cases and personas, it is important that the LLM receive clear, upfront guidance to define its objectives, constraints, persona, and tone. These instructions are provided in natural language format and are specified in the "System Prompt". Once a System Prompt is defined and a chat session begins, the System Prompt cannot be changed.
 
 Depending on the use case, it may be necessary for the LLM to produce a more creative or a more predictable response to the user message. Temperature is a floating point number, usually between 0 and 1, that is used to steer the model accordingly. Lower temperature values (such as 0) are more predictable and higher values (such as 1) are more creative, although even at 0 LLMs will never produce 100% repeatable responses. Many tools simply use 0.8 as a default.
 
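The changed paragraphs describe supplying a fixed System Prompt at the start of a chat session and using temperature to steer the model. A minimal sketch of that workflow, assuming an OpenAI-style chat-completions payload (the model name and helper function here are hypothetical, for illustration only):

```python
def build_chat_request(system_prompt: str, user_message: str,
                       temperature: float = 0.8) -> dict:
    """Assemble a chat-completion request payload (hypothetical helper).

    The System Prompt is pinned as the first message in the session and is
    not changed once the chat begins. Temperature steers sampling: lower
    values (such as 0) are more predictable, higher values (such as 1) are
    more creative; 0.8 is a common tool default.
    """
    if not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature is usually kept between 0 and 1")
    return {
        "model": "granite-3-8b-instruct",  # placeholder model name
        "temperature": temperature,
        "messages": [
            # Upfront guidance: objectives, constraints, persona, tone.
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

request = build_chat_request(
    system_prompt="You are a concise assistant for OpenShift administrators.",
    user_message="How do I list pods in a namespace?",
    temperature=0.2,  # lower temperature favors predictable answers
)
```

Note that even with temperature 0, responses are not guaranteed to be 100% repeatable, so tests against LLM output should assert on structure rather than exact wording.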
