-
Notifications
You must be signed in to change notification settings - Fork 2.6k
fix: Generate prompt #4395
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
fix: Generate prompt #4395
Conversation
|
Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected, please follow our release note process to remove it. DetailsInstructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. |
|
[APPROVALNOTIFIER] This PR is NOT APPROVED This pull-request has been approved by: The full list of commands accepted by this bot can be found here. DetailsNeeds approval from an approver in each of these files:Approvers can indicate their approval by writing |
| for r in model.stream([SystemMessage(content=SYSTEM_ROLE), | ||
| *[HumanMessage(content=m.get('content')) if m.get( | ||
| 'role') == 'user' else AIMessage( | ||
| content=m.get('content')) for m in messages]]): |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Your code has several issues that need to be addressed:
-
Variable Usage: You are referencing
applicationwithout defining it within the function scope, which will result in aNameError. Ensure thatapplicationis passed to the function and available locally. -
System Role Formatting: The
SYSTEM_ROLEformat string uses placeholders like{application_name}and{detail}, but these variables are not defined. If you want to use predefined values forsystem_role, define them before usingformat. -
Stream Call: In
process(), the stream API is called with a list of messages containing multiple roles (SystemMessage,HumanMessage,AIMessage). This might lead to incorrect formatting if they are mixed. Consider creating separate functions or classes to handle each type of message separately. -
Error Handling: The
get_model_instance_by_model_workspace_id()call can throw exceptions if models do not exist or are invalid. It's good practice to handle such exceptional cases more gracefully.
Here’s a revised version of the code addressing some of these issues:
from langchain.chat_models import get_model_instance_by_model_workspace_id
from langchain.message_history.models import SystemMessage, HumanMessage, AIMessage
class PromptGenerator:
def __init__(self):
self.SYSTEM_ROLE = "System role placeholder" # Replace this with actual system role content
def generate_prompt(self, instance: dict):
application = instance.get('application') # Assuming 'application' is provided in the input
messages = [...]
message = messages[-1]['content']
q = prompt.replace("{userInput}", message)
# Optionally set specific environment variables based on instance properties
q = q.replace("{environment_key}", f"{instance['key']}") # Example
messages[-1]['content'] = q
SUPPORTED_MODEL_TYPES = ["LLM", "IMAGE"]
model_id = ...
workspace_id = ...
model_exist = ...
if not model_exist:
raise Exception(_("Model does not exists or is not an LLM model"))
def process():
model = get_model_instance_by_model_workspace_id(model_id=model_id, workspace_id=workspace_id, **application.model_params_setting)
try:
for r in model.stream([
SystemMessage(content=self.SYSTEM_ROLE), # Direct use of local variable instead of method parameter
*[HumanMessage(content=m['content']) if m['role'] == 'user' else AIMessage(content=m['content']) for m in messages]
]):
yield r
return process()Key Changes:
- Added missing
system_roleinitialization. - Ensured that the context information (like
application.name) is correctly used. - Used a consistent approach for initializing components (
PromptGeneratorand methods). - Refactored error handling to catch more detailed errors when retrieving the model.
| 1. 角色设定必须服务于"{userInput}"内容设定应用的核心功能 | ||
| 2. 允许用户对角色设定的具体内容进行调整和优化 | ||
| 3. 如果用户要求修改某个技能或部分,在保持应用主题的前提下进行相应调整 | ||
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
The provided code has two issues:
- In the prompt template
INIT_TEMPLATE, there was a mistake in labeling the application name and description parameters within quotes ({application_name}). The quotation marks should be removed to correctly reference these variables. - There's a lack of consistency in variable names between the parameter values
{userInput}and those inside the role description. For example, the role descriptions mention "应用名称" and "应用描述", but the code uses "{application_name}". Consistency is crucial for user understanding.
To address these issues, you can revise the prompt template as follows:
const promptTemplates = {
INIT_TEMPLATE: `
请根据用户描述生成一个完整的AI角色人设模板:
- 应用名称:{appName}
- 应用描述:{appDesc}
用户需求:{userInput}
重要说明:
- 1. 角色设定必须服务于“${appName}”应用的核心功能。
+ 1. 角色设定必须服务于"${userInput}"内容设定应用的核心功能。
2. 允许用户对角色设定的具体内容进行调整和优化。
3. 如果用户要求修改某个技能或部分,在保持应用主题的前提下进行相应调整。
`,
};This revision ensures proper referencing using curly braces and consistency in naming conventions, which will improve usability for users interacting with this tool.
fix: Generate prompt