
Conversation

@shaohuzhang1
Contributor

fix: Generate prompt

@f2c-ci-robot

f2c-ci-robot bot commented Nov 27, 2025

Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected. Please follow our release note process to remove it.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@f2c-ci-robot

f2c-ci-robot bot commented Nov 27, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

for r in model.stream([SystemMessage(content=SYSTEM_ROLE),
*[HumanMessage(content=m.get('content')) if m.get(
'role') == 'user' else AIMessage(
content=m.get('content')) for m in messages]]):
Contributor Author

Your code has several issues that need to be addressed:

  1. Variable Usage: You reference application without defining it in the function scope, which will raise a NameError. Ensure that application is passed into the function and available locally.

  2. System Role Formatting: The SYSTEM_ROLE format string uses placeholders such as {application_name} and {detail}, but these variables are never defined. If you want to use predefined values for the system role, define them before calling format.

  3. Stream Call: In process(), the stream API is called with a list that mixes message roles (SystemMessage, HumanMessage, AIMessage). Mixing them in a single list can lead to incorrect formatting; consider separate functions or classes to build each message type.

  4. Error Handling: get_model_instance_by_model_workspace_id() can raise an exception if the model does not exist or is invalid. It's good practice to handle that case gracefully instead of letting the exception propagate unannotated.

Here’s a revised version of the code addressing some of these issues:

from langchain.chat_models import get_model_instance_by_model_workspace_id
from langchain.message_history.models import SystemMessage, HumanMessage, AIMessage
from django.utils.translation import gettext_lazy as _  # assumed gettext alias for the _() calls below

class PromptGenerator:
    def __init__(self):
        self.SYSTEM_ROLE = "System role placeholder"  # Replace this with actual system role content
        self.prompt = "{userInput}"  # Replace this with the actual prompt template

    def generate_prompt(self, instance: dict):
        application = instance.get('application')  # Assuming 'application' is provided in the input

        messages = [...]

        message = messages[-1]['content']
        q = self.prompt.replace("{userInput}", message)

        # Optionally set specific environment variables based on instance properties
        q = q.replace("{environment_key}", f"{instance['key']}")  # Example

        messages[-1]['content'] = q

        SUPPORTED_MODEL_TYPES = ["LLM", "IMAGE"]

        model_id = ...
        workspace_id = ...

        model_exist = ...

        if not model_exist:
            raise Exception(_("Model does not exist or is not an LLM model"))

        def process():
            model = get_model_instance_by_model_workspace_id(
                model_id=model_id, workspace_id=workspace_id,
                **application.model_params_setting)

            try:
                for r in model.stream([
                    SystemMessage(content=self.SYSTEM_ROLE),  # Use the instance attribute directly
                    *[HumanMessage(content=m['content']) if m['role'] == 'user'
                      else AIMessage(content=m['content']) for m in messages]
                ]):
                    yield r
            except Exception as e:
                # A bare try block is a syntax error; surface streaming failures instead
                raise Exception(_("Failed to stream model response")) from e

        return process()

Key Changes:

  • Added the missing system_role initialization (plus a prompt template attribute so {userInput} has something to replace into).
  • Ensured that context information (such as application.model_params_setting) is used correctly.
  • Used a consistent approach for initializing components (the PromptGenerator class and its methods).
  • Tightened error handling so failures when retrieving or streaming from the model are reported clearly.
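To make the message-mapping and streaming pattern above concrete, here is a minimal, self-contained sketch. FakeModel and the simplified message dataclasses are stand-ins invented for this illustration; they are not the project's real classes or the langchain API.

```python
from dataclasses import dataclass

# Simplified stand-ins for the langchain-style message classes.
@dataclass
class SystemMessage:
    content: str

@dataclass
class HumanMessage:
    content: str

@dataclass
class AIMessage:
    content: str

class FakeModel:
    """Stand-in model: a real one would yield token chunks from .stream()."""
    def stream(self, msgs):
        for m in msgs:
            yield type(m).__name__  # echo the role class for demonstration

def build_messages(system_role, history):
    # Map 'user' entries to HumanMessage and everything else to AIMessage,
    # prepending the system role -- the same shape as the reviewed code.
    return [SystemMessage(content=system_role)] + [
        HumanMessage(content=m["content"]) if m["role"] == "user"
        else AIMessage(content=m["content"])
        for m in history
    ]

history = [{"role": "user", "content": "hi"}, {"role": "ai", "content": "hello"}]
chunks = list(FakeModel().stream(build_messages("You are helpful.", history)))
print(chunks)  # ['SystemMessage', 'HumanMessage', 'AIMessage']
```

Consuming the generator with `list()` (or a `for` loop) is what actually drives the stream; nothing runs until iteration starts.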

1. 角色设定必须服务于"{userInput}"内容设定应用的核心功能
2. 允许用户对角色设定的具体内容进行调整和优化
3. 如果用户要求修改某个技能或部分,在保持应用主题的前提下进行相应调整
Contributor Author

The provided code has two issues:

  1. In the prompt template INIT_TEMPLATE, the application name and description placeholders were wrapped in quotation marks ("{application_name}"). The quotation marks should be removed so these variables are referenced correctly.
  2. Variable naming is inconsistent between the placeholder values ({userInput}) and the role description. For example, the role description refers to "应用名称" (application name) and "应用描述" (application description), while the code uses {application_name}. Consistent naming is crucial for user understanding.
To address these issues, you can revise the prompt template as follows:

const promptTemplates = {
  INIT_TEMPLATE: `
请根据用户描述生成一个完整的AI角色人设模板:

- 应用名称:{appName}
- 应用描述:{appDesc}
- 用户需求:{userInput}

重要说明:
1. 角色设定必须服务于"{userInput}"内容设定应用的核心功能。
2. 允许用户对角色设定的具体内容进行调整和优化。
3. 如果用户要求修改某个技能或部分,在保持应用主题的前提下进行相应调整。
`,
};

Note that the placeholders use plain curly braces ({appName}) rather than ${appName}: inside a JavaScript template literal, ${...} would be interpolated at definition time, whereas these placeholders are meant to be substituted at runtime.

This revision ensures proper placeholder referencing with curly braces and consistent naming conventions, which improves usability for anyone interacting with this tool.
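As a rough illustration of the runtime substitution these curly-brace placeholders imply, here is a small Python sketch. The render helper and the sample values are hypothetical, not part of the project's code.

```python
# A trimmed-down version of the template, keeping only the placeholder lines.
INIT_TEMPLATE = (
    "- 应用名称:{appName}\n"
    "- 应用描述:{appDesc}\n"
    "用户需求:{userInput}\n"
)

def render(template: str, **values: str) -> str:
    # Plain str.replace per placeholder, so stray braces elsewhere in the
    # template are left untouched (unlike str.format, which would raise).
    for key, value in values.items():
        template = template.replace("{" + key + "}", value)
    return template

result = render(INIT_TEMPLATE,
                appName="DemoApp",
                appDesc="A demo assistant",
                userInput="plan a trip")
print(result)
```

Using str.replace instead of str.format is a deliberate trade-off here: it tolerates literal braces in the template text, at the cost of silently leaving any misspelled placeholder unfilled.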

@zhanweizhang7 zhanweizhang7 merged commit d9ecf8c into v2 Nov 27, 2025
3 of 6 checks passed
@zhanweizhang7 zhanweizhang7 deleted the pr@v2@fix_generate_prompt branch November 27, 2025 10:22
