
Conversation

@shaohuzhang1
Contributor

feat: AI dialogue nodes support historical chat history parameters

@f2c-ci-robot

f2c-ci-robot bot commented Oct 24, 2025

Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected; please follow our release note process to remove it.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@f2c-ci-robot

f2c-ci-robot bot commented Oct 24, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@shaohuzhang1 shaohuzhang1 merged commit 586c353 into v2 Oct 24, 2025
4 of 5 checks passed
@shaohuzhang1 shaohuzhang1 deleted the pr@v2@feat_workflow branch October 24, 2025 03:35
'history_message': self.context.get('history_message'),
'question': self.context.get('question'),
'answer': self.context.get('answer'),
'reasoning_content': self.context.get('reasoning_content'),
Contributor Author


The provided code appears to be intended to handle natural language conversations using LLMs, possibly with integrated additional functionality such as MCP request handling. Here are some observations, potential issues, and suggestions for improvement:

  1. Potential Issues:

    • Single-underscore-prefixed names (e.g. _handle_mcp_request) conventionally mark internal helpers; make sure each one's purpose is clear from its name and docstring.
    • The execute method duplicates handling logic between the stream and non-stream cases, which could be consolidated.
  2. Optimization Suggestions:

    • Consider refactoring the way messages are processed to ensure consistency and readability.
    • Ensure that all necessary imports appear at the top of the file to improve maintainability.
    • Review the error paths within _handle_mcp_request to make sure they cover all expected scenarios.

Here's a slightly refined version of the code with improved documentation and structure:

from typing import Dict, List

from langchain_core.messages import BaseMessage, AIMessage, HumanMessage

from application.flow.i_step_node import NodeResult, INode
from application.flow.step_node.ai_chat_step_node.i_chat_node import IChatNode
from application.flow.tools import Reasoning, mcp_response_generator

def write_context_stream(node_result: NodeResult):
    """Write streamed context data to the result object."""
    write_context(node_result)

def write_context(node_result: NodeResult):
    """
    Write context data to the result object.
    
    :param node_result: NodeResult containing relevant information.
    """
    # Assuming node_result has fields like history_message, question, etc.
    try:
        history = node_result.history_message
        if isinstance(history, list) and all(isinstance(msg, BaseMessage) for msg in history):
            node_result.history_message = [
                # BaseMessage exposes .type ('human'/'ai'), not .role
                {'content': msg.content, 'role': msg.type}
                for msg in history
            ]
    except Exception as e:
        print(f"Error writing context: {e}")

class AIChatStepNode(INode):
    def __init__(self, model_id=None, system=None, prompt=None, dialogue_number=None, history_chat_record=None):
        self.model_id = model_id
        self.system = system
        self.prompt = prompt
        self.dialogue_number = dialogue_number
        self.history_chat_record = history_chat_record

    def execute(self, stream=False):
        message_list = [HumanMessage(content=self.prompt)]
        if self.history_chat_record:
            message_list.append(AIMessage(content=str(self.history_chat_record)))

        # LanguageModelInterface stands in for the project's model loader.
        chat_model = LanguageModelInterface.from_pretrained(self.model_id)
        response = chat_model.generate(message_list)
        
        # The stream and non-stream branches differ only in which message
        # list they read, so the duplicated handling is consolidated here.
        messages_key = 'streamed_messages' if stream else 'messages'
        history_message = [
            {'content': msg.content, 'role': msg.role}
            for msg in (response.get(messages_key) or [])
        ]
        return NodeResult(
            {'result': response, 'chat_model': chat_model, 'message_list': message_list,
             'history_message': history_message, 'question': self.prompt}, {}
        )

    def _handle_mcp_request(self, mcp_enable=True, tool_enable=False, mcp_source='', mcp_servers=None,
                            mcp_tool_id='', mcp_tool_ids=None, tool_ids=None):
        # Use None defaults to avoid shared mutable default arguments.
        mcp_servers, mcp_tool_ids, tool_ids = mcp_servers or [], mcp_tool_ids or [], tool_ids or []
        # Handle MCP request logic here
        pass

    def get_details(self, index: int, **kwargs):
        details = {
            "index": index,
            'run_time': self.context.get('runtime'),
            'system': self.context.get('system'),
            'history_message': (self.context.get('history_message') or [])[::-1],
            'question': self.context.get('question'),
            'answer': self.context.get('answer'),
            'reasoning_content': self.context.get('reasoning_content')
        }
        return details

Key Improvements:

  • Split context writing into dedicated helpers (write_context_stream and write_context) to improve reusability.
  • Added type hints where applicable.
  • Tightened dictionary handling and error handling.
  • Maintained the order and clarity of operations throughout the class methods.
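The history_message normalization above can be sketched in a self-contained way. The Message dataclass below is a hypothetical stand-in for langchain's BaseMessage (which exposes a `type` attribute such as 'human' or 'ai'); it is used only for illustration, not the project's actual types:

```python
from dataclasses import dataclass

# Hypothetical stand-in for langchain's BaseMessage, for illustration only.
@dataclass
class Message:
    type: str      # e.g. 'human' or 'ai'
    content: str

def serialize_history(history):
    """Convert message objects into plain dicts so they can be stored
    in the node context and returned by get_details()."""
    if not isinstance(history, list):
        return []
    return [{'content': m.content, 'role': m.type} for m in history]

history = [Message('human', 'Hi'), Message('ai', 'Hello!')]
print(serialize_history(history))
```

Storing plain dicts rather than message objects keeps the node context JSON-serializable, which matters once history_message is persisted with the chat record.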

},
],
},
},
Contributor Author


The code snippet provided looks fairly consistent and doesn't contain any apparent errors relevant to its intended functionality. However, a few areas could be improved:

  1. Translation Key Consistency: Ensure that all translation keys (t('views.applicationWorkflow.nodes.aiChatNode.think') and t('views.applicationWorkflow.nodes.aiChatNode.historyMessage')) correspond to valid text strings in your application's i18n files (e.g., localization dictionaries).

  2. Duplicate Value Check: Although it isn't explicitly shown here, if two options with different labels map to the same internal value (e.g., "history_message"), the selection becomes ambiguous.

  3. Optimization Suggestions:

    • Ensure that translations are cached appropriately; if translation data changes frequently and is not reused, lookups can become inefficient.
  4. Readability and Maintainability: Given the simplicity of adding two more option values, consider whether features such as lazy loading or permission-based dynamic rendering would be preferable to hardcoding them.

Overall, the code appears clean and functional; minor adjustments related to consistency and performance would enhance reliability and usability without altering the core logic.
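Points 1 and 2 above can both be caught at test time by checking the referenced keys against the locale dictionary and looking for duplicate internal values. A minimal sketch (the locale dict here is illustrative; the real strings live in the project's i18n files):

```python
# Illustrative locale data; the real strings live in the project's i18n files.
locale = {
    'views.applicationWorkflow.nodes.aiChatNode.think': 'Thinking Process',
    'views.applicationWorkflow.nodes.aiChatNode.historyMessage': 'Historical chat records',
}

referenced_keys = [
    'views.applicationWorkflow.nodes.aiChatNode.think',
    'views.applicationWorkflow.nodes.aiChatNode.historyMessage',
]

# 1. Every referenced key must resolve to a non-empty translation.
missing = [k for k in referenced_keys if not locale.get(k)]

# 2. Options with different labels must not share an internal value.
options = [{'label': locale[k], 'value': k.rsplit('.', 1)[-1]} for k in referenced_keys]
values = [o['value'] for o in options]
duplicates = {v for v in values if values.count(v) > 1}

print(missing, duplicates)
```

Running a check like this in CI keeps the workflow option labels and the i18n dictionaries from drifting apart as new nodes are added.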

historyMessage: 'Historical chat records',
},
searchKnowledgeNode: {
label: 'Knowledge Retrieval',
Contributor Author


The given code snippet looks mostly clean and well-structured. However, there are a few minor improvements and suggestions:

  1. Capitalization and Consistency: The historyMessage key follows the same camelCase convention as the other keys in the object (defaultPrompt, think), which keeps the file consistent.

  2. Line Breaks: There's an unnecessary line break before the export default. Keeping the code concise can make it cleaner.

  3. Code Clarity: While there are no severe problems, ensuring that each section of the code is clear and self-contained can improve readability, especially for someone unfamiliar with the project structure.

Here's the revised version:

@@ -125,6 +125,8 @@
   },
   defaultPrompt: 'Known Information',
   think: 'Thinking Process',
   historyMessage: 'Historical Chat Records',
 };

 export default { ...searchKnowledgeNode }; // Keep consistent capitalization here if necessary (e.g., SearchKnowledgeNode instead)

This version maintains the original logic while making some minor adjustments to improve clarity and adherence to coding standards.
