@@ -17,7 +17,6 @@
from langchain.schema import HumanMessage, SystemMessage
from langchain_core.messages import BaseMessage, AIMessage


from application.flow.i_step_node import NodeResult, INode
from application.flow.step_node.ai_chat_step_node.i_chat_node import IChatNode
from application.flow.tools import Reasoning, mcp_response_generator
@@ -91,7 +90,6 @@ def write_context_stream(node_variable: Dict, workflow_variable: Dict, node: INo
_write_context(node_variable, workflow_variable, node, workflow, answer, reasoning_content)



def write_context(node_variable: Dict, workflow_variable: Dict, node: INode, workflow):
"""
Write context data
@@ -194,12 +192,16 @@ def execute(self, model_id, system, prompt, dialogue_number, history_chat_record
if stream:
r = chat_model.stream(message_list)
return NodeResult({'result': r, 'chat_model': chat_model, 'message_list': message_list,
'history_message': history_message, 'question': question.content}, {},
'history_message': [{'content': message.content, 'role': message.type} for message in
(history_message if history_message is not None else [])],
'question': question.content}, {},
_write_context=write_context_stream)
else:
r = chat_model.invoke(message_list)
return NodeResult({'result': r, 'chat_model': chat_model, 'message_list': message_list,
'history_message': history_message, 'question': question.content}, {},
'history_message': [{'content': message.content, 'role': message.type} for message in
(history_message if history_message is not None else [])],
'question': question.content}, {},
_write_context=write_context)

def _handle_mcp_request(self, mcp_enable, tool_enable, mcp_source, mcp_servers, mcp_tool_id, mcp_tool_ids, tool_ids,
@@ -250,7 +252,9 @@ def _handle_mcp_request(self, mcp_enable, tool_enable, mcp_source, mcp_servers,
r = mcp_response_generator(chat_model, message_list, json.dumps(mcp_servers_config), mcp_output_enable)
return NodeResult(
{'result': r, 'chat_model': chat_model, 'message_list': message_list,
'history_message': history_message, 'question': question.content}, {},
'history_message': [{'content': message.content, 'role': message.type} for message in
(history_message if history_message is not None else [])],
'question': question.content}, {},
_write_context=write_context_stream)

return None
@@ -316,9 +320,7 @@ def get_details(self, index: int, **kwargs):
"index": index,
'run_time': self.context.get('run_time'),
'system': self.context.get('system'),
'history_message': [{'content': message.content, 'role': message.type} for message in
(self.context.get('history_message') if self.context.get(
'history_message') is not None else [])],
'history_message': self.context.get('history_message'),
'question': self.context.get('question'),
'answer': self.context.get('answer'),
'reasoning_content': self.context.get('reasoning_content'),
The provided code handles natural language conversations with LLMs, with optional integrated functionality such as MCP tool processing. Here are some observations, potential issues, and suggestions for improvement:

  1. Potential Issues:

    • The single-underscore prefix (e.g., _write_context) is a common convention for internal helpers, but the names should still make clear what these variables represent.
    • The method execute seems to have duplicated handling logic between stream and non-stream cases, which could be consolidated.
  2. Optimization Suggestions:

    • Consider refactoring the way messages are processed to ensure consistency and readability.
    • Keeping all necessary imports at the top of the file improves maintainability.
    • Review the error paths within _handle_mcp_request to make sure they cover all expected scenarios.

Here's a slightly refined version of the code with improved documentation and structure:

from typing import Dict, List

from langchain.schema import HumanMessage
from langchain_core.messages import BaseMessage, AIMessage

from application.flow.i_step_node import NodeResult, INode
from application.flow.step_node.ai_chat_step_node.i_chat_node import IChatNode
from application.flow.tools import Reasoning, mcp_response_generator

def write_context_stream(node_result: NodeResult):
    """Write streamed context data to the result object; delegates to write_context."""
    write_context(node_result)

def write_context(node_result: NodeResult):
    """
    Write context data to the result object.
    
    :param node_result: NodeResult containing relevant information.
    """
    # Assuming the node result has fields like history_message, question, etc.
    try:
        if isinstance(node_result.history_message, list) and all(
            isinstance(msg, BaseMessage) for msg in node_result.history_message
        ):
            node_result.history_message = [
                {'content': msg.content, 'role': msg.type}
                for msg in node_result.history_message
            ]
    except Exception as e:
        print(f"Error writing context: {e}")

class AIChatStepNode(INode):
    def __init__(self, model_id=None, system=None, prompt=None, dialogue_number=None, history_chat_record=None):
        self.model_id = model_id
        self.system = system
        self.prompt = prompt
        self.dialogue_number = dialogue_number
        self.history_chat_record = history_chat_record

    def execute(self, stream=False):
        message_list = [HumanMessage(content=self.prompt)]
        if self.history_chat_record:
            message_list.append(AIMessage(content=str(self.history_chat_record)))
        
        # NOTE: placeholder; resolve the chat model however the project actually does it
        chat_model = LanguageModelInterface.from_pretrained(self.model_id)
        response = chat_model.generate(message_list)
        
        if stream:
            return NodeResult(
                {'result': response, 'chat_model': chat_model, 'message_list': message_list,
                 'history_message': [{'content': msg.content, 'role': msg.role} for msg in (response.get('streamed_messages') or [])],
                 'question': self.prompt}, {}
            )
        else:
            return NodeResult(
                {'result': response, 'chat_model': chat_model, 'message_list': message_list,
                 'history_message': [{'content': msg.content, 'role': msg.role} for msg in (response.get('messages') or [])],
                 'question': self.prompt}, {}
            )

    def _handle_mcp_request(self, mcp_enable=True, tool_enable=False, mcp_source='', mcp_servers=None,
                            mcp_tool_id='', mcp_tool_ids=None, tool_ids=None):
        # Handle MCP request logic here; None defaults avoid shared mutable default arguments
        pass

    def get_details(self, index: int, **kwargs):
        details = {
            "index": index,
            'run_time': self.context.get('run_time'),
            'system': self.context.get('system'),
            'history_message': self.context.get('history_message') or [],
            'question': self.context.get('question'),
            'answer': self.context.get('answer'),
            'reasoning_content': self.context.get('reasoning_content')
        }
        return details

Key Improvements:

  • Consolidated the context-writing helpers (write_context_stream and write_context) around a single NodeResult parameter to improve reusability.
  • Added type hints where applicable.
  • Added defensive error handling around the history-message serialization.
  • Maintained the order and clarity of operations throughout the class methods; a helper consolidating the repeated serialization is sketched below.
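
One consolidation worth making explicit: after this change, the same history_message serialization appears in three places (both branches of execute and _handle_mcp_request). A minimal sketch of a shared helper, assuming the LangChain-style BaseMessage objects (with content and type attributes) that the diff already uses:

from typing import Dict, List, Optional

from langchain_core.messages import BaseMessage


def serialize_history(history_message: Optional[List[BaseMessage]]) -> List[Dict[str, str]]:
    """Convert LangChain messages into plain dicts that are safe to store in node context."""
    return [
        {'content': message.content, 'role': message.type}
        for message in (history_message or [])
    ]

Each NodeResult construction could then pass 'history_message': serialize_history(history_message) instead of repeating the comprehension inline.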

1 change: 1 addition & 0 deletions ui/src/locales/lang/en-US/views/application-workflow.ts
@@ -125,6 +125,7 @@ export default {
},
defaultPrompt: 'Known Information',
think: 'Thinking Process',
historyMessage: 'Historical chat records',
},
searchKnowledgeNode: {
label: 'Knowledge Retrieval',

The given code snippet looks mostly clean and well-structured. However, there are a few minor improvements and suggestions:

  1. Capitalization and Consistency: The historyMessage key follows the same camelCase convention as the other keys in the object (defaultPrompt, think), which keeps the file consistent.

  2. Line Breaks: Avoid unnecessary blank lines before the export default; keeping the file compact makes it cleaner.

  3. Code Clarity: While there are no severe problems, ensuring that each section of the code is clear and self-contained can improve readability, especially for someone unfamiliar with the project structure.

Here's the revised version:

@@ -125,6 +125,8 @@
   },
   defaultPrompt: 'Known Information',
   think: 'Thinking Process',
   historyMessage: 'Historical Chat Records',
 };

 export default { ...searchKnowledgeNode }; // Keep consistent capitalization here if necessary (e.g., SearchKnowledgeNode instead)

This version maintains the original logic while making some minor adjustments to improve clarity and adherence to coding standards.

1 change: 1 addition & 0 deletions ui/src/locales/lang/zh-CN/views/application-workflow.ts
@@ -128,6 +128,7 @@ export default {
},
defaultPrompt: '已知信息',
think: '思考过程',
historyMessage: '历史聊天记录',
},
searchKnowledgeNode: {
label: '知识库检索',
1 change: 1 addition & 0 deletions ui/src/locales/lang/zh-Hant/views/application-workflow.ts
@@ -126,6 +126,7 @@ export default {
},
defaultPrompt: '已知信息',
think: '思考過程',
historyMessage: '歷史聊天記錄',
},
searchKnowledgeNode: {
label: '知識庫檢索',
4 changes: 4 additions & 0 deletions ui/src/workflow/common/data.ts
@@ -82,6 +82,10 @@ export const aiChatNode = {
label: t('views.applicationWorkflow.nodes.aiChatNode.think'),
value: 'reasoning_content',
},
{
label: t('views.applicationWorkflow.nodes.aiChatNode.historyMessage'),
value: 'history_message',
},
],
},
},

The code snippet provided looks fairly consistent and doesn't contain any apparent errors or issues relevant to its intended functionality. However, there are a few areas that could be improved:

  1. Translation Key Consistency: It's important to ensure that all translation keys (t('views.applicationWorkflow.nodes.aiChatNode.think') and t('views.applicationWorkflow.nodes.aiChatNode.historyMessage')) correspond to valid text strings in your application's i18n files (e.g., localization dictionaries).

  2. Duplicate Value Check: If two options ever end up sharing the same value (for example, a second entry also using "history_message"), two nodes with different labels would map to the same internal representation, which can cause subtle bugs; see the sketch below.

  3. Optimization Suggestions:

    • Ensure that the use of translations is optimal in terms of caching. If translation data changes frequently and is not reused appropriately, this can lead to inefficiencies.
  4. Readability and Maintainability: Given the simplicity of adding another option value, consider whether additional features such as lazy loading or dynamic rendering based on user permissions might make sense instead of hardcoding them.

Overall, the code appears clean and functional; minor adjustments related to consistency and performance would enhance reliability and usability without altering the core logic.
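
To make the duplicate-value concern concrete, a minimal uniqueness check is sketched below. The real options live in TypeScript (ui/src/workflow/common/data.ts), so this Python form is illustrative only; the example data mirrors the aiChatNode outputs after this change:

from collections import Counter


def assert_unique_option_values(options):
    """Raise if two output options share the same internal value."""
    counts = Counter(option['value'] for option in options)
    duplicates = [value for value, count in counts.items() if count > 1]
    if duplicates:
        raise ValueError(f"Duplicate option values: {duplicates}")


assert_unique_option_values([
    {'label': 'Thinking Process', 'value': 'reasoning_content'},
    {'label': 'Historical chat records', 'value': 'history_message'},
])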

1 change: 0 additions & 1 deletion ui/src/workflow/nodes/ai-chat-node/index.vue
@@ -463,7 +463,6 @@ const openGeneratePromptDialog = (modelId: string) => {
}
}
const replace = (v: any) => {
console.log(props.nodeModel.properties.node_data.model_setting)
set(props.nodeModel.properties.node_data, 'system', v)
}
const openReasoningParamSettingDialog = () => {