(feat) Component code gen: updating to o1-mini model, upgrading langchain* API #13938
**packages/component_code_gen/config/config.py**

```diff
@@ -22,15 +22,7 @@ def get_env_var(var_name, required=False, default=None):
     "openai_embeddings_model": openai_embeddings_model,
     "openai": {
         "api_key": get_env_var("OPENAI_API_KEY", required=openai_api_type == "openai"),
-        "model": get_env_var("OPENAI_MODEL", default="gpt-4-0125-preview"),
-    },
-    "azure": {
-        "deployment_name": get_env_var("OPENAI_DEPLOYMENT_NAME", required=openai_api_type == "azure"),
-        "embeddings_deployment_name": get_env_var("OPENAI_EMBEDDINGS_DEPLOYMENT_NAME", required=openai_api_type == "azure"),
-        "api_version": get_env_var("OPENAI_API_VERSION", default="2024-03-01-preview"),
-        "api_base": get_env_var("OPENAI_API_BASE", required=openai_api_type == "azure"),
-        "api_key": get_env_var("OPENAI_API_KEY", required=openai_api_type == "azure"),
-        "model": get_env_var("OPENAI_MODEL", default="gpt-4-32k"),
+        "model": get_env_var("OPENAI_MODEL", default="o1-mini"),
```
**Tip: Codebase Verification**

**Update the README.md to reflect the removal of the Azure configuration.**

The Azure configuration is still mentioned in the README.md. Additionally, confirm that the removal of the Azure configuration from the codebase is intentional and does not affect any functionality.

**Analysis chain**

**Verify the impact of the model change and the removal of the Azure configuration.**

The change in the default OpenAI model from `gpt-4-0125-preview` to `o1-mini` indicates a transition to a different model. Additionally, the removal of the Azure configuration suggests a move away from Azure's services or a refactoring of how these services are integrated. Please confirm that this removal is intentional and does not break any existing functionality.

Run the following script to verify the impact of the changes:

**Scripts executed**

The following scripts were executed for the analysis:

Script:

```shell
#!/bin/bash
# Description: Verify the impact of the model change and the removal of the Azure configuration.

# Test 1: Search for usages of the old model. Expect: No occurrences.
echo "Searching for usages of the old model..."
rg --type python $'gpt-4-0125-preview'

# Test 2: Search for usages of the new model. Expect: Only occurrences in test files or mocks.
echo "Searching for usages of the new model..."
rg --type python $'o1-mini'

# Test 3: Search for usages of the removed Azure configuration. Expect: No occurrences.
echo "Searching for usages of the removed Azure configuration..."
rg --type python $'OPENAI_DEPLOYMENT_NAME|OPENAI_EMBEDDINGS_DEPLOYMENT_NAME|OPENAI_API_VERSION|OPENAI_API_BASE'
```

Length of output: 579

Script:

```shell
#!/bin/bash
# Description: Verify the impact of the model change and the removal of the Azure configuration.

# Test 1: Search for usages of the old model. Expect: No occurrences.
echo "Searching for usages of the old model..."
rg 'gpt-4-0125-preview'

# Test 2: Search for usages of the new model. Expect: Only occurrences in test files or mocks.
echo "Searching for usages of the new model..."
rg 'o1-mini'

# Test 3: Search for usages of the removed Azure configuration. Expect: No occurrences.
echo "Searching for usages of the removed Azure configuration..."
rg 'OPENAI_DEPLOYMENT_NAME|OPENAI_EMBEDDINGS_DEPLOYMENT_NAME|OPENAI_API_VERSION|OPENAI_API_BASE'
```

Length of output: 911
```diff
     },
     "browserless": {
         "api_key": get_env_var("BROWSERLESS_API_KEY"),
```
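The hunk header above references `get_env_var(var_name, required=False, default=None)`, whose body is not part of this diff. As orientation only, here is a minimal sketch of how such a helper presumably behaves (an assumption for illustration, not the repository's actual implementation):

```python
import os


def get_env_var(var_name, required=False, default=None):
    """Read an environment variable; fail fast if a required value is missing (assumed behavior)."""
    value = os.environ.get(var_name, default)
    if required and value is None:
        raise ValueError(f"Missing required environment variable: {var_name}")
    return value


# Mirrors the config entries above: the API key is only mandatory when the API type is "openai",
# and OPENAI_MODEL falls back to the new "o1-mini" default.
openai_api_type = "openai"  # in the real config this presumably comes from the environment
openai_api_key = get_env_var("OPENAI_API_KEY", required=openai_api_type == "openai")
openai_model = get_env_var("OPENAI_MODEL", default="o1-mini")
print(openai_model)
```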
**packages/component_code_gen/helpers/langchain_helpers.py**

```diff
@@ -1,19 +1,16 @@
 from templates.common.suffix import suffix
 from templates.common.format_instructions import format_instructions
 from templates.common.docs_system_instructions import docs_system_instructions
-from langchain.schema import (
-    # AIMessage,
-    HumanMessage,
-    SystemMessage
-)
-from langchain.tools.json.tool import JsonSpec
-from langchain.agents.agent_toolkits.json.toolkit import JsonToolkit
-from langchain.chat_models import ChatOpenAI, AzureChatOpenAI
-from langchain.llms.openai import OpenAI
-from langchain.agents import create_json_agent, ZeroShotAgent, AgentExecutor
+from langchain.schema import HumanMessage
+from langchain.agents.react.agent import create_react_agent
+from langchain_community.agent_toolkits import JsonToolkit, create_json_agent
+from langchain_community.tools.json.tool import JsonSpec
```
**Comment on lines +4 to +7**

**Approve the relevant imports but remove the unused one.**

The new imports from `langchain_community` are consistent with the langchain* API upgrade this PR makes. However, the static analysis tool (Ruff) has correctly identified that `create_json_agent` is imported but never used.

Remove the unused import:

```diff
-from langchain_community.agent_toolkits import JsonToolkit, create_json_agent
+from langchain_community.agent_toolkits import JsonToolkit
```
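For context on why Ruff flags `create_json_agent`: it is only needed when a JSON agent is actually built from the toolkit. A minimal, self-contained sketch of that typical wiring, using a toy spec and the model name from this PR's config default (illustrative assumptions, not code from this PR):

```python
from langchain_community.agent_toolkits import JsonToolkit, create_json_agent
from langchain_community.tools.json.tool import JsonSpec
from langchain_openai import ChatOpenAI

# Toy OpenAPI-style dictionary standing in for real API docs.
openapi_spec = {"paths": {"/components": {"get": {"summary": "List components"}}}}

json_spec = JsonSpec(dict_=openapi_spec, max_value_length=4000)
toolkit = JsonToolkit(spec=json_spec)

llm = ChatOpenAI(model_name="o1-mini")
agent_executor = create_json_agent(llm=llm, toolkit=toolkit, verbose=True)
# agent_executor.invoke({"input": "Which endpoints does this spec expose?"})
```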
```diff
+
+import openai
+from langchain_openai.chat_models.base import ChatOpenAI
```
**Comment on lines +9 to +10**

**Remove the unused import.**

The static analysis tool (Ruff) has correctly identified that the `openai` import is unused.

Remove the unused import:

```diff
-import openai
```
```diff
+from langchain.agents import ZeroShotAgent, AgentExecutor
 from langchain.chains import LLMChain
 from config.config import config
-import openai  # required
 from dotenv import load_dotenv
 load_dotenv()
```
```diff
@@ -32,22 +29,15 @@ def __init__(self, docs, templates, auth_example, parsed_common_files):
         system_instructions = format_template(
             f"{templates.system_instructions(auth_example, parsed_common_files)}\n{docs_system_instructions}")
 
+        model = ChatOpenAI(model_name=config['openai']['model'])
         tools = OpenAPIExplorerTool.create_tools(docs)
         tool_names = [tool.name for tool in tools]
 
-        prompt_template = ZeroShotAgent.create_prompt(
-            tools=tools,
-            prefix=system_instructions,
-            suffix=suffix,
-            format_instructions=format_instructions,
-            input_variables=['input', 'agent_scratchpad']
-        )
-
-        llm_chain = LLMChain(llm=get_llm(), prompt=prompt_template)
-        agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names)
-        verbose = True if config['logging']['level'] == 'DEBUG' else False
-
-        self.agent_executor = AgentExecutor.from_agent_and_tools(
+        # o1-preview doesn't support system instruction, so we just concatenate into the prompt
+        prompt = f"{system_instructions}\n\n{format_instructions}"
+
+        agent = create_react_agent(model, tools, prompt)
+        verbose = True if config['logging']['level'] == 'DEBUG' else False
```
**Simplify the unnecessary expression.**

The static analysis tool (Ruff) has correctly identified that the `True if ... else False` ternary is unnecessary, since the comparison already evaluates to a boolean.

Simplify the expression like this:

```diff
-verbose = True if config['logging']['level'] == 'DEBUG' else False
+verbose = config['logging']['level'] == 'DEBUG'
```
```diff
+        self.agent_executor = AgentExecutor(
+            agent=agent, tools=tools, verbose=verbose)
 
     def run(self, input):
```
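Since `create_react_agent` replaces the `ZeroShotAgent`/`LLMChain` setup in this hunk, a self-contained sketch of the generic ReAct wiring may help reviewers: the prompt passed to `create_react_agent` is normally a `PromptTemplate` exposing `{tools}`, `{tool_names}`, `{input}` and `{agent_scratchpad}`. The tool, template text, and model name below are illustrative assumptions, not code from this PR:

```python
from langchain.agents import AgentExecutor
from langchain.agents.react.agent import create_react_agent
from langchain_core.prompts import PromptTemplate
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def word_count(text: str) -> str:
    """Count the number of words in the given text."""
    return str(len(text.split()))


# Generic ReAct template; create_react_agent requires {tools}, {tool_names} and {agent_scratchpad}.
REACT_TEMPLATE = """Answer the question using the tools below.

{tools}

Use this format:
Question: the input question
Thought: reasoning about what to do next
Action: one of [{tool_names}]
Action Input: the input to the action
Observation: the action result
... (Thought/Action/Action Input/Observation can repeat)
Thought: I now know the final answer
Final Answer: the answer

Question: {input}
Thought:{agent_scratchpad}"""

tools = [word_count]
model = ChatOpenAI(model_name="o1-mini")  # model name taken from this PR's new config default
agent = create_react_agent(model, tools, PromptTemplate.from_template(REACT_TEMPLATE))
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
# agent_executor.invoke({"input": "How many words are in 'component code gen'?"})
```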
```diff
@@ -87,15 +77,9 @@ def create_user_prompt(prompt, urls_content):
 
 
 def get_llm():
-    if config['openai_api_type'] == "azure":
-        azure_config = config["azure"]
-        return AzureChatOpenAI(deployment_name=azure_config['deployment_name'],
-                               model_name=azure_config["model"], temperature=config["temperature"], request_timeout=300)
-    else:
-        openai_config = config["openai"]
-        print(f"Using OpenAI API: {openai_config['model']}")
-        return ChatOpenAI(
-            model_name=openai_config["model"], temperature=config["temperature"])
+    openai_config = config["openai"]
+    print(f"Using OpenAI API: {openai_config['model']}")
+    return ChatOpenAI(model_name=openai_config["model"], temperature=1)
 
 
 def ask_agent(prompt, docs, templates, auth_example, parsed_common_files, urls_content):
```
```diff
@@ -111,8 +95,7 @@ def no_docs(prompt, templates, auth_example, parsed_common_files, urls_content,
     pd_instructions = format_template(
         templates.system_instructions(auth_example, parsed_common_files))
 
-    result = get_llm()(messages=[
-        SystemMessage(content="You are the most intelligent software engineer in the world. You carefully provide accurate, factual, thoughtful, nuanced code, and are brilliant at reasoning. Follow all of the instructions below — they are all incredibly important. This code will be shipped directly to production, so it's important that it's accurate and complete."),
+    result = get_llm().invoke([
         HumanMessage(content=user_prompt +
                      pd_instructions if normal_order else pd_instructions+user_prompt),
     ])
```
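The hunk above switches from calling the LLM object directly with `messages=[...]` to the newer `.invoke()` runnable interface. A small, self-contained sketch of that call style (model name and temperature taken from this PR's defaults; requires `OPENAI_API_KEY` at runtime):

```python
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model_name="o1-mini", temperature=1)
result = llm.invoke([HumanMessage(content="Say hello in one short sentence.")])
print(result.content)  # .invoke() returns an AIMessage; the generated text is in .content
```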
**Tip: Codebase Verification**

**Inconsistent usage of the `OPENAI_MODEL` variable with the `o1-preview` value.**

The `OPENAI_MODEL` variable is currently set with a default value of `o1-mini` in `packages/component_code_gen/config/config.py`, while `o1-preview` is only mentioned in a comment in `packages/component_code_gen/helpers/langchain_helpers.py`. Please review the codebase to ensure that `o1-preview` is correctly set as the model value where intended.

- `packages/component_code_gen/config/config.py`: default value is `o1-mini`.
- `packages/component_code_gen/helpers/langchain_helpers.py`: comment mentions `o1-preview`.

**Analysis chain**

**Verify the usage of the updated `OPENAI_MODEL` variable in the codebase.**

The `OPENAI_MODEL` variable has been updated from `gpt-4-32k` to `o1-preview`, indicating a transition to a different model version or type. Ensure that all references to this variable in the codebase have been updated to use the new model.

Run the following script to verify the usage of the updated variable:

**Scripts executed**

The following scripts were executed for the analysis:

Script:

Length of output: 62

Script:

Length of output: 316