Merged
5 changes: 1 addition & 4 deletions packages/component_code_gen/.env.example
@@ -3,9 +3,6 @@ BROWSERLESS_API_KEY=your-browserless-api-key
SUPABASE_URL=https://your-supabase-url.supabase.co
SUPABASE_API_KEY=your-supabase-service-role-key

OPENAI_API_TYPE=azure
OPENAI_DEPLOYMENT_NAME=deployment-name
OPENAI_API_BASE=https://resource-name.openai.azure.com
OPENAI_API_KEY=azure-api-key
OPENAI_MODEL=gpt-4-32k
OPENAI_MODEL=o1-preview
Codebase Verification

Inconsistent usage of the OPENAI_MODEL variable with the o1-preview value.

The OPENAI_MODEL variable currently defaults to o1-mini in packages/component_code_gen/config/config.py, while o1-preview is only mentioned in a comment in packages/component_code_gen/helpers/langchain_helpers.py. Please review the codebase to ensure that o1-preview is set as the model value wherever it is intended; one possible alignment is sketched after the list below.

  • packages/component_code_gen/config/config.py: Default value is o1-mini.
  • packages/component_code_gen/helpers/langchain_helpers.py: Comment mentions o1-preview.
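
If o1-preview is indeed the intended default, a minimal sketch of the alignment is shown below, assuming the config.py default should match the value now in .env.example. The get_env_var stand-in is hypothetical (the real helper is defined in config/config.py but its body is not shown in this diff), and the chosen default is an assumption for the maintainers to confirm.

```python
import os

# Hypothetical stand-in for the repo's get_env_var helper; only the
# default-handling behavior matters for this sketch.
def get_env_var(var_name, required=False, default=None):
    value = os.environ.get(var_name, default)
    if required and value is None:
        raise ValueError(f"{var_name} is required")
    return value

# Assumption: align the config.py default with the value used in .env.example.
openai_model = get_env_var("OPENAI_MODEL", default="o1-preview")
print(openai_model)  # prints "o1-preview" when OPENAI_MODEL is unset
```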
Analysis chain

Verify the usage of the updated OPENAI_MODEL variable in the codebase.

The OPENAI_MODEL variable has been updated from gpt-4-32k to o1-preview, indicating a transition to a different model version or type. Ensure that all references to this variable in the codebase have been updated to use the new model.

Run the following script to verify the usage of the updated variable:

Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify that the updated `OPENAI_MODEL` variable is used correctly in the codebase.

# Test: Search for the usage of `OPENAI_MODEL`. Expect: Occurrences of `o1-preview`.
rg --type python $'OPENAI_MODEL'

Length of output: 62


Script:

#!/bin/bash
# Description: Verify that the updated `OPENAI_MODEL` variable is used correctly in the codebase.

# Search for the usage of `OPENAI_MODEL` and the specific value `o1-preview`.
rg --type py 'OPENAI_MODEL'
rg --type py 'o1-preview'

Length of output: 316

OPENAI_EMBEDDINGS_MODEL=text-embedding-3-small
10 changes: 1 addition & 9 deletions packages/component_code_gen/config/config.py
@@ -22,15 +22,7 @@ def get_env_var(var_name, required=False, default=None):
"openai_embeddings_model": openai_embeddings_model,
"openai": {
"api_key": get_env_var("OPENAI_API_KEY", required=openai_api_type == "openai"),
"model": get_env_var("OPENAI_MODEL", default="gpt-4-0125-preview"),
},
"azure": {
"deployment_name": get_env_var("OPENAI_DEPLOYMENT_NAME", required=openai_api_type == "azure"),
"embeddings_deployment_name": get_env_var("OPENAI_EMBEDDINGS_DEPLOYMENT_NAME", required=openai_api_type == "azure"),
"api_version": get_env_var("OPENAI_API_VERSION", default="2024-03-01-preview"),
"api_base": get_env_var("OPENAI_API_BASE", required=openai_api_type == "azure"),
"api_key": get_env_var("OPENAI_API_KEY", required=openai_api_type == "azure"),
"model": get_env_var("OPENAI_MODEL", default="gpt-4-32k"),
"model": get_env_var("OPENAI_MODEL", default="o1-mini"),
Codebase Verification

Update the README.md to reflect the removal of the Azure configuration.

The Azure configuration is still mentioned in the README.md, which might lead to confusion if it is no longer relevant. Please update the documentation to align with the current state of the codebase.

Additionally, confirm that the removal of the Azure configuration from the codebase is intentional and does not affect any functionality.

  • README.md: Contains references to the removed Azure configuration.
Analysis chain

Verify the impact of the model change and the removal of the Azure configuration.

The change in the default OpenAI model from "gpt-4-0125-preview" to "o1-mini" could have a significant impact on the performance and capabilities of the application. Please ensure that this change has been thoroughly tested and validated.

Additionally, the removal of the Azure configuration suggests a move away from Azure's services or a refactoring of how these services are integrated. Please confirm that this removal is intentional and does not break any existing functionality.

Run the following script to verify the impact of the changes:

Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the impact of the model change and the removal of the Azure configuration.

# Test 1: Search for usages of the old model. Expect: No occurrences.
echo "Searching for usages of the old model..."
rg --type python $'gpt-4-0125-preview'

# Test 2: Search for usages of the new model. Expect: Only occurrences in test files or mocks.
echo "Searching for usages of the new model..."
rg --type python $'o1-mini'

# Test 3: Search for usages of the removed Azure configuration. Expect: No occurrences.
echo "Searching for usages of the removed Azure configuration..."
rg --type python $'OPENAI_DEPLOYMENT_NAME|OPENAI_EMBEDDINGS_DEPLOYMENT_NAME|OPENAI_API_VERSION|OPENAI_API_BASE'

Length of output: 579


Script:

#!/bin/bash
# Description: Verify the impact of the model change and the removal of the Azure configuration.

# Test 1: Search for usages of the old model. Expect: No occurrences.
echo "Searching for usages of the old model..."
rg 'gpt-4-0125-preview'

# Test 2: Search for usages of the new model. Expect: Only occurrences in test files or mocks.
echo "Searching for usages of the new model..."
rg 'o1-mini'

# Test 3: Search for usages of the removed Azure configuration. Expect: No occurrences.
echo "Searching for usages of the removed Azure configuration..."
rg 'OPENAI_DEPLOYMENT_NAME|OPENAI_EMBEDDINGS_DEPLOYMENT_NAME|OPENAI_API_VERSION|OPENAI_API_BASE'

Length of output: 911

},
"browserless": {
"api_key": get_env_var("BROWSERLESS_API_KEY"),
55 changes: 19 additions & 36 deletions packages/component_code_gen/helpers/langchain_helpers.py
@@ -1,19 +1,16 @@
from templates.common.suffix import suffix
from templates.common.format_instructions import format_instructions
from templates.common.docs_system_instructions import docs_system_instructions
from langchain.schema import (
# AIMessage,
HumanMessage,
SystemMessage
)
from langchain.tools.json.tool import JsonSpec
from langchain.agents.agent_toolkits.json.toolkit import JsonToolkit
from langchain.chat_models import ChatOpenAI, AzureChatOpenAI
from langchain.llms.openai import OpenAI
from langchain.agents import create_json_agent, ZeroShotAgent, AgentExecutor
from langchain.schema import HumanMessage
from langchain.agents.react.agent import create_react_agent
from langchain_community.agent_toolkits import JsonToolkit, create_json_agent
from langchain_community.tools.json.tool import JsonSpec
Comment on lines +4 to +7

Approve relevant imports but remove the unused import.

The new imports from the langchain and langchain_community modules are relevant to the changes made in this file.

However, the static analysis tool has correctly identified that create_json_agent is imported but unused.

Remove the unused import:

-from langchain_community.agent_toolkits import JsonToolkit, create_json_agent
+from langchain_community.agent_toolkits import JsonToolkit
Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 from langchain.schema import HumanMessage
 from langchain.agents.react.agent import create_react_agent
-from langchain_community.agent_toolkits import JsonToolkit, create_json_agent
+from langchain_community.agent_toolkits import JsonToolkit
 from langchain_community.tools.json.tool import JsonSpec
Tools
Ruff

6-6: langchain_community.agent_toolkits.create_json_agent imported but unused

Remove unused import: langchain_community.agent_toolkits.create_json_agent

(F401)


import openai
from langchain_openai.chat_models.base import ChatOpenAI
Comment on lines +9 to +10

Remove the unused import.

The static analysis tool has correctly identified that openai is imported but unused.

Remove the unused import:

-import openai
Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-import openai
 from langchain_openai.chat_models.base import ChatOpenAI
Tools
Ruff

9-9: openai imported but unused

Remove unused import: openai

(F401)

from langchain.agents import ZeroShotAgent, AgentExecutor
from langchain.chains import LLMChain
from config.config import config
import openai # required
from dotenv import load_dotenv
load_dotenv()

@@ -32,22 +29,15 @@ def __init__(self, docs, templates, auth_example, parsed_common_files):
system_instructions = format_template(
f"{templates.system_instructions(auth_example, parsed_common_files)}\n{docs_system_instructions}")

model = ChatOpenAI(model_name=config['openai']['model'])
tools = OpenAPIExplorerTool.create_tools(docs)
tool_names = [tool.name for tool in tools]

prompt_template = ZeroShotAgent.create_prompt(
tools=tools,
prefix=system_instructions,
suffix=suffix,
format_instructions=format_instructions,
input_variables=['input', 'agent_scratchpad']
)

llm_chain = LLMChain(llm=get_llm(), prompt=prompt_template)
agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names)
verbose = True if config['logging']['level'] == 'DEBUG' else False

self.agent_executor = AgentExecutor.from_agent_and_tools(
# o1-preview doesn't support system instruction, so we just concatenate into the prompt
prompt = f"{system_instructions}\n\n{format_instructions}"

agent = create_react_agent(model, tools, prompt)
verbose = True if config['logging']['level'] == 'DEBUG' else False

Simplify the unnecessary expression.

The static analysis tool has correctly identified that the True if ... else False expression is unnecessary.

Simplify the expression like this:

-verbose = True if config['logging']['level'] == 'DEBUG' else False
+verbose = config['logging']['level'] == 'DEBUG'
Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-verbose = True if config['logging']['level'] == 'DEBUG' else False
+verbose = config['logging']['level'] == 'DEBUG'
Tools
Ruff

39-39: Remove unnecessary True if ... else False

Remove unnecessary True if ... else False

(SIM210)

self.agent_executor = AgentExecutor(
agent=agent, tools=tools, verbose=verbose)

def run(self, input):
@@ -87,15 +77,9 @@ def create_user_prompt(prompt, urls_content):


def get_llm():
if config['openai_api_type'] == "azure":
azure_config = config["azure"]
return AzureChatOpenAI(deployment_name=azure_config['deployment_name'],
model_name=azure_config["model"], temperature=config["temperature"], request_timeout=300)
else:
openai_config = config["openai"]
print(f"Using OpenAI API: {openai_config['model']}")
return ChatOpenAI(
model_name=openai_config["model"], temperature=config["temperature"])
openai_config = config["openai"]
print(f"Using OpenAI API: {openai_config['model']}")
return ChatOpenAI(model_name=openai_config["model"], temperature=1)


def ask_agent(prompt, docs, templates, auth_example, parsed_common_files, urls_content):
@@ -111,8 +95,7 @@ def no_docs(prompt, templates, auth_example, parsed_common_files, urls_content,
pd_instructions = format_template(
templates.system_instructions(auth_example, parsed_common_files))

result = get_llm()(messages=[
SystemMessage(content="You are the most intelligent software engineer in the world. You carefully provide accurate, factual, thoughtful, nuanced code, and are brilliant at reasoning. Follow all of the instructions below — they are all incredibly important. This code will be shipped directly to production, so it's important that it's accurate and complete."),
result = get_llm().invoke([
HumanMessage(content=user_prompt +
pd_instructions if normal_order else pd_instructions+user_prompt),
])
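
For reference, a minimal standalone sketch of the prompt pattern this PR adopts for o1-preview: system-style instructions are folded into a single human message because the model does not accept system messages. The imports mirror the ones added in this diff; the model name, instruction text, and prompt text are illustrative assumptions, not values taken from the repo's templates.

```python
from langchain_openai.chat_models.base import ChatOpenAI
from langchain.schema import HumanMessage

# Illustrative placeholders, not the repo's real templates.
system_instructions = "You are a careful software engineer generating a Pipedream component."
user_prompt = "Write an action component for a hypothetical Example API."

# Temperature is pinned to 1, matching get_llm() in this diff.
llm = ChatOpenAI(model_name="o1-preview", temperature=1)

# No SystemMessage: the instructions are concatenated into the HumanMessage,
# as langchain_helpers.py now does.
result = llm.invoke([HumanMessage(content=f"{system_instructions}\n\n{user_prompt}")])
print(result.content)
```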