Conversation

sunnymodi21 commented Oct 30, 2025

This PR upgrades the browser-use library from v0.1.48 to v0.9.4, which drops the Playwright dependency, and makes the necessary compatibility updates.


Summary by cubic

Upgrade browser-use to v0.9.4 and migrate from Playwright to the new CDP-based API. This removes Playwright, switches to system Chromium, and updates agents, controller, WebUI, and tests.

  • Refactors

    • Replace Playwright with browser-use CDP APIs and remove Playwright setup/tests.
    • Add a small compatibility layer (browser_compat.py) to bridge legacy Browser/Context/Controller calls (a rough sketch follows this summary).
    • Update agents to new types (Agent, BrowserProfile, BrowserSession) and refactor LLM usage to browser_use.llm.
    • Replace custom MCP client with browser_use.mcp; add an optional MCP server process in supervisord.
    • Simplify Docker: install Chromium + driver, set CHROME_BIN, add dbus, remove Playwright-specific env.
    • Clean up WebUI components and tests to match the new API.
    • Dependencies: browser-use 0.9.4, gradio 5.49.1, json-repair 0.49.0; remove unused langchain adapters.
  • Migration

    • Ensure Chromium is available and CHROME_BIN is set (the Docker image does this; host installs should do the same).
    • Remove Playwright installs and env (PLAYWRIGHT_BROWSERS_PATH, playwright install).
    • Use browser_use MCP integration; custom MCP setup/cleanup is no longer needed in tests.
    • README updated: skip Playwright step; run WebUI as before. Optional MCP server runs on port 3000 via supervisord.

Written for commit 425f0f3. Summary will update automatically on new commits.
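
As referenced in the Refactors list above, here is a rough sketch of the kind of bridge a compatibility layer like browser_compat.py can provide. The mapping and the BrowserProfile/BrowserSession usage below are assumptions for illustration, not the file's actual contents:

```python
# Hypothetical sketch of a legacy Browser facade over the new browser-use API.
# Constructor arguments and attribute names are assumptions, not the real shim.
from browser_use import BrowserProfile, BrowserSession


class Browser:
    """Accepts legacy-style config objects and exposes a new-API session."""

    def __init__(self, config=None, **kwargs):
        # Translate the legacy config into a BrowserProfile, passing a
        # pydantic-style dump straight through.
        opts = config.model_dump(exclude_none=True) if config is not None else {}
        self.profile = BrowserProfile(**opts)
        self.session = BrowserSession(browser_profile=self.profile)
```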

…ility with version 0.6.0

- Updated `requirements.txt` to reflect new versions of dependencies, including `browser-use`, `gradio`, `json-repair`, and `langchain-mistralai`.
- Refactored `browser_use_agent.py` to remove deprecated methods and simplify logic.
- Updated `deep_research_agent.py` to use `BrowserProfile` instead of `BrowserConfig`.
- Introduced a compatibility layer in `browser_compat.py` to support legacy API calls.
- Adjusted `custom_browser.py` and `custom_controller.py` to align with the new `browser_use` structure.
- Modified web UI components to utilize the updated `Agent` and `BrowserSession` classes.
- Removed unused imports and commented-out code to clean up the codebase.
…mports across multiple files for improved compatibility. Removed deprecated imports and adjusted code to align with the new structure of `browser_use`.
… `gradio==5.49.1`, and modify model names in `config.py` for the `anthropic` provider to reflect new versions.
- Resolved Dockerfile conflicts by adopting upstream Playwright browser setup
- Updated docker-compose.yml from upstream
- Includes Docker build fixes from upstream commits

CLAassistant commented Oct 30, 2025

CLA assistant check
All committers have signed the CLA.

cubic-dev-ai bot (Contributor) left a comment

9 issues found across 23 files

Prompt for AI agents (all 9 issues)

Understand the root cause of the following 9 issues and fix them.


<file name="src/webui/components/browser_use_agent_tab.py">

<violation number="1" location="src/webui/components/browser_use_agent_tab.py:487">
CustomController is created without ask_assistant_callback, so the agent can no longer prompt the user for help. Please pass ask_callback_wrapper when constructing the controller.</violation>
</file>
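
A minimal sketch of the fix suggested above, assuming CustomController accepts an ask_assistant_callback argument and that ask_callback_wrapper is already defined where the controller is built:

```python
# Hypothetical wiring; the parameter name follows the reviewer's comment and is
# assumed to match CustomController's actual constructor signature.
from src.controller.custom_controller import CustomController  # assumed module path

controller = CustomController(ask_assistant_callback=ask_callback_wrapper)
```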

<file name="README.md">

<violation number="1" location="README.md:58">
Installing required Python packages was removed from the setup steps. Please restore an install step (e.g., `uv pip install -r requirements.txt`) before telling users to configure the environment; otherwise the WebUI run command will fail.</violation>
</file>

<file name="src/webui/components/deep_research_agent_tab.py">

<violation number="1" location="src/webui/components/deep_research_agent_tab.py:187">
Switching to `DeepResearchAgent.research()` makes the awaited result a string path, but the downstream code still treats it like a dict (e.g., calls `.keys()`), so the run now crashes after completion. Please adapt this section to the new return type.</violation>
</file>
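
A hedged sketch of adapting the tab code to the new return type, assuming research() now resolves to the filesystem path of the generated report rather than a dict:

```python
# Illustrative only: the awaited result is treated as a path string (per the
# comment above); variable and parameter names are assumptions.
report_path = await agent.research(task)

with open(report_path, "r", encoding="utf-8") as f:
    report_markdown = f.read()

# Downstream UI code should render report_markdown / surface report_path
# instead of calling dict methods such as .keys() on the result.
```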

<file name="src/browser/custom_browser.py">

<violation number="1" location="src/browser/custom_browser.py:8">
Replacing the chrome args import with a placeholder comment leaves CHROME_ARGS and related helpers undefined, so `_setup_builtin_browser` will raise a NameError the first time it runs. Please restore the proper imports from the new module path.</violation>
</file>
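
Since the diff does not show where those constants live in browser-use 0.9.x, one defensive option is an import with a local fallback; the module path tried below is only a guess, not a confirmed API:

```python
# Defensive restoration of the chrome-args constants. The import path is an
# assumption about browser-use 0.9.x; the fallback keeps _setup_builtin_browser
# from raising NameError if the guess is wrong.
try:
    from browser_use.browser.chrome import CHROME_ARGS, CHROME_HEADLESS_ARGS  # guessed path
except ImportError:
    CHROME_ARGS = ["--no-first-run", "--no-default-browser-check"]
    CHROME_HEADLESS_ARGS = ["--headless=new"]
```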

<file name="tests/test_controller.py">

<violation number="1" location="tests/test_controller.py:15">
This test now just prints a message and returns, so it no longer exercises controller MCP behavior and will always pass. Remove the test or mark it skipped instead of silently disabling it.</violation>
</file>
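
If the controller MCP flow genuinely cannot run against the new API yet, marking the test as skipped keeps that visible in the report instead of producing a silent pass; a minimal sketch (the test name is a placeholder):

```python
import pytest


@pytest.mark.skip(reason="Controller MCP flow not yet ported to browser-use 0.9.x")
def test_controller_mcp():  # name is illustrative, not the real test
    ...
```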

<file name="src/agent/deep_research/deep_research_agent.py">

<violation number="1" location="src/agent/deep_research/deep_research_agent.py:184">
`stop()` can no longer cancel an active research task because the event created here is not stored on the agent, so the cancellation signal is never delivered.</violation>
</file>
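
One common asyncio pattern for the stop() issue: keep the cancellation event on the instance so stop() signals the same object the research loop is checking. The class below is a standalone sketch, not the agent's real code:

```python
import asyncio


class ResearchRunner:
    """Sketch of the cancellation pattern only; not DeepResearchAgent itself."""

    def __init__(self) -> None:
        self._stop_event: asyncio.Event | None = None

    async def research(self, steps: int = 100) -> str:
        self._stop_event = asyncio.Event()  # stored so stop() can reach it
        for _ in range(steps):
            if self._stop_event.is_set():
                break  # cancellation delivered between steps
            await asyncio.sleep(0.01)  # stand-in for one research step
        return "report.md"

    async def stop(self) -> None:
        if self._stop_event is not None:
            self._stop_event.set()  # works because the stored event is the one checked
```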

<file name="src/utils/llm_provider.py">

<violation number="1" location="src/utils/llm_provider.py:134">
Requests for the existing “mistral” and “ibm” providers now fall through to this new ValueError, so selecting them will crash instead of returning an LLM client. Please restore handling for these providers (or remove them from the configuration/UI).</violation>
</file>
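
A hedged sketch of keeping an explicit mistral branch ahead of the catch-all error, using the langchain-mistralai package already listed in requirements.txt (the ibm case is handled via extra headers in a later comment); argument names here are illustrative:

```python
import os

from langchain_mistralai import ChatMistralAI


def get_llm_model(provider: str, **kwargs):
    if provider == "mistral":
        return ChatMistralAI(
            model=kwargs.get("model_name", "mistral-large-latest"),
            api_key=kwargs.get("api_key") or os.getenv("MISTRAL_API_KEY"),
            temperature=kwargs.get("temperature", 0.0),
        )
    # ... other provider branches ...
    raise ValueError(f"Unsupported provider: {provider}")
```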

<file name="tests/test_llm_api.py">

<violation number="1" location="tests/test_llm_api.py:71">
This change drops the encoded image content when image_path is provided—vision tests like test_openai_model now send text-only prompts, so the image is ignored. Restore the create_message_content usage (or equivalent) so image payloads are still attached.</violation>

<violation number="2" location="tests/test_llm_api.py:74">
llm.ainvoke returns a coroutine; without awaiting it this test now treats an unexecuted coroutine as the response, so the model call never runs and the printed output is just the coroutine object. Please await the call or use the synchronous invoke variant.</violation>
</file>
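
A small sketch of the await fix, assuming llm exposes the usual async ainvoke / sync invoke pair:

```python
import asyncio

# Either await inside an async test ...
async def run_vision_test_async(llm, messages):
    response = await llm.ainvoke(messages)  # actually executes the model call
    print(response)

# ... or drive the coroutine from synchronous test code.
def run_vision_test_sync(llm, messages):
    response = asyncio.run(llm.ainvoke(messages))
    print(response)
```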

Since this is your first cubic review, here's how it works:

  • cubic automatically reviews your code and comments on bugs and improvements
  • Teach cubic by replying to its comments. cubic learns from your replies and gets better over time
  • Ask questions if you need clarification on any suggestion

React with 👍 or 👎 to teach cubic. Mention @cubic-dev-ai to give feedback, ask questions, or re-run the review.

cubic-dev-ai bot (Contributor) left a comment
No issues found across 1 file

cubic-dev-ai bot (Contributor) left a comment
No issues found across 1 file

cubic-dev-ai bot (Contributor) left a comment
Reviewed changes from recent commits (found 2 issues).

2 issues found across 7 files

Prompt for AI agents (all 2 issues)

Understand the root cause of the following 2 issues and fix them.


<file name="src/browser/browser_compat.py">

<violation number="1" location="src/browser/browser_compat.py:49">
`model_dump` here should accept the same keyword arguments as Pydantic's API; otherwise calls like `model_dump(exclude_none=True)` will raise `TypeError`, defeating the compatibility shim.</violation>
</file>

<file name="src/utils/llm_provider.py">

<violation number="1" location="src/utils/llm_provider.py:134">
IBM’s endpoint rejects requests without a project_id/space_id header. Please fail fast if IBM_PROJECT_ID (or project_id kwarg) is missing so we don’t instantiate a broken client.</violation>
</file>


src/browser/browser_compat.py (review context):

        self.save_downloads_path = save_downloads_path
        self._extra = kwargs

    def model_dump(self) -> Dict[str, Any]:
cubic-dev-ai bot commented Nov 3, 2025
model_dump here should accept the same keyword arguments as Pydantic's API; otherwise calls like model_dump(exclude_none=True) will raise TypeError, defeating the compatibility shim.

Prompt for AI agents
Address the following comment on src/browser/browser_compat.py at line 49:

<comment>`model_dump` here should accept the same keyword arguments as Pydantic's API; otherwise calls like `model_dump(exclude_none=True)` will raise `TypeError`, defeating the compatibility shim.</comment>

<file context>
@@ -45,6 +45,17 @@ def __init__(self,
         self.save_downloads_path = save_downloads_path
         self._extra = kwargs
+    
+    def model_dump(self) -> Dict[str, Any]:
+        """Compatibility method for pydantic model_dump"""
+        return {
</file context>
Suggested change:
-    def model_dump(self) -> Dict[str, Any]:
+    def model_dump(self, **kwargs) -> Dict[str, Any]:
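
A minimal sketch of the suggested method body, with an assumed handling of exclude_none so pydantic-style callers keep working (field names follow the context above):

```python
from typing import Any, Dict


def model_dump(self, **kwargs: Any) -> Dict[str, Any]:
    """Compatibility method for pydantic model_dump; tolerates pydantic-style kwargs."""
    data = {"save_downloads_path": self.save_downloads_path, **self._extra}
    if kwargs.get("exclude_none"):
        data = {k: v for k, v in data.items() if v is not None}
    return data
```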

src/utils/llm_provider.py (review context):

        extra_headers = {}
        if provider == "ibm":
            project_id = kwargs.get("project_id") or os.getenv("IBM_PROJECT_ID")
            if project_id:
cubic-dev-ai bot commented Nov 3, 2025
IBM’s endpoint rejects requests without a project_id/space_id header. Please fail fast if IBM_PROJECT_ID (or project_id kwarg) is missing so we don’t instantiate a broken client.

Prompt for AI agents
Address the following comment on src/utils/llm_provider.py at line 134:

<comment>IBM’s endpoint rejects requests without a project_id/space_id header. Please fail fast if IBM_PROJECT_ID (or project_id kwarg) is missing so we don’t instantiate a broken client.</comment>

<file context>
@@ -116,19 +118,29 @@ def get_llm_model(provider: str, **kwargs) -> BaseChatModel:
+        extra_headers = {}
+        if provider == "ibm":
+            project_id = kwargs.get("project_id") or os.getenv("IBM_PROJECT_ID")
+            if project_id:
+                extra_headers["X-Project-ID"] = project_id
             
</file context>
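A hedged sketch of the fail-fast guard the reviewer asks for, raising before any client is built when neither the kwarg nor IBM_PROJECT_ID is present (provider and kwargs come from the surrounding get_llm_model scope):

```python
import os

# Sketch of the suggested guard; variable names follow the diff context above.
extra_headers = {}
if provider == "ibm":
    project_id = kwargs.get("project_id") or os.getenv("IBM_PROJECT_ID")
    if not project_id:
        raise ValueError(
            "IBM provider selected but no project id configured: "
            "pass project_id=... or set IBM_PROJECT_ID"
        )
    extra_headers["X-Project-ID"] = project_id
```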
