
letta-code cli failed to call local letta server with Ollama only #1101

@ansidev

Description


Reproduction steps

  1. OS: macOS
  2. Set up the Letta server:
docker run \
  -d \
  --name letta-server \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  -e OLLAMA_BASE_URL="http://host.docker.internal:{{.OLLAMA_PORT}}/v1" \
  letta/letta:latest
  3. Run the letta CLI:
LETTA_BASE_URL="http://localhost:8283" letta
  4. Verify that the Ollama models are listed in the letta CLI using /model.
  5. Send any message.
  6. Server response:
⚠ {
    "error": {
      "error": {
        "message_type": "error_message",
        "run_id": "run-f2c55435-eaa9-4b5d-b2f9-30f7a9ca052b",
        "error_type": "llm_error",
        "message": "An error occurred with the LLM request.",
        "detail": "Unhandled LLM error: Invalid port: 'http:'",
        "seq_id": null
      },
      "run_id": "run-f2c55435-eaa9-4b5d-b2f9-30f7a9ca052b"
    }
  }

⚠ Downstream provider issues? Use /model to switch to another provider
  7. Server logs:
2026-02-23 07:29:26.879 UTC [40] LOG:  checkpoint starting: time
2026-02-23 07:29:30.232 UTC [40] LOG:  checkpoint complete: wrote 35 buffers (0.2%); 0 WAL file(s) added, 0 removed, 0 recycled; write=3.338 s, sync=0.008 s, total=3.354 s; sync files=25, longest=0.002 s, average=0.001 s; distance=99 kB, estimate=180 kB
Letta.agent-fd44f092-caaf-45c5-b76e-bf63e4980701 - WARNING - Context token estimate is not set
Letta.letta.orm.sqlalchemy_base - WARNING - SECURITY: Listing org-scoped model Step without actor. This bypasses organization filtering.
Letta.agent-fd44f092-caaf-45c5-b76e-bf63e4980701 - WARNING - Error during step processing: Unhandled LLM error: Invalid port: 'http:'
Letta.agent-fd44f092-caaf-45c5-b76e-bf63e4980701 - INFO - Running final update. Step Progression: StepProgression.START
Letta.agent-fd44f092-caaf-45c5-b76e-bf63e4980701 - WARNING - Error during agent stream: Unhandled LLM error: Invalid port: 'http:'
Traceback (most recent call last):
  File "/app/.venv/lib/python3.11/site-packages/httpx/_urlparse.py", line 409, in normalize_port
    port_as_int = int(port)
                  ^^^^^^^^^
ValueError: invalid literal for int() with base 10: 'http:'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/letta/adapters/simple_llm_stream_adapter.py", line 139, in invoke_llm
    stream = await self.llm_client.stream_async(request_data, self.llm_config)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/letta/otel/tracing.py", line 393, in async_wrapper
    return await func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/letta/llm_api/openai_client.py", line 793, in stream_async
    client = AsyncOpenAI(**kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.11/site-packages/openai/_client.py", line 517, in __init__
    super().__init__(
  File "/app/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 1399, in __init__
    super().__init__(
  File "/app/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 383, in __init__
    self._base_url = self._enforce_trailing_slash(URL(base_url))
                                                  ^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.11/site-packages/httpx/_urls.py", line 117, in __init__
    self._uri_reference = urlparse(url, **kwargs)
                          ^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.11/site-packages/httpx/_urlparse.py", line 321, in urlparse
    parsed_port: int | None = normalize_port(port, scheme)
                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.11/site-packages/httpx/_urlparse.py", line 411, in normalize_port
    raise InvalidURL(f"Invalid port: {port!r}")
httpx.InvalidURL: Invalid port: 'http:'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/letta/agents/letta_agent_v3.py", line 377, in stream
    async for chunk in response:
  File "/app/letta/agents/letta_agent_v3.py", line 960, in _step
    raise e
  File "/app/letta/agents/letta_agent_v3.py", line 786, in _step
    raise e
  File "/app/letta/agents/letta_agent_v3.py", line 774, in _step
    async for chunk in invocation:
  File "/app/letta/adapters/simple_llm_stream_adapter.py", line 141, in invoke_llm
    raise self.llm_client.handle_llm_error(e)
letta.errors.LLMError: Unhandled LLM error: Invalid port: 'http:'
Letta.letta.services.streaming_service - ERROR - Run run-3830b891-7439-407b-a5d6-04b31cc6f114 stopped with LLM error: Unhandled LLM error: Invalid port: 'http:', error_data: {'message_type': 'error_message', 'run_id': 'run-3830b891-7439-407b-a5d6-04b31cc6f114', 'error_type': 'llm_error', 'message': 'An error occurred with the LLM request.', 'detail': "Unhandled LLM error: Invalid port: 'http:'", 'seq_id': None}
Letta.letta.services.run_manager - WARNING - Run run-3830b891-7439-407b-a5d6-04b31cc6f114 completed without a completed_at timestamp
Letta.letta.services.run_manager - ERROR - Run run-3830b891-7439-407b-a5d6-04b31cc6f114 is already in a terminal state failed with stop reason llm_api_error, but is being updated with data {'status': <RunStatus.failed: 'failed'>, 'completed_at': None, 'stop_reason': <StopReasonType.llm_api_error: 'llm_api_error'>, 'metadata': {'error': {'message_type': 'error_message', 'run_id': 'run-3830b891-7439-407b-a5d6-04b31cc6f114', 'error_type': 'llm_error', 'message': 'An error occurred with the LLM request.', 'detail': "Unhandled LLM error: Invalid port: 'http:'", 'seq_id': None}}, 'total_duration_ns': None}
Letta.letta.orm.sqlalchemy_base - WARNING - SECURITY: Listing org-scoped model Step without actor. This bypasses organization filtering.
Letta.letta.orm.sqlalchemy_base - WARNING - SECURITY: Listing org-scoped model Step without actor. This bypasses organization filtering.
Letta.letta.server.rest_api.redis_stream_manager - WARNING - [Stream Finalizer] Appending forced [DONE] for run=run-3830b891-7439-407b-a5d6-04b31cc6f114 (saw_error=True, saw_done=False, final_stop_reason=llm_api_error)
(The same traceback repeats three more times, for runs run-227feb5f-270e-47ff-80ff-f9ca83d3494d, run-787560bf-792c-4bad-97ab-417faa5e9b69, and run-f2c55435-eaa9-4b5d-b2f9-30f7a9ca052b.)
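The tracebacks all bottom out in URL port parsing: httpx raises `Invalid port: 'http:'` when the characters after the host's colon are not a valid integer. A minimal stdlib sketch reproducing the same failure mode — the concatenated base URL below is purely illustrative of how `'http:'` can land in the port position (for example, a scheme-qualified URL appended after `host:`), not confirmed to be what the server builds internally:

```python
from urllib.parse import urlsplit

# Illustrative malformed base URL: a scheme-qualified URL ends up where a
# numeric port should be, e.g. via string concatenation of host and URL.
bad = "http://host.docker.internal:http://host.docker.internal:11434/v1"

parts = urlsplit(bad)
print(parts.hostname)  # host.docker.internal
try:
    parts.port  # the netloc text after the host's colon is "http:", not a number
except ValueError as exc:
    print(exc)  # e.g. "Port could not be cast to integer value as 'http:'"
```

This mirrors httpx's `normalize_port`, which does `int(port)` on that same substring and wraps the `ValueError` in `InvalidURL`, exactly as shown in the tracebacks above.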

Observations

The server log indicates that an OpenAI client was initialized.
This is unexpected, since only the Ollama provider is enabled on my local Letta server.
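One practical check while the root cause is open: validate the URL that `{{.OLLAMA_PORT}}` expands to before starting the container, so a malformed port fails fast instead of surfacing as an LLM error at request time. A small stdlib sketch (`check_base_url` is a hypothetical helper for illustration, not part of Letta):

```python
from urllib.parse import urlsplit

def check_base_url(url: str) -> None:
    """Fail fast if url is missing a scheme/host or has a non-numeric port."""
    parts = urlsplit(url)
    if parts.scheme not in ("http", "https"):
        raise ValueError(f"unexpected scheme in {url!r}")
    if not parts.hostname:
        raise ValueError(f"missing hostname in {url!r}")
    parts.port  # accessing .port raises ValueError when the port text is not an integer

check_base_url("http://host.docker.internal:11434/v1")  # passes for the default Ollama port
```

A value like `http://host.docker.internal:http://host.docker.internal:11434/v1` would be rejected here with a `ValueError` instead of reaching the OpenAI client constructor.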
