Initial Checks
- I confirm that I'm using the latest version of Pydantic AI
- I confirm that I searched for my issue in https://github.com/pydantic/pydantic-ai/issues before opening this issue
Description
I tried running the example from https://ai.pydantic.dev/models/openai/#ollama in a Jupyter notebook.
It fails with the following error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[5], line 19
13 ollama_model = OpenAIChatModel(
14 model_name='qwen3',
15 provider=OllamaProvider(base_url='http://localhost:11434/v1'),
16 )
17 agent = Agent(ollama_model, output_type=CityLocation)
---> 19 result = agent.run_sync('Where were the olympics held in 2012?')
20 print(result.output)
21 #> city='London' country='United Kingdom'
File c:\Users\Krystof\AppData\Local\Programs\Python\Python313\Lib\site-packages\pydantic_ai\agent\abstract.py:317, in AbstractAgent.run_sync(self, user_prompt, output_type, message_history, deferred_tool_results, model, deps, model_settings, usage_limits, usage, infer_name, toolsets, event_stream_handler)
314 if infer_name and self.name is None:
315 self._infer_name(inspect.currentframe())
--> 317 return get_event_loop().run_until_complete(
318 self.run(
319 user_prompt,
320 output_type=output_type,
321 message_history=message_history,
322 deferred_tool_results=deferred_tool_results,
323 model=model,
324 deps=deps,
325 model_settings=model_settings,
326 usage_limits=usage_limits,
327 usage=usage,
328 infer_name=False,
329 toolsets=toolsets,
330 event_stream_handler=event_stream_handler,
331 )
332 )
File c:\Users\Krystof\AppData\Local\Programs\Python\Python313\Lib\asyncio\base_events.py:696, in BaseEventLoop.run_until_complete(self, future)
685 """Run until the Future is done.
686
687 If the argument is a coroutine, it is wrapped in a Task.
(...) 693 Return the Future's result, or raise its exception.
694 """
695 self._check_closed()
--> 696 self._check_running()
698 new_task = not futures.isfuture(future)
699 future = tasks.ensure_future(future, loop=self)
File c:\Users\Krystof\AppData\Local\Programs\Python\Python313\Lib\asyncio\base_events.py:632, in BaseEventLoop._check_running(self)
630 def _check_running(self):
631 if self.is_running():
--> 632 raise RuntimeError('This event loop is already running')
633 if events._get_running_loop() is not None:
634 raise RuntimeError(
635 'Cannot run the event loop while another loop is running')
RuntimeError: This event loop is already running
The same code works fine in a normal Python interpreter. Is this known or expected behaviour?
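For reference, two workarounds that should apply here (a sketch only, not confirmed as the library's recommended fix): either await the async agent.run() directly, since notebook cells already run inside an event loop, or apply nest_asyncio so run_sync can nest inside the running loop:

import nest_asyncio  # third-party package: pip install nest-asyncio
from pydantic import BaseModel
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.ollama import OllamaProvider

# Allow run_sync's run_until_complete() to nest inside the notebook's
# already-running event loop.
nest_asyncio.apply()

class CityLocation(BaseModel):
    city: str
    country: str

ollama_model = OpenAIChatModel(
    model_name='qwen3',
    provider=OllamaProvider(base_url='http://localhost:11434/v1'),
)
agent = Agent(ollama_model, output_type=CityLocation)

result = agent.run_sync('Where were the olympics held in 2012?')
print(result.output)

# Alternative: notebook cells support top-level await, so the async API
# can be called directly instead of run_sync:
#     result = await agent.run('Where were the olympics held in 2012?')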
Example Code
from pydantic import BaseModel
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.ollama import OllamaProvider
class CityLocation(BaseModel):
city: str
country: str
ollama_model = OpenAIChatModel(
model_name='qwen3',
provider=OllamaProvider(base_url='http://localhost:11434/v1'),
)
agent = Agent(ollama_model, output_type=CityLocation)
result = agent.run_sync('Where were the olympics held in 2012?')
print(result.output)
#> city='London' country='United Kingdom'
print(result.usage())
#> RunUsage(input_tokens=57, output_tokens=8, requests=1)
Python, Pydantic AI & LLM client version
Python 3.13.1
Pydantic 2.11.9