
Handle plugins that crash during load #1280

@jcushman


I installed a plugin that turned out to crash on my machine:

```
> llm install llm-gpt4all
...
> llm --help
Traceback (most recent call last):
  File "/Users/jcushman/.local/bin/llm", line 4, in <module>
    from llm.cli import cli
  File "/Users/jcushman/.local/share/uv/tools/llm/lib/python3.12/site-packages/llm/cli.py", line 3761, in <module>
    load_plugins()
  File "/Users/jcushman/.local/share/uv/tools/llm/lib/python3.12/site-packages/llm/plugins.py", line 28, in load_plugins
    pm.load_setuptools_entrypoints("llm")
  File "/Users/jcushman/.local/share/uv/tools/llm/lib/python3.12/site-packages/pluggy/_manager.py", line 416, in load_setuptools_entrypoints
    plugin = ep.load()
             ^^^^^^^^^
  File "/Users/jcushman/.local/share/uv/python/cpython-3.12.5-macos-x86_64-none/lib/python3.12/importlib/metadata/__init__.py", line 205, in load
    module = import_module(match.group('module'))
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jcushman/.local/share/uv/python/cpython-3.12.5-macos-x86_64-none/lib/python3.12/importlib/__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jcushman/.local/share/uv/tools/llm/lib/python3.12/site-packages/llm_gpt4all.py", line 1, in <module>
    from gpt4all import GPT4All as _GPT4All
  File "/Users/jcushman/.local/share/uv/tools/llm/lib/python3.12/site-packages/gpt4all/__init__.py", line 1, in <module>
    from .gpt4all import CancellationError as CancellationError, Embed4All as Embed4All, GPT4All as GPT4All
  File "/Users/jcushman/.local/share/uv/tools/llm/lib/python3.12/site-packages/gpt4all/gpt4all.py", line 23, in <module>
    from ._pyllmodel import (CancellationError as CancellationError, EmbCancelCallbackType, EmbedResult as EmbedResult,
  File "/Users/jcushman/.local/share/uv/tools/llm/lib/python3.12/site-packages/gpt4all/_pyllmodel.py", line 37, in <module>
    raise RuntimeError(textwrap.dedent("""\
RuntimeError: Running GPT4All under Rosetta is not supported due to CPU feature requirements.
Please install GPT4All in an environment that uses a native ARM64 Python interpreter.
```

It took me longer than it should have to get back to a stable state, since `llm uninstall` itself couldn't run. One fix I could have noticed earlier: copy `/Users/jcushman/.local/share/uv/tools/llm/` from the stack trace and run `/Users/jcushman/.local/share/uv/tools/llm/bin/python -m pip uninstall llm-gpt4all`.

I wonder if `llm/plugins.py` could catch plugin load errors by default. (Perhaps with an environment variable to disable the catch during plugin development, if that seems useful.)
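A minimal sketch of what that could look like, assuming `load_plugins` iterated entry points itself instead of calling `pm.load_setuptools_entrypoints("llm")` (the `LLM_RAISE_PLUGIN_ERRORS` variable name and the `safe_load` helper are hypothetical, not existing llm API):

```python
import importlib.metadata
import os
import sys


def safe_load(ep, raise_errors=False):
    """Try to load one entry point; return the plugin, or None on failure.

    Catches any exception raised at import time (the crash in this issue
    was a RuntimeError raised from gpt4all's module body).
    """
    try:
        return ep.load()
    except Exception as ex:
        if raise_errors:
            raise
        print(f"Failed to load plugin {ep.name}: {ex}", file=sys.stderr)
        return None


def load_plugins(pm):
    """Load all "llm" entry points, skipping plugins that crash on import.

    LLM_RAISE_PLUGIN_ERRORS (hypothetical) restores the current
    fail-loudly behaviour for plugin development.
    """
    raise_errors = bool(os.environ.get("LLM_RAISE_PLUGIN_ERRORS"))
    for ep in importlib.metadata.entry_points(group="llm"):
        plugin = safe_load(ep, raise_errors)
        if plugin is not None:
            # pluggy's PluginManager.register accepts an optional name
            pm.register(plugin, name=ep.name)
```

With this shape, a broken plugin would print one warning line to stderr and every other command would keep working.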

The Installing plugins docs could also have a section on uninstalling, both via `llm uninstall` and manually (though that may be unnecessary if errors are caught).
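Such a section might boil down to something like this (the uv tool path is the one from my stack trace; yours will differ):

```shell
# Normal path: llm uninstalls its own plugins
llm uninstall llm-gpt4all -y

# Fallback when llm itself won't start: run pip inside the
# environment llm lives in, using the path from the traceback
/Users/jcushman/.local/share/uv/tools/llm/bin/python -m pip uninstall -y llm-gpt4all
```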
