Conversation

KrystofS
Contributor

@KrystofS KrystofS commented Sep 19, 2025

Description

I've introduced a helper class for dynamically loading MistralAI models, replacing the static constants in helpers. By default, the models are loaded once per client lifetime (as discussed on the original issue), with an option for users to refresh the model list manually.

Fixes #17442
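The once-per-lifetime loading with a manual refresh hook described above can be sketched roughly as follows. This is an illustrative outline, not the PR's actual code: the class name `ModelRegistry` and the injected `fetcher` callable are hypothetical stand-ins for the helper and the MistralAI models-endpoint call.

```python
class ModelRegistry:
    """Sketch of a lazily loaded, manually refreshable model list.

    Names here are hypothetical; `fetcher` stands in for whatever
    wrapper performs the actual call to the provider's models endpoint.
    """

    def __init__(self, fetcher):
        self._fetcher = fetcher
        self._models = None  # not fetched until first access

    @property
    def models(self):
        if self._models is None:  # fetch once per registry lifetime
            self._models = self._fetcher()
        return self._models

    def refresh(self):
        """Drop the cache so the next access re-fetches the list."""
        self._models = None
```

Repeated accesses reuse the cached list; only `refresh()` forces another network call on the next access.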

New Package?

Did I fill in the tool.llamahub section in the pyproject.toml and provide a detailed README.md for my new integration or package?

  • Yes
  • No

Version Bump?

Did I bump the version in the pyproject.toml file of the package I am updating? (Except for the llama-index-core package)

  • Yes
  • No

Type of Change

Please delete options that are not relevant.

  • New feature (non-breaking change which adds functionality)

How Has This Been Tested?

Your pull request will likely not be merged unless it is covered by some form of impactful unit testing.

  • I added new unit tests to cover this change

Suggested Checklist:

  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have added Google Colab support for the newly added notebooks.
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • I ran uv run make format; uv run make lint to appease the lint gods

@dosubot dosubot bot added the size:L This PR changes 100-499 lines, ignoring generated files. label Sep 19, 2025
@KrystofS
Contributor Author

KrystofS commented Sep 19, 2025

@logan-markewich Can you please help me out with the unit tests? I see there's only a mock API key. Should I mock the API response too? It's not obvious to me how the model responses are mocked, for example.

Member

@AstraBert AstraBert left a comment


Not super sure I agree with this PR: I understand where it is coming from (we would not have to manually add a model every time Mistral drops one), but there are several pitfalls that make this approach inefficient:

  • We are adding more code than we are deleting
  • We are performing synchronous calls to an API simply to fetch metadata, which in general I would avoid
  • We are applying this solution only to Mistral: if we adopt this approach, it would be good to have something similar for all LLMs that support this, maybe adding a base class in the core framework

I would therefore suggest not making this change.

@KrystofS
Copy link
Contributor Author

KrystofS commented Sep 22, 2025

  • We are adding more code than we are deleting - I don't see how this is a problem; codebases grow.
  • We are performing synchronous calls to an API simply to fetch metadata, which in general I would avoid - Do you have any suggestions on how to implement this better? I admit this was the first thing that came to mind, and I did not think much about optimization since it is called once per client lifetime (as discussed on the original issue).
  • We are applying this solution only to Mistral: if we adopt this approach, it would be good to have something similar for all LLMs that support this, maybe adding a base class in the core framework - I was thinking the same. However, Claude, for instance, does not provide as much detail (about function calling etc.), and for runtimes like Ollama it does not make sense at all.

@logan-markewich
Copy link
Collaborator

I'll be honest: after thinking about this, I'd rather we just maintain the static list for now. The current code introduces extra latency, blocks async event loops, and in general does not extend to most other LLMs.

Plus it might not always be clear how to map their API results to features that we need to know about (thinking, tool calling, modalities, etc.)
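The event-loop concern raised above can be illustrated with a small, generic sketch (not the PR's code): a blocking metadata fetch awaited directly inside a coroutine stalls every other task on the loop, whereas offloading it via `asyncio.to_thread` keeps the loop responsive. `fetch_models_sync` is a hypothetical stand-in for a synchronous HTTP call to a models endpoint.

```python
import asyncio
import time

def fetch_models_sync():
    # Stand-in for a blocking HTTP request to a provider's models endpoint.
    time.sleep(0.1)
    return ["model-a", "model-b"]

async def achat():
    # Calling fetch_models_sync() directly here would freeze the whole
    # event loop for the duration of the request, delaying every other
    # scheduled coroutine. Running it in a worker thread avoids that:
    models = await asyncio.to_thread(fetch_models_sync)
    return models
```

`asyncio.run(achat())` then returns the model list without starving concurrently scheduled tasks, though it still pays the fetch latency on first use.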

Development

Successfully merging this pull request may close these issues.

[Feature Request]: Add dated Mistral AI models