* LangChain-compatible LLMInterfaceV2 interface created
* LangChain-compatible methods added to OpenAI
* broken test cases fixed
* brand-new tests for LLMInterfaceV2
* Vertex AI support for LLMInterfaceV2 started
* brand-new test cases added
* invoke-with-tools method no longer mandatory
* LLMInterfaceV2 support added to Ollama
* LLMInterfaceV2 support added to MistralAI
* LLMInterfaceV2 support added to Cohere
* LLMInterfaceV2 support added to Anthropic
* more tests for LLMInterfaceV2
* mypy fixes
* test_openai_llm possibly failed in CI/CD because of an import
* Attempt CI/CD-compatible async mock for OpenAILLM tests
* Attempt CI/CD-safe async mock for OpenAILLM v1 test
* restored GraphRAG e2e tests for v1
* existing e2e test for GraphRAG fixed
* create message history only in the LangChain-compatible branch
* typo in docstring
* avoid repeating the LangChain-compatibility check code
* avoid creating vector and fulltext indexes on the same property
* avoid creating vector and fulltext indexes on the same property
* make the isinstance-of-LangChain check prettier
* use params from invoke if available, similar to LC
* define kwargs for invoke in the interface
* fixing outer scope definition warning
* arg names replaced with LC arg names
* some additional docstring for new input args
* docstring updated
* use (a)invoke_v2 function names instead of brand-new ones
* use (a)invoke_v1 function name instead of legacy invoke
* keep rate limit handlers for v2 functions
* emit a warning for the deprecated LLM interface v1
* resolve conflict after Alex's recent uv change
* formatted
* mypy problems sorted
* initialize LLMs with LLMInterfaceV2
* invoke-with-tools dropped from LLMInterfaceV2
* revert the behaviour back for return_context
* e2e tests updated for restored behaviour for return_context
* the LLM can also be an LC object; Any added to the type for this
* users initialize their own LLM objects, so the LLM interface's init doesn't have to match LC's
* auto format after manual conflict resolution
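Taken together, the interface commits above (LangChain-style argument names, kwargs on `invoke` defined in the interface, per-call params taking precedence, and tool invocation not being mandatory) suggest a shape like the following. This is a minimal sketch under stated assumptions, not the library's actual API: `LLMInterfaceV2`, `model_params`, and `EchoLLM` are illustrative names.

```python
from abc import ABC, abstractmethod
from typing import Any, List, Optional, Union

# Hypothetical stand-in for LangChain's message type,
# e.g. {"role": "user", "content": "..."}.
Message = dict


class LLMInterfaceV2(ABC):
    """Sketch of a LangChain-compatible v2 LLM interface (names assumed)."""

    def __init__(self, model_params: Optional[dict] = None) -> None:
        self.model_params = model_params or {}

    @abstractmethod
    def invoke(self, input: Union[str, List[Message]], **kwargs: Any) -> str:
        """Synchronous call; per-call kwargs override init-time model_params."""

    async def ainvoke(self, input: Union[str, List[Message]], **kwargs: Any) -> str:
        # Default async wrapper; providers can override with a true async call.
        return self.invoke(input, **kwargs)


class EchoLLM(LLMInterfaceV2):
    """Toy provider showing the kwargs-over-init-params merge."""

    def invoke(self, input: Union[str, List[Message]], **kwargs: Any) -> str:
        params = {**self.model_params, **kwargs}  # per-call params win
        text = input if isinstance(input, str) else input[-1]["content"]
        return f"echo({text}, temperature={params.get('temperature')})"
```

Usage follows the "use params from invoke if available" commit: `EchoLLM(model_params={"temperature": 0.0}).invoke("hi", temperature=0.7)` uses 0.7, not 0.0.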
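Two other recurring themes, the LangChain-compatibility check and the deprecation warning for the v1 interface, could be sketched as below. Both helpers are hypothetical: the real check presumably uses `isinstance` against LangChain's base classes when LangChain is installed, while this sketch duck-types to stay import-free.

```python
import warnings
from typing import Any


def is_langchain_compatible(llm: Any) -> bool:
    """Duck-typed LangChain-compatibility check (hypothetical helper).

    Assumes "compatible" means the object exposes callable
    invoke/ainvoke methods, as LangChain Runnables do.
    """
    return callable(getattr(llm, "invoke", None)) and callable(
        getattr(llm, "ainvoke", None)
    )


def invoke_v1(llm: Any, prompt: str) -> str:
    """Legacy v1 entry point, kept but warning on use (name assumed)."""
    warnings.warn(
        "LLM interface v1 is deprecated; use invoke()/ainvoke() instead.",
        DeprecationWarning,
        stacklevel=2,
    )
    return llm.invoke(prompt)
```

Centralizing the check in one helper is what "avoid repeated langchain compatible check code" points at: callers branch once (for example, building message history only in the compatible branch) instead of re-deriving the check at each call site.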