-
Hello, @rvasa779! I'm here to assist you. If you need help with bugs, questions, or becoming a contributor, I'm the bot for the job. The error occurs because `candidate.finish_reason` can be a plain integer, which has no `.name` attribute. To resolve this issue, you need to ensure that the code handles both cases: an enum member that exposes `.name`, and a raw integer that does not.
Here is an example of how you might modify the `_response_to_result` function:

```python
def _response_to_result(response):
    result = {}
    for candidate in response.candidates:
        generation_info = {}
        # Enum members expose .name; raw integers do not, so fall back to str().
        if hasattr(candidate.finish_reason, "name"):
            generation_info["finish_reason"] = candidate.finish_reason.name
        else:
            generation_info["finish_reason"] = str(candidate.finish_reason)
        result["generation_info"] = generation_info
    return result
```

This modification ensures that if `finish_reason` is a plain integer, it is converted to a string instead of raising an `AttributeError`.

Additionally, ensure that the prompt function assembles the multimodal message correctly. Here is the relevant part of the code that needs to be checked and possibly corrected:

```python
def img_prompt_func(data_dict, num_images=2):
    messages = []
    if data_dict["context"]["images"]:
        for image in data_dict["context"]["images"][:num_images]:
            image_message = {
                "type": "image_url",
                "image_url": {"url": f"data:image/jpeg;base64,{image}"},
            }
            messages.append(image_message)
    text_message = {
        "type": "text",
        "text": (
            "You are an analyst tasked with answering questions about visual content.\n"
            "You will be given a set of image(s) from a slide deck / presentation.\n"
            "Use this information to answer the user question.\n"
            f"User-provided question: {data_dict['question']}\n\n"
        ),
    }
    messages.append(text_message)
    return [HumanMessage(content=messages)]
```

Ensure that the `data_dict` you pass in has the expected structure:

```python
data_dict = {
    "context": {
        "images": ["base64_encoded_image_string1", "base64_encoded_image_string2"]
    },
    "question": "What is shown in these images?"
}
```
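As a quick sanity check of that fallback, a hypothetical candidate whose `finish_reason` is a bare integer (the `SimpleNamespace` stand-ins below are illustrative, not the real response classes) no longer raises:

```python
from types import SimpleNamespace

# Illustrative stand-ins for the response objects, just to exercise the fallback.
candidate = SimpleNamespace(finish_reason=15)  # raw int, no .name attribute
response = SimpleNamespace(candidates=[candidate])

print(_response_to_result(response))  # {'generation_info': {'finish_reason': '15'}}
```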
-
transferring to langchain-google repo for visibility, cc @lkuligin
-
Do you happen to have any updates on this matter?
-
This seems to be happening when a tool call is hallucinated by the Gemini models. This is the output that I am seeing from `get_generation_info` in langchain_google_vertexai/_utils.py ([see this](https://github.com/langchain-ai/langchain-google/blob/main/libs/vertexai/langchain_google_vertexai/_utils.py#L172)):

```
<class 'google.cloud.aiplatform_v1beta1.types.content.Candidate'>; finish_reason: 15
finish_message: "Unexpected tool call: print(weather.get_current_weather(place='Bern'))"
```

`finish_reason` here is returned as 15, which fails when we try to access `.name` on it. This is explained here and here; these seem to be part of the REST API. Are the Python APIs out of date? Can we in the meantime have a fix out from the langchain side so it does not throw `AttributeError`?
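Until the client enums catch up, a minimal defensive sketch (my own workaround idea, not the library's actual fix; `finish_reason_label` is a made-up name) could look like:

```python
def finish_reason_label(finish_reason) -> str:
    """Render finish_reason whether it is an enum member or a raw int."""
    # Enum members (e.g. Candidate.FinishReason.STOP) carry a .name attribute.
    name = getattr(finish_reason, "name", None)
    if name is not None:
        return name
    # Values the installed client does not know about (e.g. 15 above) arrive
    # as plain ints, so fall back to a readable string instead of raising.
    return f"FINISH_REASON_{finish_reason}"
```

Using `getattr` with a default avoids the `AttributeError` entirely while still preferring the human-readable enum name when one exists.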
-
Checked other resources
Commit to Help
Example Code
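A minimal sketch of the configurable setup described under Description below — the prompt, model settings, and `configurable_alternatives` keys are my assumptions, not the reporter's exact code:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import ConfigurableField
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_openai import AzureChatOpenAI

prompt = ChatPromptTemplate.from_template("Tell me a joke about {input}")

# Azure is the default; "gemini" is selectable at runtime via with_config.
llm = AzureChatOpenAI(azure_deployment="gpt-4o").configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="azure",
    gemini=ChatGoogleGenerativeAI(model="gemini-1.5-pro"),
)

chain = prompt | llm | StrOutputParser()

# Works with the default (azure); fails inside _response_to_result with gemini.
chain.with_config(configurable={"llm": "gemini"}).invoke({"input": "penguins"})
```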
Description
I am using ChatGoogleGenerativeAI along with AzureOpenAI in configurable mode.
I have written a simple FastAPI app that writes a joke on a user-given topic. When I run the above code with the "llm" parameter configured as azure, it works fine. However, when I run the same code with the "llm" parameter as gemini, it fails with the error below:
```
[2024-06-14T05:30:47.913Z] System.Private.CoreLib: Exception while executing function: Functions.HttpTrigger1. System.Private.CoreLib: Result: Failure
Exception: AttributeError: 'int' object has no attribute 'name'
Stack:
  File "C:\Program Files\Microsoft\Azure Functions Core Tools\workers\python\3.9\WINDOWS\X64\azure_functions_worker\dispatcher.py", line 479, in _handle__invocation_request
    call_result = await self._loop.run_in_executor(
  File "C:\Users\Digital\AppData\Local\Programs\Python\Python39\lib\concurrent\futures\thread.py", line 52, in run
    result = self.fn(*self.args, **self.kwargs)
  File "C:\Program Files\Microsoft\Azure Functions Core Tools\workers\python\3.9\WINDOWS\X64\azure_functions_worker\dispatcher.py", line 752, in _run_sync_func
    return ExtensionManager.get_sync_invocation_wrapper(context,
  File "C:\Program Files\Microsoft\Azure Functions Core Tools\workers\python\3.9\WINDOWS\X64\azure_functions_worker\extension.py", line 215, in _raw_invocation_wrapper
    result = function(**args)
  File "<code_path>\egg-gemini\HttpTrigger1\__init__.py", line 19, in main
    return func.AsgiMiddleware(app).handle(req, context)
  File "C:\Program Files\Microsoft\Azure Functions Core Tools\workers\python\3.9\WINDOWS\X64\azure\functions\_http_asgi.py", line 172, in handle
    return self._handle(req, context)
  File "C:\Program Files\Microsoft\Azure Functions Core Tools\workers\python\3.9\WINDOWS\X64\azure\functions\_http_asgi.py", line 177, in _handle
    asgi_response = asyncio.run(
  File "C:\Users\Digital\AppData\Local\Programs\Python\Python39\lib\asyncio\runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "C:\Users\Digital\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 642, in run_until_complete
    return future.result()
  File "C:\Program Files\Microsoft\Azure Functions Core Tools\workers\python\3.9\WINDOWS\X64\azure\functions\_http_asgi.py", line 80, in from_app
    await app(scope, res._receive, res._send)
  File "<code_path>\egg-gemini\.venv\lib\site-packages\fastapi\applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "<code_path>\egg-gemini\.venv\lib\site-packages\starlette\applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "<code_path>\egg-gemini\.venv\lib\site-packages\starlette\middleware\errors.py", line 186, in __call__
    raise exc
  File "<code_path>\egg-gemini\.venv\lib\site-packages\starlette\middleware\errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "<code_path>\egg-gemini\.venv\lib\site-packages\starlette\middleware\cors.py", line 85, in __call__
    await self.app(scope, receive, send)
  File "<code_path>\egg-gemini\.venv\lib\site-packages\starlette\middleware\exceptions.py", line 65, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "<code_path>\egg-gemini\.venv\lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "<code_path>\egg-gemini\.venv\lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "<code_path>\egg-gemini\.venv\lib\site-packages\starlette\routing.py", line 756, in __call__
    await self.middleware_stack(scope, receive, send)
  File "<code_path>\egg-gemini\.venv\lib\site-packages\starlette\routing.py", line 776, in app
    await route.handle(scope, receive, send)
  File "<code_path>\egg-gemini\.venv\lib\site-packages\starlette\routing.py", line 297, in handle
    await self.app(scope, receive, send)
  File "<code_path>\egg-gemini\.venv\lib\site-packages\starlette\routing.py", line 77, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "<code_path>\egg-gemini\.venv\lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "<code_path>\egg-gemini\.venv\lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "<code_path>\egg-gemini\.venv\lib\site-packages\starlette\routing.py", line 72, in app
    response = await func(request)
  File "<code_path>\egg-gemini\.venv\lib\site-packages\fastapi\routing.py", line 278, in app
    raw_response = await run_endpoint_function(
  File "<code_path>\egg-gemini\.venv\lib\site-packages\fastapi\routing.py", line 191, in run_endpoint_function
    return await dependant.call(**values)
  File "<code_path>\egg-gemini\src\__init__.py", line 100, in run_agent
    return chain.with_config(configurable={"llm": "gemini"}).invoke({'input':topic})
  File "<code_path>\egg-gemini\.venv\lib\site-packages\langchain_core\runnables\base.py", line 4573, in invoke
    return self.bound.invoke(
  File "<code_path>\egg-gemini\.venv\lib\site-packages\langchain_core\runnables\base.py", line 2504, in invoke
    input = step.invoke(input, config)
  File "<code_path>\egg-gemini\.venv\lib\site-packages\langchain_core\runnables\configurable.py", line 117, in invoke
    return runnable.invoke(input, config, **kwargs)
  File "<code_path>\egg-gemini\.venv\lib\site-packages\langchain_core\language_models\chat_models.py", line 170, in invoke
    self.generate_prompt(
  File "<code_path>\egg-gemini\.venv\lib\site-packages\langchain_core\language_models\chat_models.py", line 599, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "<code_path>\egg-gemini\.venv\lib\site-packages\langchain_core\language_models\chat_models.py", line 456, in generate
    raise e
  File "<code_path>\egg-gemini\.venv\lib\site-packages\langchain_core\language_models\chat_models.py", line 446, in generate
    self._generate_with_cache(
  File "<code_path>\egg-gemini\.venv\lib\site-packages\langchain_core\language_models\chat_models.py", line 671, in _generate_with_cache
    result = self._generate(
  File "<code_path>\egg-gemini\.venv\lib\site-packages\langchain_google_genai\chat_models.py", line 766, in _generate
    return _response_to_result(response)
  File "<code_path>\egg-gemini\.venv\lib\site-packages\langchain_google_genai\chat_models.py", line 551, in _response_to_result
    generation_info["finish_reason"] = candidate.finish_reason.name
```
Attached is my requirements.txt file too:
requirements.txt
System Info
System Information
Package Information
Packages not installed (Not Necessarily a Problem)
The following packages were not found: