When using Dashscope (Alibaba Cloud) as a custom LLM provider, the application throws a `litellm.BadRequestError` with the message "DashscopeException - [] is too short - 'tools'" when attempting to use the Deep Research functionality.
```
litellm.BadRequestError: DashscopeException - [] is too short - 'tools'

Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/litellm/llms/openai/openai.py", line 762, in completion
    raise e
  File "/usr/local/lib/python3.11/site-packages/litellm/llms/openai/openai.py", line 645, in completion
    return self.streaming(
           ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/litellm/llms/openai/openai.py", line 935, in streaming
    headers, response = self.make_sync_openai_chat_completion_request(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/litellm/litellm_core_utils/logging_utils.py", line 237, in sync_wrapper
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/litellm/llms/openai/openai.py", line 502, in make_sync_openai_chat_completion_request
    raise e
  File "/usr/local/lib/python3.11/site-packages/litellm/llms/openai/openai.py", line 477, in make_sync_openai_chat_completion_request
    raw_response = openai_client.chat.completions.with_raw_response.create(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
                                      ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/resources/chat/completions/completions.py", line 1192, in create
    return self._post(
           ^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'code': None, 'message': "[] is too short - 'tools'", 'param': None, 'type': 'invalid_request_error'}, 'request_id': '2085ac25-b6b7-960e-9e75-9a97c67752eb'}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/litellm/main.py", line 2531, in completion
    raise e
  File "/usr/local/lib/python3.11/site-packages/litellm/main.py", line 2503, in completion
    response = openai_chat_completions.completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/litellm/llms/openai/openai.py", line 773, in completion
    raise OpenAIError(
litellm.llms.openai.common_utils.OpenAIError: Error code: 400 - {'error': {'code': None, 'message': "[] is too short - 'tools'", 'param': None, 'type': 'invalid_request_error'}, 'request_id': '2085ac25-b6b7-960e-9e75-9a97c67752eb'}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/onyx/chat/process_message.py", line 1166, in _run_model
    run_deep_research_llm_loop(
  File "/app/onyx/utils/timing.py", line 52, in wrapped_func
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/app/onyx/deep_research/dr_loop.py", line 351, in run_deep_research_llm_loop
    packet = next(research_plan_generator)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/onyx/chat/llm_step.py", line 1148, in run_llm_step_pkt_generator
    for packet in llm.stream(
  File "/app/onyx/llm/multi_llm.py", line 843, in stream
    self._completion(
  File "/app/onyx/llm/multi_llm.py", line 679, in _completion
    raise e
  File "/app/onyx/llm/multi_llm.py", line 654, in _completion
    response = litellm.completion(
               ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 1742, in wrapper
    raise e
  File "/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 1563, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/litellm/main.py", line 4242, in completion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2378, in exception_type
    raise e
  File "/usr/local/lib/python3.11/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 444, in exception_type
    raise BadRequestError(
litellm.exceptions.BadRequestError: litellm.BadRequestError: DashscopeException - [] is too short - 'tools'
```
### Description

When using Dashscope (Alibaba Cloud) as a custom LLM provider, the application throws a `litellm.BadRequestError` with the message "DashscopeException - [] is too short - 'tools'" when attempting to use the Deep Research functionality.

### Steps to Reproduce
1. Install Onyx using the quick start script:

   ```shell
   curl -fsSL https://onyx.app/install_onyx.sh | bash
   ```

2. Select the "lite" option during installation.
3. Navigate to Admin Panel → Language Models → Custom Models.
4. Configure the custom model with the following settings:
   - Provider: `dashscope`
   - API Base: `https://coding-intl.dashscope.aliyuncs.com/v1`
5. Click Save and exit the Admin Panel.
6. Create a new session.
7. Select Deep Research mode.
8. Enter any research prompt.

**Actual:** the following error occurs:

```
litellm.BadRequestError: DashscopeException - [] is too short - 'tools'
```

### Error Message

See the full traceback at the top of this report.
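The 400 response can likely be reproduced outside Onyx by sending a chat-completions request with an empty `tools` array directly to the OpenAI-compatible endpoint. A minimal sketch (the model name `qwen-max` and the `DASHSCOPE_API_KEY` environment variable are assumptions, not taken from this report; the network call is commented out):

```python
import json
import os
import urllib.request

# Chat-completions payload with the offending empty "tools" array.
# Dashscope is expected to reject this with 400 "[] is too short - 'tools'",
# while the OpenAI API simply ignores an empty (or absent) tools list.
payload = {
    "model": "qwen-max",  # assumed model name
    "messages": [{"role": "user", "content": "hello"}],
    "tools": [],  # the field Deep Research mode appears to send when empty
}

req = urllib.request.Request(
    "https://coding-intl.dashscope.aliyuncs.com/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {os.environ.get('DASHSCOPE_API_KEY', '')}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req)  # expected to raise HTTPError 400 with the same message
```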
### Environment
### Additional Context
The error suggests that the `tools` parameter being sent to the Dashscope API is an empty array (`[]`), which Dashscope rejects as being "too short". This likely occurs because Deep Research mode may be sending tool definitions that are either empty or not properly formatted for the Dashscope API.
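A possible application-side workaround would be to drop the `tools` key entirely when no tools are defined, before the kwargs reach `litellm.completion` (the OpenAI API treats `tools` as optional). This is a sketch of the idea, not Onyx's actual code; `sanitize_completion_kwargs` is a hypothetical helper:

```python
def sanitize_completion_kwargs(kwargs: dict) -> dict:
    """Return a copy of completion kwargs with empty tool fields removed.

    Dashscope rejects "tools": [] with "[] is too short", so omit the key
    (and the then-meaningless tool_choice) when no tools are supplied.
    """
    cleaned = dict(kwargs)
    if not cleaned.get("tools"):  # covers both None and an empty list
        cleaned.pop("tools", None)
        cleaned.pop("tool_choice", None)
    return cleaned

# The cleaned kwargs could then be forwarded unchanged, e.g.:
# response = litellm.completion(**sanitize_completion_kwargs(request_kwargs))
```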