fix: Prevent escaping of Latin characters in LLM response
Merge #2937
**Closes #2936**
This pull request addresses the issue where an `LlmAgent` configured with both `output_schema` and `tools` produced escaped Latin characters (e.g., `\xf3` for `ó`) in its final response. This happened because `json.dumps` was being called with its default `ensure_ascii=True`, which escapes every non-ASCII character and is not ideal for human-readable output in languages such as Portuguese.
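For context, this is the standard-library behavior that produces the escapes; a minimal, self-contained illustration (plain Python, not ADK code):

```python
import json

data = {"message": "ação"}

# Default: ensure_ascii=True escapes every non-ASCII character.
print(json.dumps(data))                      # {"message": "a\u00e7\u00e3o"}

# With ensure_ascii=False the characters are emitted verbatim.
print(json.dumps(data, ensure_ascii=False))  # {"message": "ação"}
```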
**Changes Proposed:**
* Modified the `_OutputSchemaRequestProcessor` in `src/google/adk/flows/llm_flows/_output_schema_processor.py` to explicitly set `ensure_ascii=False` when calling `json.dumps` for the `set_model_response` tool's output.
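
A minimal sketch of the change at the call site; the function and variable names below are illustrative and may not match the actual processor code in `_output_schema_processor.py`:

```python
import json

def _serialize_model_response(validated_output: dict) -> str:
    # Before: json.dumps(validated_output) escaped non-ASCII characters,
    # because ensure_ascii defaults to True.
    # After: ensure_ascii=False preserves characters such as "ç" and "ã"
    # in the set_model_response tool's output.
    return json.dumps(validated_output, ensure_ascii=False)
```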
**Impact:**
This change ensures that all non-ASCII characters in the structured model response are preserved in their natural form, improving the readability and user experience of agent outputs, particularly for users interacting in languages with accented characters or other special symbols.
**Testing:**
The fix was verified locally by running an `LlmAgent` with an `output_schema` and confirming that responses containing Latin characters (e.g., "ação", "caminhão", "ícone") are now correctly displayed without escaping.
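
A standalone check mirroring that manual verification (plain Python, not the project's test suite; the field names are made up for illustration):

```python
import json

def test_latin_characters_not_escaped():
    payload = json.dumps(
        {"title": "ação", "truck": "caminhão", "icon": "ícone"},
        ensure_ascii=False,
    )
    # The accented characters must appear verbatim, with no \uXXXX escapes.
    assert "ação" in payload and "caminhão" in payload and "ícone" in payload
    assert "\\u" not in payload
```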
COPYBARA_INTEGRATE_REVIEW=#2937 from amenegola:fix/issue-2936-escape-chars 6cac00f
PiperOrigin-RevId: 808622892