Conversation

detteiu0330

Fix JSON serialization to handle non-ASCII characters in _convert_to_content

Motivation and Context

Set ensure_ascii=False in the json.dumps call inside the _convert_to_content function, so that non-ASCII characters are no longer escaped to ASCII \uXXXX sequences when tool results are serialized and retrieved.
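For illustration, here is a minimal sketch of what the change does. The helper name and exact call below are assumptions for illustration only; the real _convert_to_content in the SDK may be structured differently.

import json

def serialize_result(result: dict) -> str:
    # Hypothetical stand-in for the json.dumps call inside _convert_to_content.
    # With the default ensure_ascii=True, every non-ASCII character is escaped
    # to a \uXXXX sequence; with ensure_ascii=False the UTF-8 text is kept as-is.
    return json.dumps(result, ensure_ascii=False)

print(serialize_result({"message": "こんにちは"}))
# {"message": "こんにちは"}
# (the default would print {"message": "\u3053\u3093\u306b\u3061\u306f"})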

How Has This Been Tested?

Tested with a server that retrieves chat history from Slack's API.

Breaking Changes

None.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation update

Checklist

  • I have read the MCP Documentation
  • My code follows the repository's style guidelines
  • New and existing tests pass locally
  • I have added appropriate error handling
  • I have added or updated documentation as needed

Additional context

@Kludex
Member

Kludex commented Apr 19, 2025

How can I reproduce this? Please share a snippet or whatever I need to run to reproduce it.

@detteiu0330
Author

detteiu0330 commented Apr 21, 2025

@Kludex

I ran into this issue when using the server code below.

from mcp.server import FastMCP

mcp = FastMCP("greeting")

@mcp.tool(
    name="greet",
    description="Greet a user with a personalized message",
)
def greet(name: str) -> dict:
    """
    Greet a user with a personalized message.

    Args:
        name (str): The name of the user to greet.

    Returns:
        dict: A personalized greeting message.
    """
    # The message is Japanese for "Hello, {name}! What would you like to do today?"
    result = {
        "message": f"こんにちは, {name}さん! 今日はどんなことをしたいですか?"
    }
    return result

if __name__ == "__main__":
    mcp.run()

For example, when a tool returns a dict, it is converted to a JSON string by json.dumps, and every non-ASCII character is escaped at the same time.
So the result from the MCP server becomes ASCII-escaped text like the following:

{"message": "\u3053\u3093\u306b\u3061\u306f, MCP Server\u3055\u3093! \u4eca\u65e5\u306f\u3069\u3093\u306a\u3053\u3068\u3092\u3057\u305f\u3044\u3067\u3059\u304b\uff1f"}

The escaped text is much longer than the original, which can push responses over the token limit, so I think this should be fixed.
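For reference, the size difference is easy to see with a standalone snippet (plain json only, not tied to the SDK; the message below just reuses the example above with a fixed name):

import json

result = {"message": "こんにちは, MCP Serverさん! 今日はどんなことをしたいですか?"}

escaped = json.dumps(result)                       # default: ensure_ascii=True
readable = json.dumps(result, ensure_ascii=False)  # behaviour this PR proposes

# The escaped form is much longer for Japanese text, and the \uXXXX
# sequences also tokenize poorly, which is what inflates the token count.
print(len(escaped), len(readable))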

@detteiu0330
Author

detteiu0330 commented Apr 27, 2025

@Kludex
I'm closing this PR because the issue was resolved by another PR that has been merged.
