feat(api): add response_format support for openai API #918

Merged
hiworldwzj merged 1 commit into ModelTC:main from WuSiYu:feat-openai-response-format on Jun 5, 2025

Conversation

@WuSiYu (Collaborator) commented May 30, 2025

No description provided.

@gemini-code-assist gemini-code-assist bot left a comment


Hello @WuSiYu, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

Hello! gemini-code-assist here to provide a summary of this pull request. This PR, authored by WuSiYu, introduces support for the response_format parameter in the OpenAI API endpoint implementation. The primary goal is to allow users to specify the desired output format, specifically handling the guided_json format by extracting the provided schema and passing it to the underlying sampling parameters. This enhances the API's compatibility with features like guided generation.

Highlights

  • Add response_format to API Request Model: The ChatCompletionRequest model in api_models.py has been updated to include an optional response_format field, which is expected to be a dictionary.
  • Handle guided_json format: The chat_completions_impl function in api_openai.py now checks for the presence of the response_format parameter. If found and it contains a schema key, the schema object is serialized to a JSON string and added to the sampling parameters as guided_json.

Changelog

  • lightllm/server/api_models.py
    • Added response_format: Optional[Dict] = None to the ChatCompletionRequest class (line 65).
  • lightllm/server/api_openai.py
    • Added a conditional block to check for request.response_format (line 127).
    • Inside the block, it retrieves the schema from response_format (line 128).
    • If schema exists, it serializes it to a JSON string and adds it to sampling_params_dict as guided_json (lines 129-131).
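Taken together, the changelog above can be sketched as follows. This is a minimal stand-in, not the verbatim implementation: in the PR, ChatCompletionRequest is a pydantic model in lightllm/server/api_models.py (a dataclass stands in here), and the handling lives inline in chat_completions_impl rather than in a helper; apply_response_format is a hypothetical name used purely for illustration.

```python
import json
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class ChatCompletionRequest:
    # The PR adds this optional dict field to the real (pydantic) request model.
    response_format: Optional[Dict] = None


def apply_response_format(request: ChatCompletionRequest, sampling_params_dict: dict) -> dict:
    # Mirrors the conditional block described in the changelog: if a "schema"
    # key is present, serialize it, because guided_json expects a JSON string
    # rather than a dict object.
    if request.response_format:
        obj = request.response_format.get("schema")
        if obj:
            sampling_params_dict["guided_json"] = json.dumps(obj)
    return sampling_params_dict
```

With this shape, a request carrying `response_format={"type": "json_object", "schema": {...}}` ends up with the schema serialized into `sampling_params_dict["guided_json"]`, while requests without a `response_format` pass through unchanged.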
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.


An API takes a call,
Expects JSON, standing tall.
With schema defined,
Output aligned,
A perfect response for all.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for response_format with a schema to enable guided JSON output in the OpenAI API. The core logic looks good. I've added one suggestion to improve the robustness of handling the schema object. Please also consider updating the API documentation to reflect this new parameter.

Summary of Findings

  • Schema Type Validation: The handling of response_format.schema could be more robust by explicitly checking if the schema is a dictionary before attempting to use it for guided JSON generation. This would prevent potential downstream errors if an invalid schema type is provided.
  • API Documentation: The API documentation should be updated to include the new response_format parameter, detailing its usage and expected structure for the /v1/chat/completions endpoint.

Merge Readiness

The pull request is a valuable addition. I recommend addressing the suggestion regarding schema type validation for improved robustness. Once that's considered and the documentation is planned for an update, the PR should be in good shape. As an AI, I am not authorized to approve pull requests; please ensure further review and approval by the maintainers.

Comment on lines 127 to 131:

    if request.response_format:
        obj = request.response_format.get('schema')
        if obj:
            # guided_json takes str instead of dict obj
            sampling_params_dict['guided_json'] = json.dumps(obj)

Severity: medium

The code currently retrieves the schema from request.response_format and, if it's truthy, serializes it to JSON for guided_json. However, if response_format.schema is provided but is not a dictionary (e.g., it's a string or a list), json.dumps() will still process it, but the resulting string is unlikely to be a valid JSON schema. This could lead to errors later when xgrammar tries to compile this schema.

To improve robustness, could we check if the retrieved schema object is indeed a dictionary before proceeding? If it's present but not a dictionary, it might be better to log a warning and skip setting guided_json, or raise an error for invalid input format.

Using a more descriptive variable name like json_schema_object instead of obj could also enhance readability.

        json_schema_object = request.response_format.get('schema')
        if isinstance(json_schema_object, dict):
            # guided_json takes str instead of dict obj
            sampling_params_dict['guided_json'] = json.dumps(json_schema_object)
        elif json_schema_object is not None:
            # If 'schema' is provided but is not a dictionary, it's an invalid format.
            # Log a warning and skip setting guided_json, as it would likely fail downstream.
            logger.warning(
                f"Parameter 'response_format.schema' was provided but is not a dictionary (type: {type(json_schema_object)}). "
                f"Skipping guided JSON output. Schema must be a valid JSON object."
            )
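For context, a client request exercising this parameter could look like the following sketch. The `response_format` shape (a dict with a nested `schema` key) is what this PR reads; the endpoint path is the `/v1/chat/completions` route mentioned in the review, while the model name, messages, and schema contents are made up for illustration.

```python
import json

# Hypothetical request body for POST /v1/chat/completions.
payload = {
    "model": "example-model",  # illustrative model name
    "messages": [{"role": "user", "content": "Return a user profile as JSON."}],
    "response_format": {
        "type": "json_object",
        "schema": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "age": {"type": "integer"},
            },
            "required": ["name", "age"],
        },
    },
}

# Per the PR's handling, the server would forward the schema downstream as a
# JSON string in sampling_params_dict["guided_json"]:
guided_json = json.dumps(payload["response_format"]["schema"])
```

Because the handler serializes whatever object sits under `schema`, the review's point stands: a non-dict value here would still be serialized but would then fail when the backend tries to compile it as a JSON schema.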


@WuSiYu WuSiYu force-pushed the feat-openai-response-format branch from 7b7a033 to a4a5eee Compare May 30, 2025 14:53
@hiworldwzj hiworldwzj merged commit 4ce405d into ModelTC:main Jun 5, 2025
1 check passed