feat(api): add response_format support for openai API #918

hiworldwzj merged 1 commit into ModelTC:main from …
Conversation
Hello @WuSiYu, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
Hello! gemini-code-assist here to provide a summary of this pull request. This PR, authored by WuSiYu, introduces support for the response_format parameter in the OpenAI API endpoint implementation. The primary goal is to allow users to specify the desired output format, specifically handling the guided_json format by extracting the provided schema and passing it to the underlying sampling parameters. This enhances the API's compatibility with features like guided generation.
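To make the intended usage concrete, here is a hypothetical client request (not taken from the PR; the host, port, and model name are placeholders) that exercises the new parameter:

```python
import requests

# Hypothetical example: POST to a locally running lightllm OpenAI-compatible
# endpoint with a response_format carrying a JSON schema. The URL and model
# name below are placeholders, not values from this PR.
payload = {
    "model": "your-model",
    "messages": [{"role": "user", "content": "Describe a user as JSON."}],
    "response_format": {
        "type": "json_object",
        "schema": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "age": {"type": "integer"},
            },
            "required": ["name", "age"],
        },
    },
}
resp = requests.post("http://localhost:8000/v1/chat/completions", json=payload)
print(resp.json())
```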
Highlights
- Add `response_format` to API Request Model: The `ChatCompletionRequest` model in `api_models.py` has been updated to include an optional `response_format` field, which is expected to be a dictionary.
- Handle `guided_json` format: The `chat_completions_impl` function in `api_openai.py` now checks for the presence of the `response_format` parameter. If found and it contains a `schema` key, the schema object is serialized to a JSON string and added to the sampling parameters as `guided_json`.
Changelog
- `lightllm/server/api_models.py`
  - Added `response_format: Optional[Dict] = None` to the `ChatCompletionRequest` class (line 65).
- `lightllm/server/api_openai.py`
  - Added a conditional block to check for `request.response_format` (line 127).
  - Inside the block, it retrieves the `schema` from `response_format` (line 128).
  - If `schema` exists, it serializes it to a JSON string and adds it to `sampling_params_dict` as `guided_json` (lines 129-131).
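As a rough sketch of the model change described above (the surrounding fields are illustrative assumptions; the real class in `lightllm/server/api_models.py` has more fields):

```python
from typing import Dict, List, Optional
from pydantic import BaseModel

class ChatCompletionRequest(BaseModel):
    # Existing fields (illustrative subset, not the full lightllm model)
    model: str
    messages: List[Dict]
    # New in this PR: optional OpenAI-style response_format,
    # e.g. {"type": "json_object", "schema": {...}}
    response_format: Optional[Dict] = None
```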
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
An API takes a call,
Expects JSON, standing tall.
With schema defined,
Output aligned,
A perfect response for all.
Footnotes

[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
Code Review
This pull request adds support for response_format with a schema to enable guided JSON output in the OpenAI API. The core logic looks good. I've added one suggestion to improve the robustness of handling the schema object. Please also consider updating the API documentation to reflect this new parameter.
Summary of Findings
- Schema Type Validation: The handling of `response_format.schema` could be more robust by explicitly checking whether the schema is a dictionary before attempting to use it for guided JSON generation. This would prevent potential downstream errors if an invalid schema type is provided.
- API Documentation: The API documentation should be updated to include the new `response_format` parameter, detailing its usage and expected structure for the `/v1/chat/completions` endpoint.
Merge Readiness
The pull request is a valuable addition. I recommend addressing the suggestion regarding schema type validation for improved robustness. Once that's considered and the documentation is planned for an update, the PR should be in good shape. As an AI, I am not authorized to approve pull requests; please ensure further review and approval by the maintainers.
lightllm/server/api_openai.py (Outdated)

```python
if request.response_format:
    obj = request.response_format.get('schema')
    if obj:
        # guided_json takes str instead of dict obj
        sampling_params_dict['guided_json'] = json.dumps(obj)
```
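To see why the serialization step exists, here is a standalone sketch (not part of the diff) of the dict-to-string conversion that `guided_json` requires, per the comment in the code above:

```python
import json

# A toy JSON schema; any dict-shaped schema is handled the same way.
schema = {"type": "object", "properties": {"answer": {"type": "string"}}}

sampling_params_dict = {}
# guided_json is stored as a JSON string rather than a dict object
sampling_params_dict["guided_json"] = json.dumps(schema)

print(sampling_params_dict["guided_json"])
# -> {"type": "object", "properties": {"answer": {"type": "string"}}}
```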
The code currently retrieves the schema from request.response_format and, if it's truthy, serializes it to JSON for guided_json. However, if response_format.schema is provided but is not a dictionary (e.g., it's a string or a list), json.dumps() will still process it, but the resulting string is unlikely to be a valid JSON schema. This could lead to errors later when xgrammar tries to compile this schema.
To improve robustness, could we check if the retrieved schema object is indeed a dictionary before proceeding? If it's present but not a dictionary, it might be better to log a warning and skip setting guided_json, or raise an error for invalid input format.
Using a more descriptive variable name like json_schema_object instead of obj could also enhance readability.
```python
json_schema_object = request.response_format.get('schema')
if isinstance(json_schema_object, dict):
    # guided_json takes str instead of dict obj
    sampling_params_dict['guided_json'] = json.dumps(json_schema_object)
elif json_schema_object is not None:
    # If 'schema' is provided but is not a dictionary, it's an invalid format.
    # Log a warning and skip setting guided_json, as it would likely fail downstream.
    logger.warning(
        f"Parameter 'response_format.schema' was provided but is not a dictionary (type: {type(json_schema_object)}). "
        f"Skipping guided JSON output. Schema must be a valid JSON object."
    )
```

Style Guide References
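Note that this suggestion assumes a module-level `logger` is already available in `api_openai.py` (for example, one obtained via Python's standard `logging.getLogger(__name__)`); if none exists, one would need to be added for the warning branch to work.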
7b7a033 to a4a5eee (Compare)
No description provided.