
SEP: Add tool annotation descriptors in-code #190

@matteo8p

Description


Preface

This issue relates to tool annotations and is currently only relevant to the ChatGPT Apps SDK, because MCP Apps doesn't use tool annotations yet. I'm filing it as a SEP because I anticipate that MCP Apps will eventually use annotations in a similar way, and there isn't currently a better place for this proposal.

Whether or not we want annotations in MCP apps is a separate discussion.

Context

The ChatGPT Apps SDK requires every tool to define annotations in its tool descriptor. These annotations tell ChatGPT whether the tool is read-only or dangerous (enforced guardrails with explicit approval). See their reference docs for details.

The OpenAI app submission form requires developers to write justifications for these annotations.

[Image: annotation justification fields in the app submission form]

Issue

We're noticing that many developers miss the importance of setting annotations in ChatGPT apps; they only realize they have to set them when filling out the app submission form.

I think it's important for developers to think about these annotations early in their tool design process, not at the submission stage. Instead of writing the annotation justifications in the submission form, developers should write them down in-code, similar to how we write tool descriptions.

Proposed solution

I propose that developers define the annotation descriptions in code. This provides several benefits:

  • Forces developers to think about annotations intentionally at the early stages of tool design, not at the submission phase.
  • Auto-fills the app submission form, so developers don't have to re-enter annotation justifications there.
  • It's an extension of the tool description, giving the developer further clarity on what the tool does and its consequences.

The primary drawback I can think of is that tool code becomes slightly longer.

My proposed solution, which can be implemented today, is to bake these justifications into the tool's _meta as openai/annotations/readOnlyHint, etc. It follows existing OpenAI patterns and is simple.

What annotation descriptions might look like in a tool

  server.registerTool(
    "echo_test",
    {
      title: "Echo Test",
      description: "Echoes back a message.",
      _meta: {
        "openai/widgetAccessible": true,
        "openai/annotations/readOnlyHint": "It only echoes an input message, does not mutate anything",
        "openai/annotations/destructiveHint": "[justification here]",
        "openai/annotations/openWorldHint": "[justification here]",
      },
      inputSchema: {
        ...
      },
      annotations: {
        readOnlyHint: true,
        destructiveHint: false,
        openWorldHint: false,
      }
    },
    async ({ }) => {
      ...
    },
  );
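To illustrate the "auto updates on app submission" benefit, here is a minimal sketch of how a submission tool could harvest these justifications from a descriptor's `_meta`. The helper name `collectAnnotationJustifications` is hypothetical and not part of any SDK:

```typescript
// Hypothetical helper: pull the "openai/annotations/*" justification strings
// out of a tool descriptor's _meta so a submission tool could auto-fill the form.
const PREFIX = "openai/annotations/";

function collectAnnotationJustifications(
  meta: Record<string, unknown>,
): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [key, value] of Object.entries(meta)) {
    // Keep only string values under the annotations prefix; ignore other
    // _meta entries such as "openai/widgetAccessible".
    if (key.startsWith(PREFIX) && typeof value === "string") {
      out[key.slice(PREFIX.length)] = value;
    }
  }
  return out;
}

// Example: the _meta block from the echo_test descriptor above.
const justifications = collectAnnotationJustifications({
  "openai/widgetAccessible": true,
  "openai/annotations/readOnlyHint":
    "It only echoes an input message, does not mutate anything",
});
// justifications.readOnlyHint now holds the readOnlyHint justification.
```

Because the justifications live next to the annotation values themselves, tooling like this can surface mismatches (e.g. a justification for a hint the tool never sets) long before submission.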

The above solution is the simplest to implement, but I think a more robust solution is to bake the descriptions into annotations itself. This would require a change to the ModelContextProtocol spec, so approval would take longer.

This is what it would look like:

  server.registerTool(
    "echo_test",
    {
      title: "Echo Test",
      description: "Echoes back a message.",
      _meta: {
        "openai/widgetAccessible": true,
        "openai/annotations/readOnlyHint": "It only echoes an input message, does not mutate anything",
        "openai/annotations/destructiveHint": "[justification here]",
        "openai/annotations/openWorldHint": "[justification here]",
      },
      inputSchema: {
        ...
      },
      annotations: {
        readOnlyHint: {
          value: true,
          description: "This only echoes an input string."
        },
        ...
      }
    },
    async ({ }) => {
      ...
    },
  );
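A spec change like this would likely need to stay backward compatible with the existing bare-boolean hints. A minimal sketch of what that union type and a normalizer could look like (all names here are hypothetical, not from the MCP spec):

```typescript
// Hypothetical richer annotation shape: each hint is either the current
// bare boolean or an object that also carries a justification.
type AnnotationHint = boolean | { value: boolean; description?: string };

// Normalize to the current boolean form so existing consumers keep working.
function hintValue(hint: AnnotationHint | undefined): boolean | undefined {
  if (hint === undefined) return undefined;
  return typeof hint === "boolean" ? hint : hint.value;
}

// Extract the justification, if the richer object form was used.
function hintDescription(hint: AnnotationHint | undefined): string | undefined {
  return typeof hint === "object" ? hint.description : undefined;
}
```

With accessors like these, clients that only care about guardrail behavior keep reading a boolean, while submission tooling can read the co-located description.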
