SEP-1686: Tasks #1041
Conversation
SEP-1686 currently states that it "introduces a mechanism for requestors (which can be either clients or servers, depending on the direction of communication) to augment their requests with tasks." Is this still the case?
I don't see any tests or examples demonstrating use of a TaskStore with an McpClient (it's all McpServer or the Protocol base type), although I suppose it should work, considering it's shared code. It would still be nice to have an end-to-end test demonstrating that client-side support for features like elicitation works through the public APIs.
```typescript
 * Note: This is not suitable for production use as all data is lost on restart.
 * For production, consider implementing TaskStore with a database or distributed cache.
 */
export class InMemoryTaskStore implements TaskStore {
```
Is there a reason we cannot create a default in-memory task store with settable max limits on the total number of tasks?
I like that this PR adds the TaskStore abstraction which opens the door for a distributed implementation and doesn't require the application developer to manually handle tasks/get, tasks/list, etc., but it feels weird to me to not have a production-ready in-memory solution provided by the SDK.
It should still be off by default to make sure people are explicit about any maxTrackedTask limits and the like, but it should be built in and not left to an example that is "not suitable for production." This will prove it's possible to implement a production-quality TaskStore in the easiest case where everything is tracked in-memory and lost on process restart.
- Note: This is not suitable for production use as all data is lost on restart.
- For production, consider implementing TaskStore with a database or distributed cache.
This is true, but I think the impact is overstated. The logic in protocol.ts that calls _taskStore.updateTaskStatus(taskMetadata.taskId, 'input_required'); and then sets the task back to 'working' when a request completes also breaks if the server restarts. If the process exits before the client provides a response, the task will be left in an 'input_required' state indefinitely without manual intervention outside of the SDK.
In most cases supported by the SDK, when tasks cannot outlive the MCP server process, an in-memory TaskStore would be better than a distributed one because the client will be informed that the Task is no longer being tracked by the server after it restarts automatically.
I don't disagree, but precedent was the main reason for leaving this as an example and leaving out any limits - it's the same situation with InMemoryEventStore and InMemoryOAuthClientProvider. @ihrpr @felixweinberger any input here?
If the SDK is going to provide a production-grade implementation, it needs to have limits and some form of logging extension points, but if it is not going to provide a production-grade implementation, I don't want to misrepresent this example as one.
The SDK can never provide a production-ready service implementation, because that will always require some additional resources
Seems right to me to have this as an interface - it seems hard to provide a generic "production-ready" approach here; I guess we could use sqlite? But I think the argument with the InMemoryEventStore and InMemoryOAuthClientProvider is convincing. If we wanted to create production-ready examples, we can do that as follow-ups.
However, it's important this is documented clearly, which it is in the README.
Agreed, will add this. edit: Done
Can you clarify what you mean by this part? If you mean that you would put the queue directly between the client and server, rather than having it within the server, that sounds like a custom transport, if I'm interpreting it correctly. A custom transport that handles this would work, but we do need something for sHTTP, and I'd prefer not to give the sHTTP transport special logic for particular JSON-RPC requests if that can be avoided. The message queue here is transport-agnostic. Specifically, if this were pushed into the transport, sHTTP would need to handle this by:
Both of these things sound like they should be the responsibility of
Thanks @LucaButBoring for implementing this!
I think my primary feedback is that the README needs to be updated / expanded to explain all the APIs we expect people to use. This could be done in a follow-up though.
I also tried to test the examples but ran into some issues here, due to a discrepancy in taskId creation between server and client: InMemoryTaskStore ignores the taskId being created:
```typescript
// Generate a simple task ID (in production, use a more secure method)
const taskId = `task-${Date.now()}-${Math.random().toString(36).substring(2, 9)}`;
```
And just uses the `generateTaskId()` result instead:
```typescript
const taskId = this.generateTaskId();
```
So I think if we fix those 2 things this is good to go from my side. Thanks for adding a whopping 11k+ lines of test coverage for this feature!
Co-authored-by: Felix Weinberger <[email protected]>
Removes some more now-redundant parameters from RequestTaskStore.createTask and removes a redundant throw from the example.
@felixweinberger Updated the example and README, let me know what you think 😄 I also simplified the Finally, I added
Updated @maxisbey |
src/types.ts
Outdated
```typescript
export const TaskStatusNotificationParamsSchema = z.object({
    task: TaskSchema
});
```
I was testing interplay between a python server <> typescript client and vice versa.
- python server <> typescript client worked
- python client <> typescript server ran into some validation issues because of this
I think this should actually be:
```typescript
export const TaskStatusNotificationParamsSchema = NotificationsParamsSchema.merge(TaskSchema);
```
Because the spec doesn't nest task in the notification: https://modelcontextprotocol.io/specification/draft/schema#taskstatusnotificationparams
```typescript
TaskStatusNotificationParams: NotificationParams & Task
```
Updated in latest commit
@maxisbey @LucaButBoring I tested TS <> Py. The following worked:
But this didn't work:
To make this work cleanly I had to make 2 small changes:
Then I get this:
TS SDK fix is in: 681cf0d
Gonna test the elicitation examples as well; assuming that works fine, I think we should get this landed.
This PR implements the required changes for modelcontextprotocol/modelcontextprotocol#1686, which adds task augmentation to requests.
Motivation and Context
The current MCP specification supports tool calls that execute a request and eventually receive a response, and tool calls can be passed a progress token to integrate with MCP's progress-tracking functionality, enabling host applications to receive status updates for a tool call via notifications. However, there is no way for a client to explicitly request the status of a tool call, so a tool call may have been dropped on the server with no way for the client to know whether a response or notification will ever arrive. Similarly, there is no way for a client to explicitly retrieve the result of a tool call after it has completed — if the result was dropped, clients must call the tool again, which is undesirable for tools expected to take minutes or more. This is particularly relevant for MCP servers abstracting existing workflow-based APIs, such as AWS Step Functions, Workflows for Google Cloud, or APIs representing CI/CD pipelines, among other applications.
This proposal (and implementation) solves this by introducing the concept of Tasks, which are pending work objects that can augment any other request. Clients generate a task ID and augment their request with it — that task ID is both a reference to the request and an idempotency token. If the server accepts the task augmentation request, clients can then poll for the status and eventual result of the task with the `tasks/get` and `tasks/result` operations.
How Has This Been Tested?
Unit tests and updated sHTTP example.
Breaking Changes
None.
Types of changes
Checklist
Additional context