CLI command to generate a project plan with a structured-output LLM API call to a "smart" model #13

@chriscarrollsmith

Description

Goal: Implement an MCP tool for generating a project plan using an LLM

  1. Install the Vercel AI SDK: npm install ai. Also install the provider packages: npm install @ai-sdk/openai @ai-sdk/google @ai-sdk/deepseek.
  2. The user can set DEEPSEEK_API_KEY, GEMINI_API_KEY, or OPENAI_API_KEY as an environment variable when running index.ts to start the server (or in the shell where cli.ts is run). This is essentially already supported, but needs to be documented.
  3. Create the Tool object with tool-calling schema in src/server/tools.ts. I'm thinking it should take a text or filepath for the prompt parameter and an arbitrary number of optional file attachments. The prompt and text content of the attachments should be wrapped in XML tags (e.g. <prompt>...</prompt> and <attachment>...</attachment>). Additionally, there should probably be provider and model parameters.
  4. Create a toolExecutor for the tool schema in src/server/toolExecutors.ts to perform validation and call a TaskManager method that will handle the actual operation to generate and create the project plan. Validation should include a check that the provider API key is set as an environment variable.
  5. Create a TaskManager method to handle the operation. It should use the Vercel AI SDK to generate the structured output. The tool schema should be passed in as an argument to the SDK method to inform the LLM of the expected output structure. (The tool schema can be accessed with dot notation from the Tool object.) Once the LLM response is received, it should be used to call the createProject method.
  6. The results of the createProject method should be returned to the toolExecutor to be returned to the user.
  7. As a bonus, we will also create a new command in src/client/cli.ts to call the createProject method.

Files to touch: src/utils/errors.ts, src/types/index.ts, src/server/tools.ts, src/server/toolExecutors.ts, src/server/TaskManager.ts, src/client/cli.ts

Implementation Plan

Below is a detailed step-by-step project plan to implement the new “Generate Project Plan” feature using the AI SDK, building on the existing MCP framework in this repository. These steps assume you want to add a new tool (e.g., "generate_project_plan") that accepts a prompt (and optionally file attachments) and then automatically creates a corresponding project with tasks. This plan also aligns with the AI SDK documentation for generating or streaming structured data (including how to provide a Zod schema, how to handle malformed JSON, etc.).

────────────────────────────────────────────────────────────────────
1. Install Dependencies & Document Environment Variables
────────────────────────────────────────────────────────────────────
1.1 Add the Vercel AI SDK:
npm install ai
1.2 Install the provider libraries you intend to use:
npm install @ai-sdk/openai @ai-sdk/google @ai-sdk/deepseek
1.3 Document that DEEPSEEK_API_KEY, GEMINI_API_KEY, and OPENAI_API_KEY can be set in the environment.
• Update your README or environment docs to clarify how each provider is selected at runtime.
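
For instance, a minimal startup-check sketch (names and placement hypothetical) that the docs could reference, so a missing key surfaces at server start rather than at the first tool call:

// Hypothetical startup check in index.ts: warn early if no provider key is set.
const PROVIDER_KEYS = ["OPENAI_API_KEY", "GEMINI_API_KEY", "DEEPSEEK_API_KEY"];

if (!PROVIDER_KEYS.some((key) => process.env[key])) {
  console.warn(
    "No LLM provider API key found; generate_project_plan will fail until one is set."
  );
}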

────────────────────────────────────────────────────────────────────
2. Create a New Tool Definition in src/server/tools.ts
────────────────────────────────────────────────────────────────────
2.1 In src/server/tools.ts, add a new Tool definition named "generate_project_plan" (or a similar name).
• The tool’s inputSchema should expect:
- prompt: string (or optional file path)
- provider: string (e.g., "openai" or "google" or "deepseek")
- model: string (e.g., "gpt-4-turbo" or "gemini-1.5-flash-latest")
- attachments: an optional array of file contents or text.
• Because the prompt and attachments will be wrapped in XML tags, note this in the tool's instructions or description so that the calling code does the proper formatting.

Example snippet (pseudo-code):

// ...
const generateProjectPlanTool: Tool = {
  name: "generate_project_plan",
  description: "Use LLM to generate a project plan and tasks for a given prompt.",
  inputSchema: {
    type: "object",
    properties: {
      prompt: { type: "string" },
      provider: { type: "string" },
      model: { type: "string" },
      attachments: {
        type: "array",
        items: { type: "string" }
      }
    },
    required: ["prompt", "provider", "model"]
  }
};
// ...

2.2 Add generateProjectPlanTool to the ALL_TOOLS array so it is registered like the others.

────────────────────────────────────────────────────────────────────
3. Create a Matching Tool Executor in src/server/toolExecutors.ts
────────────────────────────────────────────────────────────────────
3.1 Define a new executor, e.g., generateProjectPlanToolExecutor, that:
• Validates "prompt", "provider", "model" as required strings.
• Validates attachments if present (ensures it’s an array of strings).
• Verifies that the correct provider API key (depending on provider) is set in process.env. If not set, throw an error.
• Calls a new TaskManager method, e.g., taskManager.generateProjectPlan(...).

Example snippet (pseudo-code):

const generateProjectPlanToolExecutor: ToolExecutor = {
  name: "generate_project_plan",
  async execute(taskManager, args) {
    // 1) Validate required params
    const prompt = validateRequiredStringParam(args.prompt, "prompt");
    const provider = validateRequiredStringParam(args.provider, "provider");
    const model = validateRequiredStringParam(args.model, "model");

    // 2) Validate attachments
    let attachments: string[] = [];
    if (Array.isArray(args.attachments)) {
      attachments = args.attachments.map((a) => String(a));
    }

    // 3) Check environment variable
    // e.g., if (provider === "openai" && !process.env.OPENAI_API_KEY) throw ...

    // 4) Call the new TaskManager method
    const result = await taskManager.generateProjectPlan({
      prompt,
      provider,
      model,
      attachments
    });

    // 5) Return the result in the standard tool format
    return formatToolResponse(result);
  }
};

toolExecutorMap.set(generateProjectPlanToolExecutor.name, generateProjectPlanToolExecutor);
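
One way to implement the environment check in step 3 is a small helper; the provider-to-env-var mapping below is an assumption based on the keys listed in step 1:

// Hypothetical helper for step 3: map each provider to the env var it requires.
const PROVIDER_ENV_VARS: Record<string, string> = {
  openai: "OPENAI_API_KEY",
  google: "GEMINI_API_KEY",
  deepseek: "DEEPSEEK_API_KEY",
};

function assertProviderKey(provider: string): void {
  const envVar = PROVIDER_ENV_VARS[provider];
  if (!envVar) {
    throw new Error(`Unknown provider "${provider}"; expected openai, google, or deepseek.`);
  }
  if (!process.env[envVar]) {
    throw new Error(`Missing ${envVar}; set it before calling generate_project_plan.`);
  }
}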

────────────────────────────────────────────────────────────────────
4. Implement the generateProjectPlan Method in src/server/TaskManager.ts
────────────────────────────────────────────────────────────────────
4.1 In the TaskManager class, create a new method, e.g. generateProjectPlan(options).
• Use the AI SDK (e.g., from "ai" or "@ai-sdk/openai") to call generateObject or streamObject.
• Provide a Zod schema or JSON schema that describes the structure of the returned plan. For example, you might want the LLM to return:
{
  projectPlan: string,
  tasks: [
    { title: string, description: string, ... }
  ]
}
• Pass the prompt and attachments as part of the final prompt, wrapped in the XML tags described above: the user's question in <prompt>...</prompt>, followed by one <attachment>...</attachment> block per file.
• The provider and model can be selected in code; e.g., openai(model) or google(model).

4.2 Use generateObject from the AI SDK:
• If valid structured JSON is returned, create a new project via this.createProject(...).
• If invalid or incomplete JSON, handle the error or attempt repair.

Example snippet (pseudo-code):

import { generateObject } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const projectPlanSchema = z.object({
  projectPlan: z.string(),
  tasks: z.array(z.object({
    title: z.string(),
    description: z.string(),
    // optionally toolRecommendations, ruleRecommendations, etc.
  }))
});

public async generateProjectPlan({
  prompt,
  provider,
  model,
  attachments
}: {
  prompt: string;
  provider: string;
  model: string;
  attachments: string[];
}) {
  // 1) Assemble the final LLM prompt, wrapping each piece in XML tags
  let llmPrompt = `<prompt>${prompt}</prompt>`;
  for (const att of attachments) {
    llmPrompt += `\n<attachment>${att}</attachment>`;
  }

  // 2) Choose provider client
  // e.g. if (provider === "openai") { const modelClient = openai(model); } ...

  // 3) Call generateObject
  let structured;
  try {
    const { object } = await generateObject({
      model: openai(model), // or google(model), deepseek(model), etc.
      schema: projectPlanSchema,
      prompt: llmPrompt,
    });
    structured = object;
  } catch (err) {
    // Re-throw (or wrap) the error; silently swallowing it here would
    // leave `structured` undefined when it is used below.
    throw err;
  }

  // 4) structured now has { projectPlan, tasks }. Create a project
  const creationResult = await this.createProject(
    structured.projectPlan,
    structured.tasks
  );
  return creationResult;
}

Notes:
• If you want partial streaming, you could use streamObject instead of generateObject.
• If you want to pass advanced parameters, you can do so (e.g., temperature, maxTokens, etc.).
• Make sure your environment variables (OPENAI_API_KEY, GEMINI_API_KEY, etc.) are set based on which provider you choose.
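
A minimal sketch of the provider selection mentioned in 4.1, assuming the default provider entry points exported by each @ai-sdk package:

import { openai } from "@ai-sdk/openai";
import { google } from "@ai-sdk/google";
import { deepseek } from "@ai-sdk/deepseek";
import type { LanguageModel } from "ai";

// Resolve a provider/model pair to an AI SDK model client.
function resolveModel(provider: string, model: string): LanguageModel {
  switch (provider) {
    case "openai":
      return openai(model);
    case "google":
      return google(model);
    case "deepseek":
      return deepseek(model);
    default:
      throw new Error(`Unsupported provider: ${provider}`);
  }
}

The generateObject call in 4.2 can then use model: resolveModel(provider, model) instead of hard-coding openai(model).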

────────────────────────────────────────────────────────────────────
5. Return the New Project from generateProjectPlan to the Executor
────────────────────────────────────────────────────────────────────
5.1 Inside generateProjectPlan, the successful result is a StandardResponse from createProject. Return that to the executor.
5.2 The executor returns via formatToolResponse, so the final output to the user becomes a JSON block with the new project ID, tasks, etc.
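
For illustration only (the inner payload fields are hypothetical; the envelope matches the result.content[0].text access in the CLI snippet below):

{
  "content": [
    {
      "type": "text",
      "text": "{ \"status\": \"success\", \"projectId\": \"proj-1\", \"totalTasks\": 3 }"
    }
  ]
}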

────────────────────────────────────────────────────────────────────
6. Write or Update Documentation & Error Handling
────────────────────────────────────────────────────────────────────
6.1 For each major step, ensure you have adequate try/catch or error-handling logic (e.g., NoObjectGeneratedError from the AI SDK).
6.2 Log or track meaningful error messages if environment variables are missing or if the LLM fails to parse JSON.
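
A minimal sketch of this error handling, assuming the NoObjectGeneratedError export and its isInstance guard from the AI SDK (verify against your installed version):

import { generateObject, NoObjectGeneratedError } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const planSchema = z.object({ projectPlan: z.string() });

// Distinguish schema/parse failures from other errors (missing keys, network, etc.).
async function generatePlanOrThrow(prompt: string) {
  try {
    const { object } = await generateObject({
      model: openai("gpt-4-turbo"),
      schema: planSchema,
      prompt,
    });
    return object;
  } catch (err) {
    if (NoObjectGeneratedError.isInstance(err)) {
      // The model's output could not be parsed against the schema.
      throw new Error(`LLM returned malformed plan JSON: ${err.message}`);
    }
    throw err;
  }
}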

────────────────────────────────────────────────────────────────────
7. (Bonus) Add a New CLI Command in src/client/cli.ts
────────────────────────────────────────────────────────────────────
7.1 In cli.ts, create a new subcommand called "generate-plan" (or similar). For example:

program
  .command("generate-plan")
  .description("Generate a new project plan via LLM and create a project.")
  .requiredOption("--prompt <text>", "Prompt text to feed to the LLM")
  .option("--model <model>", "LLM model, e.g. gpt-4-turbo", "gpt-4-turbo")
  .option("--provider <provider>", "Provider, e.g. openai", "openai")
  .action(async (options) => {
    try {
      // e.g. pass these to the new tool via a direct call or an HTTP call
      const result = await executeToolWithErrorHandling("generate_project_plan", {
        prompt: options.prompt,
        provider: options.provider,
        model: options.model,
        attachments: []
      }, taskManager);

      console.log(result.content[0].text);
    } catch (err) {
      // handle
    }
  });

7.2 Optionally allow file attachments via flags: --attachmentFile <path>. You would read the file from disk, append the content to attachments, and pass them to the tool, as sketched below.
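
A sketch of the repeatable flag using Commander's option-coercion callback (the flag name is hypothetical; this extends the generate-plan command from 7.1):

import { readFileSync } from "fs";

// Collect each --attachment-file occurrence, reading the file contents eagerly.
const collectAttachment = (path: string, previous: string[]): string[] =>
  [...previous, readFileSync(path, "utf-8")];

program
  .command("generate-plan")
  // ...options from 7.1...
  .option(
    "--attachment-file <path>",
    "File whose contents are attached to the prompt (may be repeated)",
    collectAttachment,
    [] as string[]
  )
  .action(async (options) => {
    // options.attachmentFile is a string[] of file contents;
    // pass it as the attachments argument to generate_project_plan.
  });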

────────────────────────────────────────────────────────────────────
8. Test the Workflow End-to-End
────────────────────────────────────────────────────────────────────
8.1 Ensure that environment variables like OPENAI_API_KEY are set.
8.2 Run the CLI command for generate-plan or call the tool through your usual server.
8.3 Confirm that the answer from the LLM is properly parsed into structured JSON.
8.4 Check that a new project is created with tasks.
8.5 Verify error handling when the LLM returns malformed JSON or if no tasks are produced.

────────────────────────────────────────────────────────────────────
Conclusion
────────────────────────────────────────────────────────────────────
Following the steps above, you will have:
• A new “generate_project_plan” tool to gather high-level project specs from user prompts (plus optional attachments).
• A corresponding generateProjectPlanToolExecutor to validate inputs and talk to the TaskManager.
• A generateProjectPlan method in TaskManager that leverages the AI SDK to parse an LLM-generated plan.
• Automatic project creation in your existing data store.
• (Optionally) a CLI command for direct usage.

This architecture respects the existing “MCP Tools” pattern, keeps high-level logic in the tool executor, and centralizes data operations in TaskManager. Adjust or refine the approach for more advanced error recovery, streaming, or custom Zod schemas as your feature set grows.
