
Conversation

@fightZy fightZy commented Jun 3, 2025

Modifications:

  1. Add a data size reading interface.
  2. Allow reading multiple nodes at once.
  3. Dynamically inject a tool prompt to guide the LLM to prune and read data when encountering sizes exceeding the specified limit.

Core Objective of the Prompt:
Guided by the "pruning" concept, the Agent decomposes data reading top-down. If a node's size exceeds the limit, it reads only level-1 data (self-properties and child node information), then applies the same rule recursively to child nodes until completion.
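The recursive decomposition described above can be sketched as follows (the names `NodeInfo`, `SizeLookup`, and `planReads` are illustrative, not the PR's actual API):

```typescript
// Illustrative sketch of the top-down pruning read.

interface NodeInfo {
  id: string;
  children: NodeInfo[];
}

// Size lookup in KB; in the real tool this would come from the new
// data-size reading interface.
type SizeLookup = (nodeId: string) => number;

const SIZE_LIMIT_KB = 150;

// If a node fits under the limit, read it fully in one request.
// Otherwise read only depth-1 structure and recurse into each child.
function planReads(node: NodeInfo, sizeOf: SizeLookup, plan: string[] = []): string[] {
  if (sizeOf(node.id) <= SIZE_LIMIT_KB) {
    plan.push(`full:${node.id}`);
  } else {
    plan.push(`depth1:${node.id}`);
    for (const child of node.children) {
      planReads(child, sizeOf, plan);
    }
  }
  return plan;
}
```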

Experimental Results:
Advantages:

  • Significantly improves the experience of reading large datasets, directly avoiding exceptions caused by bulk data retrieval.

Disadvantages:

  1. Lengthy Process: The task cannot be completed in one go. For example, restoring a design mockup to code in Cursor requires multiple rounds of dialogue to progress step by step.
  2. Context Overflow: After multiple dialogue rounds, data exceeds the context window, causing the LLM to forget prior information.

Improvement for Disadvantage 2:
Enable the Agent to automatically maintain a task document record.md to log all task-related content or to-dos. Each dialogue starts with reading record.md.

My Experimental Input in Cursor:

```
@App.tsx
Gradually restore the design to code on this page
@figma url

After fetching data, read, record, and update the todo list in the document, including any to-dos and task progress for the current task. Record Figma node IDs when specifying data for easy lookup @record.md
```

Usage:
Specify the maximum data retrieval limit per request via env GET_NODE_SIZE_LIMIT (unit: KB):

"Framelink Figma MCP": {  
  "command": "<command>",  
  "args": [  
    <...args>  
  ],  
  "env": {  
    "GET_NODE_SIZE_LIMIT": 150  
  }  
},  
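A minimal sketch of how the server might read this variable (the PR's actual parsing and default value may differ):

```typescript
// Parse GET_NODE_SIZE_LIMIT from the environment, falling back to a
// default when the variable is unset or malformed.
function parseSizeLimit(raw: string | undefined, fallbackKb = 150): number {
  const parsed = Number(raw);
  return Number.isFinite(parsed) && parsed > 0 ? parsed : fallbackKb;
}

const sizeLimitKb = parseSizeLimit(process.env.GET_NODE_SIZE_LIMIT);
```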

My own test scenarios may be relatively limited, so everyone is welcome to experiment and share suggestions.

张业 and others added 7 commits May 22, 2025 15:50
- Introduced size calculations for YAML and JSON outputs.
- Updated logging to include data sizes for generated results.
- Refactored log writing to support JSON format and improved directory handling for logs.
- Added a utility function to streamline JSON to YAML logging.
- Removed YAML size calculations and logging from the MCP server.
- Updated logging to only include JSON data size.
- Added environment check to conditionally log JSON to YAML in Figma service.
- Introduced a new tool to calculate the memory size of Figma nodes, allowing multiple node IDs to be processed simultaneously.
- Updated the existing tool to fetch Figma data, incorporating size limits for data retrieval based on YAML output.
- Refactored the Figma service to accept multiple node IDs for fetching node data.
- Optimize the `jest.config.cjs` file to configure the Jest test environment and TypeScript support.
- Update `tsconfig.json` to exclude the test folder.
- Add `tsconfig.test.json`, a dedicated TypeScript configuration for testing.
- Updated the error message to provide more detailed information when the data size exceeds the specified limit, including the file key and node ID.
GLips commented Jun 4, 2025

Nice! This is extremely promising. The LLM did a pretty great job with a multi-frame app mockup I've got.

I'm going to review the code more closely but this is working great. Very exciting.

One thing I think we'll want to do is refine the prompt a bit to make sure the LLM follows the instructions. Specifically, I found that Claude Sonnet 4 (which I was testing with) wanted to get ALL the data before implementing anything, which kinda defeats the purpose of the pruning strategy.

Updating the prompt helped it work more iteratively—grabbing one screen's design first and in full, implementing it, then moving on to the next frame. Still have a bit more refining and testing to do there but I love this. Nice work!

Will be back a bit later with more thoughts on any code organization or the like.

fightZy commented Jun 5, 2025

Yes, in my experiments the LLM does not follow the prompt to fetch data step by step 100% of the time. My current approach is to steer the LLM through the tool's error messages, as the code currently does. Of course, it would be better if we can optimize the prompt further! Looking forward to hearing more from you!
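The commit history mentions an error message that includes the file key and node ID when the size limit is exceeded. A hypothetical sketch of such a steering error (the function name is illustrative, not the PR's actual code):

```typescript
// Instead of a bare failure, the error response tells the model what to
// try next, nudging it back onto the pruning strategy.
function buildOversizeError(
  fileKey: string,
  nodeId: string,
  sizeKb: number,
  limitKb: number,
): string {
  return (
    `Data for node ${nodeId} in file ${fileKey} is ~${sizeKb}KB, which exceeds ` +
    `the ${limitKb}KB limit. Re-request it with depth: 1 to get structure only, ` +
    `then fetch child nodes individually.`
  );
}
```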

Repository owner deleted a comment from cursor bot Jun 6, 2025
fightZy commented Jun 6, 2025

I’ve tested the latest prompt and think it’s already excellent and crystal-clear. However, I still think there are some points worth discussing.

In my testing:

  1. The first-step behavior varies significantly across models, manifesting in three scenarios:
    1. get_figma_data_size;
    2. get_figma_data (full data);
    3. get_figma_data with the depth: 1 parameter.

| Model | First-Step Behavior |
| --- | --- |
| Claude 4 | Predominantly 2, with occasional 1s. |
| Claude 4-thinking | Mostly 1, with a few instances of 2. |
| Gemini-2.5-pro | All three scenarios occurred, with 1 and 3 dominating. |
| GPT-4.1 | All three scenarios occurred, primarily 1 and 3. |

In summary: performance across models is unstable, and scenario 3 is undesirable.

  2. The iteration steps may not follow expectations. When handling oversized data, after retrieving the data size and using depth: 1, Claude 4-thinking often tries to fetch the full data again. Most models also attempt to read multiple child nodes simultaneously, rather than following the prompt's "depth-first" instruction.

Based on this, I further optimized the prompt:

  1. Strengthened the data size limit prompt, leading most models to start with get_figma_data_size as the first step.
  2. Replaced "screen/component" with the more universal term "node," resulting in more stable model behavior.
  3. Emphasized selecting from child nodes, to avoid attempts to retrieve full data from parent nodes.

The prompt content is below; I look forward to your guidance when you have time to review it again. @GLips

## Figma Data Size Guidelines
- **Over ${sizeLimit}KB**: fetch with `depth: 1` to get structure only, then follow the *Pruning Reading Strategy*
- **Under ${sizeLimit}KB**: fetch the full data without the depth parameter

## Figma Data Pruning Reading Strategy

**IMPORTANT: Work incrementally, not comprehensively.**

### Core Principle
Retrieve and implement ONE node at a time. Don't try to understand the entire design upfront.

### Pruning Reading Process
1. **Start Small**: Get shallow data (`depth: 1`) of the main node to see basic information about the node itself and its direct children
2. **Pick One**: Choose one from the child nodes to implement completely
3. **Get Full Data**: Retrieve complete data for that one node only
4. **Implement**: Implement specifically according to user needs based on the content of the node
5. **Repeat**: Move to the next node only after the current one is done

### Key Point
**Don't analyze multiple nodes in parallel.** Focus on implementing one complete, working node at a time. This avoids context overload and produces better results.
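One possible shape for the dynamic injection described in the PR's modifications: append these guidelines to the tool description only when a size limit is configured. This is a hypothetical helper, not the PR's actual code:

```typescript
// Append the size guidelines to the tool prompt only when a limit is set.
function buildToolDescription(base: string, sizeLimitKb?: number): string {
  if (!sizeLimitKb) return base; // no limit configured: leave the tool prompt unchanged
  return (
    `${base}\n\n## Figma Data Size Guidelines\n` +
    `- Over ${sizeLimitKb}KB: fetch with depth: 1, then follow the Pruning Reading Strategy\n` +
    `- Under ${sizeLimitKb}KB: fetch full data without the depth parameter`
  );
}
```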

GLips commented Jun 6, 2025

Sweet. I just released what we've got here to a new beta package and let folks in the Discord know. Gonna keep experimenting myself and see if we get any feedback in there. Hope we can cut a main release for this later next week.

fightZy added 2 commits June 22, 2025 14:36
…hance effects handling in transformers

- Updated the version of @figma/rest-api-spec in package.json.
- Added support for TextureEffect and NoiseEffect in effects.ts, including functions to simplify and generate styles for these effects.
- Improved the buildSimplifiedEffects function to handle visibility checks more robustly.
- Enhanced parsePaint function in common.ts to support PATTERN type for Figma paints.
- Todo: Test the newly added figma types.
- Added a caching layer using LRUCache to optimize retrieval of Figma node data.
- Introduced ParseDataCache class to manage cache operations, including validation of cache freshness based on file metadata.
- Enhanced FigmaService to utilize caching for node requests, improving performance and reducing API calls.
- Updated config to include a useCache option for enabling/disabling caching.
- Added tests for cache functionality, ensuring correct behavior for cache hits, misses, and freshness validation.
fightZy commented Jun 22, 2025

I have added an optional caching layer to the data acquisition logic, which gives a significant speed advantage when repeatedly reading data from a large file. Additionally, I updated the Figma API version to use a new interface. I noticed some new data types and did simple adaptations, but haven't tested them yet, as I haven't found design files containing the corresponding data.
The handling of the new data types may need further consideration. Looking forward to your reply. @GLips
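The freshness idea behind the cache, per the commit notes, is to reuse an entry only while the file's metadata is unchanged. A minimal sketch under that assumption (the PR uses LRUCache with eviction; a plain Map keeps this self-contained, and the class name here is hypothetical):

```typescript
// A cached entry is reused only if the file's lastModified timestamp
// matches; otherwise it is treated as stale and evicted.

interface CacheEntry<T> {
  lastModified: string;
  data: T;
}

class ParseDataCacheSketch<T> {
  private store = new Map<string, CacheEntry<T>>();

  get(key: string, currentLastModified: string): T | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined; // miss
    if (entry.lastModified !== currentLastModified) {
      this.store.delete(key); // stale: the Figma file changed since caching
      return undefined;
    }
    return entry.data; // fresh hit: skip the API call
  }

  set(key: string, lastModified: string, data: T): void {
    this.store.set(key, { lastModified, data });
  }
}
```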

fightZy added 2 commits July 24, 2025 22:48
- Introduced a new tool, get_figma_data_size, to fetch the memory size of Figma data.
- Updated index.ts to register the new tool with the server.
- Enhanced get_figma_data tool to include size limit handling and guidelines for data retrieval.
- Updated relevant types and imports across multiple files for consistency.
@stone-w4tch3r

Are there any plans to continue work on this PR and merge it?

I hope the caching feature will make it possible to use this MCP with the Figma free plan.
