Experimental feature: Allow the LLM to automatically prune and read large data. #158
Conversation
- Introduced size calculations for YAML and JSON outputs.
- Updated logging to include data sizes for generated results.
- Refactored log writing to support JSON format and improved directory handling for logs.
- Added a utility function to streamline JSON-to-YAML logging.
- Removed YAML size calculations and logging from the MCP server.
- Updated logging to only include JSON data size.
- Added an environment check to conditionally log JSON to YAML in the Figma service.
- Introduced a new tool to calculate the memory size of Figma nodes, allowing multiple node IDs to be processed simultaneously.
- Updated the existing tool to fetch Figma data, incorporating size limits for data retrieval based on YAML output.
- Refactored the Figma service to accept multiple node IDs when fetching node data.
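The size-limit check behind these commits can be sketched roughly as follows. This is a minimal illustration assuming plain JSON serialization; the function names, the error wording, and the limit handling are hypothetical, not the PR's exact API:

```typescript
// Hypothetical helper: byte length of the UTF-8 JSON serialization, in KB.
function getJsonSizeKb(data: unknown): number {
  return Buffer.byteLength(JSON.stringify(data), "utf8") / 1024;
}

// Usage sketch: refuse to return node data that exceeds the limit, and put
// the file key and node ID in the error so the LLM can react to it.
function assertWithinLimit(
  fileKey: string,
  nodeId: string,
  data: unknown,
  limitKb: number,
): void {
  const sizeKb = getJsonSizeKb(data);
  if (sizeKb > limitKb) {
    throw new Error(
      `Data for node ${nodeId} in file ${fileKey} is ${sizeKb.toFixed(1)} KB, over the ${limitKb} KB limit`,
    );
  }
}
```

Raising a descriptive error rather than silently truncating matters here: the error message itself is what steers the LLM toward requesting smaller chunks.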
- Optimized the `jest.config.cjs` file to configure the Jest test environment and TypeScript support.
- Updated `tsconfig.json` to exclude the test folder.
- Added `tsconfig.test.json`, a dedicated TypeScript configuration for testing.
- Updated the error message to provide more detailed information when the data size exceeds the specified limit, including the file key and node ID.
Nice! This is extremely promising. The LLM did a pretty great job with a multi-frame app mockup I've got. I'm going to review the code more closely but this is working great. Very exciting. One thing I think we'll want to do is refine the prompt a bit to make sure the LLM follows the instructions. Specifically, I found that Claude Sonnet 4 (which I was testing with) wanted to get ALL the data before implementing anything, which kinda defeats the purpose of the pruning strategy. Updating the prompt helped it work more iteratively—grabbing one screen's design first and in full, implementing it, then moving on to the next frame. Still have a bit more refining and testing to do there but I love this. Nice work! Will be back a bit later with more thoughts on any code organization or the like.
Yes, in my experiments the LLM does not follow the step-by-step data-retrieval prompt 100% of the time. My current approach is to steer the LLM through error messages in the response, as the code does now. Of course, it would be better if we can optimize the prompt! Looking forward to hearing more of your thoughts!
I’ve tested the latest prompt and think it’s already excellent and crystal-clear. However, I still think there are some points worth discussing. In my testing:
In summary: Performance across models is unstable, and scenario 3 is undesirable.
Based on this, I further optimized the prompt: prompt content. I look forward to your guidance when you have time to review it again. @GLips
Sweet. I just released what we've got here to a new beta package and let folks in the Discord know. Gonna keep experimenting myself and see if we get any feedback in there. Hope we can cut a main release for this later next week. |
Enhance effects handling in transformers
- Updated the version of `@figma/rest-api-spec` in package.json.
- Added support for TextureEffect and NoiseEffect in effects.ts, including functions to simplify and generate styles for these effects.
- Improved the buildSimplifiedEffects function to handle visibility checks more robustly.
- Enhanced the parsePaint function in common.ts to support the PATTERN type for Figma paints.
- Todo: test the newly added Figma types.
- Added a caching layer using LRUCache to optimize retrieval of Figma node data.
- Introduced a ParseDataCache class to manage cache operations, including validation of cache freshness based on file metadata.
- Enhanced FigmaService to utilize caching for node requests, improving performance and reducing API calls.
- Updated config to include a useCache option for enabling/disabling caching.
- Added tests for cache functionality, ensuring correct behavior for cache hits, misses, and freshness validation.
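The freshness-checked cache described in this commit can be sketched roughly as follows. This uses a plain `Map` as a stdlib stand-in for the PR's `lru-cache` dependency, and the class and method names are illustrative, not the PR's exact implementation:

```typescript
// Map-based stand-in for an LRU cache: evicts the least-recently-used entry
// at capacity, and invalidates an entry when the Figma file's lastModified
// timestamp no longer matches the one stored alongside the data.
class ParseDataCache {
  private cache = new Map<string, { lastModified: string; data: unknown }>();
  constructor(private maxEntries = 100) {}

  get(key: string, currentLastModified: string): unknown {
    const entry = this.cache.get(key);
    if (!entry) return undefined; // miss
    if (entry.lastModified !== currentLastModified) {
      this.cache.delete(key); // stale: the file changed since caching
      return undefined;
    }
    // Re-insert to mark as most recently used (Map keeps insertion order).
    this.cache.delete(key);
    this.cache.set(key, entry);
    return entry.data;
  }

  set(key: string, lastModified: string, data: unknown): void {
    if (!this.cache.has(key) && this.cache.size >= this.maxEntries) {
      const oldest = this.cache.keys().next().value;
      if (oldest !== undefined) this.cache.delete(oldest);
    }
    this.cache.set(key, { lastModified, data });
  }
}
```

Keying freshness off the file's `lastModified` metadata means one cheap metadata request can validate many cached node reads, which is where the speed-up on repeated reads of a large file comes from.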
I have added an optional caching layer to the data-acquisition logic, which gives a significant speed advantage when repeatedly reading data from a large file. Additionally, I updated the Figma API version to use the new interface. I noticed some new data types and did simple adaptations for them, but I haven't tested those adaptations yet, as I haven't found design files containing the corresponding data.
- Introduced a new tool, get_figma_data_size, to fetch the memory size of Figma data.
- Updated index.ts to register the new tool with the server.
- Enhanced the get_figma_data tool to include size-limit handling and guidelines for data retrieval.
- Updated relevant types and imports across multiple files for consistency.
Are there any plans to continue work on this PR and merge it? I hope the caching feature will make it possible to use this MCP with the Figma free plan.
Modifications:
Core Objective of the Prompt:

Guided by the "pruning" concept, the Agent decomposes data reading top-down. If a node's size exceeds the limit, it reads only level-1 data (self-properties and child node information), then applies the same rule recursively to child nodes until completion.
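The recursive pruning rule above can be sketched as follows. This is a runnable illustration using an in-memory tree and a toy size model in place of real Figma API calls; all names and the 10 KB-per-node weighting are hypothetical:

```typescript
interface FigmaNode {
  id: string;
  name: string;
  children?: FigmaNode[];
}

// Toy size model: every node weighs 10 KB plus its descendants' weight.
function sizeKb(node: FigmaNode): number {
  return 10 + (node.children ?? []).reduce((sum, c) => sum + sizeKb(c), 0);
}

// Returns the sequence of requests the Agent would issue: a subtree that fits
// the limit is fetched in one full request; an oversized one gets a shallow
// (level-1: self-properties plus child stubs) request, and the same rule is
// then applied to each child.
function planReads(node: FigmaNode, limitKb: number, plan: string[] = []): string[] {
  if (sizeKb(node) <= limitKb) {
    plan.push(node.id);
  } else {
    plan.push(`${node.id} (shallow)`);
    for (const child of node.children ?? []) planReads(child, limitKb, plan);
  }
  return plan;
}
```

With this rule, an oversized frame never arrives in one response: the Agent first sees its shallow outline, then drills into each child, so no single request exceeds the limit.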
Experimental Results:
Advantages:
Disadvantages:
Improvement for Disadvantage 2:
Enable the Agent to automatically maintain a task document `record.md` to log all task-related content or to-dos. Each dialogue starts with reading `record.md`.

My Experimental Input in Cursor:
Usage:
Specify the maximum data retrieval limit per request via the env variable `GET_NODE_SIZE_LIMIT` (unit: KB).

The scenarios of my self-experimentation may be relatively limited, and everyone is welcome to try the experiment and provide suggestions.
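For example, the limit could be set like this when launching the server. This is only a sketch: the 32 KB value is illustrative, and you should replace the launch command with however you actually run this MCP server locally:

```shell
# Illustrative: cap each data request at 32 KB when starting the server.
GET_NODE_SIZE_LIMIT=32 FIGMA_API_KEY=<your-token> npx figma-developer-mcp --stdio
```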