LibreChat Artifacts Preview not showing #5393
Replies: 5 comments 2 replies
-
Hi @danny-avila, a follow-up: I noticed it only works for me if I use Sonnet. Joe
-
Try setting `enforce: false`.
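Assuming this refers to `modelSpecs.enforce` in `librechat.yaml` (the reply doesn't say which config block it means), a minimal sketch would look like the following; the spec name, endpoint, and model are purely illustrative:

```yaml
# librechat.yaml -- minimal sketch, assuming "enforce" here means modelSpecs.enforce;
# when true, users are restricted to the model specs defined below.
modelSpecs:
  enforce: false      # leave model/endpoint selection unrestricted
  prioritize: true
  list:
    - name: "local-coder"             # hypothetical spec name
      label: "Local Coder"
      preset:
        endpoint: "LM Studio"         # hypothetical custom endpoint name
        model: "mistral-small-24b"    # hypothetical model id
```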
-
@joesa did you ever get it working with a local model? Your thread (and skimming some others, plus looking closely at his prompt while pausing Danny's video) helped me figure this out.

SOLUTION: I'm running LibreChat (self-hosted in Docker) + LM Studio + Mistral Small 24b, 4-bit MLX flavor (because I'm on an M4 with a single GPU, 48GB VRAM + 64GB RAM), and using a greatly simplified instruction in a LibreChat Agent (details below). I attempted this with their dedicated Codestral 22b model (MLX flavor) and it kept putting the actual "artifact" headers inside the normal code block, but perhaps with some more tinkering I could get it to work as well.

Update: Later I got the Codestral 22b GGUF flavor to work and correctly output artifacts using the same instruction below. It only made two slight mistakes by including [...]

Update: Solved this too. The model was confused by having two variations of the same instruction that I pulled from the various versions in Danny's JS file: "7. NEVER use triple backticks to enclose the artifact, ONLY the content within the artifact. Always use triple backticks (```) to enclose the content within the artifact, regardless of the content type." Removing the second instance solved this problem sometimes.

Update: Got a smaller ~10GB MLX flavor, codestral-22b-v0.1-mlx-3, working as well. Moving the Artifact prompt into the system instructions in LM Studio made this work every time. For some reason, these smaller local models otherwise just don't receive (or ignore) the Artifact prompt sometimes (i.e. it doesn't show up in the developer logs in LM Studio when sent by LibreChat).

From Danny's original, longer artifacts instruction/prompt, I stripped out all the JS and tags but kept the markdown, including the HTML-related instructions, to keep this simple for testing, and it works. So someone could just add back whatever they need from Danny's original JS file (e.g. his examples about Mermaid and React): https://raw.githubusercontent.com/danny-avila/LibreChat/refs/heads/main/api/app/clients/prompts/artifacts.js

My abbreviated instruction, pasted into my "coder with artifacts" agent (and check the toggles for enable artifacts + custom prompt mode).
NOTE ON BACKTICKS: In the plain text above I noted them simply as ``` and this seemed to help when using plain-text instructions in LibreChat or the system prompt in LM Studio. Perhaps the escaped version of it (/ [...]

Note the final instruction I added, "If changes are requested, they should also be placed inside the remark-directive markdown format for artifacts as the original code was.", because it was switching to inline HTML whenever I would ask it to make changes to the original code in LibreChat.

PS: I also increased the context length in LM Studio (the default is around 4,000 and I pushed it closer to 8,000) just in case. Not sure if that helped or if it was the simplified prompt in general. Previously this helped when I was getting errors using Cline with local models, since Cline sends extremely long system prompts: cline/cline#1446
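For reference, the remark-directive format these instructions are steering the model toward has roughly this shape, based on my reading of the artifacts.js file linked above (the identifier, type, and title values here are placeholders; verify the exact attribute names against that file):

````
:::artifact{identifier="responsive-grid-demo" type="text/html" title="Responsive Grid Demo"}
```
<!DOCTYPE html>
<html>
  <!-- the actual page markup goes inside the fenced block -->
</html>
```
:::
````

The failure mode described above (artifact headers ending up inside the normal code block) is the model emitting the `:::artifact{...}` line inside the triple backticks instead of around them.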
-
Here's a revised prompt where I've cut it down by ~300 words and made it more direct and less redundant. I started by asking Mistral (web version) to improve the prompt and then added back the things it really needed to work locally. This works every time with mistral-small-3.1-24b-instruct-2503@8bit and also correctly updates the artifact in the LibreChat window every time I request changes!
-
Pretty crazy, but I was able to get a really small model to correctly output the Artifact as well by reiterating things in the chat prompt. Small models seem to ignore the LibreChat Agent instructions unless you call them out explicitly, or for some reason they never even receive the LibreChat Agent instructions unless you mention them in the chat prompt (with large models the instruction shows up in the developer logs in LM Studio; with small models it doesn't even show in the developer logs). I had to give very insistent instructions, but with these it was able to match the quality that the 22B and 24B models get on their first try:

Follow the system prompt regarding outputting all code inside an artifact. do not ignore the system prompt! do not output any code if it's not nested in the artifact!! Pause for a few seconds. Then read this prompt: work slowly to avoid mistakes. using html and css, code a responsive page using css grid with 6 items. please keep all css inside the html style head. body 40px padding. body background white. each item should contain 200 characters of lorem ipsum placeholder text, each item should use a different system font from the other items. use nth-item instead of unique css names for each grid item. each item should have 10px round corners and a different background color (all different very light pastels). do not take shortcuts on placeholder content, values in the code, or by instructing the user via comments to repeat content or to repeat css.
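For anyone who wants to sanity-check the result without running a model, here is a trimmed sketch of the kind of page that prompt asks for (two grid items instead of six, and short filler text, so it deliberately takes the shortcuts the prompt forbids); a correct artifact response would wrap the full document in the :::artifact directive shown earlier:

```html
<!DOCTYPE html>
<html>
<head>
<style>
  body { padding: 40px; background: #ffffff; }
  .grid {
    display: grid;
    grid-template-columns: repeat(auto-fit, minmax(200px, 1fr)); /* responsive columns */
    gap: 16px;
  }
  .grid > div { padding: 16px; border-radius: 10px; }
  /* per-item styling via nth-child instead of unique class names */
  .grid > div:nth-child(1) { background: #fde2e4; font-family: Georgia, serif; }
  .grid > div:nth-child(2) { background: #e2ecf9; font-family: "Courier New", monospace; }
  /* items 3-6 would continue the pattern with other light pastels and system fonts */
</style>
</head>
<body>
  <div class="grid">
    <div>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor.</div>
    <div>Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip.</div>
    <!-- items 3-6 omitted in this sketch -->
  </div>
</body>
</html>
```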
-
Hi Danny, I have the beta enabled for Artifacts, per the image below, but no window pops up to preview the code.
I have tried different models, none of which seem to trigger the Artifacts preview pane.
By the way, I am rocking 3 x A6000 GPUs (48GB each), so I am able to load 72B-parameter models like Llama 3.1 and 3.3.
Is there a particular open-source LLM you would recommend for coding?
Attached are the screenshots requested and also some console screenshots.
Thanks,
Joe