Description
OCR sometimes outputs irrelevant text resembling prompts or conversations.
Example:
`
How to request mistral:7b-instruct to skip returning context?
Models
Feb 21
Hi all, my first time around here, so please bear with me if I missed any rules/recommendations. (Feb 21)
I have been learning to use and build projects using ollama with the local model mistral:7b-instruct v3. In this
particular use case, I am asking the model to impersonate an accessibility auditor to generate a report. I
have used the model in other contexts as well.
Mistral continues to include the full context in the response, with thousands and thousands of lines of
"[3,1027,781,2744....", no matter how I ask in the prompt to exclude it. I can't tell whether this is
model related or ollama API related. This context increases the size of any logs by tens of MBs
and complicates troubleshooting.
I'd appreciate any help understanding how I can get the model/ollama to skip including the context in
its response to my prompts.
Feb 22
ai.config.ts
export const AI_CONFIG: AIConfig = {
  api: {
    baseUrl: "http://localhost:11434",
    endpoints: {
      generate: "/api/generate",
      // embeddings: "/api/embeddings",
    },
  },
  model: {
    name: "mistral:7b-instruct",
    parameters: {
      chunkSize: 6000,
      promptTimeout: 60000,
    },
  },
  prompts: {
    // ...
  },
  retry: {
    attempts: 3,
    backoff: {
      initial: 1000,
      multiplier: 1.5,
      maxDelay: 10000,
    },
  },
};
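The config above is typed as AIConfig, but that declaration is not shown in the thread. A minimal interface consistent with the object literal might look like this (hypothetical reconstruction; the real declaration in the repo may differ, in particular the shape of `prompts`):

```typescript
// Hypothetical AIConfig interface reconstructed from the AI_CONFIG object
// literal above; field names mirror that literal, everything else is assumed.
interface AIConfig {
  api: {
    baseUrl: string;
    endpoints: { generate: string; embeddings?: string };
  };
  model: {
    name: string;
    parameters: { chunkSize: number; promptTimeout: number };
  };
  prompts: Record<string, string>; // assumed: named prompt templates
  retry: {
    attempts: number;
    backoff: { initial: number; multiplier: number; maxDelay: number };
  };
}
```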
Ollama call:
private static async callOllama(prompt: string): Promise<string> {
  const body = {
    prompt,
    model: AI_CONFIG.model.name,
    options: {
      num_ctx: 8192,
    },
    stream: false,
  };
  const url = AI_CONFIG.api.baseUrl + AI_CONFIG.api.endpoints.generate;
  const response = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!response.ok) {
    const errorText = await response.text();
    throw new Error(
      `Ollama request failed [${response.status}]: ${errorText}`
    );
  }
  return await response.text();
}
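Worth noting here: the `context` field is not text the model generates, it is part of the JSON envelope Ollama's /api/generate endpoint returns (alongside `model`, `created_at`, `done`), so no prompt wording can remove it. Because callOllama returns `response.text()`, the whole envelope, token array included, ends up in the logs. A minimal sketch of the fix, parsing the envelope and keeping only the `response` field (field names per the Ollama REST API):

```typescript
// Shape of the non-streaming /api/generate envelope (per the Ollama REST
// API). "context" is tokenized conversation state returned so callers can
// continue a conversation; it is safe to discard if you don't need that.
interface OllamaGenerateResponse {
  model: string;
  created_at: string;
  response: string;
  done: boolean;
  context?: number[];
}

// Extract only the generated text, dropping the token "context" array
// that was bloating the logs.
function extractResponseText(rawBody: string): string {
  const envelope = JSON.parse(rawBody) as OllamaGenerateResponse;
  return envelope.response;
}
```

In callOllama this would replace the final line: `return extractResponseText(await response.text());`.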
Prompt:
You are an expert accessibility auditor. Based on the aggregated axe-core results provided below,
generate a comprehensive accessibility analysis report.
YOU MUST OUTPUT ONLY A SINGLE, STRICTLY VALID JSON OBJECT AND NOTHING ELSE. Do not include
any extra text, commentary, or keys (such as "model", "created_at", "done", "context", or markdown
formatting like code fences).
The JSON object MUST follow EXACTLY this structure:
{
  "aggregatedAnalysis": {
    "contextualSummary": "<summary>",
    "prioritizedIssues": [
      {
        "issue": "<issue description>",
        "wcagReference": "<guideline reference(s)>",
        "remediation": "<remediation steps>"
      }
    ]
  }
}
ONLY OUTPUT THE JSON. NOTHING ELSE.
If you cannot produce output in this format exactly, output nothing.
Below is the aggregated axe-core accessibility report data:
[LOG] Running AI Enhanced Analysis...
Raw AI response: {"model":"mistral:7b-instruct","created_at":"2025-02-
20T01:11:46.979168Z","response":" It seems like you have provided a JSON object that contains a
list.. ","context":
[3,1027,781,2744,1228,1164,8351,3503,3800,5554,2610,29491,17926,1124,1040,15322,1369,6824,29474,
29501,3059,3671,4625,4392,29493,9038,1032,16081,3503,3800,6411,3032,29491,4372,4593,1119,11848,
1115,1032,3460,29493,20238,4484,10060,2696,1163,1476,4978,5285,1396,29493,1989,15526,29493,1210
,8916,29491,29473,781,781,21966,4593,1032,10060,2696,1137,5436,1384,1431,1042,1066,1224,546...
John6666 (Regular) Feb 21
I think it's because of the model. If you change the model, you can isolate the problem.
If you want to output JSON, it might be easier to use this:
ollama.com: Structured outputs - Ollama Blog
"Ollama now supports structured outputs, making it possible to constrain a
model's output to a specific format defined by a JSON schema. The Ollama
Python and JavaScript libraries have been updated to support structured outputs."
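For this thread's use case, the structured outputs feature can be passed as a JSON Schema in the `format` field of /api/generate, which constrains the reply to that shape instead of relying on prompt instructions. A sketch, assuming the report structure from the prompt above (endpoint and envelope field names follow the Ollama REST API; helper names are made up here):

```typescript
// JSON Schema mirroring the report structure the prompt demands.
const reportSchema = {
  type: "object",
  properties: {
    aggregatedAnalysis: {
      type: "object",
      properties: {
        contextualSummary: { type: "string" },
        prioritizedIssues: {
          type: "array",
          items: {
            type: "object",
            properties: {
              issue: { type: "string" },
              wcagReference: { type: "string" },
              remediation: { type: "string" },
            },
            required: ["issue", "wcagReference", "remediation"],
          },
        },
      },
      required: ["contextualSummary", "prioritizedIssues"],
    },
  },
  required: ["aggregatedAnalysis"],
};

// Build the request body; "format" does the constraining, so the prompt no
// longer has to shout about output shape.
function buildStructuredRequest(prompt: string) {
  return {
    model: "mistral:7b-instruct",
    prompt,
    format: reportSchema,
    stream: false,
  };
}

async function generateReport(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildStructuredRequest(prompt)),
  });
  if (!res.ok) throw new Error(`Ollama request failed [${res.status}]`);
  const envelope = await res.json();
  return envelope.response; // JSON text conforming to reportSchema
}
```

Note this constrains only the `response` field; the envelope around it (including `context`) is still returned and must still be discarded by the caller.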
akappa Feb 21
Thanks for the response. I will read up on the structured outputs feature.
I passed the expected JSON schema I want within the prompt. I have written a whole set of instructions
for the model to follow. However, the response is not consistent (as expected, I guess). Keeping the
context object aside, the whole JSON response parsing is a whole other thing I had to deal with. I wrote
an entire parser to clean up the response I get from the model... maybe your post will address some of
that; I have to test it.
Parser to clean up the response:
github.com/ganymedej3/project-bennu (i/llm_response_parser.ts)
export interface LLMTopLevelResponse {
  model: string;
  response: string;
  done: boolean;
}

/**
 * This parser tries to handle many "junk" patterns from the LLM:
 * - code fences or triple backticks
 * - line-based // comments
 * - partial text beyond the JSON block
 * - bracket balancing for { ... } or [ ... ]
 * - invalid escapes like _ or * that break JSON
 */
export class LLMResponseParser {
  /**
   * Remove triple backticks or code fences like ```json
   */
  static removeCodeFences(input: string): string {
    return input.replace(/```(\S+)?/g, "").trim();
  }
(This file has been truncated; see the original.)
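The "bracket balancing" step the parser's doc comment mentions is not shown in the excerpt. A minimal sketch of what such a step could look like, scanning from the first `{` or `[` to its matching close so trailing commentary from the model is dropped (hypothetical helper; the real parser in the repo may differ):

```typescript
// Return the first balanced {...} or [...] substring, or null if none.
// Tracks string literals so braces inside quoted values don't miscount.
function extractBalancedJson(input: string): string | null {
  const start = input.search(/[{[]/);
  if (start === -1) return null;
  const open = input[start];
  const close = open === "{" ? "}" : "]";
  let depth = 0;
  let inString = false;
  for (let i = start; i < input.length; i++) {
    const ch = input[i];
    if (inString) {
      if (ch === "\\") i++; // skip the escaped character
      else if (ch === '"') inString = false;
    } else if (ch === '"') {
      inString = true;
    } else if (ch === open) {
      depth++;
    } else if (ch === close) {
      depth--;
      if (depth === 0) return input.slice(start, i + 1);
    }
  }
  return null; // unbalanced: the model likely truncated its output
}
```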
Here are all my prompts:
github.com/ganymedej3/project-bennu (37220f2#5)
Any extraneous text or explanation will break the parser.
IMPORTANT: Double-check that your JSON is valid. No trailing commas, n
`,

API_NEGATIVE_SCENARIOS: `
You are an AI that enumerates negative or invalid scenarios for an API
Given:
- Endpoint name or short summary: "<<ENDPOINT_DESCRIPTION>>"
- The number of scenarios to generate: <>

Return a JSON array of <> scenario objects. Each should describ
{
  "description": "short explanation of the invalid scenario",
  "payload": {...},
  "expectedStatus": 4xx or 5xx,
  "reason": "why it fails or is invalid"
}
(IMPORTANT: produce valid JSON, no code fences or extra text).
(Important: use double quotes for all JSON fields, no single quotes!)
(Important: Do NOT wrap the output in any markdown code fences or trip
(Important: Only output ONE JSON block. Do NOT append extra braces or
Sample:
API_DATA_GENERATION: `
You are an AI that generates test data for a given API resource or endpoint.
Given:
- The endpoint name or resource type: "<<RESOURCE_NAME>>"
- The number of items to generate: <>
IMPORTANT INSTRUCTIONS for your output:
- Output ONLY valid JSON. No markdown code fences or triple backticks.
- No line-based comments like "// ...".
- Do NOT add any text after the JSON. No explanations or extra commentary.
- Return strictly one JSON array with <> items. For example:
[
  { "field1": "value", "field2": 123 },
  { "field1": "another", "field2": 456 }
]
Any extraneous text or explanation will break the parser.
IMPORTANT: Double-check that your JSON is valid. No trailing commas, no code fenc
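These prompts lean entirely on instructions, and as the thread notes, the model does not follow them consistently. The usual mitigation is to validate the reply with JSON.parse and retry on failure; a sketch under the assumption that the retry block in AI_CONFIG (initial 1000 ms, multiplier 1.5, maxDelay 10000 ms, 3 attempts) is meant for exactly this, with `generate` standing in for whatever function actually calls Ollama:

```typescript
// Exponential backoff delay, mirroring the retry.backoff parameters in the
// config earlier in this thread (values assumed to mean milliseconds).
function backoffDelay(attempt: number, initial = 1000, multiplier = 1.5, maxDelay = 10000): number {
  return Math.min(initial * Math.pow(multiplier, attempt), maxDelay);
}

// Call the model, parse the reply as JSON, and retry with backoff when
// parsing fails; rethrows the last error once attempts are exhausted.
async function generateJsonWithRetry(
  generate: (prompt: string) => Promise<string>,
  prompt: string,
  attempts = 3
): Promise<unknown> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return JSON.parse(await generate(prompt)); // throws on invalid JSON
    } catch (err) {
      lastError = err;
      await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt)));
    }
  }
  throw lastError;
}
```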
benstokes Feb 22
Thanks for sharing. It helps me a lot.
Best Regards!
Related topics:
- Your LLaMA model is generating extra text before and after the expected JSON output, and it is not correctly evaluating response summary based on the specified factors: relevance and word count (Intermediate)
- Strange answer from api (Transformers)
- Inference API detailed request: 5 replies, 23k views, Sep 2020 (Beginners)
- Llama-2 7B-hf repeats context of question directly from input prompt, cuts off with newlines (Transformers)
`