Add @huggingface/ollama-utils #1111
Merged
Changes from all commits (15 commits, all by ngxson):

- 676e2f3 init ollama-utils
- 096231b fix types
- 98584ea add test
- 8cee153 add more tests
- 631eb5a test ok
- 777c442 automap: preserve old list
- 897190d automap: sync
- f259715 fix lint
- 8db9fd9 Merge branch 'main' into xsn/ollama_utils
- 31ad142 readme
- 43d97b4 Apply suggestions from code review
- 5d6e768 Merge branch 'main' into xsn/ollama_utils
- 9ee4cdd Apply suggestions from code review
- 1724bfa Merge branch 'main' into xsn/ollama_utils
- b564f2c fix corepack
@@ -0,0 +1 @@
dist
@@ -0,0 +1,5 @@
pnpm-lock.yaml
# In order to avoid code samples having tabs (they don't display well on npm)
README.md
dist
src/automap.ts
@@ -0,0 +1,56 @@
# `@huggingface/ollama-utils`

Various utilities for maintaining [Ollama compatibility with GGUF models on the Hugging Face Hub](https://huggingface.co/docs/hub/en/ollama).

For now, we expose chat template conversion to the Go format used by Ollama.

## Chat template converter

```ts
import { convertJinjaToGoTemplate } from "@huggingface/ollama-utils";

const MODEL_INFO_URL = "https://huggingface.co/api/models/bartowski/Llama-3.2-3B-Instruct-GGUF?expand[]=gguf";
const modelInfo = await (await fetch(MODEL_INFO_URL)).json();
console.log(modelInfo);
/**
 * {
 *   gguf: {
 *     chat_template: "here is the Jinja chat template",
 *     bos_token: "...",
 *     eos_token: "...",
 *     [...]
 *   }
 * }
 */
const convertedTemplate = convertJinjaToGoTemplate(modelInfo.gguf);
if (convertedTemplate) {
	console.log(convertedTemplate.ollama);
	/**
	 * {
	 *   template: "this is the converted template, compatible with Ollama",
	 *   tokens: [... list of special tokens],
	 *   params: {
	 *     stop: [... list of stop tokens or stop words]
	 *   }
	 * }
	 */
} else {
	console.error("Conversion was not successful");
}
```

## How can I add a custom template?

Most templates are converted automatically. You can debug the output template using:
- this Space to retrieve the converted template: https://huggingface.co/spaces/ngxson/debug_ollama_manifest
- and this Space to apply the Go template to a list of messages: https://huggingface.co/spaces/ngxson/ollama_template_test

Please only add a new template when the conversion process above is not successful. Acceptable cases include:
- The converted template is wrong
- The Jinja template is not compatible with `@huggingface/jinja`
- The Jinja template is not "linear," meaning it can modify the content of other messages or append dynamic postfixes. For instance, the DeepSeek template removes `<think>...</think>` from previous messages in a conversation, making it non-linear. Another example is a template that adds the EOS token `</s>` when `add_generation_prompt=False`.

To add a new custom handler:
1. Edit the list of `CUSTOM_TEMPLATE_MAPPING` inside `chat-template.ts` (an illustrative entry is sketched below)
2. Add a new test case in `chat-template.spec.ts`
3. Push your change in a new PR.
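For illustration, a custom mapping entry could look like the sketch below. The exact entry type lives in `chat-template.ts`, which is not shown in this PR excerpt, so the shape here is an assumption modeled on the converter output documented above:

```ts
// Hypothetical CUSTOM_TEMPLATE_MAPPING entry (shape assumed, not taken from
// chat-template.ts). It maps a Jinja template, keyed by its exact GGUF
// chat_template string, to a hand-written Go template for Ollama.
const hypotheticalEntry = {
	gguf: "{% for message in messages %}...{% endfor %}", // Jinja source from GGUF metadata
	ollama: {
		template: "{{ if .System }}<|system|>{{ .System }}<|end|>{{ end }}...", // hand-written Go template
		tokens: ["<|system|>", "<|user|>", "<|assistant|>", "<|end|>"], // special tokens used
		params: {
			stop: ["<|end|>"], // stop sequences Ollama should apply
		},
	},
};
```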
@@ -0,0 +1,58 @@
{
	"name": "@huggingface/ollama-utils",
	"packageManager": "[email protected]",
	"version": "0.0.1",
	"description": "Various utilities for maintaining Ollama compatibility with models on Hugging Face hub",
	"repository": "https://github.com/huggingface/huggingface.js.git",
	"publishConfig": {
		"access": "public"
	},
	"main": "./dist/index.js",
	"module": "./dist/index.mjs",
	"types": "./dist/index.d.ts",
	"exports": {
		".": {
			"types": "./dist/index.d.ts",
			"require": "./dist/index.js",
			"import": "./dist/index.mjs"
		}
	},
	"browser": {
		"./src/utils/FileBlob.ts": false,
		"./dist/index.js": "./dist/browser/index.js",
		"./dist/index.mjs": "./dist/browser/index.mjs"
	},
	"engines": {
		"node": ">=20"
	},
	"source": "index.ts",
	"scripts": {
		"lint": "eslint --quiet --fix --ext .cjs,.ts .",
		"lint:check": "eslint --ext .cjs,.ts .",
		"format": "prettier --write .",
		"format:check": "prettier --check .",
		"prepublishOnly": "pnpm run build",
		"build": "tsup src/index.ts --format cjs,esm --clean && tsc --emitDeclarationOnly --declaration",
		"build:automap": "tsx scripts/generate-automap.ts && prettier --write ./src/chat-template-automap.ts",
		"test": "vitest run",
		"check": "tsc"
	},
	"files": [
		"dist",
		"src",
		"tsconfig.json"
	],
	"keywords": [
		"huggingface",
		"hub",
		"gguf"
	],
	"author": "Hugging Face",
	"license": "MIT",
	"dependencies": {
		"@huggingface/jinja": "workspace:^"
	},
	"devDependencies": {
		"@types/node": "^20.12.8"
	}
}
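Side note on the `exports` map above: Node resolves the `import` condition to `./dist/index.mjs` for ESM consumers and the `require` condition to `./dist/index.js` for CommonJS. A minimal sketch, assuming the package is installed and exposes the `convertJinjaToGoTemplate` function shown in the README:

```ts
// ESM consumer: resolved to ./dist/index.mjs via the "import" condition.
import { convertJinjaToGoTemplate } from "@huggingface/ollama-utils";

console.log(typeof convertJinjaToGoTemplate); // "function"
```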
@@ -0,0 +1,207 @@
/**
 * Script for generating src/chat-template-automap.ts
 * The source data is scraped from the Ollama library (ollama.com)
 */

import type { GGUFParseOutput } from "../../gguf/src/gguf";
import { gguf } from "../../gguf/src/gguf";
import { appendFileSync, writeFileSync, existsSync } from "node:fs";
import path from "node:path";

const DEBUG = process.env.DEBUG;
const RE_SPECIAL_TOKEN = /<[|_A-Za-z0-9]+>|\[[A-Z]+\]|<\uFF5C[\u2581A-Za-z]+\uFF5C>/g;
const MAX_NUMBER_OF_TAGS_PER_MODEL = 5;
const N_WORKERS = 16;
const OUTPUT_FILE = path.join(__dirname, "../src/chat-template-automap.ts");
const BLACKLISTED_MODELS = (model: string, tag: string) => {
	// some models are known to give ServiceUnavailable
	return model === "library/deepseek-r1" && tag === "7b";
};

interface OutputItem {
	model: string;
	gguf: string;
	ollama: {
		template: string;
		tokens: string[];
		// eslint-disable-next-line
		params?: any;
	};
}

interface OllamaManifestLayer {
	digest: string;
	mediaType: string;
	size: number;
}

interface OllamaManifest {
	layers: OllamaManifestLayer[];
}

const getSpecialTokens = (tmpl: string): string[] => {
	const matched = tmpl.match(RE_SPECIAL_TOKEN);
	const tokens = Array.from(matched || []);
	return Array.from(new Set(tokens)); // deduplicate
};
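// Example (hypothetical template snippet, not from a real model):
//   getSpecialTokens("<|im_start|>user hello<|im_end|>[INST]")
//   returns ["<|im_start|>", "<|im_end|>", "[INST]"] (matched in order, deduplicated).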
(async () => {
	if (DEBUG) writeFileSync("ollama_tmp.jsonl", ""); // clear the file

	const models: string[] = [];
	const output: OutputItem[] = [];

	const html = await (await fetch("https://ollama.com/library")).text();
	const matched = html.match(/href="\/library\/[^"]+/g);
	if (!matched) {
		throw new Error("cannot find any model url");
	}
	for (let i = 0; i < matched.length; i++) {
		models.push(matched[i].replace('href="/', ""));
	}
	console.log({ models });

	//////// Get tags ////////

	let nDoing = 0;
	let nAll = models.length;
	const modelsWithTag: string[] = [];
	const workerGetTags = async () => {
		while (true) {
			const model = models.shift();
			if (!model) return;
			nDoing++;
			console.log(`Getting tags ${nDoing} / ${nAll}`);
			const html = await (await fetch(`https://ollama.com/${model}`)).text();
			const matched = html.match(/href="\/library\/[^"]+/g);
			if (!matched) {
				throw new Error("cannot find any tag url");
			}
			for (let i = 0; i < matched.length && i < MAX_NUMBER_OF_TAGS_PER_MODEL; i++) {
				const midAndTag: string = matched[i].replace('href="/', "");
				if (midAndTag.match(/:/) && !midAndTag.match(/\/blobs/)) {
					modelsWithTag.push(midAndTag);
				}
			}
		}
	};
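	// Note: the "worker pool" here is simply N_WORKERS async loops draining the
	// shared `models` array via shift(), which bounds fetch concurrency without
	// any extra dependency.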
	await Promise.all(
		Array(N_WORKERS)
			.fill(null)
			.map(() => workerGetTags())
	);
	console.log({ modelsWithTag });

	//////// Merging with old file if necessary ////////

	const seenGGUFTemplate = new Set<string>();
	if (existsSync(OUTPUT_FILE)) {
		const oldOutput = await import(OUTPUT_FILE);
		oldOutput.OLLAMA_CHAT_TEMPLATE_MAPPING.forEach((item: OutputItem) => {
			seenGGUFTemplate.add(item.gguf);
			output.push(item);
		});
	}
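	// Re-running the script therefore only appends templates it has not seen
	// before; entries already present in the automap file are kept as-is
	// (tracked through the `seenGGUFTemplate` set).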
	//////// Get template ////////

	nDoing = 0;
	nAll = modelsWithTag.length;
	const workerGetTemplate = async () => {
		while (true) {
			const modelWithTag = modelsWithTag.shift();
			if (!modelWithTag) return;

			nDoing++;
			const [model, tag] = modelWithTag.split(":");
			console.log(`Fetch template ${nDoing} / ${nAll} | model=${model} tag=${tag}`);
			const getBlobUrl = (digest: string) => `https://registry.ollama.com/v2/${model}/blobs/${digest}`;
			const manifest: OllamaManifest = await (
				await fetch(`https://registry.ollama.com/v2/${model}/manifests/${tag}`)
			).json();
			if (!manifest.layers) {
				console.log(" --> [X] No layers");
				continue;
			}
			const layerModelUrl = manifest.layers.find((l) => l.mediaType.match(/\.model/));
			if (!layerModelUrl) {
				console.log(" --> [X] No model found");
				continue;
			}
			const modelUrl = getBlobUrl(layerModelUrl.digest);
			let ggufData: GGUFParseOutput;
			if (BLACKLISTED_MODELS(model, tag)) {
				console.log(" --> [X] Blacklisted model, skip");
				continue;
			}
			try {
				ggufData = await gguf(modelUrl);
			} catch (e) {
				console.log(" --> [X] FATAL: GGUF error", { model, tag, modelUrl });
				throw e; // rethrow
			}
			const { metadata } = ggufData;
			const ggufTmpl = metadata["tokenizer.chat_template"];
			if (ggufTmpl) {
				if (seenGGUFTemplate.has(ggufTmpl)) {
					console.log(" --> Already seen this GGUF template, skip...");
					continue;
				}
				seenGGUFTemplate.add(ggufTmpl);
				console.log(" --> GGUF chat template OK");
				const tmplBlob = manifest.layers.find((l) => l.mediaType.match(/\.template/));
				if (!tmplBlob) continue;
				const ollamaTmplUrl = getBlobUrl(tmplBlob.digest);
				if (!ollamaTmplUrl) {
					console.log(" --> [X] No ollama template");
					continue;
				}
				const ollamaTmpl = await (await fetch(ollamaTmplUrl)).text();
				console.log(" --> All OK");
				const record: OutputItem = {
					model: modelWithTag,
					gguf: ggufTmpl,
					ollama: {
						template: ollamaTmpl,
						tokens: getSpecialTokens(ggufTmpl),
					},
				};
				// get params
				const ollamaParamsBlob = manifest.layers.find((l) => l.mediaType.match(/\.params/));
				const ollamaParamsUrl = ollamaParamsBlob ? getBlobUrl(ollamaParamsBlob.digest) : null;
				if (ollamaParamsUrl) {
					console.log(" --> Got params");
					record.ollama.params = await (await fetch(ollamaParamsUrl)).json();
				}
				output.push(record);
				if (DEBUG) appendFileSync("ollama_tmp.jsonl", JSON.stringify(record) + "\n");
			} else {
				console.log(" --> [X] No GGUF template");
				continue;
			}
			//console.log({modelUrl, ggufData});
			//break;
		}
	};

	await Promise.all(
		Array(N_WORKERS)
			.fill(null)
			.map(() => workerGetTemplate())
	);

	console.log("DONE");
	output.sort((a, b) => a.model.localeCompare(b.model));

	writeFileSync(
		OUTPUT_FILE,
		`
// This file is auto generated, please do not modify manually
// To update it, run "pnpm run build:automap"

import { OllamaChatTemplateMapEntry } from "./types";

export const OLLAMA_CHAT_TEMPLATE_MAPPING: OllamaChatTemplateMapEntry[] = ${JSON.stringify(output, null, "\t")};
`.trim()
	);
})();
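For reference, the file emitted by this script (`src/chat-template-automap.ts`) would look roughly like the sketch below; the entry values are illustrative placeholders, not real registry data:

```ts
// Hypothetical excerpt of the generated file (values are made up):
import { OllamaChatTemplateMapEntry } from "./types";

export const OLLAMA_CHAT_TEMPLATE_MAPPING: OllamaChatTemplateMapEntry[] = [
	{
		model: "library/example-model:7b",
		gguf: "{% for message in messages %}...{% endfor %}",
		ollama: {
			template: "{{ if .System }}...{{ end }}",
			tokens: ["<|im_start|>", "<|im_end|>"],
			params: { stop: ["<|im_end|>"] },
		},
	},
];
```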
Review comment: maybe we can do both in the same space?
Reply: I couldn't figure out how to embed my script into a Gradio Space, but I will have a look later. (The template debugging Space is static, by the way.)