feat: integrate ollama api with palette #42
klown wants to merge 35 commits into inclusive-design:main
Conversation
Also, add `stream` (`true` or `false`) parameter to `queryChat()` function.
- `CommandTelegraphicCompletions` is a component that calls ollama with a "bag of words" and a prompt to provide sentence completions
- Introduced `sentenceCompletionsSignal` that will eventually handle changes to the completions.
- removed all `console.debug()` statements
- ISAAC demo branch: feat/find-by-gloss-plus-modifiers
apps/ollama/ollama.js
Outdated
```diff
@@ -1,5 +1,5 @@
 /*
- * Copyright 2024 Inclusive Design Research Centre, OCAD University
+ * Copyright 2024-2025 Inclusive Design Research Centre, OCAD University
```
The copyright can be up to 2026 now. :)
src/client/ActionTextCell.ts
Outdated
```js
const gridStyles = generateGridStyle(columnStart, columnSpan, rowStart, rowSpan);

const sentenceClicked = () => {
  speak(label.replace(/^[0-9]+./,"").trim()); // remove any leading "1. "
```
Probably improve the code comment to: Remove a leading list number, e.g., "1. "
Okay, comment improved.
```js
 * https://github.com/inclusive-design/adaptive-palette/blob/main/LICENSE
 */

import { VNode } from "preact";
```
There isn't a test for this component. Is it intentional?
No, there should be a test. It's possible I misplaced it. If not, I will write one.
```js
if (streamResp) {
  return await ollama.chat({ ...request, stream: true });
}
else {
  return await ollama.chat({ ...request, stream: false });
}
```
Since `streamResp` is already a boolean, would it be better to use it as the stream value:

```js
return await ollama.chat({ ...request, stream: streamResp });
```
That way of writing it doesn't work when running the tests. It fails with an error message that ends `Type 'boolean' is not assignable to type 'true'` or `Type 'boolean' is not assignable to type 'false'`.
I wrote a lengthy comment at the beginning of the function, which shows the full error message and explains the use of the if/else statement. The problem with your suggestion is that `streamResp` is of type `boolean`, which matches neither type `true` nor type `false`. I know that it should work, but it doesn't. How `true` and `false` can be interpreted as types is beyond me ...
@klown, Gemini suggested this fix and it works:

```js
return await ollama.chat({ ...request, stream: streamResp } as any);
```
Its explanation:
This error occurs because the Ollama library uses TypeScript function overloads to change the return type based on the value of stream.
The ollama.chat method is defined roughly like this:
- Overload 1: If stream is exactly true, it returns an AsyncIterator.
- Overload 2: If stream is exactly false (or missing), it returns a standard Promise.
When you pass a variable streamResp of type boolean, TypeScript doesn't know at compile-time whether it will be true or false. Since boolean is a union of true | false, it fails to match either specific overload.
Using as any here is generally safe because you are just telling the compiler to stop over-analyzing the overload.
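The overload behaviour described here can be reproduced in a few lines of plain TypeScript. Below is a sketch; `chat` is an illustrative stand-in, not the real ollama signature:

```typescript
// Minimal reproduction of the overload problem (illustrative stand-in for ollama.chat).
function chat(req: { stream: true }): string;   // "streaming" overload
function chat(req: { stream: false }): number;  // "non-streaming" overload
function chat(req: { stream: boolean }): string | number {
  return req.stream ? "async-iterator" : 42;
}

const streamResp: boolean = Boolean(1); // typed boolean, value true

// chat({ stream: streamResp }); // error: Type 'boolean' is not assignable to type 'true'

// The if/else (here as a ternary) narrows streamResp to the literal types
// true/false, so each branch matches exactly one overload:
const result = streamResp ? chat({ stream: true }) : chat({ stream: false });
console.log(result); // "async-iterator"
```

Overload resolution only considers the declared overload list, never the implementation signature, which is why the plain `boolean` variable matches neither branch.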
It's Gemini vs. StackOverflow! That is, I found a StackOverflow post that recommends the same solution that I used.
Gemini's explanation is what I already understood, except for the use of any as the return type -- that's new, and does not match anything else I have found. I guess Gemini's training data is more extensive than my search.
While the solution works, it introduces a lint error: the linter objects to the use of `any`. As a compromise, I tried to use the actual return types instead of `any`:

```ts
return await ollama.chat({ ...request, stream: streamResp } as Promise<AbortableAsyncIterator<ChatResponse>> | Promise<ChatResponse>);
```

While that solves the lint error, and the code runs, the tests fail with an error about the two return types not overlapping, and a suggestion to use `unknown`:

```ts
return await ollama.chat({ ...request, stream: streamResp } as unknown);
```

But that didn't work either. So it's a question of putting in another lint exception or leaving the code as is.
Footnote: does Gemini give any reason why this is needed only when running tests? As I noted, the error was non-existent when just running the code.
You are right, this approach fails the linting check. I looked at other Gemini-suggested solutions. They all look like overkill. Seems better to stay with the if/else. :)
Gemini's explanation regarding why it works with vite but not jest:
- Vite: By default, Vite uses esbuild to transpile TypeScript. Esbuild is a "transpile-only" tool—it strips types and converts code to JavaScript without performing full type checking. Unless you are running a separate tsc --noEmit command, Vite will ignore this error.
- Jest (ts-jest): When running tests, ts-jest often performs a full type check or uses a stricter configuration. It identifies that boolean is not specific enough to satisfy either true or false literals in the overloaded signature.
```js
 * You may obtain a copy of the License at
 * https://github.com/inclusive-design/adaptive-palette/blob/main/LICENSE
 */
import { html } from "htm/preact";
```
There isn't a test for this palette, which is OK if this is a temporary UI solution.
I went ahead and wrote a test. As a side effect, it revealed a few minor issues that I have since fixed. So, even if it is temporary, at least it's well described.
src/client/DialogPromptEntries.ts
Outdated
```js
// determine the <select>'s selected index.
const options = [];
const localStore = window.localStorage;
for (let i = 0; i < localStore.length; i++) {
```
It would be better to read localStorage only once when the component mounts. Can this be achieved by using useEffect?
I think this has been overtaken by other modifications. The use of useState() to handle the prompts display has, as a side effect, removed this from the code.
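For reference, the single-pass read that the suggestion describes can be factored into a small helper. This is a sketch: the `StorageLike` interface and the stub only mirror the `length`/`key()` part of `window.localStorage` so the example is self-contained.

```typescript
// Reads every key out of a localStorage-shaped store in one pass.
interface StorageLike {
  readonly length: number;
  key(index: number): string | null;
}

function readPromptNames(store: StorageLike): string[] {
  const names: string[] = [];
  for (let i = 0; i < store.length; i++) {
    const key = store.key(i);
    if (key !== null) {
      names.push(key);
    }
  }
  return names;
}

// Stub standing in for window.localStorage in this sketch:
const stubStore: StorageLike = {
  length: 2,
  key: (i) => ["promptA", "promptB"][i] ?? null,
};
const names = readPromptNames(stubStore);
console.log(names); // both stored prompt names
```

Inside the component, calling such a helper from `useEffect(() => { ... }, [])` would read the store only once, on mount, as suggested.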
src/client/DialogPromptEntries.ts
Outdated
```js
// `selectSelectedIndex` or (2) from the <select>'s current `selectedIndex`
// property.
let thePrompt = "";
const theSelect = document.getElementById("promptSelect") as HTMLSelectElement;
```
It doesn't feel right using document.getElementById() to read and update the real DOM directly. Can you use useState to track and update these values?
The same comment applies to the reads and updates of promptName and systemPrompt below. I wonder if you could use onInput listeners to do the update when these values change.
Thanks to your and Gemini's help this has been done.
```ts
model: string,
// This should be a boolean type, but for Preact/html the type of the value is
// string. It should be "true" or "false".
stream: ("true" | "false")
```
It seems Preact has native boolean support by directly using `stream: boolean`, and the use of `streamAsBoolean` can be removed.
Thanks for making me revisit this @cindyli. While I couldn't solve it back in the spring, I finally found the solution.
While it was always possible to declare the "stream" property as boolean (`stream: boolean`), it doesn't matter in some sense. If the render call is done like so:

```js
render( html`<${CommandTelegraphicCompletions} model="llama3.1:latest" stream=false />`, ...
```

Then `typeof props.stream` returns 'string', not 'boolean', even though props.stream is declared a boolean. That was mystifying to me, frankly. However, the correct way to render the component is:

```js
render( html`<${CommandTelegraphicCompletions} model="llama3.1:latest" stream=${false} />`, ...
```

In that case `typeof props.stream` is 'boolean', and the conditional that converts the string to an actual boolean using `streamAsBoolean` can be removed.
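For anyone hitting the same gotcha, the difference boils down to the following sketch (illustrative, not the htm source: htm passes unquoted attribute text through as a string, while a `${...}` interpolation keeps the JavaScript value):

```typescript
// What the component receives in each case (illustrative props objects):
const propsFromLiteral = { stream: "false" };  // effect of writing stream=false
const propsFromInterp  = { stream: false };    // effect of writing stream=${false}

console.log(typeof propsFromLiteral.stream); // "string"
console.log(typeof propsFromInterp.stream);  // "boolean"

// The trap: the string "false" is truthy, so treating it as a boolean
// would silently enable streaming.
console.log(Boolean(propsFromLiteral.stream)); // true
```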
Smart fix. Thanks for sharing. Now I know. :)
Yeah, there are so many little fiddly gotchas with Preact, html, TypeScript, and jest...
```js
changeEncodingContents.value.payloads.forEach( (value) => {
  labelText.push(value.label);
});
const systemPrompt = (document.getElementById(TEXTAREA_ID) as HTMLTextAreaElement).value;
```
Same concern about using document.getElementById to reach into the real DOM rather than using Preact way.
In this case, I don't see what the Preact way is, exactly. The systemPrompt is in another part of the DOM, or, from Preact's perspective, in another component: the DialogPromptEntries component. I thought of using useState(), but that is specific to the current CommandTelegraphicCompletions component. Another possibility is useContext(), but that seems to be for (re)rendering components, not for handling the user clicking the "Telegraphic Completions" button. Perhaps using another signal is the appropriate route, but again, the user changing the prompt in the DialogPromptEntries component should not cause the CommandTelegraphicCompletions component to do anything, neither re-render nor ask for completions. It's only when the user asks for completions that CommandTelegraphicCompletions does anything.
I'm leaving it as it is for now, but still thinking about it.
```js
<button onClick=${getTelegraphicCompletions}>
  ${TELEGRPAHIC_BUTTON_LABEL}
</button>
<span style="visibility: hidden">F</span>
```
Why is there a text "F" in the <span>? If it's for spacing, is it possible to use margin?
Actually, it was for debugging. But removing it makes the buttons too close. Using margin in the scss file is a good idea -- done.
When calling `render(...)`, the correct way to pass booleans is by using `${true}` or `${false}`, e.g.:

```js
render(html`<${Component} stream=${false} ...`);
```

```js
render(html`<${DialogPromptEntries} />`, document.getElementById("llm_prompt"));
render(
  html`<${CommandTelegraphicCompletions} model="llama3.1:latest" stream=${false} />`,
```
Hardcoding the model name to llama3.1:latest is problematic. On my computer, the same model is installed as llama3.1:8b when downloaded from Ollama, so the app cannot find it and the UI gets stuck at “Working…”.
It's probably better to do what you used to do:
- Dynamically list available Ollama models in a dropdown and let users select one.
- Detect and report errors when Ollama is not installed or when no models are available, and disable buttons and the text area.
Or, check with the design team to clarify the expected workflow.
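A sketch of the first suggestion, with the listing call injected so the example is self-contained (the real code would pass the ollama client's `list()`; `getModelNames` and `ListFn` are hypothetical names):

```typescript
interface ModelInfo { name: string }
type ListFn = () => Promise<{ models: ModelInfo[] }>;

// Returns installed model names, or [] when Ollama is unreachable,
// so the caller can disable the buttons and the text area.
async function getModelNames(list: ListFn): Promise<string[]> {
  try {
    const { models } = await list();
    return models.map((m) => m.name);
  } catch {
    return [];
  }
}

// Stub standing in for ollama.list() in this sketch:
const stubList: ListFn = async () => ({ models: [{ name: "llama3.1:8b" }] });
getModelNames(stubList).then((modelNames) => console.log(modelNames)); // the installed names
```

The returned names would populate the `<select>` options, with an empty result driving the "Ollama not available" error state.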
Agreed, I have added another <select> element to allow users to choose among installed models and update the state of the buttons as appropriate.
Another issue I noticed is that when the input sentence is empty or null, clicking "Telegraphic Completions" still sends an empty string to the model, which comes back with a spurious response. It would be better to add a client-side guard to validate that an input sentence exists before making the request, which would avoid unnecessary backend round trips.
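The guard described above could be as small as the following sketch (`hasInputSentence` is a hypothetical name):

```typescript
// Returns true only when there is a non-empty, non-whitespace sentence to send.
function hasInputSentence(sentence: string | null | undefined): boolean {
  return typeof sentence === "string" && sentence.trim().length > 0;
}

console.log(hasInputSentence(""));            // false
console.log(hasInputSentence("   "));         // false
console.log(hasInputSentence(null));          // false
console.log(hasInputSentence("The cat sat")); // true
```

The click handler would check this before calling the backend and skip the request (or show a hint) otherwise.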
…nce completions palette
- compatible with the browser's light/dark mode switch
- spacing (margin) between the two buttons
The changes I made when I added the dropdown for selecting a language model include disabling the "Telegraphic Completions" and "Cancel" buttons.

Also, regarding the dark-mode issue, as noted in PR #44, dark-mode is disabled for now, and there is a new issue #51 that requires work to fix it. As I noted in PR #44, it should be merged first and then this one merged. So: ready for another round of reviews, thanks.


- `npm test` without errors
- `npm run dev` without errors
- `npm run serve` (Apps/Demos) without errors

Description
This adds the ability for the adaptive-palette to interface with local LLMs using the ollama api.
At present, the query is limited to generate suggestions for full sentences. Also, since there is no UI to choose among LLMs, the LLM is "fixed" as llama3.1-8B.
There is a dialog and supporting software that permits users to edit and save the query they wish to send to llama3.1. These prompts are saved in the browser's local storage.
Steps to test
To query the LLM:
To modify or add prompts:
- choose a prompt from the `select` menu,
- edit it in the `textarea`, or completely change it,
- use the `text input` field to save the prompt with a new name, or the same name to overwrite an existing prompt.

Additional information
The UI is very minimal and invites improvement. The main point of this PR is to show how ollama can be integrated with the adaptive-palette.