feat: port generate() and ollama integration from PR #73 #154
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Changes from 9 commits
`.github/workflows/npm-test.yml`:

```diff
@@ -16,9 +16,10 @@ on:
 jobs:
   test:
-    timeout-minutes: 5
+    timeout-minutes: 30
     runs-on: ${{ matrix.os }}
     strategy:
       fail-fast: false
       matrix:
         os:
           - ubuntu-latest
```
Hunk `@@ -36,19 +37,84 @@ jobs:` (new side shown):

```yaml
          cache: "npm"
          cache-dependency-path: package-lock.json
          node-version: ${{ matrix.node }}

      # Install Ollama on Linux
      - name: Install Ollama (Linux)
        if: runner.os == 'Linux'
        run: |
          curl -fsSL https://ollama.com/install.sh | sh

      # Install Ollama on macOS
      - name: Install Ollama (macOS)
        if: runner.os == 'macOS'
        run: |
          brew install ollama

      # Install Ollama on Windows
      - name: Install Ollama (Windows)
        if: runner.os == 'Windows'
        shell: pwsh
        run: |
          # Download Ollama installer
          Invoke-WebRequest -Uri "https://ollama.com/download/OllamaSetup.exe" -OutFile "OllamaSetup.exe"
          # Install silently
          Start-Process -FilePath ".\OllamaSetup.exe" -Args "/S" -Wait
          # Add to PATH for this session
          $env:PATH = "$env:LOCALAPPDATA\Programs\Ollama;$env:PATH"
          echo "$env:LOCALAPPDATA\Programs\Ollama" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append

      # Start Ollama server and pull model (Linux/macOS)
      - name: Start Ollama and pull model (Linux/macOS)
        if: runner.os != 'Windows'
        run: |
          # Check if Ollama is already running (systemd service may have started it)
          if ! curl -s http://localhost:11434/api/tags > /dev/null 2>&1; then
            # Start Ollama server in background
            ollama serve &
            # Wait for server to start
            sleep 5
          fi
          # Pull the default model used in tests
          ollama pull qwen3:4b

      # Start Ollama server and pull model (Windows)
      - name: Start Ollama and pull model (Windows)
        if: runner.os == 'Windows'
        shell: pwsh
        run: |
          # Start Ollama server in background
          Start-Process -FilePath "ollama" -ArgumentList "serve" -NoNewWindow
          # Wait for server to start
          Start-Sleep -Seconds 10
          # Pull the default model used in tests
          ollama pull qwen3:4b

      - run: npm ci
      - run: npm run build # Automatically run tests because of the `postbuild` script in package.json

  coverage:
    name: Coverage Check
    runs-on: ubuntu-latest
    timeout-minutes: 15
    steps:
```
Comment on lines +70 to 71 (Contributor):

Consider aligning the coverage timeout with the test job. The coverage job also installs Ollama and pulls the model, so 15 minutes may be tight on slow runners. Aligning it with the 30-minute test timeout would reduce intermittent failures.

⏱️ Suggested adjustment:

```diff
-    timeout-minutes: 15
+    timeout-minutes: 30
```
The coverage job steps continue:

```yaml
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          cache: "npm"
          cache-dependency-path: package-lock.json
          node-version: 20

      # Install and start Ollama for coverage tests
      - name: Install Ollama
        run: |
          curl -fsSL https://ollama.com/install.sh | sh

      - name: Start Ollama and pull model
        run: |
          ollama serve &
          sleep 5
          ollama pull qwen3:4b
```
Comment on lines +80 to +88 (Contributor):

Use `/api/version` readiness polling instead of a fixed sleep. Per the Ollama API docs, `GET /api/version` is a lightweight liveness check and `GET /api/tags` additionally confirms the local model registry is responding; the common CI pattern is to poll one of these endpoints until it succeeds rather than sleeping for a fixed interval. The Linux install script also creates and enables an `ollama` systemd service, and GitHub-hosted Ubuntu runners are full VMs running systemd, so the server is typically already running after install and an unconditional `ollama serve &` can conflict with it.

✅ Suggested change:

```diff
       - name: Start Ollama and pull model
         run: |
-          ollama serve &
-          sleep 5
-          ollama pull qwen3:4b
+          if ! curl -s http://localhost:11434/api/version > /dev/null 2>&1; then
+            ollama serve &
+          fi
+          for i in {1..30}; do
+            if curl -s http://localhost:11434/api/version > /dev/null 2>&1; then
+              break
+            fi
+            sleep 2
+          done
+          if ! curl -s http://localhost:11434/api/version > /dev/null 2>&1; then
+            echo "Ollama did not become ready in time" >&2
+            exit 1
+          fi
+          ollama pull qwen3:4b
```
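The polling loop in the suggested change above could be factored into a small reusable helper shared by the test and coverage jobs. This is an illustrative sketch, not part of the PR: the 30-attempt, 2-second cadence and the `/api/version` URL come from the suggestion, while the helper name and interface are assumptions.

```shell
# wait_for RETRIES DELAY CMD...: retry CMD until it succeeds or attempts run out.
# Returns 0 as soon as CMD exits 0; returns 1 after RETRIES failed attempts.
wait_for() {
  retries="$1"; delay="$2"; shift 2
  i=0
  while [ "$i" -lt "$retries" ]; do
    if "$@" > /dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Workflow usage (mirrors the suggested change: 30 attempts, 2 s apart):
#   ollama serve &
#   wait_for 30 2 curl -fsS http://localhost:11434/api/version \
#     || { echo "Ollama did not become ready in time" >&2; exit 1; }
```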
Remaining hunk lines:

```yaml
      - run: npm ci
      - run: npm run dereferenceSchemas && npm run generate:types && npm run compile
      - name: Run tests with coverage
```
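As a related hardening step (an assumption, not something the PR or reviewers propose), a job could fail fast after `ollama pull` if the model is missing from the local registry, using the `GET /api/tags` endpoint mentioned in the review. The helper below is a sketch; the grep-based match is a deliberate simplification of real JSON parsing.

```shell
# model_in_tags TAGS_JSON MODEL: crude check that MODEL appears as a model
# name in an /api/tags response (naive string match, not a JSON parser).
model_in_tags() {
  printf '%s' "$1" | grep -q "\"name\":\"$2\""
}

# Workflow usage (assumed step, not in the PR):
#   tags="$(curl -fsS http://localhost:11434/api/tags)"
#   model_in_tags "$tags" "qwen3:4b" || { echo "model missing" >&2; exit 1; }
```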
New file (`@@ -0,0 +1 @@`):

Code Quality Reviewer Prompt Template
New file (`@@ -0,0 +1 @@`):

Implementer Prompt Template

(coderabbitai[bot] marked this conversation as resolved; outdated)
New file (`@@ -0,0 +1 @@`):

Spec Reviewer Prompt Template
New file (`@@ -0,0 +1,8 @@`):

- [ ] Implement `src/ai.ts` and `test/ai.test.ts`
- [ ] Create `test/ai.test.ts` with content from `.opencode/pr_content/test/ai.test.ts` (but corrected paths)
- [ ] Create `src/ai.ts` with basic scaffolding
- [ ] Implement `detectProvider` logic
- [ ] Implement `generate` logic
- [ ] Implement `generateWithSchemaValidation` logic
- [ ] Implement helpers (`simplifySchemaForOllama`, `fileToImagePart`, etc.)
- [ ] Verify with tests

(coderabbitai[bot] marked this conversation as resolved; outdated)
New generated file (`ai.d.ts`, `@@ -0,0 +1,60 @@`):

```typescript
import { z } from "zod";
export declare const DEFAULT_MODEL = "ollama/qwen3:4b";
export declare const MAX_SCHEMA_VALIDATION_RETRIES = 3;
/**
 * Maps our supported model enums to the model identifiers that platforms expect.
 */
export declare const modelMap: Record<string, string>;
interface DetectedProvider {
    provider: "openai" | "anthropic" | "google" | "ollama" | null;
    model: string | null;
    apiKey?: string | null;
    baseURL?: string;
}
/**
 * Detects the provider, model, and API from a model string and environment variables.
 */
export declare const detectProvider: (config: any, model: string) => Promise<DetectedProvider>;
/**
 * Simplifies a JSON schema for providers with limited schema support (e.g., Ollama).
 * - Dereferences $ref pointers
 * - Merges allOf schemas
 * - Converts top-level anyOf (discriminated unions) into a single object with all options as optional properties
 * - Simplifies nested anyOf by preferring object types
 * - Removes unsupported keywords like pattern, components, etc.
 */
export declare const simplifySchemaForOllama: (schema: any) => any;
/**
 * Extracts the API key for a provider from a Doc Detective config object.
 */
export declare const getApiKey: (config: any, provider: "openai" | "anthropic" | "google") => any;
export interface GenerateOptions {
    prompt?: string;
    messages?: any[];
    files?: any[];
    model?: string;
    system?: string;
    schema?: z.ZodSchema | any;
    schemaName?: string;
    schemaDescription?: string;
    provider?: "openai" | "anthropic" | "ollama" | "google";
    config?: any;
    apiKey?: string;
    baseURL?: string;
    temperature?: number;
    maxTokens?: number;
}
/**
 * Generates text or structured output using an AI model.
 */
export declare const generate: ({ prompt, messages, files, model, system, schema, schemaName, schemaDescription, provider, config, apiKey, baseURL, temperature, maxTokens, }: GenerateOptions) => Promise<{
    object: any;
    usage: import("ai").LanguageModelUsage;
    finishReason: import("ai").FinishReason;
} | {
    text: string;
    usage: import("ai").LanguageModelUsage;
    finishReason: import("ai").FinishReason;
}>;
export {};
//# sourceMappingURL=ai.d.ts.map
```
Review comment:

Add version pinning and checksum verification to the Ollama installation steps.

- The `curl | sh` approach is Ollama's documented Linux install method, but it should pin a version via `OLLAMA_VERSION=...` to avoid unintended upgrades during CI runs, and downloading and inspecting the script before execution is a best practice for CI automation. Note that the script makes privileged system changes: it installs under `/usr`, creates an `ollama` system user, and creates and enables a systemd service.
- The macOS step uses `brew install ollama`, which is not in Ollama's official documentation (the documented macOS installs are the `ollama.dmg` app and the standalone CLI release assets).
- SHA-256 checksums are listed next to every downloadable asset on Ollama's GitHub Releases page, and release tags carry GitHub's verified signature; add verification steps before installation.
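The pin-and-verify recommendation above can be sketched as a small install wrapper. This is illustrative, not the reviewer's exact proposal: `OLLAMA_VERSION` is documented by Ollama, but the helper name, the placeholder version, and the idea of recording a known-good hash out of band are assumptions.

```shell
# verify_sha256 FILE EXPECTED: succeed only if FILE hashes to EXPECTED.
verify_sha256() {
  actual="$(sha256sum "$1" | awk '{print $1}')"
  [ "$actual" = "$2" ]
}

# CI usage (version and hash are placeholders, recorded out of band):
#   curl -fsSL https://ollama.com/install.sh -o /tmp/ollama-install.sh
#   verify_sha256 /tmp/ollama-install.sh "$KNOWN_GOOD_SHA256" \
#     || { echo "installer checksum mismatch" >&2; exit 1; }
#   OLLAMA_VERSION=<pinned version> sh /tmp/ollama-install.sh
```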