Changes from 7 commits
29ffa95
feat: port generate() and ollama integration from PR #73
hawkeyexl Jan 27, 2026
6fce59d
chore: upgrade zod to v4 to support ollama-ai-provider-v2
hawkeyexl Jan 27, 2026
96cf24c
fix: remove test stubs, use real Ollama API calls in CI
hawkeyexl Jan 27, 2026
e70fd07
Add Windows Ollama support and remove test skipping
hawkeyexl Jan 27, 2026
47309c1
fix: check if Ollama already running before starting server
hawkeyexl Jan 27, 2026
6fd6ac3
fix: check Ollama availability at custom baseUrl before pulling
hawkeyexl Jan 27, 2026
697c298
fix: use dynamic import for chai in ollama.test.js
hawkeyexl Jan 27, 2026
61c8b4d
ci: add fail-fast: false to test matrix
hawkeyexl Jan 27, 2026
8a24fe4
ci: increase test timeout to 30 minutes
hawkeyexl Jan 27, 2026
63a3458
test: add comprehensive coverage tests for ai and ollama modules
hawkeyexl Jan 27, 2026
67ac3eb
ci: temporarily lower coverage thresholds for ai module
hawkeyexl Jan 27, 2026
7f165ed
ci: skip Ollama installation on macOS/Windows to prevent timeout
hawkeyexl Jan 28, 2026
9f2a96b
test: skip Ollama integration tests when Ollama unavailable
hawkeyexl Jan 28, 2026
ab4e577
test: skip Ollama default provider test when unavailable
hawkeyexl Jan 28, 2026
d9c56c7
test: auto-start Ollama server for AI tests with smart fallback strategy
hawkeyexl Jan 28, 2026
6b8bee8
test: add comprehensive unit tests for Ollama setup utilities
hawkeyexl Jan 28, 2026
92818cd
fix: Address PR review comments from coderabbit and copilot
hawkeyexl Jan 28, 2026
67 changes: 66 additions & 1 deletion .github/workflows/npm-test.yml
@@ -16,7 +16,7 @@ on:

jobs:
test:
timeout-minutes: 5
timeout-minutes: 15
runs-on: ${{ matrix.os }}
strategy:
matrix:
@@ -36,19 +36,84 @@ jobs:
cache: "npm"
cache-dependency-path: package-lock.json
node-version: ${{ matrix.node }}

# Install Ollama on Linux
- name: Install Ollama (Linux)
if: runner.os == 'Linux'
run: |
curl -fsSL https://ollama.com/install.sh | sh

# Install Ollama on macOS
- name: Install Ollama (macOS)
if: runner.os == 'macOS'
run: |
brew install ollama

# Install Ollama on Windows
- name: Install Ollama (Windows)
if: runner.os == 'Windows'
shell: pwsh
run: |
# Download Ollama installer
Invoke-WebRequest -Uri "https://ollama.com/download/OllamaSetup.exe" -OutFile "OllamaSetup.exe"
# Install silently
Start-Process -FilePath ".\OllamaSetup.exe" -Args "/S" -Wait
# Add to PATH for this session
$env:PATH = "$env:LOCALAPPDATA\Programs\Ollama;$env:PATH"
echo "$env:LOCALAPPDATA\Programs\Ollama" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

Ollama official installation method Linux macOS Windows CI checksum signature verification

💡 Result:

Official install methods

Linux

  • Recommended (script):
    curl -fsSL https://ollama.com/install.sh | sh
    (You can pin a version via OLLAMA_VERSION=....) [1][2]
  • Manual (tarball):
    curl -fsSL https://ollama.com/download/ollama-linux-amd64.tar.zst | sudo tar x -C /usr
    (Also available for ARM64 and AMD/ROCm as separate downloads.) [1]

macOS

  • Recommended: download ollama.dmg, mount it, then drag Ollama.app to /Applications. The app can create/link the ollama CLI into /usr/local/bin if needed. [3]

Windows

  • Recommended: run the OllamaSetup.exe installer (installs per-user by default; no Administrator required). [4]
  • Standalone CLI option: download ollama-windows-amd64.zip (and optional ...-rocm.zip for AMD) and run ollama serve if you want to run it as a service with your own tooling. [4]

CI (common official approaches)

  • Linux CI runner: run the official Linux install script (above), then ollama serve / ollama run .... [1][2]
  • Container-based CI: use the official Docker image ollama/ollama (CPU/GPU variants and examples are documented). [5]

Checksum / signature verification

Verify release asset checksums (SHA-256)

Ollama’s GitHub Releases list a sha256: value next to each downloadable asset (Linux .tar.zst, Windows .zip, macOS archives, etc.). Download the asset and compare its SHA-256 to the value shown on the release page. [6]

Examples:

  • Linux
    sha256sum ollama-linux-amd64.tar.zst
  • macOS
    shasum -a 256 Ollama-darwin.zip
  • Windows (PowerShell)
    Get-FileHash .\ollama-windows-amd64.zip -Algorithm SHA256

Verify “signature” / provenance on GitHub

On the Releases page, the release commit is shown as “signed with GitHub’s verified signature” (GPG key ID displayed). This is GitHub’s commit-signature verification for the release/tag metadata. [6]


Sources

  • [1] Ollama Linux install docs. (docs.ollama.com)
  • [2] install.sh (official Linux installer script). (ollama.com)
  • [3] Ollama macOS install docs. (docs.ollama.com)
  • [4] Ollama Windows install docs. (docs.ollama.com)
  • [5] Ollama Docker docs (official ollama/ollama image usage). (docs.ollama.com)
  • [6] Ollama GitHub Releases (assets include per-file sha256:; release commit shows “Verified” signature). (github.com)


🌐 Web query:

Ollama install.sh script security best practices curl

💡 Result:

Why curl … | sh is risky (specifically for Ollama’s install.sh)

Ollama’s one-liner installs into system paths and can make privileged system changes: it writes under /usr/bin, creates an ollama system user, adds your current user to the ollama group, creates/enables a systemd service, and may add NVIDIA CUDA repos + install drivers via your package manager. That’s a lot to run “blind” from the network. [1]

Safer best practices (recommended order)

  1. Prefer the “manual install” path (no script execution).
    Ollama documents downloading a tarball and extracting it (plus optional ROCm bundle) instead of running install.sh. [2]
  2. If you use install.sh, download first, inspect, then run.
    Avoid executing streamed content; download to a file, review it, and only then execute. This is widely recommended as a mitigation for “curl | bash/sh” supply-chain and inspection/logging issues. [3]
  3. Pin what you’re installing (reduce “latest changed under me”).
    Ollama’s install script supports installing a specific version via OLLAMA_VERSION=… (so you don’t implicitly install “whatever is current today”). [2]
  4. Verify integrity (checksum/signature/provenance) when available.
    If the vendor publishes hashes/signatures, verify them before running. If not, you can still do out-of-band verification (e.g., pin a known-good checksum in your automation) using patterns/tools like checksum.sh. [4]
  5. Run with least privilege / controlled environment.
    Because the script can install system packages and drivers, consider running it in a fresh VM, disposable host, or tightly controlled build image first, then promote artifacts.

Practical “safer Ollama install.sh” pattern

# 1) Download (don’t execute yet)
curl -fsSL https://ollama.com/install.sh -o ollama-install.sh

# 2) Inspect before running
less ollama-install.sh

# 3) (Optional but recommended) pin a specific version
#    (example version string shown in Ollama docs)
OLLAMA_VERSION=0.5.7 sh ollama-install.sh

Pinning via OLLAMA_VERSION is documented by Ollama. [2] The script you’re inspecting is the same one referenced from Ollama’s official install instructions. [5]

If you’re writing automation (CI/provisioning)

  • Vendor-safe alternative: mirror/cache the reviewed script (or better: use manual tarball install) to avoid “server changed overnight” risk. [6]
  • Do not pipe to shell in Dockerfiles; download + verify + run is the standard remediation scanners look for. [3]

Sources: Ollama install script and Linux install docs, plus general guidance on avoiding “curl | sh” without verification. [1][2][3][4][5][6]


🌐 Web query:

Ollama GitHub releases checksums GPG signatures verification

💡 Result:

1) Verify Ollama GitHub Release checksums (SHA-256)

On each Ollama release page, every binary asset is listed with a sha256:<hash> value next to it. Example: the v0.15.2 release shows a SHA-256 for each asset (macOS/Linux/Windows). [1]

Download the asset, then compute SHA-256 locally and compare:

macOS / Linux

# macOS
shasum -a 256 Ollama-darwin.zip

# Linux (common)
sha256sum ollama-linux-amd64.tar.zst

Windows (PowerShell)

Get-FileHash .\ollama-windows-amd64.zip -Algorithm SHA256

The computed hash must exactly match the sha256: shown for that asset on the GitHub release page. [1]


2) Verify GPG signatures (what exists on Ollama releases)

Ollama’s GitHub releases typically do not provide separate .asc/detached-signature files for the downloadable binaries; instead, GitHub shows a “Verified” badge for the release’s underlying tag/commit, including a GPG key ID (for example, v0.15.2 shows “signed with GitHub’s verified signature” and a GPG key ID). [1]

Option A (on GitHub UI)

On the Releases page, look for the Verified badge next to the commit/tag and open it to see details (key ID, etc.). GitHub documents how to check commit/tag verification status. [4]

Option B (verify the signed tag locally with git)

  1. Fetch tags:
git clone https://github.com/ollama/ollama.git
cd ollama
git fetch --tags
  2. Verify the release tag signature:
git tag -v v0.15.2

GitHub’s docs describe verifying signed tags with git tag -v. [2]

Note: if git tag -v can’t verify, you usually need to import the signer’s public key (the key ID is shown in the GitHub “Verified” details). [1][2]


Sources

[1] Ollama GitHub Releases page (shows per-asset sha256: values and “Verified” signature/key ID on the release commit/tag) (github.com)
[2] GitHub Docs: “Signing tags” (includes git tag -v verification) (docs.github.com)
[4] GitHub Docs: “Checking your commit and tag signature verification status” (docs.github.com)


Add version pinning and checksum verification to Ollama installation steps.

  • Linux: The official curl | sh approach is documented but should pin a version via OLLAMA_VERSION=... to avoid unintended upgrades during CI runs. Consider downloading and inspecting the script before execution (best practice for CI automation).
  • macOS: Use the official macOS installer (DMG, drag to Applications) instead of brew install, which is not in Ollama's official documentation.
  • Windows: Verify the installer integrity using SHA-256 checksum from Ollama's GitHub Releases before executing.

Checksums are available for all release assets on GitHub; add verification steps before installation.
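The pinned-version + checksum pattern the comment asks for can be sketched as a small shell helper. This is a minimal sketch, not the PR's actual step: the `OLLAMA_VERSION` value and the expected-hash placeholder are assumptions, and the real hash must come from the matching GitHub release page.

```shell
#!/usr/bin/env sh
# Fail the job if a downloaded file's SHA-256 doesn't match the pinned value.
verify_sha256() {
  file="$1"
  expected="$2"
  actual="$(sha256sum "$file" | awk '{print $1}')"
  if [ "$actual" != "$expected" ]; then
    echo "Checksum mismatch for $file: expected $expected, got $actual" >&2
    return 1
  fi
  echo "Checksum OK for $file"
}

# Hypothetical usage in the workflow step (commands commented out so the
# sketch is inert; version and hash are placeholders, not real values):
# OLLAMA_VERSION=0.5.7
# EXPECTED_SHA256="<sha256 from the release page>"
# curl -fsSL "https://ollama.com/install.sh" -o ollama-install.sh
# verify_sha256 ollama-install.sh "$EXPECTED_SHA256"
# OLLAMA_VERSION="$OLLAMA_VERSION" sh ollama-install.sh
```

Because `verify_sha256` returns nonzero on mismatch, a `run:` step using it fails the job automatically, which satisfies the "fail on checksum mismatch" requirement above.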

🤖 Prompt for AI Agents
In @.github/workflows/npm-test.yml around lines 41-65, the CI install steps
for Ollama ("Install Ollama (Linux)", "Install Ollama (macOS)", and "Install
Ollama (Windows)") need version pinning and checksum verification: for the
Linux step, replace the blind curl|sh with downloading a specific
OLLAMA_VERSION release asset, verify its SHA-256 against the GitHub release
checksum, optionally inspect the script, then run it; for the macOS step,
avoid `brew install` and instead download the pinned DMG or installer for the
specified OLLAMA_VERSION and verify its checksum before mounting or running
the installer; for the Windows step, download the specific OllamaSetup.exe for
the pinned OLLAMA_VERSION, fetch the release SHA-256 from GitHub Releases and
verify the installer's integrity before calling Start-Process, and only then
update PATH for the session. Ensure each step fails the job on checksum
mismatch and logs the version and checksum sources.

# Start Ollama server and pull model (Linux/macOS)
- name: Start Ollama and pull model (Linux/macOS)
if: runner.os != 'Windows'
run: |
# Check if Ollama is already running (systemd service may have started it)
if ! curl -s http://localhost:11434/api/tags > /dev/null 2>&1; then
# Start Ollama server in background
ollama serve &
# Wait for server to start
sleep 5
fi
# Pull the default model used in tests
ollama pull qwen3:4b

# Start Ollama server and pull model (Windows)
- name: Start Ollama and pull model (Windows)
if: runner.os == 'Windows'
shell: pwsh
run: |
# Start Ollama server in background
Start-Process -FilePath "ollama" -ArgumentList "serve" -NoNewWindow
# Wait for server to start
Start-Sleep -Seconds 10
# Pull the default model used in tests
ollama pull qwen3:4b

- run: npm ci
- run: npm run build # Automatically run tests because of the `postbuild` script in package.json

coverage:
name: Coverage Check
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
Comment on lines +70 to 71

⚠️ Potential issue | 🟡 Minor

Consider aligning coverage timeout with test job.

Coverage runs also install/pull Ollama; 15 minutes may be tight on slow runners. Aligning with the 30‑minute test timeout would reduce intermittent failures.

⏱️ Suggested adjustment
-    timeout-minutes: 15
+    timeout-minutes: 30
🤖 Prompt for AI Agents
In @.github/workflows/npm-test.yml around lines 98-99, the coverage job
currently sets timeout-minutes: 15, which can be too short when installing and
pulling Ollama; update the coverage job's timeout-minutes value to match the
test job (30 minutes) so both jobs use the same 30-minute timeout. Locate the
timeout-minutes key in the coverage job block (the section containing
timeout-minutes and steps) and change it there.

- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
cache: "npm"
cache-dependency-path: package-lock.json
node-version: 20

# Install and start Ollama for coverage tests
- name: Install Ollama
run: |
curl -fsSL https://ollama.com/install.sh | sh

- name: Start Ollama and pull model
run: |
ollama serve &
sleep 5
ollama pull qwen3:4b
Comment on lines +80 to +88

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

fd -type f -name "npm-test.yml"

Repository: doc-detective/common



🌐 Web query:

Ollama health check endpoint readiness API CI/CD

💡 Result:

Ollama health / readiness endpoints (HTTP)

Liveness (is the server up):

  • GET http://<host>:11434/ → returns Ollama is running (HTTP 200). This is commonly used as a basic health check. [3]

“Ready enough” (API responding):

  • GET http://<host>:11434/api/version → returns JSON with the Ollama version (HTTP 200). [1]

Stronger readiness (optional; also checks model registry responding):

  • GET http://<host>:11434/api/tags → returns list of local models (HTTP 200). (Often used in container healthchecks, but see note below about curl.) [4]

CI/CD + container orchestration patterns

Kubernetes probes (recommended split):

  • livenessProbe: GET /
  • readinessProbe: GET /api/version (or /api/tags if you want “model list is reachable” semantics)

Docker / docker-compose healthcheck (important caveat):

  • Many examples use curl -f http://localhost:11434/api/tags, but the official ollama/ollama image has had issues where curl is missing, causing the container to be marked unhealthy even when Ollama is running. [2]
  • Workarounds:
    • Use a tool that exists in the image (if available), or
    • Add a tiny sidecar healthcheck container, or
    • Build a small custom image layer that installs curl, or
    • Healthcheck from outside the container (e.g., your orchestrator/CI runner).

Simple CI “wait until ready” (example)

Use a loop against /api/version (fast + deterministic): [1]

until curl -fsS http://localhost:11434/api/version >/dev/null; do sleep 1; done

Sources: Ollama API docs for /api/version and base URL [1], Ollama “root” health response discussion/usage [3], Docker image curl missing issue [2], common /api/tags healthcheck usage in the wild [4].
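The one-liner above loops forever if the server never comes up; in CI a bounded retry is usually preferable. A hedged sketch (the helper name and the 30 × 2s budget in the usage comment are arbitrary choices, not from the Ollama docs):

```shell
# Generic bounded retry helper: arguments are max attempts, delay in seconds,
# then the probe command. Returns 0 as soon as the probe succeeds, 1 if it
# never does within the attempt budget.
wait_until_ready() {
  attempts="$1"; delay="$2"; shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    sleep "$delay"
    i=$((i + 1))
  done
  echo "not ready after $attempts attempts" >&2
  return 1
}

# Usage in CI, probing the /api/version endpoint discussed above
# (fails the step if Ollama never becomes ready):
# wait_until_ready 30 2 curl -fsS http://localhost:11434/api/version
```

Returning nonzero on timeout means the workflow step fails loudly instead of hanging until the job-level timeout fires.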


🌐 Web query:

Ollama install.sh auto-start service behavior

💡 Result:

Running curl -fsSL https://ollama.com/install.sh | sh on Linux will set up and auto-start Ollama as a systemd service, if systemd is actually running.

What the script does (service/autostart behavior)

  • If systemctl exists, the script creates an ollama system user/group, adds your current user to the ollama group, and writes a systemd unit file for Ollama (so it runs ollama serve). The unit is installed as /etc/systemd/system/ollama.service. [1]
  • If systemctl is-system-running reports running or degraded, the script will:
    • systemctl daemon-reload
    • systemctl enable ollama (autostart at boot)
    • systemctl restart ollama (so it starts immediately) [1]
  • If systemd is not running (common in some containers and in WSL unless enabled), it prints a warning and does not start/enable the service. [1]

Re-running the script

  • Re-running install.sh can replace/overwrite an existing /etc/systemd/system/ollama.service (so local edits to the unit file may be lost). [2]
  • Official docs recommend customizing via systemctl edit ollama (override drop-in) rather than editing the unit directly. [3]

If you want to stop autostart

sudo systemctl disable --now ollama

(Official uninstall steps also remove the unit file.) [3]

Sources: install script [1], GitHub issue about unit replacement [2], official Linux docs (customize/uninstall) [3].


🏁 Script executed:

fd -type f "npm-test.yml"

Repository: doc-detective/common



🏁 Script executed:

cat .github/workflows/npm-test.yml | head -100 | tail -30

Repository: doc-detective/common



🌐 Web query:

GitHub Actions runner environment systemd Linux service autostart

💡 Result:

On systemd-based Linux, the supported way to make a GitHub Actions self-hosted runner auto-start on boot is to install it as a service using the runner’s built-in svc.sh script:

# in the runner install dir (after ./config.sh has registered the runner)
sudo ./svc.sh install            # or: sudo ./svc.sh install <username>
sudo ./svc.sh start
sudo ./svc.sh status

This creates a systemd unit (under /etc/systemd/system/) so the runner starts automatically when the machine boots. [1]

If you prefer managing it yourself (instead of svc.sh), you can create your own systemd unit and enable it with:

sudo systemctl daemon-reload
sudo systemctl enable --now <your-service>.service

(Example unit structure shown here.) [2]

Sources: [1] GitHub Docs, “Configuring the self-hosted runner application as a service” • [2] exe.dev example systemd unit for a GitHub Actions runner


🏁 Script executed:

git ls-files | grep -E "\.github/workflows"

Repository: doc-detective/common



🌐 Web query:

GitHub Actions Ubuntu hosted runner systemd available service

💡 Result:

Yes—GitHub Actions Ubuntu GitHub-hosted runners are full Azure VMs with systemd installed (and used as the init system), so systemctl is available and you can start/stop/query services (typically with sudo). (docs.github.com)

Evidence: the official actions/runner-images release notes for the Ubuntu runner images explicitly list a “Systemd version”, confirming it’s present on the hosted images. (github.com)

Notes/limits:

  • Any service changes you make (enable/start) only apply for that job run because hosted runners are ephemeral. (docs.github.com)
  • If you’re running Actions locally via Docker-based runners (e.g., act), systemd generally won’t work inside those containers, which is a common source of confusion. (actions-oss.github.io)


Use /api/version endpoint for readiness check and avoid double-start.

The install script auto-starts Ollama via systemd on GitHub Actions runners; ollama serve & will conflict. Replace with a readiness check loop. Use /api/version (the recommended CI health-check endpoint) instead of /api/tags.

✅ Suggested change
       - name: Start Ollama and pull model
         run: |
-          ollama serve &
-          sleep 5
-          ollama pull qwen3:4b
+          if ! curl -s http://localhost:11434/api/version > /dev/null 2>&1; then
+            ollama serve &
+          fi
+          for i in {1..30}; do
+            if curl -s http://localhost:11434/api/version > /dev/null 2>&1; then
+              break
+            fi
+            sleep 2
+          done
+          if ! curl -s http://localhost:11434/api/version > /dev/null 2>&1; then
+            echo "Ollama did not become ready in time" >&2
+            exit 1
+          fi
+          ollama pull qwen3:4b
🤖 Prompt for AI Agents
In @.github/workflows/npm-test.yml around lines 80-88, the "Start Ollama and
pull model" step manually runs "ollama serve &", which conflicts with the
install script having already started Ollama via systemd; remove the manual
start and instead poll the /api/version endpoint (the recommended CI
health-check endpoint) in a readiness loop until it succeeds, then run "ollama
pull qwen3:4b". This avoids the double-start and ensures the model pull only
happens after the server is ready.


- run: npm ci
- run: npm run dereferenceSchemas && npm run generate:types && npm run compile
- name: Run tests with coverage
@@ -0,0 +1 @@
Code Quality Reviewer Prompt Template

⚠️ Potential issue | 🟡 Minor

Placeholder file with no actual content.

This file contains only a title with no actual prompt template content. If this is intentional scaffolding for future work, consider adding a TODO comment or removing until the actual content is ready. If the content was meant to be included, please add the complete prompt template.

🤖 Prompt for AI Agents
In
@.opencode/skills/superpowers/subagent-driven-development/code-quality-reviewer-prompt.md
at line 1, the file contains only the title "Code Quality Reviewer Prompt
Template" with no body; either populate it with the actual prompt template
content, add a clear TODO comment indicating intentional scaffolding, or
remove the file if unused, so reviewers know the empty file is intentional
rather than accidental.

@@ -0,0 +1 @@
Implementer Prompt Template
@@ -0,0 +1 @@
Spec Reviewer Prompt Template
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Placeholder file with no actual content.

Same as the other prompt template files. Consider deferring these additions until the actual prompt templates are developed.

🤖 Prompt for AI Agents
In
@.opencode/skills/superpowers/subagent-driven-development/spec-reviewer-prompt.md
at line 1, this file is an empty placeholder; either remove it from the PR or
replace it with a clear TODO header and minimal metadata (e.g., "TODO:
implement spec reviewer prompt template") so it isn't committed as an empty
artifact; if you must keep it, also reference the pending templates in the
repo README or an index so reviewers know they are intentionally deferred.

8 changes: 8 additions & 0 deletions .opencode/todos/port-ai-js.md
@@ -0,0 +1,8 @@
- [ ] Implement `src/ai.ts` and `test/ai.test.ts`
- [ ] Create `test/ai.test.ts` with content from `.opencode/pr_content/test/ai.test.ts` (but corrected paths)
- [ ] Create `src/ai.ts` with basic scaffolding
- [ ] Implement `detectProvider` logic
- [ ] Implement `generate` logic
- [ ] Implement `generateWithSchemaValidation` logic
- [ ] Implement helpers (`simplifySchemaForOllama`, `fileToImagePart`, etc.)
- [ ] Verify with tests
60 changes: 60 additions & 0 deletions dist/ai.d.ts
@@ -0,0 +1,60 @@
import { z } from "zod";
export declare const DEFAULT_MODEL = "ollama/qwen3:4b";
export declare const MAX_SCHEMA_VALIDATION_RETRIES = 3;
/**
* Maps our supported model enums to the model identifiers that platforms expect.
*/
export declare const modelMap: Record<string, string>;
interface DetectedProvider {
provider: "openai" | "anthropic" | "google" | "ollama" | null;
model: string | null;
apiKey?: string | null;
baseURL?: string;
}
/**
* Detects the provider, model, and API from a model string and environment variables.
*/
export declare const detectProvider: (config: any, model: string) => Promise<DetectedProvider>;
/**
* Simplifies a JSON schema for providers with limited schema support (e.g., Ollama).
* - Dereferences $ref pointers
* - Merges allOf schemas
* - Converts top-level anyOf (discriminated unions) into a single object with all options as optional properties
* - Simplifies nested anyOf by preferring object types
* - Removes unsupported keywords like pattern, components, etc.
*/
export declare const simplifySchemaForOllama: (schema: any) => any;
/**
* Extracts the API key for a provider from a Doc Detective config object.
*/
export declare const getApiKey: (config: any, provider: "openai" | "anthropic" | "google") => any;
export interface GenerateOptions {
prompt?: string;
messages?: any[];
files?: any[];
model?: string;
system?: string;
schema?: z.ZodSchema | any;
schemaName?: string;
schemaDescription?: string;
provider?: "openai" | "anthropic" | "ollama" | "google";
config?: any;
apiKey?: string;
baseURL?: string;
temperature?: number;
maxTokens?: number;
}
/**
* Generates text or structured output using an AI model.
*/
export declare const generate: ({ prompt, messages, files, model, system, schema, schemaName, schemaDescription, provider, config, apiKey, baseURL, temperature, maxTokens, }: GenerateOptions) => Promise<{
object: any;
usage: import("ai").LanguageModelUsage;
finishReason: import("ai").FinishReason;
} | {
text: string;
usage: import("ai").LanguageModelUsage;
finishReason: import("ai").FinishReason;
}>;
export {};
//# sourceMappingURL=ai.d.ts.map
1 change: 1 addition & 0 deletions dist/ai.d.ts.map

