goagent is a compact, vendor‑agnostic command‑line tool for running non‑interactive, tool‑using agents against any OpenAI‑compatible Chat Completions API. It executes a small, auditable allowlist of local tools (argv only; no shell), streams JSON in/out, and prints a concise final answer.
- Why use it: deterministic, portable, and safe by default. Works with hosted providers and with local endpoints like `http://localhost:1234/v1`.
- Who it's for: engineers who want a minimal agent runner with clear guarantees and zero vendor lock‑in.
- What makes it different: strict "argv‑only" tool execution, explicit allowlists, and a pragmatic default LLM policy for predictable behavior across providers.
- At a glance
- Features
- Installation
- Quick start
- Usage
- Configuration
- Examples
- Security
- Troubleshooting
- Tests
- Documentation
- Diagrams
- Contributing
- Development tooling
- Support
- Roadmap
- Project status
- License and credits
- Changelog
- More examples
- CI quality gates
- State persistence (-state-dir)
- Minimal, portable, vendor‑agnostic: works with any OpenAI‑compatible endpoint
- Deterministic and auditable: argv‑only tool execution, JSON stdin/stdout, strict timeouts
- Safe by default: explicit allowlist of tools; no shell evaluation
- Batteries included: a small toolbelt for filesystem, process, network, and image tasks
- OpenAI‑compatible `POST /v1/chat/completions` via `net/http` (no SDK)
- Explicit tools allowlist: `tools.json` with JSON Schema parameters (see Tools manifest reference)
- Deterministic execution: argv‑only tools, JSON stdin/stdout, per‑call timeouts
- Predictable error surface: tool errors mapped as structured JSON
- Observability & hygiene: audit logging with redaction; transcript size controls
- Go 1.24+ on Linux, macOS, or Windows
- Network access to an OpenAI‑compatible API
- For development and examples: `ripgrep` (`rg`), `jq`, and `golangci-lint`
- Download a binary: see Releases
- With Go (adds `agentcli` to your `GOBIN`):

go install github.com/hyperifyio/goagent/cmd/agentcli@latest
- Build from source:
git clone https://github.com/hyperifyio/goagent
cd goagent
make bootstrap tidy build build-tools
Verify installation:
./bin/agentcli --version
Developer prerequisites (examples):
# ripgrep
# - Ubuntu/Debian
sudo apt-get update && sudo apt-get install -y ripgrep
# - macOS (Homebrew)
brew install ripgrep
# - Windows (Chocolatey)
choco install ripgrep -y
# golangci-lint (pinned; installs into ./bin via Makefile)
make install-golangci
./bin/golangci-lint version
# jq (used by examples and runbooks)
# - Ubuntu/Debian
sudo apt-get install -y jq
# - macOS (Homebrew)
brew install jq
# - Windows (Chocolatey)
choco install jq -y
# make (Windows, for running Makefile targets used in docs)
# - Windows (Chocolatey)
choco install make -y
Configuration precedence is: flags > environment > built‑in defaults.
Environment variables:
- `OAI_BASE_URL` — API base (default `https://api.openai.com/v1`). Helper scripts also read `LLM_BASE_URL` if present.
- `OAI_MODEL` — model ID (default `oss-gpt-20b`). Helper scripts also read `LLM_MODEL` if present.
- `OAI_API_KEY` — API key when required. The CLI also accepts `OPENAI_API_KEY` for compatibility.
- `OAI_HTTP_TIMEOUT` — HTTP timeout for chat requests (e.g., `90s`). Mirrors `-http-timeout`.
- `OAI_PREP_HTTP_TIMEOUT` — HTTP timeout for the pre-stage; overrides inheritance from `-http-timeout`.
The CLI resolves values independently for chat (main), pre-stage, and image flows, with inheritance when explicit values are not provided.
Endpoints and API keys:
| Setting | Resolution order |
|---|---|
| Chat base URL | `-base-url` → `OAI_BASE_URL` → default `https://api.openai.com/v1` |
| Pre-stage base URL | `-prep-base-url` → `OAI_PREP_BASE_URL` → inherit Chat base URL |
| Image base URL | `-image-base-url` → `OAI_IMAGE_BASE_URL` → inherit Chat base URL |
| Chat API key | `-api-key` → `OAI_API_KEY` → `OPENAI_API_KEY` |
| Pre-stage API key | `-prep-api-key` → `OAI_PREP_API_KEY` → inherit Chat API key |
| Image API key | `-image-api-key` → `OAI_IMAGE_API_KEY` → inherit Chat API key |
Models and sampling:
| Setting | Resolution order |
|---|---|
| Chat model | `-model` → `OAI_MODEL` → default `oss-gpt-20b` |
| Pre-stage model | `-prep-model` → `OAI_PREP_MODEL` → inherit Chat model |
| Image model | `-image-model` → `OAI_IMAGE_MODEL` → default `gpt-image-1` |
| Chat temperature vs top-p | One‑knob rule: if `-top-p` is set, omit `temperature`; otherwise send `-temp` (default 1.0) when supported |
| Pre-stage temperature vs top-p | One‑knob rule applies independently with `-prep-temp` / `-prep-top-p` |
HTTP controls:
| Setting | Resolution order |
|---|---|
| Chat HTTP timeout | `-http-timeout` → `OAI_HTTP_TIMEOUT` → fallback to `-timeout` if set |
| Pre-stage HTTP timeout | `-prep-http-timeout` → `OAI_PREP_HTTP_TIMEOUT` → inherit Chat HTTP timeout |
| Image HTTP timeout | `-image-http-timeout` → `OAI_IMAGE_HTTP_TIMEOUT` → inherit Chat HTTP timeout |
| Chat HTTP retries | `-http-retries` → `OAI_HTTP_RETRIES` → default (e.g., 2) |
| Pre-stage HTTP retries | `-prep-http-retries` → `OAI_PREP_HTTP_RETRIES` → inherit Chat HTTP retries |
| Image HTTP retries | `-image-http-retries` → `OAI_IMAGE_HTTP_RETRIES` → inherit Chat HTTP retries |
| Chat HTTP retry backoff | `-http-retry-backoff` → `OAI_HTTP_RETRY_BACKOFF` → default |
| Pre-stage HTTP retry backoff | `-prep-http-retry-backoff` → `OAI_PREP_HTTP_RETRY_BACKOFF` → inherit Chat backoff |
| Image HTTP retry backoff | `-image-http-retry-backoff` → `OAI_IMAGE_HTTP_RETRY_BACKOFF` → inherit Chat backoff |
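As a concrete illustration, the flags > environment > defaults resolution for the chat base URL can be sketched in plain POSIX shell. `FLAG_BASE_URL` is a name invented for this example (standing in for a parsed `-base-url` value); the CLI performs this resolution internally.

```shell
# Illustrative sketch of flag > environment > default resolution for the
# chat base URL. FLAG_BASE_URL is a stand-in for a parsed -base-url value.
FLAG_BASE_URL=""                               # empty: -base-url was not passed
OAI_BASE_URL="http://localhost:1234/v1"        # simulate an environment override
DEFAULT_BASE_URL="https://api.openai.com/v1"
# First non-empty value wins, left to right:
BASE_URL="${FLAG_BASE_URL:-${OAI_BASE_URL:-$DEFAULT_BASE_URL}}"
echo "$BASE_URL"    # prints: http://localhost:1234/v1 (environment beats default)
```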
Install the CLI and point it to a reachable OpenAI‑compatible API (local or hosted):
export OAI_BASE_URL=http://localhost:1234/v1
export OAI_MODEL=oss-gpt-20b
make build build-tools # skip if installed via go install / release binary
Create a minimal `tools.json` next to the binary (Unix/macOS):
{
"tools": [
{
"name": "get_time",
"description": "Return current time for an IANA timezone (default UTC). Accepts 'timezone' (canonical) and alias 'tz'.",
"schema": {
"type": "object",
"properties": {
"timezone": {"type": "string", "description": "e.g. Europe/Helsinki"},
"tz": {"type": "string", "description": "Alias for timezone (deprecated)"}
},
"required": ["timezone"],
"additionalProperties": false
},
"command": ["./tools/bin/get_time"],
"timeoutSec": 5
}
]
}
On Windows, use a `.exe` suffix for tool binaries:
{
"tools": [
{
"name": "get_time",
"schema": {"type":"object","properties":{"timezone":{"type":"string"}},"required":["timezone"],"additionalProperties":false},
"command": ["./tools/bin/get_time.exe"],
"timeoutSec": 5
}
]
}
Run the agent:
./bin/agentcli \
-prompt "What's the local time in Helsinki? If tools are available, call get_time." \
-tools ./tools.json \
-debug
Expected behavior: the model may call `get_time`; the CLI executes `./tools/bin/get_time` (or `get_time.exe` on Windows) with JSON on stdin, appends the result as a `tool` message, calls the API again, then prints a concise final answer.
Tip: run `./bin/agentcli -h` for the complete help output.
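To make the stdin/stdout contract concrete, a tool can be as small as the sketch below. The inline `ARGS` assignment stands in for reading stdin, and the `sed` extraction is illustrative only; a real tool should use a proper JSON parser.

```shell
# Hypothetical stand-in for a get_time tool honoring the JSON stdin/stdout
# contract. In a real tool, read the arguments with: ARGS=$(cat)
ARGS='{"timezone":"Europe/Helsinki"}'
# Naive field extraction for illustration; production tools should parse JSON.
TZ_NAME=$(printf '%s' "$ARGS" | sed -n 's/.*"timezone" *: *"\([^"]*\)".*/\1/p')
# Emit a JSON result object on stdout, as the agent expects.
RESULT=$(printf '{"timezone":"%s","iso":"%s"}' "${TZ_NAME:-UTC}" "$(date -u +%Y-%m-%dT%H:%M:%SZ)")
echo "$RESULT"
```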
Flags are order‑insensitive. You can place `-prompt` and other flags in any order; precedence remains flag > environment > default.
-prompt string User prompt (required)
-tools string Path to tools.json (optional)
-system string System prompt (default: helpful and precise)
-base-url string OpenAI‑compatible base URL (env OAI_BASE_URL; scripts accept LLM_BASE_URL fallback)
-api-key string API key if required (env OAI_API_KEY; falls back to OPENAI_API_KEY)
-model string Model ID (env OAI_MODEL; scripts accept LLM_MODEL fallback)
-max-steps int Maximum reasoning/tool steps (default 8). A hard ceiling of 15 is enforced; exceeding the cap terminates with: "needs human review".
-http-timeout duration HTTP timeout for chat completions (env OAI_HTTP_TIMEOUT; default falls back to -timeout)
-prep-http-timeout duration HTTP timeout for pre-stage (env OAI_PREP_HTTP_TIMEOUT; default falls back to -http-timeout)
-prep-model string Pre-stage model ID (env OAI_PREP_MODEL; inherits -model if unset)
-prep-base-url string Pre-stage base URL (env OAI_PREP_BASE_URL; inherits -base-url if unset)
-prep-api-key string Pre-stage API key (env OAI_PREP_API_KEY; falls back to OAI_API_KEY/OPENAI_API_KEY; inherits -api-key if unset)
-prep-http-retries int Pre-stage HTTP retries (env OAI_PREP_HTTP_RETRIES; inherits -http-retries if unset)
-prep-http-retry-backoff duration Pre-stage HTTP retry backoff (env OAI_PREP_HTTP_RETRY_BACKOFF; inherits -http-retry-backoff if unset)
-prep-dry-run Run pre-stage only, print refined Harmony messages to stdout, and exit 0
-print-messages Pretty-print the final merged message array to stderr before the main call
-http-retries int Number of retries for transient HTTP failures (timeouts, 429, 5xx). Uses jittered exponential backoff. (default 2)
-http-retry-backoff duration Base backoff between HTTP retry attempts (exponential with jitter). (default 300ms)
-tool-timeout duration Per-tool timeout (default falls back to -timeout)
-timeout duration [DEPRECATED] Global timeout; prefer -http-timeout and -tool-timeout
-temp float Sampling temperature (default 1.0)
-top-p float Nucleus sampling probability mass (conflicts with -temp; omits temperature when set)
-prep-top-p float Nucleus sampling probability mass for pre-stage (conflicts with -temp; omits temperature when set)
-prep-profile string Pre-stage prompt profile (deterministic|general|creative|reasoning); sets temperature when supported (conflicts with -prep-top-p)
-prep-enabled Enable pre-stage (default true). When false, skip pre-stage and proceed directly to main call.
-debug Dump request/response JSON to stderr
-verbose Also print non-final assistant channels (critic/confidence) to stderr
-channel-route name=stdout|stderr|omit Override default channel routing (final→stdout, critic/confidence→stderr); repeatable
-quiet Suppress non-final output; print only final text to stdout
-capabilities Print enabled tools and exit
-print-config Print resolved config and exit
-dry-run Print intended state actions (restore/refine/save) and exit without writing state
--version | -version Print version and exit
Run `./bin/agentcli -h` to see the built‑in help.
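The retry knobs can be visualized with a small sketch of the delay schedule. Doubling per attempt is an assumption for illustration; the real client also applies jitter, which is omitted here for determinism.

```shell
# Sketch of an exponential backoff schedule for -http-retries /
# -http-retry-backoff. Assumption: the base delay doubles per attempt;
# the CLI additionally applies jitter (not shown).
BASE_MS=300    # mirrors the -http-retry-backoff default
RETRIES=2      # mirrors the -http-retries default
delays=""
delay=$BASE_MS
i=0
while [ "$i" -lt "$RETRIES" ]; do
  delays="$delays ${delay}ms"
  delay=$((delay * 2))
  i=$((i + 1))
done
echo "retry delays:$delays"    # prints: retry delays: 300ms 600ms
```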
The following flags control the Images API behavior used by the assistant when generating images. Precedence is always: flags > environment > inheritance > default.
| Flag | Environment | Default / Inheritance | Description |
|---|---|---|---|
| `-image-base-url string` | `OAI_IMAGE_BASE_URL` | Inherits `-base-url` | Image API base URL |
| `-image-api-key string` | `OAI_IMAGE_API_KEY` | Inherits `-api-key`; falls back to `OPENAI_API_KEY` | API key for Images API |
| `-image-model string` | `OAI_IMAGE_MODEL` | `gpt-image-1` | Images model ID |
| `-image-http-timeout duration` | `OAI_IMAGE_HTTP_TIMEOUT` | Inherits `-http-timeout` | HTTP timeout for image requests |
| `-image-http-retries int` | `OAI_IMAGE_HTTP_RETRIES` | Inherits `-http-retries` | Retry attempts for transient image HTTP errors |
| `-image-http-retry-backoff duration` | `OAI_IMAGE_HTTP_RETRY_BACKOFF` | Inherits `-http-retry-backoff` | Base backoff for image HTTP retries |
| `-image-n int` | `OAI_IMAGE_N` | `1` | Number of images to generate |
| `-image-size string` | `OAI_IMAGE_SIZE` | `1024x1024` | Size WxH |
| `-image-quality string` | `OAI_IMAGE_QUALITY` | `standard` | `standard` or `hd` |
| `-image-style string` | `OAI_IMAGE_STYLE` | `natural` | `natural` or `vivid` |
| `-image-response-format string` | `OAI_IMAGE_RESPONSE_FORMAT` | `url` | `url` or `b64_json` |
| `-image-transparent-background` | `OAI_IMAGE_TRANSPARENT_BACKGROUND` | `false` | Request transparent background when supported |
- The default `-temp 1.0` is standardized for broad provider/model parity and GPT‑5 compatibility.
- The one‑knob rule applies independently to both stages: if you set `-top-p` (or `-prep-top-p`), the agent omits `temperature` for that stage; otherwise it sends `temperature` (default 1.0) when supported and leaves `top_p` unset.
- Pre‑stage profiles are available via `-prep-profile`, e.g. `deterministic` sets temperature to 0.1 when supported.
- See the policy for details and rationale: ADR‑0004: Default LLM policy.
List enabled tools from a manifest without running the agent. The output includes a prominent header warning, and certain tools such as `img_create` are annotated with an extra warning because they make outbound network calls and can save files:
./bin/agentcli -tools ./tools.json -capabilities
Run against a GPT‑5 compatible endpoint without tuning sampling knobs. The CLI sends `temperature: 1.0` by default for models that support it.
./bin/agentcli -prompt "Say ok" -model gpt-5 -base-url "$OAI_BASE_URL" -api-key "$OAI_API_KEY" -max-steps 1 -debug
# stderr will include a request dump containing "\"temperature\": 1"
Minimal JSON transcript showing correct tool‑call sequencing:
[
{"role":"user","content":"What's the local time in Helsinki?"},
{
"role":"assistant",
"content":null,
"tool_calls":[
{
"id":"call_get_time_1",
"type":"function",
"function":{
"name":"get_time",
"arguments":"{\"timezone\":\"Europe/Helsinki\"}"
}
}
]
},
{
"role":"tool",
"tool_call_id":"call_get_time_1",
"name":"get_time",
"content":"{\"timezone\":\"Europe/Helsinki\",\"iso\":\"2025-08-17T12:34:56Z\",\"unix\":1755424496}"
},
{"role":"assistant","content":"It's 15:34 in Helsinki."}
]
Notes:
- For parallel tool calls (multiple entries in `tool_calls`), append one `role:"tool"` message per `id` before calling the API again. The order of tool messages is not significant as long as each `tool_call_id` is present exactly once.
- Transcript hygiene: when running without `-debug`, the CLI replaces any single tool message content larger than 8 KiB with `{"truncated":true,"reason":"large-tool-output"}` before sending to the API. Use `-debug` to inspect full payloads during troubleshooting.
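The transcript-hygiene rule can be sketched as a plain size check; the 8 KiB threshold and the placeholder JSON shape come from the documented behavior above.

```shell
# Sketch of transcript hygiene: replace oversized tool output with the
# documented placeholder before sending the transcript to the API.
LIMIT=8192                                             # 8 KiB, as documented
TOOL_OUTPUT=$(head -c 10000 /dev/zero | tr '\0' 'x')   # simulate a large output
if [ "${#TOOL_OUTPUT}" -gt "$LIMIT" ]; then
  TOOL_OUTPUT='{"truncated":true,"reason":"large-tool-output"}'
fi
echo "$TOOL_OUTPUT"
```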
See `examples/tool_calls.md` for a self-contained, test-driven worked example that:
- Exercises the default temperature of 1.0
- Demonstrates a two-tool-call interaction with matching `tool_call_id`
- Captures a transcript via `-debug` showing request/response JSON dumps
Run the example test:
go test ./examples -run TestWorkedExample_ToolCalls_TemperatureOne_Sequencing -v
Use one provider for chat and another for image generation by overriding the image backend only:
export OAI_BASE_URL=https://api.example-chat.local/v1
export OAI_API_KEY=chat-key
./bin/agentcli \
-prompt "Create a simple logo" \
-tools ./tools.json \
-image-base-url https://api.openai.com/v1 \
-image-api-key "$OPENAI_API_KEY" \
-image-model gpt-image-1 \
-image-size 1024x1024
See also ADR‑0005 for the pre‑stage flow and channel routing details: `docs/adr/0005-harmony-pre-processing-and-channel-aware-output.md`.
Inspect message arrays deterministically without running the full loop:
# Pre-stage only: print refined Harmony messages and exit
./bin/agentcli -prompt "Say ok" -prep-dry-run | jq .
# Before the main call: pretty-print merged messages to stderr, then proceed
./bin/agentcli -prompt "Say ok" -print-messages 2> >(jq .)
Build the exec tool and run a simple command (Unix):
make build-tools
echo '{"cmd":"/bin/echo","args":["hello"]}' | ./tools/bin/exec
# => {"exitCode":0,"stdout":"hello\n","stderr":"","durationMs":<n>}
Timeout example:
echo '{"cmd":"/bin/sleep","args":["2"],"timeoutSec":1}' | ./tools/bin/exec
# => non-zero exit, stderr contains "timeout"
The following examples assume `make build-tools` has produced binaries in `tools/bin/`.
make build-tools
printf 'hello world' > tmp_readme_demo.txt
echo '{"path":"tmp_readme_demo.txt"}' | ./tools/bin/fs_read_file | jq .
rm -f tmp_readme_demo.txt
make build-tools
echo -n 'hello ' | base64 > b64a.txt
echo -n 'world' | base64 > b64b.txt
echo '{"path":"tmp_append_demo.txt","contentBase64":"'"$(cat b64a.txt)"'"}' | ./tools/bin/fs_append_file | jq .
echo '{"path":"tmp_append_demo.txt","contentBase64":"'"$(cat b64b.txt)"'"}' | ./tools/bin/fs_append_file | jq .
cat tmp_append_demo.txt; rm -f tmp_append_demo.txt b64a.txt b64b.txt
make build-tools
echo -n 'hello world' | base64 > b64.txt
echo '{"path":"tmp_write_demo.txt","contentBase64":"'"$(cat b64.txt)"'"}' | ./tools/bin/fs_write_file | jq .
cat tmp_write_demo.txt; rm -f tmp_write_demo.txt b64.txt
make build-tools
echo '{"path":"tmp_mkdirp_demo/a/b/c","modeOctal":"0755"}' | ./tools/bin/fs_mkdirp | jq .
ls -ld tmp_mkdirp_demo/a/b/c
echo '{"path":"tmp_mkdirp_demo/a/b/c","modeOctal":"0755"}' | ./tools/bin/fs_mkdirp | jq .
rm -rf tmp_mkdirp_demo
make build-tools
printf 'temp' > tmp_rm_demo.txt
echo '{"path":"tmp_rm_demo.txt"}' | ./tools/bin/fs_rm | jq .
mkdir -p tmp_rm_dir/a/b && touch tmp_rm_dir/a/b/file.txt
echo '{"path":"tmp_rm_dir","recursive":true}' | ./tools/bin/fs_rm | jq .
rm -rf tmp_rm_dir
make build-tools
printf 'payload' > tmp_move_src.txt
echo '{"from":"tmp_move_src.txt","to":"tmp_move_dst.txt"}' | ./tools/bin/fs_move | jq .
printf 'old' > tmp_move_dst.txt; printf 'new' > tmp_move_src.txt
echo '{"from":"tmp_move_src.txt","to":"tmp_move_dst.txt","overwrite":true}' | ./tools/bin/fs_move | jq .
rm -f tmp_move_src.txt tmp_move_dst.txt
make build-tools
mkdir -p tmp_listdir_demo/a b && touch tmp_listdir_demo/.hidden tmp_listdir_demo/a/afile tmp_listdir_demo/bfile
echo '{"path":"tmp_listdir_demo"}' | ./tools/bin/fs_listdir | jq '.entries | map(.path)'
jq -n '{path:"tmp_listdir_demo",recursive:true,globs:["**/*"],includeHidden:false}' | ./tools/bin/fs_listdir | jq '.entries | map(select(.type=="file") | .path)'
rm -rf tmp_listdir_demo
make build-tools
cat > /tmp/demo.diff <<'EOF'
--- /dev/null
+++ b/tmp_patch_demo.txt
@@ -0,0 +1,2 @@
+hello
+world
EOF
jq -n --arg d "$(cat /tmp/demo.diff)" '{unifiedDiff:$d}' | ./tools/bin/fs_apply_patch | jq .
printf 'hello\nworld\n' | diff -u - tmp_patch_demo.txt && echo OK
make build-tools
printf 'abcdef' > tmp_edit_demo.txt
echo -n 'XY' | base64 > b64.txt
jq -n --arg b "$(cat b64.txt)" '{path:"tmp_edit_demo.txt",startByte:2,endByte:4,replacementBase64:$b}' | ./tools/bin/fs_edit_range | jq .
cat tmp_edit_demo.txt # => abXYef
rm -f tmp_edit_demo.txt b64.txt
make build-tools
printf 'hello world' > tmp_stat_demo.txt
echo '{"path":"tmp_stat_demo.txt","hash":"sha256"}' | ./tools/bin/fs_stat | jq .
rm -f tmp_stat_demo.txt
Generate images via an OpenAI‑compatible Images API and save files into your repository (default) or return base64 on demand.
Quickstart (Unix/macOS/Windows via `make build-tools`):
make build-tools
Minimal `tools.json` entry (copy/paste next to your binary):
{
"tools": [
{
"name": "img_create",
"description": "Generate image(s) with OpenAI Images API and save to repo or return base64",
"schema": {
"type": "object",
"required": ["prompt"],
"properties": {
"prompt": {"type": "string"},
"n": {"type": "integer", "minimum": 1, "maximum": 4, "default": 1},
"size": {"type": "string", "pattern": "^\\d{3,4}x\\d{3,4}$", "default": "1024x1024"},
"model": {"type": "string", "default": "gpt-image-1"},
"return_b64": {"type": "boolean", "default": false},
"save": {
"type": "object",
"required": ["dir"],
"properties": {
"dir": {"type": "string"},
"basename": {"type": "string", "default": "img"},
"ext": {"type": "string", "enum": ["png"], "default": "png"}
},
"additionalProperties": false
}
},
"additionalProperties": false
},
"command": ["./tools/bin/img_create"],
"timeoutSec": 120,
"envPassthrough": ["OAI_API_KEY", "OAI_BASE_URL", "OAI_IMAGE_BASE_URL", "OAI_HTTP_TIMEOUT"]
}
]
}
Run the agent with a prompt that instructs the assistant to call `img_create` and save under `assets/`:
export OAI_BASE_URL=${OAI_BASE_URL:-https://api.openai.com/v1}
export OAI_API_KEY=your-key
./bin/agentcli \
-tools ./tools.json \
-prompt "Generate a tiny illustrative image using img_create and save it under assets/ with basename banner" \
-debug
# Expect: one or more PNGs under assets/ (e.g., assets/banner_001.png) and a concise final message on stdout
Notes:
- By default, the tool writes image files and does not include base64 in transcripts, avoiding large payloads.
- To return base64 instead, pass `{"return_b64": true}` to the tool; base64 is elided in stdout unless `IMG_CREATE_DEBUG_B64=1` or `DEBUG_B64=1` is set.
- Windows: the built binary is `./tools/bin/img_create.exe` and `tools.json` should reference the `.exe`.
- See Troubleshooting for network/API issues and timeouts: `docs/runbooks/troubleshooting.md`.
make build-tools
mkdir -p tmp_search_demo && printf 'alpha\nbeta\ngamma\n' > tmp_search_demo/sample.txt
jq -n '{query:"^ga",globs:["**/*.txt"],regex:true}' | ./tools/bin/fs_search | jq '.matches'
rm -rf tmp_search_demo
- Tools are an explicit allowlist from `tools.json`
- No shell interpretation; commands are executed via argv only
- JSON contract on stdin/stdout; strict timeouts per call
- Treat model output as untrusted input; never pass it to a shell

See the full threat model in `docs/security/threat-model.md`.
- Enabling `exec` grants arbitrary command execution and may allow full network access. Treat this as remote code execution.
- Run the CLI and tools in a sandboxed environment (container/jail/VM) with least privilege.
- Keep `tools.json` minimal and audited. Do not pass secrets via tool arguments; prefer environment variables or CI secret stores.
- Audit log redaction: set `GOAGENT_REDACT` to mask sensitive values in audit entries. `OAI_API_KEY`/`OPENAI_API_KEY` are always masked if present.
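As an illustration of what masking looks like in practice, the sketch below mimics the effect with `sed`; the real redaction is internal to the CLI.

```shell
# Hypothetical sketch of audit-log redaction: mask a known secret value in a
# log line. The CLI does this internally for OAI_API_KEY / OPENAI_API_KEY.
SECRET="sk-example-123"                         # illustrative secret, not real
LINE="Authorization: Bearer $SECRET"
MASKED=$(printf '%s' "$LINE" | sed "s/$SECRET/***REDACTED***/")
echo "$MASKED"    # prints: Authorization: Bearer ***REDACTED***
```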
Persist and restore execution state to make repeated runs deterministic and faster.
- Enable by passing `-state-dir <dir>` (or `AGENTCLI_STATE_DIR`). The directory must be private (`0700`).
- On first run, the CLI saves a snapshot `state-<RFC3339UTC>-<8charSHA>.json` and a pointer file `latest.json`.
- On subsequent runs with the same scope, the CLI restores prompts/settings and skips the pre-stage unless `-state-refine` is provided.
- Partition contexts with `-state-scope` (or `AGENTCLI_STATE_SCOPE`); when unset, a default scope is derived from the model, base URL, and toolset.
- Inspect actions without touching disk using `-dry-run`.
Examples:
# First run saves a snapshot
./bin/agentcli -prompt "Say ok" -tools ./tools.json -state-dir "$PWD/.agent-state"
# Restore and skip pre-stage
./bin/agentcli -prompt "Say ok" -tools ./tools.json -state-dir "$PWD/.agent-state"
# Refine existing state with inline text
./bin/agentcli -prompt "Say ok" -state-dir "$PWD/.agent-state" -state-refine -state-refine-text "Tighten tone"
# Use a custom scope to keep contexts separate
./bin/agentcli -prompt "Say ok" -state-dir "$PWD/.agent-state" -state-scope docs-demo
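The documented snapshot naming scheme (`state-<RFC3339UTC>-<8charSHA>.json`) can be sketched as follows. The hashed input is a stand-in (the CLI derives it from its own state), and `sha256sum` is assumed (Linux; use `shasum -a 256` on macOS).

```shell
# Sketch of the documented snapshot name: state-<RFC3339UTC>-<8charSHA>.json
# The hashed input here is illustrative; the real CLI hashes its own state.
STAMP=$(date -u +%Y-%m-%dT%H:%M:%SZ)                   # RFC 3339 UTC timestamp
SHORT=$(printf 'illustrative-input' | sha256sum | cut -c1-8)
NAME="state-${STAMP}-${SHORT}.json"
echo "$NAME"
```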
See ADR‑0012 for rationale and details: `docs/adr/0012-state-dir-persistence.md`.
Common issues and deterministic fixes are documented with copy‑paste commands in `docs/runbooks/troubleshooting.md`.
Start with the Documentation index for design docs, ADRs, and references:
- Tools manifest reference
- Research tools reference
- CLI reference
- Interface: code.sandbox.js.run
- Architecture: Module boundaries
- Security: Threat model
- ADR‑0005: Harmony pre‑processing and channel‑aware output
- Pre-stage Harmony output contract
- ADR‑0006: Image generation tool (img_create)
- ADR‑0010: Adopt SearXNG & network research toolbelt (CLI-only)
- Agent loop: `docs/diagrams/agentcli-seq.md`
- Toolbelt interactions: `docs/diagrams/toolbelt-seq.md`
- Pre‑stage flow: `docs/diagrams/harmony-prep-seq.md`
Run the full test suite (offline):
go test ./...
Lint, vet, and formatting checks:
make lint
make fmt # apply gofmt -s -w to the repo
Guarded logs cleanup:
# Only removes ./logs when ./logs/STATE trimmed equals DOWN
make clean-logs
# End-to-end verification of the guard logic (creates temp dirs)
make test-clean-logs
Reproducible builds: the `Makefile` uses `-trimpath` and stripped `-ldflags` with VCS stamping disabled, so two clean builds produce identical binaries. Verify locally by running `make clean build build-tools` twice and comparing `sha256sum` outputs.
Contributions are welcome! See `CONTRIBUTING.md` for workflow, coding standards, and how to run quality gates locally. Please also read `CODE_OF_CONDUCT.md`.
Useful local helpers during development:
- `make check-tools-paths` — enforce canonical `tools/cmd/NAME` sources and `tools/bin/NAME` invocations (requires `rg`)
- `make verify-manifest-paths` — ensure relative `tools.json` commands use `./tools/bin/NAME` (absolute paths allowed in tests)
- `make build-tool NAME=<name>` — build a single tool binary into `tools/bin/NAME`
- `make check-go-version` — fail fast if your local Go major.minor differs from `go.mod`

If your local toolchain does not match, you will see:
Go toolchain mismatch: system X.Y != go.mod X.Y
Remediation: install the matching Go version shown by `go.mod` (e.g., from the official downloads) or switch via your version manager, then rerun `make check-go-version`.
This repository pins the toolchain for deterministic results:
- CI uses the Go version declared in `go.mod` across all OS jobs.
- Linting is performed with a pinned `golangci-lint` version managed by the `Makefile`.
See ADR‑0003 for the full policy and rationale: docs/adr/0003-toolchain-and-lint-policy.md.
- Open an issue on the tracker: Issues
- For security concerns, avoid posting secrets in logs. If a private report is needed, open an issue with minimal detail and a maintainer will reach out.
- Follow updates from the author on LinkedIn: Jaakko Heusala
Planned improvements and open ideas are tracked in `FUTURE_CHECKLIST.md`. Larger architectural decisions are recorded under `docs/adr/` (see ADR‑0001 and ADR‑0002). Contributions to the roadmap are welcome via issues and PRs.
Experimental, but actively maintained. Interfaces may change before a stable 1.0.
MIT license. See `LICENSE`.
Maintainers and authors:
- Hyperify.io maintainers
- Primary author: Jaakko Heusala
Acknowledgements: inspired by OpenAI‑compatible agent patterns; built for portability and safety.
See `CHANGELOG.md` for notable changes and release notes.
See `examples/unrestricted.md` for copy‑paste prompts demonstrating `exec` + file tools to write, build, and run code in a sandboxed environment.