3. Harbor CLI Reference

Compose Setup Commands

harbor up <services>

Alias: harbor u, harbor start, harbor s

Starts the selected services. See the list of available services here. Run harbor defaults to see the default list of services that will be started. When starting additional services, you might need to run harbor down first so that all services can pick up the updated configuration. API-only services can be started without stopping the main stack.

# Start with default services
harbor up

# Start with additional services
# See service descriptions in the Services Overview section
harbor up searxng

# Start with multiple additional services
harbor up webui ollama searxng llamacpp tts tgi lmdeploy litellm

up supports a few additional behaviors; see below.

Tail logs

You can instruct Harbor to start tailing logs of the services that are started.

# Starts tailing logs as soon as "docker compose up" is done
harbor up webui --tail
# Alias
harbor up webui -t

Open

You can instruct Harbor to also open the service that is started with up, once the docker compose up is done.

# Start default services + searxng and open
# searxng in the default browser
harbor up searxng --open
# Alias
harbor up searxng -o

You can also configure Harbor to automatically run harbor open for the current default UI service. This is useful if you always want the UI open when you start Harbor. The behavior can be enabled by setting the ui.autoopen config field to true.

# Enable auto-open
harbor config set ui.autoopen true
# Disable auto-open (default)
harbor config set ui.autoopen false

You can switch the default UI service with the ui.main config field.

# Set the default UI service
harbor config set ui.main hollama

Skip defaults

You can instruct Harbor to only start the services you specify and skip the default ones.

# Start only the services you explicitly specify
harbor up --no-defaults searxng

Auto-tunnel

You can configure Harbor to automatically start tunnels for given services when running up. This is managed by the harbor tunnels command.

# Add webui to the list of services that will be tunneled
# whenever `harbor up` is run
harbor tunnels add webui

Warning

Exposing your services to the internet is dangerous. Be safe! It's a bad idea to expose a service to the Internet without any authentication.

Capabilities detection

By default, Harbor will try to infer some capabilities of the host (and match related cross files), such as Nvidia GPU availability (nvidia capability) or the presence of modern Docker Compose features (mdc capability).

If this behavior is undesirable or you want to provide a manual list of capabilities, you can disable the automatic detection.

# Disable automatic capability detection
harbor config set capabilities.autodetect false

It's also possible to provide a manual list of capabilities to use instead of the detected ones.

# Provide a default capabilities list manually,
# as a semicolon-separated list
harbor config set capabilities.default 'rocm;cdi'

harbor down

Alias: harbor d

Stops all currently running services.

# Stop all services
harbor down

# Pass down options to docker-compose
harbor down --remove-orphans

harbor restart <services>

Alias: harbor r

Restarts the Harbor stack. Very useful for adjusting the configuration on the fly.

# Restart everything
harbor restart

# Restart a specific service only
harbor restart tabbyapi
# 🚩 Restarting a single service might be
# finicky; if something doesn't look right,
# try a down/up cycle instead

harbor pull <service|model>

Pulls the latest images for the selected services, or downloads the given models. Accepts:

  • Harbor service handle (e.g. ollama, webui, etc.)
  • Ollama model name (e.g. gemma3n:e4b-it-q8_0, hf.co/bartowski/SicariusSicariiStuff_Impish_LLAMA_4B-GGUF:Q8_0)
  • llama.cpp HuggingFace model specifier (e.g. unsloth/GLM-4.7-Flash-GGUF:Q8_0)

# Pull the latest images for the default services
harbor pull

# Pull the latest images for additional services
harbor pull searxng

# Do not pull default services alongside the specified ones
harbor pull --no-defaults searxng

# Pull Ollama model from native registry
harbor pull gemma3n:e4b-it-q8_0

# Pull Ollama model from HuggingFace
harbor pull hf.co/bartowski/SicariusSicariiStuff_Impish_LLAMA_4B-GGUF:Q8_0

# Pull llama.cpp model from HuggingFace (with optional tag)
# Downloads the model to the llama.cpp cache using an ephemeral server
harbor pull microsoft/Phi-3.5-mini-instruct-gguf
harbor pull microsoft/Phi-3.5-mini-instruct-gguf:Q4_K_M

Note

When pulling a llama.cpp model, Harbor starts an ephemeral llama.cpp server that downloads the model to the cache and then exits. The model format must be supported by llama.cpp's HuggingFace integration (typically GGUF files).

harbor build <services>

Builds the images for the selected services. Mostly relevant for services that have a local Dockerfile in the Harbor repository.

# HF Downloader is an example of a service that
# has a local Dockerfile
harbor build hfdownload

harbor ps

A proxy to the docker compose ps command. Displays the status of all services.

harbor ps

harbor logs

Alias: harbor l

Tails logs for all or selected services.

harbor logs

# Show logs for a specific service
harbor logs webui

# Show logs for multiple services
harbor logs webui ollama

# Filter specific logs with grep
harbor logs webui | grep ERROR

# Start tailing logs after "harbor up"
harbor up llamacpp --tail

# Show last 1000 lines in the initial tail chunk
harbor logs -n 1000

Additionally, harbor logs accepts all the options that docker-compose logs does.
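For example, docker compose's follow and time-filter options pass straight through (exact flag availability depends on your Compose version):

# Follow new log output
harbor logs webui -f

# Only show logs produced in the last 10 minutes
harbor logs webui --since 10m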

harbor exec <service> <command>

Allows executing arbitrary commands in the container running the given service. Useful for inspecting a service at runtime or performing custom operations that aren't natively covered by the Harbor CLI.

# This is the same folder as "harbor/open-webui"
harbor exec webui ls /app/backend/data

# Check the processes in searxng container
harbor exec searxng ps aux

exec offers plenty of flexibility. Some useful examples below.

Launch an interactive shell in the running container with one of the services.

# Launch "bash" in the ollama service
harbor exec ollama bash

# You then land in the interactive
# container shell
root@279a3a523a0b:/#

Access useful scripts and CLIs bundled with llamacpp.

# See .sh scripts from the llama.cpp
harbor exec llamacpp ls ./scripts
# Run one of the bundled CLI tools
harbor exec llamacpp ./llama-bench --help

Having to ensure that the service is running might not always be convenient. See harbor run and harbor cmd.

harbor run <service> <command>

Runs (in order of precedence):

  • One of the configured aliases
  • A command in one of the Harbor services

Aliases

# Configure and run an alias to quickly edit the .env file
harbor alias set env 'code $(harbor home)/.env'
harbor run env

Aliases take precedence over services in case of a name conflict. See the harbor aliases reference for more details.
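As an illustration of the precedence rule, here's a hypothetical alias that shadows a service handle:

# With this alias set, `harbor run ollama` runs the alias,
# not a command in the ollama service
harbor alias set ollama 'harbor ollama list'
harbor run ollama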

Services

Unlike harbor exec, harbor run starts a new container with the given command. This is useful for running one-off commands or scripts that don't require the service to be running. Note that the command accepts the service handle, not the container name; the main container for the service will be used.

# Run a one-off command in the litellm service
harbor run litellm --help

This command has a pretty rigid structure: it doesn't allow you to override the entrypoint or run an interactive shell. See harbor exec and harbor cmd for more flexibility.

harbor run litellm --help
# Will run the same command as
$(harbor cmd litellm) run litellm --help

harbor shell <service>

Launch interactive shell in the service's container. Useful for debug and inspection.

# Tries to launch with "bash" shell by default
harbor shell tabbyapi

# You can switch to another shell by supplying
# an additional argument (must be available in the container)
harbor shell tabbyapi sh
harbor shell tabbyapi ash
harbor shell tabbyapi fish
harbor shell tabbyapi zsh

harbor cmd <services>

Prepares the same docker compose call that is used by Harbor itself; you can then use it to run arbitrary Docker commands.

# Will print docker compose command
# that is used to start these services
harbor cmd webui litellm vllm

It's most useful when combined with eval of the returned command.

$(harbor cmd litellm) run litellm --help
# Unlike exec, this doesn't require service to be running
$(harbor cmd litellm) run -it --entrypoint bash litellm

# Note, this is not an equivalent of `harbor down`,
# it'll only shut down the default services.
$(harbor cmd) down

# Harbor has a special wildcard notation for compose commands.
# Note the quotes around the wildcard (otherwise it'll be expanded by the shell)
$(harbor cmd "*") down
# And now, this is an equivalent of
harbor down

harbor eject

Renders Harbor's Docker Compose configuration into a standalone config that can be moved and used elsewhere. Accepts the same options as harbor up.

# Eject with default services
harbor eject

# Eject with additional services
harbor eject searxng

# Likely, you want the output to be saved in a file
harbor eject searxng llamacpp > docker-compose.harbor.yml
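Since eject accepts the same options as up, flags such as --no-defaults should also apply here (a sketch based on the statement above):

# Eject only the specified services, skipping the defaults
harbor eject --no-defaults llamacpp > docker-compose.llamacpp.yml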

harbor home

Prints the path to Harbor's home directory, where the Harbor CLI is located and where the configuration and data are stored.

harbor home

Most notably, you can use this command to refer to Harbor's workspace for other commands and services that might require it.

# For example - see all files in the Harbor workspace
ls $(harbor home)

# Or, inspect a folder used by a specific service
ls $(harbor home)/services/ollama

harbor doctor

Runs a diagnostic script to check if all requirements are met for Harbor to run properly.

Will check things like relevant Docker and Docker Compose versions, the presence of required directories, and other things that might prevent the Harbor CLI or the Harbor App from running as expected.

harbor doctor

Setup Management Commands

harbor ollama <command>

Runs the Ollama CLI in the container against the Harbor configuration.

# All Ollama commands are available
harbor ollama --version

# Show currently cached models
harbor ollama list

# Pull a model
harbor ollama pull llama3.2

# Run a model interactively
harbor ollama run llama3.2

# Remove a model
harbor ollama rm llama3.2

# See for more commands
harbor ollama --help

harbor ollama ctx

Get/set the context length for Ollama (sets OLLAMA_CONTEXT_LENGTH environment variable).

# Show current context length
harbor ollama ctx

# Set context length to 8192 tokens
harbor ollama ctx 8192

# Set to 128k for large context models
harbor ollama ctx 131072

Configuration

# Configure ollama version, accepts a docker tag
harbor config set ollama.version 0.3.7-rc5-rocm

# Or use latest
harbor config set ollama.version latest

harbor llamacpp <command>

Runs CLI tasks specific to managing the llamacpp service.

harbor llamacpp models

List models currently loaded by the llama.cpp server.

# List loaded models
harbor llamacpp models

harbor llamacpp model

Get/set the model to run via HuggingFace URL.

# Show the model currently configured to run
harbor llamacpp model

# Set a new model to run via a HuggingFace URL
# ⚠️ Note, other kinds of URLs are not supported
harbor llamacpp model https://huggingface.co/user/repo/blob/main/file.gguf
# Above command is an equivalent of
harbor config set llamacpp.model https://huggingface.co/user/repo/blob/main/file.gguf
# And will translate to the --hf-repo and --hf-file flags for the llama.cpp CLI at runtime

harbor llamacpp gguf

Get/set the path to a GGUF file to run (an alternative to the model URL).

# Show the current GGUF path
harbor llamacpp gguf

# Set path to local GGUF file
harbor llamacpp gguf /models/model.gguf

harbor llamacpp args

Get/set extra arguments to pass to the llama.cpp CLI.

# Show current arguments
harbor llamacpp args

# Set extra arguments
harbor llamacpp args '-c 4096 -n 512'

harbor tgi <command>

Runs CLI tasks specific to managing the text-generation-inference service.

harbor tgi model

Get/set the model repository to run.

# Show the model currently configured to run
harbor tgi model

# Set model repository
harbor tgi model meta-llama/Llama-3.2-3B-Instruct

harbor tgi quant

Get/set the quantization mode. Must match the contents of the model repository.

# Show current quantization
harbor tgi quant

# Set quantization (awq, eetq, exl2, gptq, marlin, bitsandbytes, bitsandbytes-nf4, bitsandbytes-fp4, fp8)
harbor tgi quant awq

harbor tgi revision

Get/set the model revision/branch to use.

# Show current revision
harbor tgi revision

# Set revision
harbor tgi revision 4.0bpw

harbor tgi args

Get/set extra arguments to pass to the TGI CLI.

# Show current arguments
harbor tgi args

# Set extra arguments
harbor tgi args '--max-input-length 4096'

Example Configuration

# Unlike llama.cpp, a few more parameters are needed,
# example of setting them below
harbor tgi model TheBloke/Llama-2-7B-AWQ
harbor tgi quant awq
harbor tgi revision 4.0bpw

# Alternatively, configure all in one go
harbor config set tgi.model.specifier '--model-id repo/model --quantize awq --revision 3_5'

harbor litellm <command>

Runs CLI tasks specific to managing the litellm service.

# change default username and password to use litellm UI
harbor litellm username admin
harbor litellm password admin

# Open LiteLLM UI in the browser
harbor litellm ui
# Note that it's different from the main litellm endpoint
# that can be opened/accessed with general commands:
harbor open litellm
harbor url litellm

harbor hf

Runs the HuggingFace CLI in the container against the host's HuggingFace cache.

# All HF commands are available
harbor hf --help

# Show current cache status
harbor hf scan-cache

Harbor's hf CLI is expanded with some additional commands for convenience.

harbor hf parse-url <url>

Parses the HuggingFace URL and prints the repository and file names. Useful for setting the model in the llamacpp service.

# Get repository and file names from the HuggingFace URL
harbor hf parse-url https://huggingface.co/user/repo/blob/main/file.gguf
# > Repository: user/repo
# > File: file.gguf

harbor hf token

Manage HF token for accessing private/gated models.

# Set the token
harbor hf token <token>

# Show the token
harbor hf token

harbor hf cache

Get/set the location of HuggingFace cache directory.

# Show current cache location
harbor hf cache

# Set cache location
harbor hf cache /path/to/cache

harbor hf path <user/repo>

Resolve the path to a model directory in HF cache. Useful for finding where a model is stored locally.

# Get the path to a model in cache
harbor hf path meta-llama/Llama-2-7b-hf

harbor hf dl

This is a proxy for the awesome HuggingFaceModelDownloader CLI, pre-configured to run in the same way as the other Harbor services.

# See the original help
harbor hf dl --help

# EXL2 example
#
# -s ./hf - Save the model to global HuggingFace cache (mounted to ./hf)
# -c 10   - make download go brr with 10 concurrent connections
# -m      - model specifier in user/repo format
# -b      - model revision/branch specifier (where applicable)
harbor hf dl -c 10 -m turboderp/TinyLlama-1B-exl2 -b 2.3bpw -s ./hf

# GGUF example
#
# -s ./llama.cpp - Save the model to global llama.cpp cache (mounted to ./llama.cpp)
# -c 10          - make download go brr with 10 concurrent connections
# -m             - model specifier in user/repo format
# :Q2_K          - file filter postfix - will only download files with this postfix
harbor hf dl -c 10 -m TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF:Q2_K -s ./llama.cpp

harbor hf download

HuggingFace's own download utility. Works great when you want to download things for tgi, aphrodite, tabbyapi, vllm, etc.

# Download the model to the global HuggingFace cache
harbor hf download user/repo

# Set the token for private/gated models
harbor hf token <token>
harbor hf download user/private-repo

# Download a specific file
harbor hf download user/repo file

Tip

You can use harbor find to locate downloaded files on your system.

harbor hf find <query>

A shortcut from the terminal to the HuggingFace model search. It will open the search results in the default browser.

# Search for the models with the query
harbor hf find gguf gemma-2
# will open this URL
# https://huggingface.co/models?sort=trending&search=gguf%20gemma-2

# Search for the models with the query
harbor hf find exl2 gemma-2-2b
# will open this URL
# https://huggingface.co/models?sort=trending&search=exl2%20gemma-2-2b

harbor vllm

Runs CLI tasks specific to managing the vllm service.

harbor vllm model

Get/set the model currently configured to run.

# Show the model currently configured to run
harbor vllm model
# Set a new model to run via a repository specifier
harbor vllm model user/repo

harbor vllm args

Manage extra arguments to pass to the vllm engine.

# See the list of arguments in
# the official CLI
harbor run vllm --help

# Show the current arguments
harbor vllm args

# Set new arguments
harbor vllm args '--served-model-name vllm --device cpu'

harbor vllm attention

Select one of the attention backends. See VLLM_ATTENTION_BACKEND in the official env var docs for reference.

# Show the current attention backend
harbor vllm attention

# Set a new attention backend
harbor vllm attention 'ROCM_FLASH'

harbor vllm version

Get/set VLLM version docker tag.

# Show the current version
harbor vllm version

# Set a specific version
harbor vllm version v0.6.0

harbor webui

Runs CLI tasks specific to managing the webui service.

harbor webui version

Get/set the current version of the WebUI. Accepts a docker tag from the GHCR registry.

# Show the current version
harbor webui version

# Set a new version
harbor webui version dev-cuda

harbor webui secret <secret>

Get/Set the secret JWT key for the webui service. Allows Open WebUI JWT tokens to remain valid between Harbor restarts.

# Show the current secret
harbor webui secret

# Set a new secret
harbor webui secret sk-203948

harbor webui name <name>

Get/Set the name of the service for Open WebUI (by default "Harbor").

# Show the current name
harbor webui name

# Set a new name
harbor webui name "Pirate Harbor"

harbor webui log <level>

Get/Set the log level for the webui service. Allows controlling the verbosity of the logs. See the official logging documentation.

# INFO is the default log level
harbor webui log

# Set to DEBUG for more visibility
harbor webui log DEBUG

harbor openai <command>

Manage OpenAI-related configuration for relevant services.

One unusual thing is that Harbor allows setting up multiple OpenAI API URLs and keys. This is mostly useful for services that support such a configuration, for example LiteLLM or Open WebUI.

When setting one or more keys/URLs, the first one will be propagated to serve as the "default" for services that require strictly one URL/key pair.
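For illustration, with two keys configured, the first one acts as the default for single-key services (hypothetical key values):

# sk-first becomes the "default" key
harbor openai keys add sk-first
harbor openai keys add sk-second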

harbor openai keys

Manage OpenAI API keys for the services that require them.

# Show the current API keys
harbor openai keys
harbor openai keys ls

# Add a new API key
harbor openai keys add <key>

# Remove an API key
harbor openai keys rm <key>
# Remove by index (zero-based)
harbor openai keys rm 0

# Underlying config option
harbor config get openai.keys

When setting API keys, the first one is also propagated to be the "default" one, for services that require strictly one key.

harbor openai urls

Manage OpenAI API URLs for the services that require them.

# Show the current URLs
harbor openai urls
harbor openai urls ls

# Add a new URL
harbor openai urls add <url>

# Remove a URL
harbor openai urls rm <url>
# Remove by index (zero-based)
harbor openai urls rm 0

# Underlying config option
harbor config get openai.urls

When setting API URLs, the first one is also propagated to be the "default" one, for services that require strictly one URL.

harbor tabbyapi <command>

Manage TabbyAPI-related configuration.

harbor tabbyapi model

Get/Set the model currently configured to run.

# Show the model currently configured to run
harbor tabbyapi model

# Set a new model to run via a repository specifier
harbor tabbyapi model user/repo
# For example:
harbor tabbyapi model Annuvin/gemma-2-2b-it-abliterated-4.0bpw-exl2

harbor tabbyapi args

Manage extra arguments to pass to the tabbyapi engine. See the arguments in the official Configuration Wiki.

# Show the current arguments
harbor tabbyapi args

# Set new arguments
harbor tabbyapi args --log-prompt true

You can find some other items not listed above by running the tabbyapi CLI with Harbor:

harbor run tabbyapi --help

harbor tabbyapi apidoc

When tabbyapi is running, opens the Swagger UI API docs in the default browser.

harbor tabbyapi apidoc

harbor plandex <command>

Tip

Similarly to the official Plandex CLI, this command is also available under the pdx alias.

Access Plandex CLI for interactions with the self-hosted Plandex instance.

See the service guide for some additional details on the Plandex service setup.

# Access Plandex own CLI
harbor pdx --help

Whenever you run harbor pdx, the tool will have access to the current folder as if it were called directly in the terminal.
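For example, a typical flow is to run it from your project's directory (assuming the standard Plandex new command):

# The current folder is mounted as the Plandex workspace
cd ~/my-project
harbor pdx new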

harbor plandex health

Pings the Plandex server to check if it's up and running, using the official /health endpoint.

# Check the Plandex server health
harbor pdx health # OK

harbor plandex pwd

Allows you to verify which specific folder will be mounted to the Plandex containers as the workspace.

# Show the folder that will be mounted to the Plandex CLI
# against the current location
harbor pdx pwd

harbor mistralrs <command>

A CLI to manage the mistralrs service.

Everything except the commands specified below is passed to the original mistralrs-server CLI.

harbor mistralrs health

Pings the MistralRS server to check if it's up and running, using the official /health endpoint.

# Check the MistralRS server health
harbor mistralrs health # OK

harbor mistralrs docs

Open official service docs in the default browser (when the service is running).

# Open MistralRS docs in the browser
harbor mistralrs docs

harbor mistralrs model

Get/Set the model currently configured to run. See a more detailed guide in the mistralrs service guide.

# Show the model currently configured to run
harbor mistralrs model

# Set a new model to run via a repository specifier
# For "plain" models:
harbor mistralrs model user/repo
# For "gguf" models:
harbor mistralrs model "container/folder -f model.gguf"
# See the guide above for a more detailed overview

harbor mistralrs args

Manage extra arguments to pass to the mistralrs engine. See the full list with harbor mistralrs --help.

# Show the current arguments
harbor mistralrs args

# Set new arguments
harbor mistralrs args "--no-paged-attn --throughput"
# Reset the arguments to the default
harbor mistralrs args ""

harbor mistralrs type

Get/Set the model type currently configured to run.

# Show the model type currently configured to run
harbor mistralrs type

# Set a new model type to run
harbor mistralrs type gguf
harbor mistralrs type plain
# See the service guide for setup on both

harbor mistralrs arch

For the plain type, allows setting the architecture of the model. See the official reference.

# Show the model architecture currently configured to run
harbor mistralrs arch

# Set a new model architecture to run
harbor mistralrs arch mistral
harbor mistralrs arch gemma2

harbor mistralrs isq

For the plain type, allows setting the in-situ quantization (ISQ).

# Show the ISQ status currently configured to run
harbor mistralrs isq

# Set a new ISQ status to run
harbor mistralrs isq Q2K

harbor mistralrs version

Get/set mistral.rs version docker tag.

# Show the current version
harbor mistralrs version

# Set version (0.3, 0.4, etc.)
harbor mistralrs version 0.4

harbor opint <command>

Configure and run Open Interpreter CLI. (Almost) everything except the commands specified below is passed to the original interpreter CLI.

harbor opint backend

Get/set the backend service to use for Open Interpreter (e.g., ollama, vllm, litellm).

# Show the current backend
harbor opint backend

# Set backend service
harbor opint backend ollama
harbor opint backend vllm

harbor opint model

Get/Set the model currently configured to run.

# Show the model currently configured to run
harbor opint model

# Set a new model to run
# must match the "id" of a model of a backend
# that'll be used to serve interpreter requests
harbor opint model <model>

# For example, for ollama
harbor opint model codestral

harbor opint args

Manage extra arguments to pass to the Open Interpreter engine.

# Show the current arguments
harbor opint args

# Set new arguments
harbor opint args "--no-paged-attn --throughput"

harbor opint cmd

Overrides the whole command that will be run in the Open Interpreter container. Useful for running something completely custom.

Warning

Resets "model" and "args" to empty strings.

# Set the command to run in the Open Interpreter container
harbor opint cmd "--profile agentic_code_expert.py"

harbor opint pwd

Prints the directory that will be mounted to the Open Interpreter container as the workspace.

# Show the folder that will be mounted
# to the Open Interpreter CLI
harbor opint pwd

harbor opint profiles

Alias: harbor opint --profiles, harbor opint -p

Works identically (hopefully) to interpreter --profiles: opens the directory storing custom profiles for Open Interpreter.

harbor opint models

Alias: harbor opint --local_models

Open the directory containing local models for Open Interpreter.

harbor opint --os

OS Mode is not supported as there's no established way to have full OS host control from within a container.

harbor aphrodite <command>

Manage Aphrodite-related configurations. Aphrodite is a high-performance vLLM fork optimized for inference.

harbor aphrodite model

Get/set the model currently configured to run.

# Show the model currently configured to run
harbor aphrodite model

# Set a new model to run via a repository specifier
harbor aphrodite model user/repo

harbor aphrodite args

Manage extra arguments to pass to the Aphrodite engine.

# Show the current arguments
harbor aphrodite args

# Set new arguments
harbor aphrodite args '--max-model-len 4096'

harbor aphrodite version

Get/set the Aphrodite version docker tag.

# Show the current version
harbor aphrodite version

# Set a specific version
harbor aphrodite version latest

harbor cmdh <command>

Manage cmdh (Command-H) service configuration. Command-H helps generate CLI commands using AI.

harbor cmdh model

Get/set the cmdh model to use.

# Show the current model
harbor cmdh model

# Set a new model
harbor cmdh model qwen2.5-coder:7b

harbor cmdh host

Get/set the cmdh LLM host provider.

# Show the current host
harbor cmdh host

# Set host to ollama or OpenAI
harbor cmdh host ollama
harbor cmdh host OpenAI

harbor cmdh key

Get/set the cmdh OpenAI API key (when using OpenAI host).

# Show the current key
harbor cmdh key

# Set a new key
harbor cmdh key sk-...

harbor cmdh url

Get/set the cmdh OpenAI API URL (when using OpenAI host).

# Show the current URL
harbor cmdh url

# Set a new URL
harbor cmdh url https://api.openai.com/v1

harbor fabric <command>

Manage Fabric service configuration. Fabric is a CLI tool for applying AI patterns to text.

See Fabric Documentation for pattern details.

harbor fabric model

Get/set the Fabric model to use.

# Show the current model
harbor fabric model

# Set a new model
harbor fabric model gpt-4

harbor fabric patterns

Open the Fabric patterns directory in your file manager.

# Open patterns directory
harbor fabric patterns
harbor fabric --patterns

Usage Examples

# List available patterns
harbor fabric -l

# Use a pattern with piped input
echo "Explain quantum computing" | harbor fabric --pattern explain

# Apply pattern to a file
harbor fabric --pattern summarize < document.txt

harbor parler <command>

Manage Parler TTS service configuration.

harbor parler model

Get/set the Parler TTS model.

# Show the current model
harbor parler model

# Set a new model
harbor parler model parler-tts/parler-tts-mini-v1

harbor parler voice

Get/set the voice description for Parler TTS.

# Show the current voice
harbor parler voice

# Set a new voice description
harbor parler voice "A female speaker with a slightly low-pitched voice"

harbor airllm <command>

Manage AirLLM service configuration. AirLLM enables running large models with limited VRAM.

harbor airllm model

Get/set the model to run.

# Show the current model
harbor airllm model

# Set a new model
harbor airllm model meta-llama/Llama-2-70b-hf

harbor airllm ctx

Get/set the context length for AirLLM.

# Show the current context length
harbor airllm ctx

# Set context length
harbor airllm ctx 4096

harbor airllm compression

Get/set the compression level for AirLLM.

# Show the current compression
harbor airllm compression

# Set compression (4bit, 8bit, or none)
harbor airllm compression 4bit
harbor airllm compression 8bit
harbor airllm compression none

harbor txtai <command>

Manage txtai service configuration for semantic search and RAG.

harbor txtai cache

Get/set the location of global txtai cache.

# Show the current cache location
harbor txtai cache

# Set cache location
harbor txtai cache /path/to/cache

harbor txtai rag model

Get/set the txtai RAG model repository to run.

# Show the current RAG model
harbor txtai rag model

# Set RAG model
harbor txtai rag model user/repo

harbor txtai rag embeddings

Get/set the path to the embeddings file.

# Show the current embeddings path
harbor txtai rag embeddings

# Set embeddings path
harbor txtai rag embeddings /path/to/embeddings

harbor aider <command>

Access Aider AI coding assistant. Aider helps you edit code using AI.

See Aider Documentation for detailed usage.

harbor aider model

Get/set the Aider model to use.

# Show the current model
harbor aider model

# Set a new model
harbor aider model gpt-4

Usage Examples

# Start Aider in current directory
harbor aider

# Start with specific files
harbor aider file1.py file2.py

# Use with a specific model
harbor aider --model gpt-4-turbo

harbor chatui <command>

Manage HuggingFace ChatUI service configuration.

harbor chatui version

Get/set the ChatUI version docker tag.

# Show the current version
harbor chatui version

# Set a new version
harbor chatui version latest

harbor chatui model

Get/set the Ollama model to target.

# Show the current model
harbor chatui model

# Set model ID
harbor chatui model llama3.2

harbor comfyui <command>

Manage ComfyUI service configuration for Stable Diffusion workflows.

harbor comfyui version

Get/set the ComfyUI version docker tag.

# Show the current version
harbor comfyui version

# Set a new version
harbor comfyui version latest

harbor comfyui user

Get/set the ComfyUI username for authentication.

# Show the current username
harbor comfyui user

# Set username
harbor comfyui user admin

harbor comfyui password

Get/set the ComfyUI password for authentication.

# Show the current password
harbor comfyui password

# Set password
harbor comfyui password secret123

harbor comfyui auth

Enable/disable ComfyUI authentication.

# Show auth status
harbor comfyui auth

# Enable authentication
harbor comfyui auth true

# Disable authentication
harbor comfyui auth false

harbor comfyui workspace sync

Sync installed custom nodes to persistent storage.

# Sync custom nodes
harbor comfyui workspace sync

harbor comfyui workspace open

Open the folder containing the ComfyUI workspace in the file manager.

# Open workspace folder
harbor comfyui workspace open

harbor comfyui workspace clear

Clear ComfyUI workspace, including all configurations and models.

# Clear workspace (prompts for confirmation)
harbor comfyui workspace clear

harbor comfyui output

Open the folder containing ComfyUI output in the file manager.

# Open output folder
harbor comfyui output

harbor aichat <command>

Manage aichat service. AIChat is a versatile CLI AI assistant.

See AIChat Documentation for details.

harbor aichat model

Get/set the model to run.

# Show the current model
harbor aichat model

# Set a new model
harbor aichat model gemma2:9b

harbor aichat workspace

Open the aichat workspace directory.

# Open workspace directory
harbor aichat workspace

Usage Examples

# Start interactive chat
harbor aichat

# Execute a single query
harbor aichat "Explain Docker volumes"

# Use with pipes
echo "Translate this to French" | harbor aichat

harbor omnichain <command>

Manage omnichain service configuration.

harbor omnichain workspace

Open the omnichain workspace directory.

# Open workspace
harbor omnichain workspace

harbor sglang <command>

Manage SGLang service configuration. SGLang is a fast inference engine.

harbor sglang model

Get/set the sglang model repository to run.

# Show the current model
harbor sglang model

# Set a new model
harbor sglang model meta-llama/Llama-3.2-3B-Instruct

harbor sglang args

Get/set extra args to pass to the sglang CLI.

# Show the current arguments
harbor sglang args

# Set new arguments
harbor sglang args '--tp 2 --mem-fraction-static 0.8'

harbor jupyter <command>

Manage Jupyter service configuration.

harbor jupyter workspace

Open the Jupyter workspace directory.

# Open workspace
harbor jupyter workspace

harbor jupyter image

Get/set the Jupyter image to run.

# Show the current image
harbor jupyter image

# Set a custom image
harbor jupyter image jupyter/tensorflow-notebook

harbor jupyter deps

Manage extra dependencies to install in the Jupyter image.

# List current dependencies
harbor jupyter deps ls

# Add a dependency
harbor jupyter deps add pandas

# Remove a dependency
harbor jupyter deps rm pandas

harbor ol1 <command>

Manage OL1 service configuration for o1-style reasoning.

harbor ol1 model

Get/set the OL1 model repository to run.

# Show the current model
harbor ol1 model

# Set a new model
harbor ol1 model user/repo

harbor ol1 args

Manage OL1 arguments as a dictionary.

# List current arguments
harbor ol1 args ls

# Set an argument
harbor ol1 args set key value

# Remove an argument
harbor ol1 args rm key

harbor ktransformers <command>

Manage KTransformers service configuration. KTransformers optimizes inference with kernel-level optimizations.

harbor ktransformers model

Get/set the --model_path for KTransformers.

# Show the current model
harbor ktransformers model

# Set model path
harbor ktransformers model /models/Qwen2-7B

harbor ktransformers gguf

Get/set the --gguf_path for KTransformers.

# Show the current GGUF path
harbor ktransformers gguf

# Set GGUF path
harbor ktransformers gguf /models/model.gguf

harbor ktransformers version

Get/set KTransformers version.

# Show the current version
harbor ktransformers version

# Set version
harbor ktransformers version 0.1.0

harbor ktransformers image

Get/set KTransformers docker image.

# Show the current image
harbor ktransformers image

# Set custom image
harbor ktransformers image custom/ktransformers:latest

harbor ktransformers args

Get/set extra args to pass to KTransformers.

# Show current args
harbor ktransformers args

# Set args
harbor ktransformers args '--max_tokens 2048'

harbor openhands <command>

Access OpenHands (formerly OpenDevin) AI coding agent. Provides autonomous software development capabilities.

# Run OpenHands in current directory
harbor openhands

# The workspace is mounted at /opt/workspace_base

harbor stt <command>

Manage Speech-to-Text service configuration.

harbor stt model

Get/set the STT model to run.

# Show the current model
harbor stt model

# Set a new model
harbor stt model openai/whisper-large-v3

harbor stt version

Get/set the STT docker tag.

# Show the current version
harbor stt version

# Set version
harbor stt version latest

harbor speaches <command>

Manage Speaches service configuration (combined STT/TTS).

harbor speaches stt_model

Get/set the STT model to run.

# Show the current STT model
harbor speaches stt_model

# Set STT model
harbor speaches stt_model openai/whisper-base

harbor speaches tts_model

Get/set the TTS model to run.

# Show the current TTS model
harbor speaches tts_model

# Set TTS model
harbor speaches tts_model facebook/mms-tts-eng

harbor speaches tts_voice

Get/set the TTS voice to use.

# Show the current voice
harbor speaches tts_voice

# Set voice
harbor speaches tts_voice en-US-Neural2-A

harbor speaches version

Get/set the Speaches version docker tag.

# Show the current version
harbor speaches version

# Set version
harbor speaches version latest

harbor boost <command>

Manage Boost service configuration. Boost provides advanced reasoning modules for LLMs.

harbor boost urls

Manage OpenAI API URLs to boost.

# List URLs
harbor boost urls ls

# Add a URL
harbor boost urls add https://api.openai.com/v1

# Remove a URL
harbor boost urls rm https://api.openai.com/v1

harbor boost keys

Manage OpenAI API keys to boost.

# List keys
harbor boost keys ls

# Add a key
harbor boost keys add sk-...

# Remove a key
harbor boost keys rm sk-...

harbor boost modules

Manage Boost modules to enable.

# List modules
harbor boost modules ls

# Add a module
harbor boost modules add klmbr

# Remove a module
harbor boost modules rm klmbr

harbor boost klmbr

Manage KLMBR (Knowledge-enhanced Language Model with Bayesian Reasoning) module.

# Access KLMBR module configuration
harbor boost klmbr

harbor boost rcn

Manage RCN (Recursive Cognitive Network) module.

# Access RCN module configuration
harbor boost rcn

harbor boost g1

Manage G1 reasoning module.

# Access G1 module configuration
harbor boost g1

harbor boost r0

Manage R0 reasoning module.

# Access R0 module configuration
harbor boost r0

harbor nexa <command>

Access Nexa SDK CLI. Nexa provides efficient model inference.

See Nexa Documentation for details.

harbor nexa model

Get/set the Nexa model to use.

# Show the current model
harbor nexa model

# Set a new model
harbor nexa model gemma-2b

Usage Examples

# Run Nexa CLI
harbor nexa

# Generate text
harbor nexa gen "Once upon a time"

harbor repopack <command>

Access Repopack CLI. Repopack helps package repository contents for AI context.

See Repopack Documentation for details.

# Pack current directory
harbor repopack

# Pack with specific output
harbor repopack -o output.txt

harbor k6 <command>

Access K6 load testing CLI with Grafana visualization.

See K6 Documentation for test script details.

# Run a load test
harbor k6 script.js

# Run with specific options
harbor k6 run --vus 10 --duration 30s script.js

When running K6 tests, Harbor automatically displays the Grafana dashboard URL.

harbor promptfoo <command>

Access Promptfoo CLI for LLM testing and evaluation.

See Promptfoo Documentation for details.

harbor promptfoo view

Open the Promptfoo UI in browser.

# Open Promptfoo UI
harbor promptfoo view
harbor promptfoo open
harbor promptfoo o

Usage Examples

# Initialize a new config
harbor promptfoo init

# Run evaluations
harbor promptfoo eval

# View results in UI
harbor promptfoo view

harbor webtop <command>

Manage Webtop service (full Linux desktop in browser).

harbor webtop reset

Delete Webtop workspace and reset to fresh state.

# Reset Webtop (stops service and clears data)
harbor webtop reset

harbor langflow <command>

Manage Langflow service configuration for visual LLM workflow building.

harbor langflow ui

Open Langflow UI in browser.

# Open Langflow UI
harbor langflow ui
harbor langflow open

harbor langflow url

Get the Langflow URL.

# Print Langflow URL
harbor langflow url

harbor langflow version

Get/set the Langflow version docker tag.

# Show the current version
harbor langflow version

# Set version
harbor langflow version 1.0.0

harbor langflow auth username

Get/set the Langflow superuser username.

# Show username
harbor langflow auth username

# Set username
harbor langflow auth username admin

harbor langflow auth password

Get/set the Langflow superuser password.

# Show password
harbor langflow auth password

# Set password
harbor langflow auth password secret123

harbor kobold <command>

Manage KoboldCPP service configuration.

harbor kobold model

Get/set the Kobold model repository to run.

# Show the current model
harbor kobold model

# Set a new model
harbor kobold model user/repo

harbor kobold args

Get/set Kobold arguments.

# Show current args
harbor kobold args

# Set args
harbor kobold args '--contextsize 4096'

harbor morphic <command>

Manage Morphic service configuration (AI-powered search interface).

harbor morphic model

Get/set the default model for Morphic.

# Show the current model
harbor morphic model

# Set model
harbor morphic model llama3.2

harbor morphic tool_model

Get/set the tool calling model for Morphic.

# Show the current tool model
harbor morphic tool_model

# Set tool model
harbor morphic tool_model qwen2.5-coder

harbor gptme <command>

Access GPTme AI assistant CLI.

See GPTme Documentation for details.

harbor gptme model

Get/set the GPTme model repository to run.

# Show the current model
harbor gptme model

# Set model
harbor gptme model gpt-4

Usage Examples

# Start interactive session
harbor gptme

# Execute with specific prompt
harbor gptme "Explain Docker networking"

harbor mcp <command>

Manage Model Context Protocol tools.

harbor mcp inspector

Launch MCP Inspector for debugging MCP servers.

# Run MCP Inspector
harbor mcp inspector

harbor modularmax <command>

Manage ModularMax service configuration (Modular MAX Engine inference).

harbor modularmax model

Get/set the ModularMax model repository to run.

# Show the current model
harbor modularmax model

# Set model
harbor modularmax model meta-llama/Llama-3.2-1B-Instruct

harbor modularmax args

Get/set extra args to pass to the ModularMax CLI.

# Show current args
harbor modularmax args

# Set args
harbor modularmax args '--max-length 2048'

harbor photoprism <command>

Manage PhotoPrism service and run PhotoPrism CLI commands.

See PhotoPrism CLI Documentation for available commands.

harbor photoprism model

Get/set the vision model for Ollama integration.

# Show the current vision model
harbor photoprism model

# Set vision model (for use with Ollama)
harbor photoprism model llava

PhotoPrism CLI Commands

# List configured vision models
harbor photoprism vision ls

# Run caption generation
harbor photoprism vision run -m caption

# Run label generation
harbor photoprism vision run -m labels

# Reset user password
harbor photoprism passwd admin

# List users
harbor photoprism users ls

Note: PhotoPrism must be running to execute CLI commands.
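A minimal flow, assuming PhotoPrism isn't part of your default services:

# Start the service first, then use the CLI
harbor up photoprism
harbor photoprism users ls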

harbor lmeval <command>

Manage LM Eval Harness for evaluating language models.

See lm-evaluation-harness for task details.

harbor lmeval results

Open the results directory in file manager.

# Open results folder
harbor lmeval results

harbor lmeval cache

Open the cache directory in file manager.

# Open cache folder
harbor lmeval cache

harbor lmeval type

Get/set the evaluation type.

# Show current type
harbor lmeval type

# Set type
harbor lmeval type local

harbor lmeval model

Get/set the model for evaluation.

# Show current model
harbor lmeval model

# Set model
harbor lmeval model meta-llama/Llama-3.2-3B

harbor bench <command>

Manage Harbor's integrated benchmark suite.

harbor bench results

Open the benchmark results directory.

# Open results folder
harbor bench results

harbor bench tasks

Get/set benchmark tasks to run.

# Show current tasks
harbor bench tasks

# Set tasks
harbor bench tasks "hellaswag,winogrande"

harbor bench debug

Get/set debug mode for benchmarks.

# Show debug status
harbor bench debug

# Enable debug
harbor bench debug true

harbor bench model

Get/set the model to benchmark.

# Show current model
harbor bench model

# Set model
harbor bench model gpt-4

harbor bench api

Get/set the API endpoint for benchmarks.

# Show current API
harbor bench api

# Set API
harbor bench api http://localhost:8000/v1

harbor bench key

Get/set the API key for benchmarks.

# Show current key
harbor bench key

# Set key
harbor bench key sk-...

harbor parllama <command>

Access Parllama CLI (Ollama GUI client).

# Launch Parllama
harbor parllama

harbor oterm <command>

Access oterm CLI (Ollama terminal UI).

# Launch oterm
harbor oterm

Harbor CLI Commands

harbor open <service>

Opens the service URL in the default browser. In case of API services, you'll see the response from the service's main endpoint.

# Without any arguments, will open
# the service from the ui.main config field
harbor open

# `harbor open` will now open hollama
# by default
harbor config set ui.main hollama

# Open a specific service
# using its handle
harbor open ollama

Additionally, harbor open can be configured to open a custom URL for a given handle. This is done via the <service>.open_url config field. For example, to open Ollama's /api/ps instead of the default / endpoint, you can run:

# Set the new config
harbor config set ollama.open_url http://localhost:33821/api/ps

# Now, running `harbor open ollama` will open the `/api/ps` endpoint
harbor open ollama

Note that custom open_url configs might be reset during Harbor updates.

harbor url <service>

Prints the URL of the service to the terminal.

# With default settings, this will print
# http://localhost:33831
harbor url llamacpp

Harbor will try to determine multiple additional URLs for the service:

# URL on local host
harbor url ollama

# URL on LAN
harbor url --lan ollama
harbor url --addressable ollama
harbor url --a ollama

# URL on Docker's intranet
harbor url -i ollama
harbor url --internal ollama

harbor qr <service>

Generates a QR code for the service URL and prints it in the terminal.

# This service will open by default
harbor config get ui.main

# Generate a QR code for default UI
harbor qr

# Generate a QR code for a specific service
# Makes little sense for non-UI services.
harbor qr ollama

Example

Example QR code in the terminal

harbor tunnel <service>

Alias: harbor t

Opens a cloudflared tunnel to the local instance of the service. Useful for sharing the service with others or accessing it from a remote location.

Warning

Exposing your services to the internet is dangerous. Be safe! It's a bad idea to expose a service without any authentication whatsoever.

# Open a tunnel to the default UI service
harbor tunnel

# Open a tunnel to a specific service
harbor tunnel ollama

# Stop all running tunnels
harbor tunnel down
harbor tunnel stop
harbor t s
harbor t d

The command will print the URL of the tunnel as well as the QR code for it.

harbor tunnels

Let's say that you are absolutely certain that you want a tunnel to be available all the time you run Harbor. You can set up a list of services that will be tunneled automatically.

# See list config docs
harbor tunnels --help

# Show the current list of services
harbor tunnels
harbor tunnels ls

# Add a new service to the list
harbor tunnels add ollama

# Remove a service from the list
harbor tunnels rm ollama
# Remove by index (zero-based)
harbor tunnels rm 0

# Remove all services from the list
# Don't confuse with stopping the tunnels (see above)
harbor tunnels rm
harbor tunnels clear

# Stop all running tunnels
harbor tunnel down
harbor tunnel stop
harbor t s
harbor t d

You can also edit this setting directly in the .env:

HARBOR_SERVICES_TUNNELS="webui"

Whenever harbor up is run, these tunnels will be established, and Harbor will print their URLs as well as QR codes in the terminal.

harbor link

Alias: harbor ln

Creates a symlink to the harbor.sh script in the user's home bin directory. This allows you to run the script from any directory.

# Puts the script in the bin directory
harbor ln

If you're me and have to run harbor hundreds of times a day, ln comes with a --short option.

# Also links the short alias
harbor ln --short

Configuration

You can adjust where harbor is linked and the names for the symlinks:

# Assuming it's not linked yet

# See the defaults
./harbor.sh config get cli.path
./harbor.sh config get cli.name
./harbor.sh config get cli.short

# Customize
./harbor.sh config set cli.path ~/bin
./harbor.sh config set cli.name ai
./harbor.sh config set cli.short ai

# Link
./harbor.sh ln --short

# Use
ai up
ai down

harbor unlink

An antipode to harbor link. Removes previously added symlinks. Note that this uses the current link configuration, so if it was changed since the link was added, it might not work as expected.

# Removes the symlink(s)
harbor unlink
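If the link configuration changed since linking, you may need to restore it first (a sketch with hypothetical previous values):

# Point cli.name back at what it was when you linked
./harbor.sh config set cli.name harbor
./harbor.sh unlink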

harbor defaults

Displays or sets the list of default services that will be started when running harbor up. Includes one LLM backend and one LLM frontend out of the box.

# Show the current default services
harbor defaults
harbor defaults ls

# Add a new default service
harbor defaults add tts

# Remove a default service
harbor defaults rm tts
# Remove by index (zero-based)
harbor defaults rm 0

# Remove all services from the default list
harbor defaults rm

# This is an alias for the
# services.default config field
harbor config set services.default 'webui ollama searxng'

# You can also configure it
# via the .env file
cat $(harbor home)/.env | grep HARBOR_SERVICES_DEFAULT

harbor aliases

Allows configuring additional aliases for the harbor run command. Any arbitrary shell command can be added as an alias. Aliases are managed in a key-value format, where the key is the alias name and the value is the command.

# Show the current list of aliases
harbor aliases
# Show aliases help
harbor aliases --help
# Same as above
harbor alias
harbor a

Aliases are managed by harbor config internally and are linked to the aliases config field.

# Will be empty, unless some aliases are configured
harbor config get aliases
# Placement in the `.env`:
cat $(harbor home)/.env | grep HARBOR_ALIASES

harbor aliases ls

Lists all the currently set aliases.

harbor aliases ls

# Running without any args
# defaults to "ls" behavior
harbor aliases
harbor alias
harbor a

harbor aliases set <alias> <command>

Adds a new alias to the list.

# Note the single quotes on the outside
# and double quotes on the inside
harbor alias set echo 'echo "I like $PWD!"'

You can then see the set alias:

harbor alias
echo: echo "I like $PWD!"

harbor alias get echo
# echo "I like $PWD!"

You can run aliases with harbor run:

harbor run echo
# I like /home/user/harbor

harbor aliases get <alias>

Obtain a command for a specific alias.

harbor alias get echo

harbor aliases rm <alias>

Removes an alias from the list.

harbor alias rm echo

harbor help

Print basic help information to the console.

harbor help
harbor --help

harbor version

Prints the current version of the Harbor script.

harbor version
harbor --version

harbor config

# Show the help for the config command
harbor config --help

Allows working with the Harbor configuration via the CLI. Mostly useful for automation and scripting, as the configuration can also be managed via the .env file variables.
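For example, a config value can feed a shell script (a sketch using the webui.host.port field shown below):

# Grab the port Open WebUI is exposed on
PORT=$(harbor config get webui.host.port)
echo "Open WebUI is on port $PORT"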

Translating CLI config fields to .env file variables:

# All three versions below point to the same
# environment variable in the .env file
webui.host.port -> HARBOR_WEBUI_HOST_PORT
webui_host_port -> HARBOR_WEBUI_HOST_PORT
WEBUI_HOST_PORT -> HARBOR_WEBUI_HOST_PORT

harbor config list

Alias: harbor config ls

# Show the current configuration
harbor config list

This will print all the configuration options and their values. The list can be quite long, so it's handy to pipe it to grep or less.

# Show the current configuration
harbor config list | grep WEBUI

You will see that configuration options have a namespace hierarchy; for example, everything related to the webui service will be under the WEBUI_ namespace.

Unprefixed variables will either be global or will be related to the Harbor CLI itself.

harbor config get <key>

# Get a specific configuration value
# All versions below are equivalent and will return the same value
harbor config get webui.host.port
harbor config get webui.host_port
harbor config get WEBUI_HOST.PORT
harbor config get webui.HOST_PORT

harbor config set <key> <value>

# Set a new configuration value
harbor config set webui.host.port 8080

harbor config reset

Resets the current .env configuration to its original form, based on the default.env file.

# You'll be asked to confirm the reset
harbor config reset

harbor config update

Will merge default.env with the current local .env in order to add new configuration options. Typically used after updating Harbor, when new variables are added. Most likely you won't need to run this manually, as it's done automatically after harbor update.

This process won't overwrite user-defined variables, only add new ones.

# Merge the default.env with the current .env
harbor config update

harbor config extra options

All subcommands support some extra options listed below.

# Point to another .env file
harbor config ls --env-file /path/to/another.env

# Mute logging
harbor config ls --silent

# Use custom prefix for env vars instead of "HARBOR_"
harbor config ls --prefix "HARBOR_WEBTOP_"

harbor profile

Alias: harbor profiles, harbor p

Allows creating and managing configuration profiles. It's backed by the .env file under the hood and allows you to switch between different configurations easily.

# Show the help for the profile command
harbor profile --help

Note

There are a few considerations when using profiles. Please read below.

  • When the profile is loaded, modifications are not saved by default and will be lost when switching to another profile (or reloading the current one). Use harbor profile save <name> to persist the changes after making them
  • Profiles are stored in the Harbor workspace and can be shared between different Harbor instances
  • Profiles are not versioned and are not guaranteed to work between different Harbor versions
  • You can also edit profiles as .env files in the workspace, it's not necessary to use the CLI
  • Profiles can be partial, meaning that you can only specify the options you want to change in a profile, without needing to include everything
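To illustrate the last point, a partial profile can be just a couple of lines. A hypothetical profile file, using variables shown elsewhere on this page:

# Contents of a minimal, partial profile
HARBOR_SERVICES_DEFAULT="webui llamacpp"
HARBOR_SERVICES_TUNNELS=""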

harbor profile list

Alias: harbor profile ls

Lists currently saved profiles.

harbor profile list
harbor profile ls

harbor profile add <name>

Alias: harbor profile save

Creates the new profile from the current configuration.

# Create a new profile named "dev"
harbor profile add dev

harbor profile use <name>

Alias: harbor profile load, harbor profile set

Loads the profile with the given name.

# Load the "dev" profile
harbor profile use dev

It's also possible to "import" a remote profile from a URL:

# Load the profile from a remote URL
harbor profile use https://example.com/path/to/harbor-profile.env

harbor profile remove <name>

Alias: harbor profile rm

Removes the profile with the given name.

# Remove the "dev" profile
harbor profile remove dev

harbor env

This is a helper command that brings the configuration experience of harbor config to service-specific environment variables that are not directly managed by the Harbor CLI.

This command writes to the override.env file for a given service; you can also do that manually if more convenient.

# List current override env vars
# Note that it doesn't include the ones from the main "harbor config"
harbor env <service>

# Get a specific env var
harbor env <service> <key>

# Set a new env var
harbor env <service> <key> <value>

The <key> supports the same naming convention as used by the harbor config command.

# All keys below are equivalent
# and will write to the same env var: "N8N_SECURE_COOKIE"
harbor env n8n N8N_SECURE_COOKIE # original notation
harbor env n8n n8n_secure_cookie # underscore notation
harbor env n8n n8n.secure_cookie  # mixed dot/underscore notation

Examples

# Show the current environment variables for the "n8n" service
harbor env n8n

# Get a specific environment variable
# for the dify service (LOG_LEVEL under the hood)
harbor env dify log.level

# Set a brand new environment variable for the service
# All three are equivalent
harbor env cmdh NODE_ENV development
harbor env cmdh node_env development
harbor env cmdh node.env development
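Since the command just writes to the service's override.env file, a manual edit achieves the same result (the exact file location within the service folder is an assumption):

# Same effect as `harbor env n8n N8N_SECURE_COOKIE false`
echo 'N8N_SECURE_COOKIE=false' >> $(harbor home)/n8n/override.env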

harbor history

Harbor remembers a number of the most recently executed CLI commands. You can search and re-run them via the harbor history command.

This is in addition to your shell's native history; it persists longer and is specific to the Harbor CLI.

asciinema recording of the history command

Use history.size config option to adjust the number of commands stored in the history.

# Get/set current history size
harbor history size
harbor history size 50

# Same, but with harbor config
harbor config get history.size
harbor config set history.size 50

History is stored in the .history file in the Harbor workspace; you can also edit or access it manually.

# Using a built-in helper
harbor history ls | grep ollama

# Manually, using the file
cat $(harbor home)/.history | grep ollama

You can clear the history with the harbor history clear command.

# Clear the history
harbor history clear

# Empty
harbor history ls

harbor dive <image>

Launches a Docker container with the Dive CLI to inspect the given image's layers and sizes.

Might be integrated with service handles in the future.

# Dive into the latest image of the webui service
harbor dive ghcr.io/open-webui/open-webui

harbor update

Pulls the latest version of the Harbor script from the repository.

# Pull the latest version of the Harbor script
harbor update

Note

Updates implementation is likely to change in the future Harbor versions.

harbor how

Note

Harbor needs to be running with the ollama backend to use the how command.

Harbor can actually tell you how to do things. It's a bit of a gimmick, but it's also surprisingly useful and fun.

# Ok, I'm cheesing a bit here, this is one of the examples
$ harbor how to ping a service from another service?
✔ Retrieving command... to ping a service from another service?
desired command: harbor exec webui curl $(harbor url -i ollama)
assistant message: The command 'harbor exec webui curl $(harbor url -i ollama)' will ping the Ollama service from within the WebUI service's container. This can be useful for checking network connectivity or testing service communication.

# But this is for real
$ harbor how to filter webui error logs with grep?
✔ Retrieving command... to filter webui error logs with grep?
setup commands: [ harbor logs webui -f ]
desired command: harbor logs webui | grep error
assistant message: You can filter webui error logs with grep like this. Note: the '-f' option is for follow and will start tailing new logs after current ones.

# And this is a bit of a joke
$ harbor how to make a sandwich?
✔ Retrieving command... to make a sandwich?
desired command: None (harbor is a CLI for managing LLM services, not making sandwiches)
assistant message: Harbor is specifically designed to manage and run Large Language Model services, not make physical objects like sandwiches. If you're hungry, consider opening your fridge or cooking an actual meal!

# And this is surprisingly useful
$ harbor how to run a command in the ollama container?
✔ Retrieving command... to run a command in the ollama container?
setup commands: [ docker exec -it ollama bash ]
desired command: harbor exec ollama <command>
assistant message: You can run any command in the running Ollama container. Make sure that command is valid and doesn't try to modify the container's state, because it might affect the behavior of Harbor services.

harbor find

A simple wrapper around the find command that allows you to search for files in the services' cache directories. Uses a substring match on the file path.

# Find all GGUFs
harbor find .gguf

# Use wildcards for more complex searches
harbor find Q8_0*.gguf

# Find all files from bartowski repos
harbor find bartowski

# Find all .safetensors files
harbor find .safetensors

harbor top

An alias for nvtop on the host system. Displays GPU usage and processes running on the GPU, including those in the containers of Harbor services.

Screenshot of nvtop

# Show the GPU usage
harbor top

harbor size

Walks all CACHE and WORKSPACE directories from harbor config ls and prints their sizes; additionally displays the size of the $(harbor home) directory.

# Show the sizes of the cache and workspace directories
harbor size

Harbor size:
----------------------------------
/home/user/.cache/huggingface: 277G
/home/user/.cache/llama.cpp: 64G
/home/user/.ollama: 241G
/home/user/.cache/vllm: 8.0K
/home/user/.cache/txtai: 92K
/home/user/.cache/nexa: 1.9G
/home/user/.parllama: 80K
./lmeval/cache: 2.5M
./langfuse/data: 89M
./comfyui/workspace: 33G
./omnichain: 108K
./jupyter/workspace: 1.5M
./n8n: 48M
./promptfoo/data: 356K
./webtop/data: 152M
./flowise/data: 176K
./langflow: 3.1M
./optillm/data: 4.0K
./kobold/data: 5.3G
./agent: 6.6M
/home/user/code/harbor: 72G

harbor dev <script>

Launch development scripts from the .scripts folder in Harbor's workspace. Requires deno to be installed and available in the system's PATH.

# Scaffold a template for a new service
harbor dev scaffold <service>

# Seed release values
harbor dev seed
