This module is the API server of Wren Engine, built on top of FastAPI. It provides several APIs for executing SQL queries: a query is planned by wren-core, transpiled by sqlglot, and then executed through ibis against the target database.
```
just docker-build                                   # current platform (fast: Rust built locally)
just docker-build linux/amd64                       # single platform
just docker-build linux/amd64,linux/arm64 --push    # multi-arch (requires --push)
```

| Scenario | Rust compilation | Speed |
|---|---|---|
| Target matches host platform | Built locally via maturin + zig | Fast (reuses host cargo cache) |
| Cross-platform or multi-arch | Built inside Docker via BuildKit cache mounts | Slow on first build, incremental after |
Local build prerequisites (single-platform matching your host):

```
brew install zig
rustup target add aarch64-unknown-linux-gnu   # Apple Silicon
rustup target add x86_64-unknown-linux-gnu    # Intel Mac
```

Once set up, only the first build is slow. Subsequent builds reuse the host cargo cache and take a few minutes.
Note: Multi-arch builds (`linux/amd64,linux/arm64`) always build Rust inside Docker and require `--push` to export the image (Docker cannot load multi-arch images locally).
You can follow the steps below to run the Java engine and ibis.
Wren Engine is migrating to wren-core. However, we still recommend starting the Java engine to enable the query fallback mechanism.
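The fallback behaves roughly like the sketch below (hypothetical helper names; the real implementation lives in the ibis-server codebase): a query is first attempted on the v3 engine and, on failure, retried on the v2 Java engine.

```python
# Sketch of the query fallback mechanism. `run_v3` and `run_v2` are
# hypothetical callables standing in for the real engine clients.
def execute_with_fallback(sql, run_v3, run_v2, log=print):
    try:
        return run_v3(sql)
    except Exception as err:
        # Mirrors the "Failed to execute v3 query, fallback to v2"
        # warning seen in the server logs.
        log(f"Failed to execute v3 query, fallback to v2: {err}")
        return run_v2(sql)
```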
Create a `compose.yaml` file and add the following content. Edit the environment variables if needed (see Environment Variables).
```yaml
services:
  ibis:
    image: ghcr.io/canner/wren-engine-ibis:latest
    ports:
      - "8000:8000"
    environment:
      - WREN_ENGINE_ENDPOINT=http://java-engine:8080
  java-engine:
    image: ghcr.io/canner/wren-engine:latest
    expose:
      - "8080"
    volumes:
      - ./etc:/usr/src/app/etc
```

Create an `etc` directory and create `config.properties` inside it:
```
mkdir etc
cd etc
vim config.properties
```

Add the following content to the `config.properties` file:

```
node.environment=production
```

Run Docker Compose:

```
docker compose up
```

Set the OpenTelemetry environment variables to enable tracing logs. See Tracing with Jaeger to start up a Jaeger server.
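With both services up, you can call the query API from Python. This is a sketch, assuming the default port above; the body fields (`connectionInfo`, `manifestStr`, `sql`) follow the request log shown in the migration section of this README, and the manifest and connection values are placeholders.

```python
import base64
import json
import urllib.request

# Placeholder manifest describing one model. `manifestStr` is
# base64-encoded JSON, as seen in the server's request logs.
manifest = {
    "catalog": "wren",
    "schema": "public",
    "models": [{
        "name": "orders",
        "tableReference": {"schema": "public", "name": "orders"},
        "columns": [{"name": "orderkey", "type": "varchar",
                     "expression": "cast(o_orderkey as varchar)"}],
    }],
}

body = {
    # Placeholder connection details; adjust for your database.
    "connectionInfo": {"host": "localhost", "port": 5432, "user": "postgres",
                       "password": "postgres", "database": "tpch"},
    "manifestStr": base64.b64encode(json.dumps(manifest).encode()).decode(),
    "sql": "SELECT orderkey FROM orders LIMIT 1",
}

req = urllib.request.Request(
    "http://localhost:8000/v3/connector/postgres/query",
    data=json.dumps(body).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req) would send the query once the services are up.
```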
Requirements:
- Python 3.11
- casey/just
- poetry
- Rust and Cargo
Clone the repository
git clone git@github.com:Canner/wren-engine.gitStart Java engine for the feature rewriting-queries
cd example
docker compose --env-file .env upNavigate to the ibis-server directory
cd ibis-serverCreate .env file and fill in the environment variables (see Environment Variables)
vim .envInstall the dependencies
just installRun the server
just runSpark-related tests require a local Spark cluster with Spark Connect enabled. Our GitHub CI already handles this automatically, but you must start Spark manually when running tests locally.
Prerequisites:
- Docker & Docker Compose
- Python dependencies installed (`just install`)
- Java engine running (same as other integration tests)

- Start the Spark Cluster

  From the ibis-server directory:

  ```
  cd tests/routers/v3/connector/spark
  docker compose up -d --build
  ```

  Wait until Spark Connect is ready. You should see this in the logs:

  ```
  docker logs spark-connect
  # Spark Connect server started
  ```

  - Spark Master: `spark://localhost:7077`
  - Spark Connect: `localhost:15002`

- Run Spark Tests

  Go back to the ibis-server directory and run:

  ```
  just test spark
  ```

- Stop the Spark Cluster (Cleanup)

  After tests finish:

  ```
  cd tests/routers/v3/connector/spark
  docker compose down -v
  ```
Doris-related tests require a running Apache Doris instance. Our GitHub CI already handles this automatically, but you must start Doris manually when running tests locally.
Prerequisites:
- Docker & Docker Compose
- Python dependencies installed (`just install`)
- `pymysql` installed in the dev environment (already included in dev dependencies)
- Start the Doris Container

  From the ibis-server directory:

  ```
  cd tests/routers/v3/connector/doris
  docker compose up -d
  ```

  The container uses `apache/doris:4.0.3-all-slim` (an all-in-one image with FE + BE).

  ⚠️ The all-in-one Doris image requires sufficient memory (at least 8 GB recommended). If you see `MEM_ALLOC_FAILED` errors, increase Docker's memory limit.
  Wait until Doris is healthy. Check the status:

  ```
  mysql -h 127.0.0.1 -P 9030 -uroot -e "SHOW BACKENDS\G" | grep "Alive"
  # Alive: true
  ```

- Update Connection Info (if needed)

  The default connection in `tests/routers/v3/connector/doris/conftest.py`:

  ```python
  DORIS_HOST = "127.0.0.1"
  DORIS_PORT = 9030
  DORIS_USER = "root"
  DORIS_PASSWORD = ""
  ```

  Adjust these values if your Doris instance has different credentials.
  If you already have a remote Doris cluster, update the connection constants in `conftest.py`:

  ```python
  DORIS_HOST = "<your-doris-host>"
  DORIS_PORT = 9030
  DORIS_USER = "<user>"
  DORIS_PASSWORD = "<password>"
  ```

- Run Doris Tests

  Go back to the ibis-server directory and run:

  ```
  just test doris
  ```

- Stop the Doris Container (Cleanup)

  After tests finish:

  ```
  cd tests/routers/v3/connector/doris
  docker compose down -v
  ```

Install the dependencies:

```
just install
```

Launch a CLI with an active Wren session using the following command:
```
python -m wren local_file <mdl_path> <connection_info_path>
```
This will create an interactive CLI environment with a `wren.session.Context` instance for querying your database.
```
Session created: Context(id=1352f5de-a8a7-4342-b2cf-015dbb2bba4f, data_source=local_file)
You can now interact with the Wren session using the 'wren' variable:
> task = wren.sql('SELECT * FROM your_table').execute()
> print(task.results)
> print(task.formatted_result())
Python 3.11.11 (main, Dec 3 2024, 17:20:40) [Clang 16.0.0 (clang-1600.0.26.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>>
```
Launch a Jupyter notebook server with Wren engine dependencies using Docker:
```
docker run --rm -p 8888:8888 ghcr.io/canner/wren-engine-ibis:latest jupyter
```
Explore the demo notebooks to learn how to use the Wren session context:
http://localhost:8888/lab/doc/tree/notebooks/demo.ipynb
We use OpenTelemetry as our tracing framework. Refer to OpenTelemetry zero-code instrumentation to install the required dependencies. Then, use the following `just` command to start the Ibis server, which exports tracing logs to the console:
```
just run-trace-console
```
OpenTelemetry zero-code instrumentation is highly configurable. You can set the necessary exporters to send traces to your tracing services.
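For example, the exporter can be pointed at an OTLP-compatible backend through the standard `OTEL_*` environment variables. A minimal sketch (the service name is illustrative; the endpoint assumes the Jaeger container's default OTLP gRPC port, 4317):

```shell
# Illustrative values; adjust for your tracing backend.
export OTEL_SERVICE_NAME=wren-ibis-server                  # assumed service name
export OTEL_TRACES_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317   # Jaeger OTLP gRPC
```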
Metrics we are tracing right now
- Follow the Jaeger official documentation to start Jaeger in a container. Use the following command:

  ```
  docker run --rm --name jaeger \
    -p 16686:16686 \
    -p 4317:4317 \
    -p 4318:4318 \
    -p 5778:5778 \
    -p 9411:9411 \
    jaegertracing/jaeger:2.5.0
  ```

- Install the OpenTelemetry zero-code instrumentation dependencies:

  ```
  pip install opentelemetry-distro opentelemetry-exporter-otlp
  opentelemetry-bootstrap -a install
  ```
- Use the following `just` command to start the `ibis-server` and export tracing logs to Jaeger:

  ```
  just run-trace-otlp
  ```
- Open the Jaeger UI at http://localhost:16686 to view the tracing logs for your requests.
Please see CONTRIBUTING.md for more information.
Wren Engine is migrating to the v3 API (powered by Rust and DataFusion). However, some SQL issues remain during the migration. If you find the migration message in your logs, we hope you can provide the message and related information to the Wren AI team. Just raise an issue on GitHub or contact us in the Discord channel.
The message would look like the following log:
```
2025-03-19 22:49:08.788 | [62781772-7120-4482-b7ca-4be65c8fda96] | INFO | __init__.dispatch:14 - POST /v3/connector/postgres/query
2025-03-19 22:49:08.788 | [62781772-7120-4482-b7ca-4be65c8fda96] | INFO | __init__.dispatch:15 - Request params: {}
2025-03-19 22:49:08.789 | [62781772-7120-4482-b7ca-4be65c8fda96] | INFO | __init__.dispatch:22 - Request body: {"connectionInfo":"REDACTED","manifestStr":"eyJjYXRhbG9nIjoid3JlbiIsInNjaGVtYSI6InB1YmxpYyIsIm1vZGVscyI6W3sibmFtZSI6Im9yZGVycyIsInRhYmxlUmVmZXJlbmNlIjp7InNjaGVtYSI6InB1YmxpYyIsIm5hbWUiOiJvcmRlcnMifSwiY29sdW1ucyI6W3sibmFtZSI6Im9yZGVya2V5IiwidHlwZSI6InZhcmNoYXIiLCJleHByZXNzaW9uIjoiY2FzdChvX29yZGVya2V5IGFzIHZhcmNoYXIpIn1dfV19","sql":"SELECT orderkey FROM orders LIMIT 1"}
2025-03-19 22:49:08.804 | [62781772-7120-4482-b7ca-4be65c8fda96] | WARN | connector.query:61 - Failed to execute v3 query, fallback to v2: DataFusion error: ModelAnalyzeRule
caused by
Schema error: No field named o_orderkey.
```
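When reporting such an issue, it can help to include the decoded model definition: `manifestStr` is base64-encoded JSON, so it can be inspected with the standard library. The sketch below uses the manifest from the log above.

```python
import base64
import json

# manifestStr copied from the request log above: base64-encoded JSON
# describing the models involved in the failing query.
manifest_str = "eyJjYXRhbG9nIjoid3JlbiIsInNjaGVtYSI6InB1YmxpYyIsIm1vZGVscyI6W3sibmFtZSI6Im9yZGVycyIsInRhYmxlUmVmZXJlbmNlIjp7InNjaGVtYSI6InB1YmxpYyIsIm5hbWUiOiJvcmRlcnMifSwiY29sdW1ucyI6W3sibmFtZSI6Im9yZGVya2V5IiwidHlwZSI6InZhcmNoYXIiLCJleHByZXNzaW9uIjoiY2FzdChvX29yZGVya2V5IGFzIHZhcmNoYXIpIn1dfV19"

# Re-add padding in case it was stripped, then decode and pretty-print.
padded = manifest_str + "=" * (-len(manifest_str) % 4)
manifest = json.loads(base64.b64decode(padded))
print(json.dumps(manifest, indent=2))
```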
Wren Engine is migrating to the Rust version now. The Wren AI team would appreciate it if you could provide the error messages and related logs.
- Identify the Issue: Look for the migration message in your log files.
- Gather Information: Collect the error message and any related logs.
- Report the Issue:
- GitHub: Open an issue on our GitHub repository and include the collected information.
- Discord: Join our Discord channel and share the details with us.
Providing detailed information helps us diagnose and fix issues more efficiently. Thank you for your cooperation!