Brick Assistant is an AI-powered tool designed to help you query and interact with your building data sources using natural language.
The Brick Assistant can answer questions like:
- "What is the average temperature in the building X over the last week?"
- "How many VAVs are there in the building?"
- "What is the building with the biggest area?"
- "List all the AHUs in the building and their associated zones."
This project uses UV for dependency management.

Clone the repository, then:

- Install UV.
- Sync the dependencies:

```bash
uv sync
```

Syncing ensures that all project dependencies are installed and up to date with the lockfile. If the project virtual environment (`.venv`) does not exist, it will be created.
Since the project also imports its own package and modules, you need to install it in editable mode:

```bash
uv pip install -e .
```

You can then run the assistant from Python:

```python
from compiled_graphs import wuerth_vanilla_graph_devRDF, wuerth_vanilla_graph_devRDF_compiled

g = wuerth_vanilla_graph_devRDF
question = """what building has the smallest area"""
answers = g.run(input_data={"user_prompt": question}, stream=True)
for answer in answers:
    print(answer)
```

This project was built to be modular and extensible. Let's dive into the main components:
configs.py: This module contains configuration settings for the project, including API keys and other constants. The required fields are visible in the `.env.example` file, which must be copied and renamed to `.env` and filled in with the appropriate values.
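For illustration only, assuming pydantic-settings and hypothetical field names (the real `configs.py` may differ), the settings could look something like:

```python
from pydantic import Field
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    """Hypothetical settings model; values are read from the .env file."""
    model_config = SettingsConfigDict(env_file=".env")

    openai_api_key: str = Field(..., description="OpenAI API key")
    database_uri: str = Field(..., description="Database connection URI")
```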
llm_models.py: This module contains helper functions to initialize and configure LLM models.
Currently, it supports OpenAI models, but it can be easily extended to include other providers.
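A minimal sketch of such a helper, assuming the LangChain OpenAI integration (the function name and default model are illustrative):

```python
from langchain_openai import ChatOpenAI

def get_llm(model: str = "gpt-4o-mini", temperature: float = 0.0) -> ChatOpenAI:
    """Return a configured OpenAI chat model; swapping providers would only
    require changing this factory."""
    return ChatOpenAI(model=model, temperature=temperature)
```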
This package contains the nodes of the graph along with the tools and edges that connect them.
Edges are embedded directly in the tool definitions via LangGraph's `Command` object, allowing the agent to reason about data in a structured way.
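As an illustration of this pattern (the tool, node names, and returned values below are hypothetical, not taken from the codebase), a tool can return a `Command` that both updates the state and names the next node, so the edge lives inside the tool rather than in a separate `add_edge` call:

```python
from typing import Annotated

from langchain_core.messages import ToolMessage
from langchain_core.tools import InjectedToolCallId, tool
from langgraph.types import Command

@tool
def get_building_area(
    building_name: str,
    tool_call_id: Annotated[str, InjectedToolCallId],
) -> Command:
    """Hypothetical tool: look up a building's area and hand control back to the agent."""
    area = 1234.5  # placeholder for the real predefined SPARQL/SQL lookup
    return Command(
        goto="agent",  # the edge is embedded here instead of in the graph definition
        update={
            "messages": [
                ToolMessage(
                    f"{building_name} has an area of {area} m^2",
                    tool_call_id=tool_call_id,
                )
            ]
        },
    )
```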
- `functions.py`
  Core function nodes of the graph.
  Each function represents a specific operation (e.g., database query, data processing).
  Workflow (see the node sketch after this list):
  - Receive input (from a previous node or the user).
  - Process it (logic, query, or calculation).
  - Return output (to the next node or the user).
- `prompts.py`
  Prompt templates to guide AI responses.
- `rdf_query.py`
  Handles SPARQL query generation (see the sketch after this list).
  - Uses a library of predefined queries (expandable).
  - The LLM only decides which query to run and with which parameters.
  - Queries are executed via `rdflib`, with a safe-lock mechanism to prevent concurrent graph access.
- `tools.py`
  Early prototype of a `BrickExploration` tool for graph exploration & querying.
  - Not used in the current implementation (kept for reference).
  - May be reintroduced if predefined queries are insufficient.
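As a rough illustration of the receive → process → return pattern described for `functions.py` (the node name, state shape, and logic below are hypothetical, not taken from the codebase):

```python
from langgraph.graph import MessagesState

def summarize_last_message(state: MessagesState) -> dict:
    """Hypothetical function node: receive, process, return."""
    last = state["messages"][-1]            # receive input from the previous node
    summary = f"Processed: {last.content}"  # process it (logic, query, or calculation)
    return {"messages": [("ai", summary)]}  # return output to the next node / user
```

And a minimal sketch of the predefined-query approach described for `rdf_query.py`, assuming a module-level lock around the `rdflib` graph (the query library and SPARQL text are illustrative placeholders):

```python
import threading
from rdflib import Graph

_graph_lock = threading.Lock()  # safe-lock: prevents concurrent access to the graph

# Illustrative, expandable library of predefined queries; the LLM only picks one
# and supplies its parameters.
PREDEFINED_QUERIES = {
    "list_buildings_with_area": """
        PREFIX brick: <https://brickschema.org/schema/Brick#>
        SELECT ?building ?area
        WHERE { ?building a brick:Building ; brick:area ?area . }
    """,
}

def run_predefined_query(graph: Graph, query_name: str) -> list:
    """Execute one of the predefined SPARQL queries while holding the lock."""
    with _graph_lock:
        return list(graph.query(PREDEFINED_QUERIES[query_name]))
```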
Core module of the project. This is where the workflow is defined and all the other pieces are glued together.
- `abstract_rdf.py`
  The skeleton of our graph. Its main purpose is to make the tools available to the actual graph. Specifically, it instantiates the SQL set of tools and the RDF query tool. The workflow is intentionally not defined here, leaving open the possibility of building different graphs with different workflows on the same set of tools.
- `wuerth_graph_rdf.py`
  The actual graph used in the project. It inherits from `abstract_rdf.py` and defines the workflow using the tools defined there. A sketch of this pattern follows below.
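A hypothetical sketch of this split, assuming LangGraph's `StateGraph` (class names, the trivial agent node, and the wiring are illustrative; the real modules define a richer workflow):

```python
from abc import ABC, abstractmethod
from langgraph.graph import StateGraph, MessagesState, START, END

class AbstractRDFGraph(ABC):
    """Skeleton: owns the shared tool set but defines no workflow."""
    def __init__(self, sql_tools: list, rdf_query_tool):
        self.tools = [*sql_tools, rdf_query_tool]

    @abstractmethod
    def build(self):
        """Concrete graphs decide how the shared tools are wired together."""

class WuerthGraphRDF(AbstractRDFGraph):
    """Concrete graph: one possible workflow built on the same tools."""
    def build(self):
        def agent(state: MessagesState) -> dict:
            # placeholder: the real node would call an LLM bound to self.tools
            return {"messages": []}

        builder = StateGraph(MessagesState)
        builder.add_node("agent", agent)
        builder.add_edge(START, "agent")
        builder.add_edge("agent", END)
        return builder.compile()
```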
This module contains evaluation scripts to test the performance and accuracy of the Brick Assistant.
- `dataset_ttl.py`
  A series of questions and answers used to test the graph on specific predefined questions, in this case regarding the Würth buildings.
- `grader.py`
  A simple evaluation script that uses a separate LLM to grade the answers provided by the Brick Assistant against the reference answers in the dataset, as a pass/fail grade. The LLM is instructed as if it were a teacher correcting a student's exam.

▶️ To actually perform the evaluation, launch the eval script named `eval_rdf.py`, which loads the graph and the dataset and runs the grader on each question/answer pair.
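A rough sketch of this LLM-as-judge step (the prompt wording, model choice, and function name are illustrative, not the project's actual grader):

```python
from langchain_openai import ChatOpenAI

GRADER_PROMPT = (
    "You are a teacher correcting a student's exam.\n"
    "Question: {question}\n"
    "Reference answer: {reference}\n"
    "Student answer: {answer}\n"
    "Reply with exactly PASS or FAIL."
)

def grade(question: str, reference: str, answer: str) -> bool:
    """Return True if the separate grading LLM judges the answer correct."""
    grader_llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # illustrative model choice
    verdict = grader_llm.invoke(
        GRADER_PROMPT.format(question=question, reference=reference, answer=answer)
    )
    return verdict.content.strip().upper().startswith("PASS")
```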
- Set up environment variables

  - Copy the provided `.env.example` file to `.env`.
  - Fill in the required values.
  - Currently, the only supported LLM provider is OpenAI, so make sure to set your OpenAI API key in the `.env` file.

  💡 You can also add your LangSmith API key (recommended) for debugging and tracing graph execution. Get it from LangSmith after creating an account.
- Run the assistant

  You have two options:

  a. Run with LangGraph Studio

  - After activating the virtual environment:

    ```bash
    langgraph dev
    ```

  - Or, without activating the environment, using uv:

    ```bash
    uv run langgraph dev
    ```

  This will start a local web server for LangGraph Studio where you can:

  - Interact with the graph visually.
  - Send prompts.
  - View real-time execution traces.

  b. Run directly in Python

  - You can also run the assistant from a Python script or a Jupyter notebook.
  - Usage examples are also provided in the notebook `basic_usage_examples.ipynb`.
  - After activating the virtual environment:
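  The example below mirrors the snippet shown earlier in this README:

  ```python
  from compiled_graphs import wuerth_vanilla_graph_devRDF

  g = wuerth_vanilla_graph_devRDF
  question = """what building has the smallest area"""
  answers = g.run(input_data={"user_prompt": question}, stream=True)
  for answer in answers:
      print(answer)
  ```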
📦 Packaging and File Paths

When preparing to make Brick Assistant installable as a package, it's critical to respect the file paths that the system relies on. This ensures the assistant can work not only with Würth data but also with any dataset.

Key paths to maintain:

- Database connection string:

  ```python
  database_uri: str = Field(..., description="Database connection URI")
  ```

- Metadata file:

  ```python
  METADATA_FILE = "data/metadataloc.json"
  ```

- TTL files path:

  ```python
  TTL_FILES_PATH = Path("data/ttl_files")
  ```

- TTL file naming convention:
  All TTL files must follow a strict naming convention to ensure consistent file resolution:

  ```python
  file_path = f"data/ttl_files/bui_{building_name.upper()}.ttl"
  ```

  - `bui_` is a required prefix.
  - `{building_name}` is a placeholder for the building name in uppercase.
  - `.ttl` is the required file extension.

  Example: for a building named "xyz", the corresponding TTL file should be named `bui_XYZ.ttl`.
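For example, a small helper following this convention could resolve the file path from a building name (the function name is illustrative):

```python
from pathlib import Path

TTL_FILES_PATH = Path("data/ttl_files")

def ttl_path_for(building_name: str) -> Path:
    """Resolve the TTL file for a building using the bui_<NAME>.ttl convention."""
    return TTL_FILES_PATH / f"bui_{building_name.upper()}.ttl"

print(ttl_path_for("xyz"))  # data/ttl_files/bui_XYZ.ttl
```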
- 📚 Expand RDF query library
  Cover a broader range of use cases with additional predefined queries.
- 🛠️ Introduce fallback mechanism
  Re-enable the `BrickExploration` tool when predefined queries cannot answer a user's question.
- 👩‍🏫 Add human-in-the-loop feedback
  Incorporate user feedback for continuous refinement and improvement.
- 🤖 Support additional LLM providers
  Extend compatibility beyond OpenAI for more flexibility.
- 📦 Make it a package
- 🌐 Update the web app that depends on the assistant
  Currently it uses the old token-expensive, inefficient version, which can be found here.
- 🧪 MCP server exploration
  See how to build an MCP server on top of the assistant.


