- x86- or ARM-based host device
- Host machine with at least 16 GB of VRAM
- At least ~40 GB of SSD space
SysML v2 related functionality is not available on ARM devices.
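The requirements above can be checked with a short pre-flight script. The sketch below is an illustration, not part of the framework: the function takes the architecture and free disk space as arguments so the thresholds from the list above are easy to verify, and VRAM is left out because there is no portable way to query it from a shell.

```shell
# Sketch of a pre-flight check for the requirements above.
check_host() {
  arch="$1"     # output of: uname -m
  free_gb="$2"  # free SSD space on the target filesystem, in GB
  case "$arch" in
    x86_64|amd64|arm64|aarch64) ;;
    *) echo "unsupported architecture: $arch" >&2; return 1 ;;
  esac
  if [ "$free_gb" -lt 40 ]; then
    echo "need ~40GB of free SSD space, have ${free_gb}GB" >&2
    return 1
  fi
  echo "host looks OK"
}

# Example with literal values; on a real host pass "$(uname -m)"
# and the free space reported by df.
check_host x86_64 50
```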
This framework offers two deployment types: Development and Headless.
- Development deployment is designed for power users who want to tinker with the framework through LM Studio's graphical user interface.
- Headless deployment is designed for production use and public access.
This deployment type supports the macOS operating system on x86 or ARM architecture.
Follow the instructions below to prepare your host device:
- Download and install LM Studio
- Download a compatible model from LM Studio repositories
Suggested model: mradermacher/grok-3-reasoning-gemma3-12b-distilled-hf
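LM Studio serves downloaded models over an OpenAI-compatible local API (port 1234 by default) once its server is started. The sketch below checks whether the suggested model appears in a /v1/models response; the sample JSON and the model_listed helper are illustrations only, standing in for the output of `curl -s http://localhost:1234/v1/models`.

```shell
# Check a /v1/models-style JSON response for a given model id.
model_listed() {
  printf '%s' "$1" | grep -q "\"id\": *\"$2\""
}

# Stand-in for: curl -s http://localhost:1234/v1/models
sample='{"data":[{"id":"grok-3-reasoning-gemma3-12b-distilled-hf"}]}'
if model_listed "$sample" "grok-3-reasoning-gemma3-12b-distilled-hf"; then
  echo "model is available"
fi
```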
This deployment type supports the Ubuntu 24.04 LTS operating system on x86 architecture.
Follow the instructions below to prepare your host machine:
- SSH into the machine and run the following commands as root (or prefix them with sudo):
apt update -y
apt upgrade -y
apt install -y git
git clone https://github.com/saracoglum98/fu27soma-ma.git
cd fu27soma-ma
chmod +x manage.sh
chmod +x init.sh
./init.sh headless
- Reboot the host machine
reboot
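The steps above can also be collected into one script. The sketch below only writes the script to a file so it can be reviewed before being run on the host as root; `set -e` aborts at the first failed command, and the script ends with the reboot.

```shell
# Write the headless setup steps from above into a reviewable script.
cat > headless-setup.sh <<'EOF'
#!/bin/sh
set -e
apt update -y
apt upgrade -y
apt install -y git
git clone https://github.com/saracoglum98/fu27soma-ma.git
cd fu27soma-ma
chmod +x manage.sh init.sh
./init.sh headless
reboot
EOF

# Syntax-check the script without executing it.
sh -n headless-setup.sh && echo "script syntax OK"
```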
You can use the alias llm-se to manage the codebase. Run llm-se help to see the available options.
Usage: llm-se [command]
Commands:
help Show this help message
build Build all services
seed Seed sample data
start Start all services
stop Stop all services
restart Restart all services
status Show the status of all services
destroy Destroy all services
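The commands above are plain subcommands, so a dispatcher like the one behind llm-se typically boils down to a case statement. The sketch below is hypothetical and not the real manage.sh, which wires these subcommands to the actual services.

```shell
# Hypothetical sketch of an llm-se-style subcommand dispatcher.
llm_se() {
  cmd="${1:-help}"
  case "$cmd" in
    help)
      echo "Usage: llm-se [command]" ;;
    build|seed|start|stop|restart|status|destroy)
      echo "running: $cmd" ;;  # placeholder for the real action
    *)
      echo "unknown command: $cmd" >&2
      return 1 ;;
  esac
}

llm_se help
```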
For further configuration of an individual microservice, you can examine its .env file and Dockerfile under the layers folder. This is not recommended unless you know what you are doing.
The .env file must be edited depending on the deployment type. For a Development deployment, keep the defaults:
NEXT_PUBLIC_HOST="localhost"
LMSTUDIO_HOST="host.docker.internal"
For a Headless deployment, assuming the public IP of your host machine is 123.123.123.123, set the related environment variables as shown below.
NEXT_PUBLIC_HOST="123.123.123.123"
LMSTUDIO_HOST="123.123.123.123"
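Editing the two variables by hand works; it can also be scripted. The sketch below assumes only the variable names shown above, and creates a .env with the Development defaults purely so the example is self-contained — on a real host you would run just the sed lines against the existing file.

```shell
# For illustration only: a .env with the Development defaults.
cat > .env <<'EOF'
NEXT_PUBLIC_HOST="localhost"
LMSTUDIO_HOST="host.docker.internal"
EOF

# Point both variables at the host's public IP for a Headless deployment.
PUBLIC_IP="123.123.123.123"  # replace with your machine's public IP
sed -i "s|^NEXT_PUBLIC_HOST=.*|NEXT_PUBLIC_HOST=\"$PUBLIC_IP\"|" .env
sed -i "s|^LMSTUDIO_HOST=.*|LMSTUDIO_HOST=\"$PUBLIC_IP\"|" .env
cat .env
```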
Run the command matching your deployment type to build the framework. The build can take 5 to 30 minutes, depending on your host machine and internet connection.
- Development: llm-se build
- Headless: llm-se build headless
After a successful build, you will see a message similar to the following.
🪜 Preparing to build
🌍 Creating network
🛠️ Setting environment variables
🚀 Building knowledge
🚀 Building llm
🚀 Building communication
🚀 Building management
🚀 Building sysml
💨 Initializing services
🧹 Clearing build related files
⌛️ Build took 1 minutes 21 seconds
🎉 All services are running
🌐 Access the application at http://localhost:3000
Run the following command to import sample data into the framework:
llm-se seed
You can now access the application using:
- Development deployment: http://localhost:3000
- Headless deployment: http://{YOUR_IP}:3000
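The two URLs above differ only in the host part. A small helper makes the rule explicit; it is hypothetical and assumes the default port 3000 shown above.

```shell
# Print the application URL for a given deployment type.
# For headless deployments the second argument is the host's public IP.
app_url() {
  if [ "$1" = "headless" ]; then
    echo "http://$2:3000"
  else
    echo "http://localhost:3000"
  fi
}

app_url development
app_url headless 123.123.123.123
```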