KE Local Dev Env – Intel Arc / Meteor Lake Coding Agents

This repository contains a small, opinionated setup for running local AI-assisted development workflows on a laptop configuration such as:

  • Lenovo ThinkBook 16 G7 IML
  • Intel® Core™ Ultra 7 155H
  • Intel Arc integrated GPU (Meteor Lake-P)
  • Intel Meteor Lake NPU (Intel AI Boost)
  • Ubuntu Linux

The goal is to:

  • Use Ollama with Vulkan (Intel Arc GPU) when possible.
  • Expose simple, well-documented shell helpers (ke_*) that coding agents (Claude Code CLI, Gemini CLI, Codex CLI, GitHub Copilot, etc.) can rely on.
  • Provide an AGENT.md so agents understand your environment and tools.
  • Offer examples for:
    • Local LLM via Ollama
    • Docker + Vulkan (Intel Arc)
    • LM Studio GPU usage
    • CLI coding agents orchestration

1. Repository structure

.
├── .ke-dev-env.example     # Shell helpers and environment variables
├── AGENT.md                # Description of the environment for AI agents
├── README.md               # This file
└── examples
    ├── 01-ollama-local-code
    │   └── README.md       # Using ke_local_code + Ollama
    ├── 02-docker-vulkan
    │   ├── Dockerfile      # Minimal Vulkan-capable base image
    │   └── README.md       # How to run containers with GPU
    ├── 03-lmstudio
    │   └── README.md       # LM Studio config for Intel Arc
    └── 04-agents-cli
        └── README.md       # Claude Code, Gemini CLI, Codex + ke_* helpers

2. Prerequisites

On the host (Ubuntu):

  • Git
  • Bash or Zsh
  • Docker + Docker Compose
  • Ollama (>= 0.13.x)
  • Vulkan runtime:
    • mesa-vulkan-drivers
    • vulkan-tools
  • Optionally:
    • LM Studio
    • Claude Code CLI
    • Gemini CLI
    • codex-cli / GitHub Copilot

Install Vulkan basics on Ubuntu:

sudo apt update
sudo apt install mesa-vulkan-drivers vulkan-tools
vulkaninfo | grep -i 'deviceName' -m 5

You should see an entry like:

deviceName     : Intel(R) Arc Graphics

3. Setup (from git clone)

3.1. Clone and install the local env file

git clone https://github.com/klever-engineering/ke-local-dev-env.git
cd ke-local-dev-env

cp .ke-dev-env.example ~/.ke-dev-env

Now edit ~/.ke-dev-env and adjust at least the following (a sketch of the file appears after this list):

  • OLLAMA_MODELS
    The directory where your Ollama models live.
    Example: /media/youruser/DATA/ollama-models

  • Any project-specific behaviour inside:

    • ke_run_tests
    • ke_run_app

3.2. Source the env from your shell

For bash:

echo 'source ~/.ke-dev-env' >> ~/.bashrc
source ~/.bashrc

For zsh:

echo 'source ~/.ke-dev-env' >> ~/.zshrc
source ~/.zshrc

4. Local quickstart (from a downloaded zip)

If you downloaded a .zip of this repository instead of cloning via Git:

  1. Unzip and enter the directory:

    unzip ke-local-dev-env.zip
    cd ke-local-dev-env
  2. Copy the example env file:

    cp .ke-dev-env.example ~/.ke-dev-env
  3. Edit ~/.ke-dev-env:

    • Set OLLAMA_MODELS to your actual models directory (for example /media/youruser/DATA/ollama-models).
    • Adjust ke_run_tests and ke_run_app to match your usual project commands if needed.
    • Optionally change the default model used by ke_local_code.
  4. Hook it into your shell:

    For bash:

    echo 'source ~/.ke-dev-env' >> ~/.bashrc
    source ~/.bashrc

    For zsh:

    echo 'source ~/.ke-dev-env' >> ~/.zshrc
    source ~/.zshrc
  5. Open a new terminal and test:

    ke_hw_info        # Show CPU / GPU / NPU summary
    ke_run_tests      # Run tests in a project with pytest
    ke_run_app        # Run your Docker Compose app
    ke_local_code "Explain what this environment does."

If these commands work without errors, your local environment is ready for your coding agents.


5. Quick usage

Once the shell has sourced ~/.ke-dev-env, you should have:

ke_hw_info        # Show basic CPU/GPU/NPU info
ke_local_code     # Use a local code-focused LLM via Ollama
ke_run_tests      # Run your test suite (pytest by default)
ke_run_app        # Run your app via docker compose

5.1. Test Ollama with Vulkan

Start the Ollama server:

export OLLAMA_VULKAN=1
ollama serve

Check the logs: you should see a line with a non-zero "total vram" value, indicating the GPU backend is active.

Then, in another terminal:

ke_local_code "Write a small Python function that adds two numbers and explain it."

You can customize the default model used by ke_local_code in ~/.ke-dev-env.
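Under the hood, ke_local_code can be as simple as a wrapper around ollama run. The sketch below is illustrative rather than the shipped implementation; KE_LOCAL_MODEL is the same hypothetical variable as in the setup sketch above.

ke_local_code() {
    # Send a one-shot prompt to a local code model via Ollama.
    # Falls back to codellama:7b-code, one of the models mentioned in the examples.
    ollama run "${KE_LOCAL_MODEL:-codellama:7b-code}" "$1"
}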


6. Examples

Check the examples folder for concrete, documented scenarios:

  1. Ollama local code:
    How to use ke_local_code and small code models (e.g. codellama:7b-code) in a local loop.

  2. Docker + Vulkan:
How to run containers that see your Intel Arc GPU through /dev/dri and Vulkan (see the sketch after this list).

  3. LM Studio:
    How to configure LM Studio to use Vulkan on Intel Arc, including suggestions for model sizes and quantization.

  4. CLI coding agents:
    How to integrate Claude Code, Gemini CLI, and codex-cli with ke_* helpers so they act as “brains in the cloud, hands on your machine”.
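For the Docker + Vulkan case (item 2 above), the core trick is passing the DRI device nodes into the container. A minimal sketch, assuming the image built from examples/02-docker-vulkan/Dockerfile includes vulkan-tools; the tag ke-vulkan-base is a hypothetical name:

# Build the example image (tag name is illustrative).
docker build -t ke-vulkan-base examples/02-docker-vulkan

# Expose the Intel Arc GPU to the container via /dev/dri, then check Vulkan sees it.
docker run --rm -it --device /dev/dri ke-vulkan-base \
    sh -c "vulkaninfo | grep -i deviceName"

If the same deviceName line appears as on the host, the container has GPU access. Depending on your setup, you may also need to match the host's render group inside the container for non-root users.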


7. Adapting for other hardware

If you publish this as a template, consider creating branches or variants:

  • nvidia-cuda
  • amd-rocm
  • cpu-only
  • intel-npu-openvino

Each variant can keep the same AGENT.md shape but adapt:

  • Hardware description
  • How to enable GPU/NPU
  • Default models and quantization

8. License

This project is licensed under the MIT License. See LICENSE for details.

9. Disclaimer

This repository was generated using ChatGPT 5.1 Extended Thinking based on a laptop-class configuration with an Intel® Core™ Ultra 7 155H, Intel Arc integrated GPU (Meteor Lake-P), Intel NPU (AI Boost), and Ubuntu Linux.
