Fully automated installation scripts for ComfyUI optimized for Intel Arc GPUs (A-Series) and Intel Core Ultra iGPUs with XPU backend, Triton acceleration, and GGUF quantized model support.
- ✅ One-click installation - Automated setup with dependency resolution
- ✅ Intel Arc optimized - Native XPU backend with PyTorch nightly builds
- ✅ Triton acceleration - 6-11x faster GGUF model loading and inference
- ✅ Isolated environment - Clean Python venv, no conflicts with other AI tools
- ✅ Essential nodes included - ComfyUI-Manager, GGUF, VideoHelper, Impact Pack
- ✅ Always up-to-date - Scripts pull latest ComfyUI and PyTorch versions
- ✅ No manual patching - Automatic XPU detection and optimization
| Component | Minimum | Recommended |
|---|---|---|
| GPU | Intel Arc A310 | Intel Arc A770 16GB |
| iGPU | Intel Core Ultra 5 | Intel Core Ultra 7/9 |
| RAM | 16GB | 32GB+ |
| Storage | 50GB free | 100GB+ SSD |
- Windows 10/11 (64-bit)
- Python 3.10 or 3.11 - Download
- Git for Windows - Download
- Visual Studio Build Tools 2022 - Download
- Required for Triton GGUF acceleration
- Select: "Desktop development with C++"
- Latest Intel Graphics Drivers - Download
Clone this repository or download as ZIP:
```
git clone https://github.com/ai-joe-git/ComfyUI-Intel-Arc-Clean-Install-Windows-venv-XPU-.git
cd ComfyUI-Intel-Arc-Clean-Install-Windows-venv-XPU-
```

Run the scripts in this order:
```
1. INSTALL_ComfyUI_Intel_Arc_XPU.bat   # Core installation
2. INSTALL_Custom_Nodes.bat            # Essential nodes
3. INSTALL_GGUF_Triton_Patch.bat       # GGUF acceleration
4. START_ComfyUI.bat                   # Launch ComfyUI
```

Then open your browser at http://127.0.0.1:8188
- ComfyUI - Latest from official repository
- PyTorch 2.11+ XPU Nightly - Intel Arc optimized builds
- Triton XPU 3.6+ - GPU kernel acceleration
- ComfyUI Frontend - Latest official UI
- ComfyUI-Manager - Node package manager
- ComfyUI-GGUF - Quantized model support (Q4_0, Q8_0, etc.)
- ComfyUI-VideoHelperSuite - Video generation tools
- ComfyUI-Impact-Pack - Utility nodes
- rgthree-comfy - Workflow optimization tools
- GGUF Triton Patch - Accelerated dequantization
- Q4_0 models: ~11x faster
- Q8_0 models: ~6x faster
- Q4_1 models: ~8x faster
INSTALL_ComfyUI_Intel_Arc_XPU.bat
What it does:
- ✓ Verifies Python 3.10/3.11 and Git installation
- ✓ Checks for Visual Studio Build Tools (C++ compiler)
- ✓ Clones ComfyUI to C:\ComfyUI
- ✓ Creates isolated Python virtual environment
- ✓ Installs PyTorch XPU nightly builds
- ✓ Installs Triton XPU for acceleration
- ✓ Installs ComfyUI dependencies
- ✓ Verifies XPU device detection
Expected output:
```
PyTorch: 2.11.0.dev20260118+xpu
XPU available: True
Device: Intel(R) Arc(TM) A770 Graphics (16GB)
```
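To reproduce this check manually, activate the virtual environment and run a short script like the sketch below. It only assumes the `torch.xpu` API exposed by XPU-enabled PyTorch builds; device names will differ on your hardware.

```python
# Minimal XPU sanity check (run inside comfyui_venv); assumes an XPU-enabled
# PyTorch build that exposes the torch.xpu module.
import torch

print("PyTorch:", torch.__version__)
print("XPU available:", torch.xpu.is_available())
if torch.xpu.is_available():
    for i in range(torch.xpu.device_count()):
        print(f"Device {i}:", torch.xpu.get_device_name(i))
```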
INSTALL_Custom_Nodes.bat
What it does:
- Clones essential custom nodes to C:\ComfyUI\custom_nodes\
- Installs node-specific dependencies
- Updates existing nodes if already installed
Nodes installed:
- ComfyUI-Manager (ltdrdata)
- ComfyUI-GGUF (city96)
- ComfyUI-VideoHelperSuite (Kosinkadink)
- ComfyUI-Impact-Pack (ltdrdata)
- rgthree-comfy (rgthree)
INSTALL_GGUF_Triton_Patch.bat
What it does:
- ✓ Verifies ComfyUI-GGUF node is installed
- ✓ Downloads latest Triton patch from this repo
- ✓ Applies patch to enable GPU-accelerated dequantization
- ✓ Verifies Triton kernels are active
Performance improvements:
| Model Type | Without Triton | With Triton | Speedup |
|---|---|---|---|
| Q4_0 GGUF | Slow PyTorch | Triton kernel | ~11x faster |
| Q8_0 GGUF | Slow PyTorch | Triton kernel | ~6x faster |
| Q4_1 GGUF | Slow PyTorch | Triton kernel | ~8x faster |
| Q4_K_M | PyTorch | PyTorch | No change* |
*K-quants (Q4_K_M, Q5_K_M, Q6_K) not yet accelerated by this patch.
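For context on what the patch accelerates, here is a plain-PyTorch reference for Q8_0 dequantization. This is an illustrative sketch based on the GGUF Q8_0 block layout (a 2-byte FP16 scale followed by 32 int8 weights), not the code shipped by the patch, which fuses the same math into a Triton XPU kernel.

```python
# Reference Q8_0 dequantization in plain PyTorch (illustrative sketch only).
# The Triton patch performs the same scale * quant math in a fused GPU kernel.
import torch

Q8_0_BLOCK_BYTES = 34  # 2-byte FP16 scale + 32 int8 quants per block (GGUF spec)

def dequantize_q8_0(raw: torch.Tensor) -> torch.Tensor:
    """raw: uint8 tensor whose size is a multiple of 34 bytes; returns FP16 weights."""
    blocks = raw.reshape(-1, Q8_0_BLOCK_BYTES)
    scales = blocks[:, :2].contiguous().view(torch.float16)                  # (n_blocks, 1)
    quants = blocks[:, 2:].contiguous().view(torch.int8).to(torch.float16)   # (n_blocks, 32)
    return scales * quants
```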
START_ComfyUI.bat
What it does:
- Initializes Visual Studio C++ environment for Triton
- Sets Intel XPU environment variables
- Activates Python virtual environment
- Launches ComfyUI with optimized flags
Startup flags:
- `--lowvram` - Efficient memory management for 8-16GB GPUs
- `--bf16-unet` - BFloat16 precision (faster, lower VRAM); see the sketch below
- `--async-offload` - Asynchronous model offloading
- `--disable-smart-memory` - Predictable memory behavior
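As a rough illustration of what `--bf16-unet` means at the PyTorch level (ComfyUI handles this internally; the snippet below is only a standalone sketch and assumes an XPU-enabled build):

```python
# Standalone sketch: keep weights and activations in BF16 on the XPU device.
import torch

if torch.xpu.is_available():
    layer = torch.nn.Linear(1024, 1024).to("xpu", dtype=torch.bfloat16)
    x = torch.randn(8, 1024, device="xpu", dtype=torch.bfloat16)
    y = layer(x)                  # matmul runs in BF16 on the Arc GPU
    print(y.dtype, y.device)      # torch.bfloat16 xpu:0
else:
    print("XPU not available - install the XPU-enabled PyTorch build first")
```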
| GPU Model | VRAM | Performance | Notes |
|---|---|---|---|
| Arc A770 LE | 16GB | Excellent | Best for video generation |
| Arc A770 | 8GB | Very Good | Recommended for most workflows |
| Arc A750 | 8GB | Very Good | Great price/performance |
| Arc A580 | 8GB | Good | Budget option |
| Arc A380 | 6GB | Fair | Entry level |
| Arc A310 | 4GB | Limited | Simple workflows only |
| CPU Series | iGPU | Performance | Notes |
|---|---|---|---|
| Core Ultra 9 | Intel Arc iGPU | Good | Meteor Lake/Arrow Lake |
| Core Ultra 7 | Intel Arc iGPU | Good | Best laptop option |
| Core Ultra 5 | Intel Arc iGPU | Fair | Budget laptop |
- 11th/12th Gen Intel Core with Iris Xe
- Limited support, may fall back to CPU
- Not recommended for production use
- Intel UHD Graphics (10th Gen and older)
- CPU-only mode (extremely slow)
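A minimal device-selection sketch showing the fallback behavior described above (prefer the XPU when PyTorch exposes one, otherwise run on CPU); ComfyUI does the equivalent internally:

```python
# Prefer the Intel XPU when available, otherwise fall back to (very slow) CPU.
import torch

device = torch.device("xpu" if hasattr(torch, "xpu") and torch.xpu.is_available() else "cpu")
print("Running on:", device)
```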
Run UPDATE_ComfyUI.bat to update:
- ComfyUI core
- PyTorch XPU nightly
- Triton XPU
- All custom nodes
- Python dependencies
The script safely updates while preserving your models and workflows.
| Hardware | Time (mm:ss) | GGUF Triton | Notes |
|---|---|---|---|
| Arc A770 16GB | 25:32 | Enabled | Q8_0 FLUX + Qwen |
| Arc A770 8GB | ~30:00 | Enabled | --lowvram required |
| Arc A750 8GB | ~32:00 | Enabled | Comparable to A770 8GB |
| Core Ultra 7 | ~45:00 | Enabled | iGPU only |
| Hardware | Time | VRAM Used |
|---|---|---|
| Arc A770 16GB | ~45s | 14GB |
| Arc A770 8GB | ~60s | 7.8GB (offloading) |
| Arc A750 8GB | ~65s | 7.8GB (offloading) |
Benchmarks with GGUF Q8_0 models and Triton acceleration enabled.
Solution:
```
# Verify XPU is detected
python -c "import torch; print(torch.xpu.is_available())"
```

If False:
- Update Intel Graphics drivers
- Reinstall PyTorch XPU:

```
pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/xpu --force-reinstall
```
Solution:
- Install Visual Studio Build Tools 2022
- Select "Desktop development with C++"
- Restart your PC
- Run `START_ComfyUI.bat` (not `python main.py` directly)
Solutions:
- Use the `--lowvram` flag (already in the START script)
- Try GGUF quantized models (Q4_0, Q8_0)
- Reduce resolution or batch size
- Close other GPU applications
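If memory errors only appear after several generations, clearing cached allocations between runs can also help. A hedged sketch, assuming the `torch.xpu` memory API in your build mirrors `torch.cuda` (as it does in recent XPU nightlies):

```python
# Release cached XPU allocations between generations (assumption: torch.xpu
# exposes empty_cache/memory_allocated like torch.cuda does in recent builds).
import gc
import torch

gc.collect()
if torch.xpu.is_available():
    torch.xpu.empty_cache()
    print("Allocated after cleanup:", torch.xpu.memory_allocated() / 1e9, "GB")
```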
Verify Triton:
```
cd C:\ComfyUI
call comfyui_venv\Scripts\activate.bat
python -c "from custom_nodes.ComfyUI-GGUF.dequant import HAS_TRITON; print('Triton:', HAS_TRITON)"
```

If False:

```
pip install pytorch-triton-xpu --force-reinstall
```

Checklist:
- ✓ Triton patch applied? Check ComfyUI console for "Triton available, enabling optimized kernels"
- ✓ Using GGUF Q8_0/Q4_0 models for acceleration?
- ✓ GPU utilization at 100%? Check Task Manager
- ✓ Power plan set to "High Performance"?
- ✓ Latest Intel Graphics drivers installed?
| Use Case | Model Format | Why |
|---|---|---|
| Best Quality | GGUF Q8_0 | Minimal quality loss, 6x faster with Triton |
| Balanced | GGUF Q4_K_M | Good quality, smaller size |
| Maximum Speed | GGUF Q4_0 | 11x faster with Triton, acceptable quality |
| Full Precision | FP16/BF16 | Highest quality, largest size, slowest |
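To estimate which format fits your VRAM, a back-of-the-envelope calculation is enough: weight size ≈ parameter count × bits per weight / 8. The snippet below uses a hypothetical 12-billion-parameter model as the example and the nominal GGUF rates of roughly 8.5 bits/weight for Q8_0 and 4.5 bits/weight for Q4_0 (the extra half bit is per-block scale overhead):

```python
# Rough weight-size estimate per quantization format (illustrative arithmetic;
# real files also contain non-quantized tensors and metadata).
def weight_size_gb(n_params: float, bits_per_weight: float) -> float:
    return n_params * bits_per_weight / 8 / 1e9

for name, bits in [("FP16", 16.0), ("Q8_0", 8.5), ("Q4_0", 4.5)]:
    print(f"{name}: ~{weight_size_gb(12e9, bits):.1f} GB")  # hypothetical 12B-parameter model
```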
For 16GB Arc GPUs:
- Remove `--lowvram` from the START script for fastest performance
- Can run most models without offloading
For 8GB Arc GPUs:
- Keep the `--lowvram` flag (default)
- Use GGUF quantized models
- Avoid loading multiple large models simultaneously
For 4-6GB Arc GPUs:
- Add `--novram` for maximum offloading
- Use Q4_0 GGUF models
- Lower resolution workflows only
- Use GGUF models - Faster loading with Triton acceleration
- Enable caching - Triton compiles kernels once, then caches
- Batch processing - Process multiple frames/images together
- Lower steps - 6-8 steps often sufficient with good samplers
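On the caching tip above: Triton writes compiled kernels to a cache directory controlled by the standard `TRITON_CACHE_DIR` environment variable, so pinning it to a persistent folder keeps kernels warm across restarts. A minimal sketch (the path is only an example, and the variable must be set before Triton is imported):

```python
# Pin Triton's kernel cache to a persistent folder (example path; set this
# before anything imports triton, e.g. at the top of a launcher script).
import os
os.environ.setdefault("TRITON_CACHE_DIR", r"C:\ComfyUI\triton_cache")
```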
After installation:
```
C:\ComfyUI\
├── comfyui_venv\              # Python virtual environment
├── custom_nodes\              # Custom nodes
│   ├── ComfyUI-Manager\
│   ├── ComfyUI-GGUF\          # Quantized models (Triton patched)
│   ├── ComfyUI-VideoHelperSuite\
│   ├── ComfyUI-Impact-Pack\
│   └── rgthree-comfy\
├── models\                    # Place models here
│   ├── checkpoints\
│   ├── clip\
│   ├── vae\
│   ├── loras\
│   └── unet\
├── input\                     # Input images/videos
├── output\                    # Generated outputs
├── user\                      # User settings
├── sycl_cache\                # XPU kernel cache
└── main.py                    # ComfyUI entry point
```
- CivitAI - User-uploaded models
- HuggingFace - Official model hub
- ComfyUI Workflows - Shared workflows
Contributions welcome! Please:
- Test on Intel Arc hardware
- Document any issues or improvements
- Submit PR with clear description
MIT License - See LICENSE for details
- Scripts: ai-joe-git
- ComfyUI: comfyanonymous
- GGUF Node: city96
- Intel XPU Community: Everyone testing and sharing knowledge
If these scripts helped you, please:
- ⭐ Star this repository
- 🐛 Report issues on GitHub
- 💬 Share your results in Discussions
- 📢 Help other Intel Arc users!
Last Updated: January 2026
ComfyUI Version: 0.9.2+
PyTorch XPU Version: 2.11.0.dev+
Tested Hardware: Arc A770, A750, A580, Core Ultra 7/9
- ✅ Triton XPU integration for GGUF acceleration
- ✅ Automated patch installer with GitHub download
- ✅ Visual Studio Build Tools detection
- ✅ PyTorch 2.11+ nightly XPU builds
- ✅ Streamlined installation process
- ✅ Performance improvements for Q8_0/Q4_0 models
- 🔄 K-quant Triton kernels (Q4_K_M, Q5_K_M)
- 🔄 Automatic model downloader
- 🔄 ComfyUI Portable build option
- 🔄 Docker container for Intel Arc
If you experience issues with PyTorch or want to update to the latest nightly build:
- PyTorch XPU not detecting your Intel Arc GPU
- After updating Intel Graphics drivers
- ComfyUI showing "Device: cpu" instead of "xpu"
- Upgrading to latest PyTorch nightly build
- Fixing corrupted PyTorch installation
- Close ComfyUI if running
- Run `REPAIR_PyTorch_XPU.bat`
- Wait for installation to complete (~5-10 minutes)
- Verify XPU is detected in the output
- Run `START_ComfyUI.bat` to test
Expected output:

```
PyTorch Version: 2.11.0.dev20260119+xpu
XPU Available: True
GPU Device: Intel(R) Arc(TM) A770 Graphics
GPU Count: 1
Triton Version: 3.6.0
```

If you see this, PyTorch XPU is working correctly! ✅
Made with ❤️ for the Intel Arc community