This project demonstrates a simple Foreign Function Interface (FFI) integration with the llama.cpp library in a Flutter application.
- **LlamaFFI class** (`lib/llama_ffi.dart`) - Main FFI wrapper for llama.cpp
- **Diagnostics tool** (`lib/llama_diagnostics.dart`) - Helps diagnose setup issues
- **Flutter integration** (`lib/main.dart`) - Demo app showing FFI usage
- **Example scripts** - Standalone examples for testing
- **Visual C++ Redistributable 2022 (x64)** - REQUIRED
  - Download from: https://docs.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist
  - Install the x64 version
- **Compatible `llama.dll`**
  - ✅ You already have: `llama.dll` (1.44 MB)
  - ✅ Model file: `Llama-3.2-3B-F1-Reasoning-Instruct-Q4_K_M.gguf` (2.09 GB)
**Problem:** DLL loading fails with error code 126 (module not found).

**Root cause** (via `dumpbin /dependents` analysis): your `llama.dll` depends on missing companion DLLs:

- ❌ `ggml.dll` - Core machine learning library
- ❌ `ggml-base.dll` - Base ggml components
- ❌ `MSVCP140.dll` - Visual C++ 2015-2022 C++ runtime
- ❌ `VCRUNTIME140.dll` - Visual C++ 2015-2022 runtime
- ❌ `api-ms-win-crt-*.dll` - Universal C Runtime components

**Solutions:**

- Get the complete llama.cpp package (contains all companion DLLs)
- Install the Visual C++ Redistributable 2022 (x64)
```shell
# After fixing dependencies, test the integration:
dart run ./scripts/diagnostics/check_dependencies.dart
flutter pub get
dart run ./scripts/diagnostics/diagnostics.dart
dart run ./scripts/examples/example_usage.dart
flutter run
```

```
lib/
├── llama_ffi.dart          # Main FFI wrapper
├── llama_diagnostics.dart  # Diagnostics tool
└── main.dart               # Flutter app with FFI demo

# Root files
├── llama.dll                                        # llama.cpp library (1.44MB) ✅ PRESENT
├── ggml.dll                                         # Core ML library ❌ MISSING
├── ggml-base.dll                                    # Base ggml components ❌ MISSING
├── Llama-3.2-3B-F1-Reasoning-Instruct-Q4_K_M.gguf  # Model file (2.09GB) ✅ PRESENT
```
- ✅ Library loading with platform detection
- ✅ Backend initialization (`llama_backend_init`)
- ✅ Backend cleanup (`llama_backend_free`)
- ✅ Function availability checking
- ✅ Model file validation
- ✅ Comprehensive error handling
```dart
final llamaFFI = LlamaFFI();
llamaFFI.initBackend();               // Initialize llama backend
llamaFFI.testLibrary();               // Test library functionality
llamaFFI.modelFileExists(modelPath);  // Check if model exists
llamaFFI.listAvailableFunctions();    // Debug helper
llamaFFI.freeBackend();               // Cleanup
```

```
Failed to load dynamic library: The specified module could not be found (error code: 126)
```
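On failure, `DynamicLibrary.open` throws an `ArgumentError` carrying the OS message shown above. A hedged sketch of catching it to give a more actionable hint:

```dart
import 'dart:ffi';

void main() {
  try {
    DynamicLibrary.open('llama.dll');
    print('llama.dll loaded successfully');
  } on ArgumentError catch (e) {
    // On Windows, error 126 here usually means a *dependency* of
    // llama.dll is missing, not llama.dll itself.
    print('Load failed: $e');
    print(r'Check companion DLLs with: dumpbin.exe /dependents .\llama.dll');
  }
}
```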
**Actual dependencies** (via `dumpbin` analysis) - `llama.dll` depends on:

- ❌ `ggml.dll`
- ❌ `ggml-base.dll`
- ✅ `KERNEL32.dll`
- ❌ `MSVCP140.dll`
- ❌ `VCRUNTIME140.dll`
- ❌ `api-ms-win-crt-*.dll` (Universal CRT)
**Solutions in order of priority:**

1. Get the complete llama.cpp package with all companion DLLs
   - Download from: https://github.com/ggerganov/llama.cpp/releases
2. Install the Visual C++ Redistributable 2022 (x64)
- **Missing function in DLL** - Solution: use a compatible llama.cpp version
- **Wrong architecture (32-bit vs. 64-bit)** - Solution: ensure a 64-bit DLL for the 64-bit Dart runtime
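To confirm which side of the architecture mismatch you are on, the Dart runtime's bitness can be checked from `dart:ffi` (a small sketch, not part of the project's code):

```dart
import 'dart:ffi';

void main() {
  // 8-byte pointers => 64-bit Dart runtime; a 64-bit process
  // cannot load a 32-bit llama.dll, and vice versa.
  final bits = sizeOf<IntPtr>() * 8;
  print('Dart runtime is $bits-bit');
}
```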
```shell
# Check DLL dependencies (Windows)
dumpbin.exe /dependents .\llama.dll

# Run custom dependency checker
dart run check_dependencies.dart

# List available functions in DLL
dumpbin.exe /exports .\llama.dll
```

The current implementation provides a foundation. You can extend it with:
```dart
// Future implementation example
final model = llamaFFI.loadModel('path/to/model.gguf');
final context = llamaFFI.newContext(model);
```

```dart
// Future implementation example
final response = llamaFFI.generateText(
  model,
  context,
  'Hello, how are you?',
  maxTokens: 100,
);
```

- llama.cpp GitHub: https://github.com/ggerganov/llama.cpp
- Dart FFI Documentation: https://dart.dev/guides/libraries/c-interop
- GGUF Models: https://huggingface.co/models?search=gguf
- Visual C++ Redistributable: https://docs.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist
```shell
# Run dependency analysis
dart run check_dependencies.dart

# Check DLL dependencies (Windows with VS tools)
dumpbin.exe /dependents .\llama.dll

# Run Flutter app (after fixing dependencies)
flutter run

# Check what DLLs are available
Get-ChildItem *.dll

# Test FFI integration
dart run lib/llama_diagnostics.dart
```

- **Get missing DLLs**: download the complete llama.cpp package containing `ggml.dll`, `ggml-base.dll`, and a compatible `llama.dll`
- **Install Visual C++ Redistributable 2022 (x64)**
- **Verify setup**: run `dart run check_dependencies.dart`
- Test Basic Integration: Verify library loading and function calls
- Extend Functionality: Add model loading and text generation
- Optimize Performance: Implement proper memory management
- Add Error Handling: Improve error messages and recovery
- Official releases: https://github.com/ggerganov/llama.cpp/releases
- Look for `llama-*-win-x64.zip` files
- Alternative: build from source for guaranteed compatibility
- The current implementation focuses on basic FFI setup and diagnostics
- Model loading and text generation require more complex struct definitions
- The simplified approach avoids Dart FFI struct annotation issues
- Platform detection supports Windows, Linux, and macOS
- Memory management is important for production use
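On the memory-management point: native allocations made for FFI calls are invisible to Dart's garbage collector and must be freed explicitly. A minimal sketch using `package:ffi` (the function name here is hypothetical):

```dart
import 'package:ffi/ffi.dart';

void withNativeModelPath(String path) {
  // Convert the Dart string to a native UTF-8 buffer before passing it to C.
  // toNativeUtf8 allocates with malloc by default.
  final nativePath = path.toNativeUtf8();
  try {
    // ... pass nativePath.cast() to a native function here ...
  } finally {
    // Always free native allocations; Dart's GC does not track them.
    malloc.free(nativePath);
  }
}
```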
✅ Completed:

- Basic FFI wrapper implementation
- Dependency analysis tools (`check_dependencies.dart`)
- Flutter UI integration
- Comprehensive diagnostics

❌ Missing:

- `ggml.dll` - Core machine learning library
- `ggml-base.dll` - Base ggml components
- Visual C++ Runtime libraries
🎯 Next Action Required: Download complete llama.cpp package from official releases
Last Updated: Based on dumpbin /dependents analysis revealing exact dependency requirements