A compiler for Compute-In-Memory (CIM) architectures, built on LLVM/MLIR.
Part of the CIMFlow framework — see the main CIMFlow repository for the complete compilation and simulation toolchain.
CIMFlow Compiler is the compilation component of the CIMFlow framework, designed to transform neural network models into optimized instruction sequences for CIM hardware. Built on LLVM/MLIR, it provides a multi-stage compilation pipeline that handles operator partitioning, memory allocation, and ISA code generation for CIM architectures.
- MLIR-Based IR: Leverages LLVM/MLIR for robust intermediate representation and optimization passes
- Multi-Stage Compilation: computation-graph (CG) level partitioning followed by operator (OP) level ISA generation
- Hardware-Aware Optimization: Configurable for different CIM array sizes and memory hierarchies
- Memory Management: Automatic address allocation for local and global memory buffers
- Extensible Architecture: Modular pass infrastructure for custom optimizations
- C++17-compatible compiler (GCC or Clang)
- CMake 3.20 or higher
- Ninja build system
- Python 3.11 or later
- Java 11 (required by the ANTLR parser generator)
We recommend using conda for Python environment management:
conda create -n cimflow python=3.11
conda activate cimflow
Note: If you don't have conda installed, see the Miniconda installation guide.
# Build tools
sudo apt install build-essential cmake ninja-build ccache
# Java Development Kit
sudo apt install openjdk-11-jdk
# Required libraries
sudo apt install libeigen3-dev libunwind-dev
# Clone the repository
git clone https://github.com/BUAA-CI-LAB/CIMFlow-Compiler.git
cd CIMFlow-Compiler
# Initialize submodules
git submodule update --init --recursive
# Run the installation script
./install.sh
The installation script will:
- Build LLVM/MLIR (takes 20-30 minutes on first build, faster with ccache)
- Build the CIM compiler
- Install the Python package
Alternatively, you can build manually:
# Build LLVM/MLIR
bash scripts/llvm_build.sh
# Build CIM Compiler
bash scripts/build.sh
# Install Python package
pip install -e .
The main compiler executable will be available as cim-compiler after installation.
# Compile an ONNX model to ISA instructions
cim-compiler network \
-m model.onnx \
-o output/ \
-c config.json \
-T 8 -B 16
For more options, run cim-compiler network --help.
The compiler is configured via JSON files specifying:
- CIM macro group size and count
- Memory hierarchy configuration
- NoC bandwidth parameters
- Core count and organization
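The exact schema is defined by the files shipped in the config/ directory. The sketch below is only illustrative; every field name in it is a hypothetical placeholder, not the actual schema:
{
  "_comment": "Illustrative sketch only; field names are hypothetical placeholders",
  "macro_group_size": 16,
  "macro_group_count": 4,
  "memory": {
    "local_buffer_kb": 256,
    "global_buffer_kb": 4096
  },
  "noc_bandwidth_gb_per_s": 32,
  "core_count": 16
}
Refer to the configurations under config/ for the fields the compiler actually expects.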
.
├── src/cim_compiler/ # Main source code
│ ├── cg/ # CG-level compilation (Python)
│ ├── csrc/ # C++ compiler core
│ │ ├── include/ # Headers and MLIR dialect definitions
│ │ │ ├── cim/ # CIM dialect
│ │ │ ├── cimisa/ # CIM ISA dialect
│ │ │ └── codegen/ # Code generation headers
│ │ ├── passes/ # MLIR optimization passes
│ │ └── codegen/ # ISA code generation
│ ├── op_lib/ # Operator library
│ ├── cli/ # Command-line interface
│ └── utils/ # Common utilities
├── config/ # Configuration files
├── scripts/ # Build and utility scripts
├── test/ # Test suite
├── thirdparty/ # Third-party dependencies
│ ├── llvm-project/ # LLVM/MLIR
│ └── glog/ # Google logging
└── CMakeLists.txt # Build configuration
The compiler uses the following third-party libraries:
- LLVM/MLIR: Compiler infrastructure and intermediate representation
- glog: Google logging framework
- ANTLR: Parser generator for DSL parsing
- Boost: C++ utility libraries
Current Maintainers:
Previous Maintainers:
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
This project builds upon LLVM/MLIR, which is also licensed under the Apache License 2.0 with LLVM Exceptions.
Contributions are welcome! Please feel free to submit issues and pull requests.