YiKangOY/Continuous-Logic-Optimization

Efficient Continuous Logic Optimization with Diffusion Model

Yikang Ouyang, Xiaofei Yu, Jiadong Zhu, Tinghuan Chen, Yuzhe Ma. In 2025 62nd ACM/IEEE Design Automation Conference (DAC).

Contact: youyang929 [at] connect.hkust-gz.edu.cn (obfuscated for anti-spam purposes; sorry for the inconvenience)

Benchmarks

  1. EPFL 15 benchmark (EPFL15)
  2. ISCAS 85 dataset (ISCAS85)
  3. Online adder generator (Adder)
  4. In-house multipliers with compressor trees.

Environment

The required Python environment is listed in requirements.txt; the Python version is 3.9.16. Some packages are outdated, so you may need to install them manually. For GPU support, PyTorch and PyG with CUDA must be installed on your machine; refer to the official PyTorch and PyG channels and download the versions that match your CUDA version. Our CUDA version is 11.7.

Open-source EDA tools needed: yosys, abc.

Preprocessing datasets

1. Preprocessing RTL into graphs

cd 0_prepare_dataset
./graph_dataset_gen.sh

This will generate graph data for MAPE evaluation; the designs are listed in 0_prepare_dataset/designlist.txt
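As a rough illustration of what a graph representation of a netlist looks like, the sketch below converts a tiny gate-level netlist into node-type and edge lists of the kind a GNN consumes. This is only an assumption for illustration; the actual format produced by graph_dataset_gen.sh may differ.

```python
# Hedged sketch: turn a gate-level netlist into node/edge lists.
# The real graph format emitted by graph_dataset_gen.sh may differ.

def netlist_to_graph(gates):
    """gates: list of (output_signal, gate_type, input_signals) tuples."""
    nodes = {}        # signal name -> node index
    node_types = []   # gate type per node ("input" for primary inputs)
    edges = []        # (src_index, dst_index) pairs, driver -> sink

    def node_id(name, gtype="input"):
        if name not in nodes:
            nodes[name] = len(node_types)
            node_types.append(gtype)
        return nodes[name]

    for out, gtype, ins in gates:
        dst = node_id(out, gtype)
        node_types[dst] = gtype
        for src in ins:
            edges.append((node_id(src), dst))
    return node_types, edges

# Tiny example: y = AND(a, b); z = NOT(y)
types, edges = netlist_to_graph([
    ("y", "and", ["a", "b"]),
    ("z", "not", ["y"]),
])
```

Primary inputs are discovered implicitly: any signal that never appears as a gate output keeps the default "input" type.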

2. Synthesizing sequences

You may synthesize sequences to obtain delay and area on your own, or you can directly use our synthesized sequence dataset. The library for synthesis can be found in scale-lab/DRiLLS#10 or LOSTIN. Synthesizing sequences (this requires many cores and a lot of time):

cd 0_prepare_dataset/sequence/run_abc
python run_abc_syn.py
#After synthesizing with those sequences, you may merge results for each design into a single csv file
0_prepare_dataset/sequence/process_dataset/collect_design.ipynb
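A minimal sketch of the merging step, assuming one result CSV per design with columns such as sequence, delay, and area (the column names and layout are assumptions, not the notebook's actual schema):

```python
# Hedged sketch of the merge performed by collect_design.ipynb:
# combine per-design result CSVs into a single file, tagging each
# row with its design name. Column names here are assumptions.
import csv
import glob
import os

def merge_results(result_dir, out_path):
    rows = []
    for path in sorted(glob.glob(os.path.join(result_dir, "*.csv"))):
        design = os.path.splitext(os.path.basename(path))[0]
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                row["design"] = design
                rows.append(row)
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["design", "sequence", "delay", "area"])
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)
```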

Alternatively, you can directly use:

tar -zxvf designs.tar.gz
tar -zxvf dataset.tar.gz

3. Continuous Logic Optimization

You are free to use any surrogate model as long as it can predict QoR and provide gradients:

Surrogate Model   Run Command
MTL               ./logic_opt.sh
LOSTIN            ./logic_opt_lostin.sh
CNN               ./logic_opt_cnn.sh

Each script performs the full pipeline described above, using the selected surrogate model. We have made some modifications to the original surrogate models in our implementation.
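To make the surrogate requirement concrete, here is a toy sketch of gradient-based optimization over a continuous sequence encoding: any model that maps the encoding to a QoR estimate and exposes a gradient can drive the search. The quadratic surrogate below is a stand-in for illustration, not the paper's MTL/LOSTIN/CNN models.

```python
# Toy sketch: gradient descent on a differentiable QoR surrogate.
# The quadratic surrogate_qor is a stand-in with its minimum at 0.3;
# the real surrogates are the trained MTL/LOSTIN/CNN models.
import numpy as np

def surrogate_qor(x):
    # Pretend QoR predictor over a continuous sequence encoding x.
    return np.sum((x - 0.3) ** 2)

def surrogate_grad(x):
    # Analytic gradient of the toy surrogate.
    return 2.0 * (x - 0.3)

def optimize(x0, lr=0.1, steps=200):
    x = x0.copy()
    for _ in range(steps):
        x -= lr * surrogate_grad(x)
        x = np.clip(x, 0.0, 1.0)  # keep the encoding in a valid range
    return x

x_opt = optimize(np.random.default_rng(0).random(8))
```

In the actual pipeline, the diffusion model constrains this search to the learned distribution of useful sequences rather than following raw surrogate gradients alone.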

Configuration files:

  • config.yaml
  • config_lostin.yaml
  • config_cnn.yaml

Workflow

The pipeline includes the following steps:

  1. Pre-process the dataset
  2. Train a surrogate model
  3. Train a diffusion model
  4. Run continuous logic optimization using the diffusion model
  5. Retrieve optimization results

t-SNE Visualization

To run optimization without the diffusion model and visualize results via t-SNE, use:

./logic_opt_grad.sh

For this implementation, we use the MTL model as the default surrogate model. The configuration parameters are specified in config.yaml.
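As a rough sketch of the visualization step, one might project the explored sequence encodings into 2-D with scikit-learn's t-SNE for inspection; the repository's actual plotting code may differ, and the synthetic two-cluster data below is purely illustrative.

```python
# Hedged sketch: embed high-dimensional sequence encodings into 2-D
# with t-SNE. The two Gaussian clusters stand in for encodings visited
# during optimization; the repo's real data and plotting code may differ.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal(0.0, 0.1, (20, 8)),   # e.g. encodings early in the search
    rng.normal(1.0, 0.1, (20, 8)),   # e.g. encodings near convergence
])
embedding = TSNE(n_components=2, perplexity=10,
                 random_state=0).fit_transform(points)
# embedding has shape (40, 2) and can be scatter-plotted directly.
```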

Citation

If you find our work helpful, please consider citing it:

@INPROCEEDINGS{Ouyang2025DAC,
  author={Ouyang, Yikang and Yu, Xiaofei and Zhu, Jiadong and Chen, Tinghuan and Ma, Yuzhe},
  booktitle={2025 62nd ACM/IEEE Design Automation Conference (DAC)}, 
  title={Efficient Continuous Logic Optimization with Diffusion Model}, 
  year={2025},
  pages={1-7},
}

About

This is the official implementation of Efficient Continuous Logic Optimization with Diffusion Model at DAC 2025
