StarCANet

Code of the paper, "StarCANet: A Compact and Efficient Neural Network for Massive MIMO CSI Feedback".

Overview

This is the PyTorch implementation of the paper StarCANet: A Compact and Efficient Neural Network for Massive MIMO CSI Feedback.

Requirements

To use this project, make sure the following dependencies are installed (a quick version-check sketch is given after the list).

  • Python >= 3.10
  • PyTorch >= 2.6
  • SciPy >= 1.15
  • TensorBoard
  • thop
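
If you want to quickly confirm that your environment satisfies these minimum versions, a minimal check such as the following can be used (the thresholds simply mirror the list above):

# check_env.py -- minimal environment check; thresholds mirror the requirement list above
import sys

import scipy
import torch

print(f"Python : {sys.version.split()[0]} (need >= 3.10)")
print(f"PyTorch: {torch.__version__} (need >= 2.6)")
print(f"SciPy  : {scipy.__version__} (need >= 1.15)")
print("CUDA available:", torch.cuda.is_available())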

Project Preparation

A. Data Preparation

The channel state information (CSI) matrices are generated with the COST2100 channel model. The group of Chao-Kai Wen and Shi Jin provides a pre-processed version of the COST2100 dataset on Dropbox, which is easier to use for the CSI feedback task; you can also download it from Baidu Netdisk.

You can also generate your own dataset with the open-source COST2100 library. The details of the data pre-processing can be found in our paper.
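
For reference, the pre-processed .mat files can be inspected with SciPy. The snippet below is only a sketch: the variable key 'HT' and the (2, 32, 32) reshaping follow the common CsiNet/CRNet convention and may need adjusting to your copy of the dataset.

# inspect_cost2100.py -- hedged sketch for loading one pre-processed COST2100 file
import numpy as np
import scipy.io as sio

data = sio.loadmat('/home/COST2100/DATA_Htestin.mat')
h = data['HT'].astype(np.float32)   # assumed key/shape: 'HT', (num_samples, 2048)
h = h.reshape(-1, 2, 32, 32)        # real/imag parts of the 32x32 truncated angular-delay CSI
print(h.shape, h.min(), h.max())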

B. Project Tree Arrangement

We recommend arranging the project tree as follows.

home
├── StarCANet  # The cloned StarCANet repository
│   ├── dataset
│   ├── models
│   ├── utils
│   ├── main.py
├── COST2100  # The data folder
│   ├── DATA_Htestin.mat
│   ├── ...
├── checkpoints  # The checkpoints folder
│   ├── StarCANet-S
│   ├── StarCANet-M
│   ├── StarCANet-L
│   │   ├── in_4.pth
│   │   ├── ...
├── run.sh  # The bash script
...

Train StarCANet from Scratch

An example of run.sh is listed below. Simply run it with sh run.sh to train StarCANet from scratch. The model size can be specified as S, M, or L with the --size argument, the scenario with --scenario, and the compression ratio with --cr.

python /home/StarCANet/main.py \
  --data-dir '/home/COST2100' \
  --scenario 'in' \
  --epochs 1000 \
  --batch-size 200 \
  --workers 8 \
  --cr 4 \
  --size 'L' \
  --scheduler cosine \
  --gpu 0 \
  2>&1 | tee log.out
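
Since thop is listed as a dependency, you can also profile the model's complexity (parameters and MACs) before or after training. The sketch below is only illustrative: the starcanet constructor and its cr/size arguments are assumptions, so adapt the import to the actual factory exposed under models/.

# profile_model.py -- hedged sketch for counting MACs/parameters with thop
import torch
from thop import profile

from models import starcanet               # hypothetical factory; adjust to the real name

model = starcanet(cr=4, size='L')          # hypothetical signature
dummy = torch.randn(1, 2, 32, 32)          # one CSI sample: real/imag x 32 x 32
macs, params = profile(model, inputs=(dummy,))
print(f"MACs: {macs / 1e6:.2f} M, Params: {params / 1e3:.2f} K")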

Results and Reproduction

If you want to reproduce our results, you can download the corresponding checkpoints directly from Google Drive. Then simply add --evaluate to run.sh and select the corresponding pre-trained model with --pretrained. An example is shown below.

python /home/StarCANet/main.py \
  --data-dir '/home/COST2100' \
  --scenario 'in' \
  --pretrained '/home/checkpoints/StarCANet-L/in_4.pth' \
  --evaluate \
  --batch-size 200 \
  --workers 0 \
  --cr 4 \
  --size 'L' \
  --cpu \
  2>&1 | tee test_log.out

Important Note: Please ensure that the --cr (compression ratio) and --size (model size) parameters exactly match the configuration of the pre-trained checkpoint specified by --pretrained. For example, the StarCANet-L/in_4.pth checkpoint used in the example above was trained with compression ratio 4 and model size 'L', and therefore must be used with --cr 4 and --size 'L'. Any mismatch will prevent the model weights from loading correctly or will lead to erroneous results.
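
If you are unsure whether a checkpoint matches your --cr/--size configuration, strict state-dict loading surfaces the mismatch immediately. The sketch below makes two assumptions: the starcanet constructor (as above) and whether the weights sit under a 'state_dict' key in the checkpoint file.

# verify_checkpoint.py -- hedged sketch for catching cr/size mismatches early
import torch

from models import starcanet        # hypothetical factory; adjust to the real name

model = starcanet(cr=4, size='L')   # must match the checkpoint's training configuration
ckpt = torch.load('/home/checkpoints/StarCANet-L/in_4.pth', map_location='cpu')
state_dict = ckpt['state_dict'] if 'state_dict' in ckpt else ckpt   # layout assumption
model.load_state_dict(state_dict, strict=True)   # strict=True fails loudly on any mismatch
print('Checkpoint loaded successfully.')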

Acknowledgment

This repository is modified from the open-source CRNet code. Thanks to Zhilin for his amazing work. Thanks to the group of Chao-Kai Wen and Shi Jin for providing the pre-processed COST2100 dataset; their related work, CsiNet, can be found at Github-Python_CsiNet.
