Hi Mark, many thanks for your work. I have run LISTA.py, but the NMSE does not decrease. kappa is left at its default value of None. The output is as follows:
```
Lineartrainrate=0.5 fine tuning all B_0:0,S_0:0,lam_0:0,lam_1:0,lam_2:0,lam_3:0,lam_4:0,lam_5:0
i=0 nmse=-2.994387 dB (best=-2.994386)
i=1000 nmse=-2.979353 dB (best=-2.994386)
i=2000 nmse=-2.979327 dB (best=-2.994386)
i=3000 nmse=-2.977394 dB (best=-2.994386)
i=4000 nmse=-2.978242 dB (best=-2.994386)
i=5000 nmse=-2.977934 dB (best=-2.994386)
i=6000 nmse=-2.975667 dB (best=-2.994386)
Lineartrainrate=0.1 fine tuning all B_0:0,S_0:0,lam_0:0,lam_1:0,lam_2:0,lam_3:0,lam_4:0,lam_5:0
i=0 nmse=-2.975667 dB (best=-2.975667)
i=1000 nmse=-2.990680 dB (best=-2.991928)
i=2000 nmse=-2.990008 dB (best=-2.991947)
i=3000 nmse=-2.991165 dB (best=-2.992401)
i=4000 nmse=-2.990814 dB (best=-2.992401)
i=5000 nmse=-2.990016 dB (best=-2.992401)
i=6000 nmse=-2.991783 dB (best=-2.992401)
i=7000 nmse=-2.990873 dB (best=-2.992795)
i=8000 nmse=-2.991178 dB (best=-2.992942)
i=9000 nmse=-2.992069 dB (best=-2.992942)
i=10000 nmse=-2.990730 dB (best=-2.992942)
i=11000 nmse=-2.991287 dB (best=-2.992942)
i=12000 nmse=-2.991836 dB (best=-2.992942)
i=13000 nmse=-2.991032 dB (best=-2.992942)
```
I then found that xhat_ (which should be the sparse code) is actually not sparse, even after several thousand training steps. I also tried changing the kappa value; the NMSE changes during the first few iterations, but it still gets stuck at about -2.98 dB.
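For reference, this is roughly how I checked the two quantities. It is a minimal self-contained sketch with synthetic arrays standing in for the real tensors; `x`, `xhat`, and the helper names are my own placeholders, not variables from LISTA.py:

```python
import numpy as np

def nmse_db(x, xhat):
    """Normalized MSE in dB, averaged over columns (one column per test sample)."""
    num = np.sum((xhat - x) ** 2, axis=0)
    den = np.sum(x ** 2, axis=0)
    return 10 * np.log10(np.mean(num / den))

def zero_fraction(v, tol=1e-6):
    """Fraction of entries that are numerically zero, as a sparsity check."""
    return np.mean(np.abs(v) < tol)

# Synthetic stand-ins: a ~10%-dense ground truth and a dense (noisy) estimate.
rng = np.random.default_rng(0)
x = rng.normal(size=(500, 1000)) * (rng.random((500, 1000)) < 0.1)
xhat = x + 0.1 * rng.normal(size=x.shape)  # dense perturbation: no exact zeros

print(f"nmse = {nmse_db(x, xhat):.3f} dB")
print(f"zero fraction of xhat = {zero_fraction(xhat):.3f}")  # near 0: not sparse
```

On the real network output, `zero_fraction(xhat_)` is essentially zero, which is what prompted this question.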
--
Update:
I found that the LAMP code also produces a non-sparse xhat_ after training. I had originally thought it should be sparse, since it is the output of the T-layer network (Equation 27 in your paper). Is this normal?
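To illustrate what I expected: a final soft-thresholding (shrinkage) step zeros out small entries exactly, so its output contains true zeros rather than merely small values. A toy numpy example of my own (not code from the repo):

```python
import numpy as np

def soft_threshold(r, lam):
    """Soft-thresholding shrinkage: eta(r; lam) = sign(r) * max(|r| - lam, 0)."""
    return np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)

r = np.array([-1.5, -0.2, 0.05, 0.3, 2.0])
print(soft_threshold(r, 0.25))  # entries with |r| <= 0.25 become exactly 0
```

Since the shrinkage maps everything in [-lam, lam] to exactly zero, I assumed any output passed through it would show a large fraction of exact zeros, which is not what I observe.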
--
Could you help me with this issue? Where did I go wrong?
Thank you~