Add unit tests for pseudo_kl_divergence_loss #59
base: main
Conversation
kevinchern left a comment
Hi Ahmed @abdela47. Thank you for the pull request.
The tests are well-reasoned, modular, and nicely documented.
I did a quick first pass and added some minor requests.
Separately, you may find this contribution guide helpful for our conventions and best practices.
tests/test_pseudo_kl_divergence.py (Outdated)
    logits = torch.zeros(batch, n_spins)

    # spins: (batch_size, n_samples, n_spins)
    spins = torch.ones(batch, n_samples, n_spins)
Could you explain your rationale for using zero-valued logits and spins in this test versus nonzero values in the 2D test?
Zero logits are used in the 3D shape test to keep the entropy term simple and stable (p = 0.5), allowing the test to focus purely on documented shape support; nonzero values are covered in the 2D numerical correctness test.
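For illustration, here is a minimal, self-contained check of that claim (a sketch, not the PR's code; the shapes are arbitrary). With zero logits every spin probability is sigmoid(0) = 0.5, so the mean per-spin entropy is exactly log 2:

    import math

    import torch
    import torch.nn.functional as F

    logits = torch.zeros(2, 4)     # (batch, n_spins), all-zero logits
    probs = torch.sigmoid(logits)  # every entry is exactly 0.5
    # -(0.5*log(0.5) + 0.5*log(0.5)) = log(2), per spin
    entropy = F.binary_cross_entropy_with_logits(logits, probs)
    assert torch.isclose(entropy, torch.tensor(math.log(2.0)))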
Co-authored-by: Kevin Chern <[email protected]>
…ins_data.shape for clarity and consistency
Hi @abdela47, just a couple more changes:
- Can you note this motivation as a comment in the test?
- Can you encapsulate the tests in a unittest framework, and then rename the tests to be more concise? (A sketch of the requested structure follows this list.)
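A hypothetical sketch of that structure (class and method names are illustrative only, and the loss call is elided since its exact signature is not shown here):

    import unittest

    import torch


    class TestPseudoKLDivergenceLoss(unittest.TestCase):
        def test_3d_spins_shape(self):
            batch, n_samples, n_spins = 2, 3, 4
            # Zero logits keep the entropy term simple and stable
            # (p = 0.5), so this test focuses purely on documented
            # shape support; nonzero values are covered in the 2D
            # numerical correctness test.
            logits = torch.zeros(batch, n_spins)
            spins = torch.ones(batch, n_samples, n_spins)
            # ... call pseudo_kl_divergence_loss on (logits, spins)
            # and assert on the result here ...
            self.assertEqual(spins.shape, (batch, n_samples, n_spins))


    if __name__ == "__main__":
        unittest.main()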
thisac left a comment
Happy to approve as soon as the final comments from @kevinchern are addressed. Also, please rebase or merge main into the branch so that the tests can pass (the issues have been fixed in main).
    spins_model = torch.ones(batch_size, n_spins, dtype=torch.float32)
    entropy2 = F.binary_cross_entropy_with_logits(logits2, probs2)
    (-entropy2).backward()

    torch.testing.assert_close(logits.grad, logits2.grad)
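For context, a self-contained sketch of this gradient-comparison pattern (assumed setup; names, shapes, and the reduction are illustrative, not the PR's exact code):

    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    base = torch.randn(2, 4)

    # Entropy of independent Bernoulli spins via BCE-with-logits, using
    # the (non-detached) probabilities as soft targets:
    # H = BCE(x, sigmoid(x)).
    logits = base.clone().requires_grad_(True)
    entropy = F.binary_cross_entropy_with_logits(
        logits, torch.sigmoid(logits), reduction="sum"
    )
    (-entropy).backward()

    # The same entropy written out directly:
    # H = -sum(p*log(p) + (1-p)*log(1-p)) with p = sigmoid(logits2).
    logits2 = base.clone().requires_grad_(True)
    p = torch.sigmoid(logits2)
    entropy2 = -(p * torch.log(p) + (1 - p) * torch.log1p(-p)).sum()
    (-entropy2).backward()

    # Both formulations must yield the same gradients.
    torch.testing.assert_close(logits.grad, logits2.grad)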
Don't forget to add

    if __name__ == "__main__":
        unittest.main()

at the end of this file to make sure the tests are run correctly.
This PR adds deterministic unit tests for pseudo_kl_divergence_loss.
The tests cover both documented spin shapes and verify gradient behavior.
They isolate the statistical structure of the loss using deterministic dummy Boltzmann
machines and do not rely on samplers or quantum hardware.
Closes #56.