[executor-preview] Testing and fixing the NLV3 decoder #2505
Conversation
```python
def test_valid_model(self):
    """Tests the decoder."""
    decoded = NoiseLearnerV3ResultDecoder.decode(self.encoded.model_dump_json())
    for datum_in, datum_out in zip(self.results.data, decoded.data):
        assert datum_in._generators == datum_out._generators
        assert np.allclose(datum_in._rates, datum_out._rates)
        assert np.allclose(datum_in._rates_std, datum_out._rates_std)
        assert datum_in.metadata == datum_out.metadata
```
The way to avoid accessing private attributes here is to implement an `__eq__` method, which would be useful in its own right and testable.
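A minimal sketch of what such an `__eq__` could look like. The class name `NoiseLearnerDatum` and constructor are hypothetical; the field names are taken from the test above, and the real class may differ:

```python
import numpy as np


class NoiseLearnerDatum:
    """Hypothetical datum holding the fields compared in the test above."""

    def __init__(self, generators, rates, rates_std, metadata):
        self._generators = generators
        self._rates = np.asarray(rates)
        self._rates_std = np.asarray(rates_std)
        self.metadata = metadata

    def __eq__(self, other):
        # Compare the same fields the test checks, using allclose
        # for the float arrays so tiny round-trip errors don't matter.
        if not isinstance(other, NoiseLearnerDatum):
            return NotImplemented
        return (
            self._generators == other._generators
            and np.allclose(self._rates, other._rates)
            and np.allclose(self._rates_std, other._rates_std)
            and self.metadata == other.metadata
        )
```

With this in place, the loop body in the test collapses to a single `assert datum_in == datum_out`.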
```diff
 def decode(cls, raw_result: str) -> NoiseLearnerV3Results:  # type: ignore[no-untyped-def]
     """Decode raw json to result type."""
-    decoded: Dict = super().decode(raw_result)
+    decoded = json.loads(raw_result)
```
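For context on why this line matters: a decoder class typically layers custom type restoration on top of plain JSON parsing, so replacing `super().decode(raw_result)` with a bare `json.loads` silently drops that behavior. A generic sketch of the pattern, not the actual `ResultDecoder` implementation:

```python
import json


class TupleAwareDecoder:
    """Toy decoder that restores tuples tagged during encoding.

    Plain json.loads would leave them as plain dicts.
    """

    @classmethod
    def decode(cls, raw: str):
        # object_hook is called on every decoded JSON object (dict).
        return json.loads(raw, object_hook=cls._hook)

    @staticmethod
    def _hook(obj):
        if obj.get("__type__") == "tuple":
            return tuple(obj["items"])
        return obj


raw = '{"point": {"__type__": "tuple", "items": [1, 2]}}'
# The custom decoder restores the tuple; plain json.loads does not.
assert TupleAwareDecoder.decode(raw)["point"] == (1, 2)
assert json.loads(raw)["point"] == {"__type__": "tuple", "items": [1, 2]}
```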
Maybe revert this change?
Yes, please.
SamFerracin left a comment:
LGTM, just revert that `json.loads` and it's good to go in.

Would you mind testing one more time before merging:
- that new jobs go through, and
- that old jobs (e.g. jobs you ran on the `executor_preview` branch) can be loaded?

Better safe than sorry.
I ran NLV3 with the […]. Then: […]
SamFerracin left a comment:
This looks good to me. Could you please:
- Fix the conflict
- Confirm that new jobs work correctly and that old jobs can still be loaded (which I think you did already)
After that, I am happy to approve :)
SamFerracin left a comment:
This is another one of those PRs that I would test e2e by running a job. If that job passes, feel free to merge 😄
```diff
 def decode(cls, raw_result: str) -> NoiseLearnerV3Results:  # type: ignore[no-untyped-def]
     """Decode raw json to result type."""
-    decoded: dict[str, str] = super().decode(raw_result)
+    decoded: dict[str, Any] = super().decode(raw_result)
```
Hmm, are you sure about this change?
This is `decoded`:

```python
{'schema_version': 'v0.1', 'data': [{'generators_sparse': [[['X', [0]]], [['Y', [0]]], [['Z', [0]]], [['X', [1]]], [['XX', [0, 1]]], [['YX', [0, 1]]], [['ZX', [0, 1]]], [['Y', [1]]], [['XY', [0, 1]]], [['YY', [0, 1]]], [['ZY', [0, 1]]], [['Z', [1]]], [['XZ', [0, 1]]], [['YZ', [0, 1]]], [['ZZ', [0, 1]]]], 'num_qubits': 2, 'rates': {'data': 'JCh+jLlr4z9N27+y0qT8PwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACE82jhiLec/AAAAAAAAAAAAAAAAAAAAACE82jhiLec/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAJCh+jLlr4z9N27+y0qT8PwAAAAAAAAAA', 'shape': [15], 'dtype': 'f64'}, 'rates_std': {'data': 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'shape': [15], 'dtype': 'f64'}, 'metadata': {'learning_protocol': 'lindblad', 'post_selection': {'fraction_kept': {'0': 0.6299099392361112, '1': 0.6307237413194444, '2': 0.6239149305555556, '4': 0.6247829861111112, '16': 0.6248372395833334, '32': 0.626708984375}}}}, {'generators_sparse': [[['X', [0]]], [['Y', [0]]], [['Z', [0]]], [['X', [1]]], [['XX', [0, 1]]], [['YX', [0, 1]]], [['ZX', [0, 1]]], [['Y', [1]]], [['XY', [0, 1]]], [['YY', [0, 1]]], [['ZY', [0, 1]]], [['Z', [1]]], [['XZ', [0, 1]]], [['YZ', [0, 1]]], [['ZZ', [0, 1]]]], 'num_qubits': 2, 'rates': {'data': 'AAAAAAAAAABftTLhl/qpPwAAAAAAAAAAMc7fhEJEAUADPj+MEB71PwAAAAAAAAAAMc7fhEJEAUAAAAAAAAAAAAAAAAAAAAAAAz4/jBAe9T8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABftTLhl/qpPwAAAAAAAAAA', 'shape': [15], 'dtype': 'f64'}, 'rates_std': {'data': 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA', 'shape': [15], 'dtype': 'f64'}, 'metadata': {'learning_protocol': 'lindblad', 'post_selection': {'fraction_kept': {'0': 0.6240776909722222, '1': 0.6219618055555556, '2': 0.6254340277777778, '4': 0.6245659722222222, '16': 0.6256510416666666, '32': 0.6236979166666666}}}}]}
```
The output of `print(type(decoded["data"]))` is `<class 'list'>`.
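As an aside, the `rates` and `rates_std` entries in the payload above carry a raw little-endian `f64` buffer encoded as base64. A hedged sketch of how such a `{'data', 'shape', 'dtype'}` field could be turned back into a NumPy array (the helper name and dtype table are assumptions; the actual decoder may differ):

```python
import base64

import numpy as np


def array_from_payload(field: dict) -> np.ndarray:
    """Rebuild an ndarray from an assumed {'data', 'shape', 'dtype'} payload."""
    dtype = {"f64": np.float64}[field["dtype"]]  # extend mapping as needed
    buf = base64.b64decode(field["data"])
    return np.frombuffer(buf, dtype=dtype).reshape(field["shape"])


# A zero vector of length 3, encoded the same way as the payload above.
field = {
    "data": base64.b64encode(np.zeros(3).tobytes()).decode(),
    "shape": [3],
    "dtype": "f64",
}
assert np.array_equal(array_from_payload(field), np.zeros(3))
```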
Thanks for checking!
Part of #2479.

While testing `NoiseLearnerV3ResultDecoder.decode`, I think I saw a bug. I'm not sure about the origin of the input `raw_string`, but I think it's a JSON string, in which case the post-selection part of it looks like this: […]

The dictionary keys `0` and `4` were originally integers, but JSON keys are always strings. One needs the model in order to know that these are integers and therefore decode them as such.

To fix the bug, I modified `NoiseLearnerV3ResultDecoder.decode` to be more similar to the decoder of the executor, which addresses the above concern.

In addition, `noise_learner_v3_result_from_0_1` was implemented as if `model` were a dictionary, although it's a `NoiseLearnerV3ResultsModel`; I fixed that.

This PR is still in progress because several tests are still missing.
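The key-stringification issue described above is easy to reproduce in isolation: JSON object keys are always strings, so integer keys survive a round trip only as strings and must be cast back with knowledge of the model. A self-contained demonstration (the `fraction_kept` name mirrors the payload above; the values are illustrative):

```python
import json

# Integer keys, as produced by the noise learner before serialization.
fraction_kept = {0: 0.63, 4: 0.62}

# A JSON round trip turns the keys into strings.
round_tripped = json.loads(json.dumps(fraction_kept))
assert round_tripped == {"0": 0.63, "4": 0.62}

# Only the model tells us these keys should be ints again.
restored = {int(k): v for k, v in round_tripped.items()}
assert restored == fraction_kept
```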