MAINT: LindbladCoefficientBlock blocktype branching#701

Open
rileyjmurray wants to merge 29 commits intodevelopfrom
lindbladcoeffblock-blocktype-branching
Conversation

@rileyjmurray rileyjmurray commented Jan 6, 2026

This PR makes major changes to LindbladCoefficientBlock. I've taken care to produce code that is equivalent to the old code from an input-output perspective.

After this PR we'll be well-positioned to make a base class for LindbladCoefficientBlock (or just make LindbladCoefficientBlock a base class) from which new classes can inherit. I want to do this because I have an idea for a sparse Cholesky approach to reduced-order CPTP models.

The diff for this PR is, frankly, awful; it's nearly hopeless to track what happened line by line. My ask is that we merge anyway once all tests pass. My approach to the refactor was to go function by function, preserving as much of the original implementation as possible. In a first pass I used an LLM to apply the refactoring pattern below. When some tests failed after this, I made a second pass that manually copy-pasted the relevant codeblocks from the old perform_operation functions into the new _perform_operation_<blocktype> helper functions.

general approach

I took functions like

def perform_operation(self, *args, **kwargs):
    # some pre-branching work ...
    if self._blocktype == 'ham':
        # insert 3 to 5 lines here; call this codeblock "X"
        pass
    elif self._blocktype == 'other_diagonal':
        # insert 10 to 20 lines of code here; call this codeblock "Y"
        pass
    elif self._blocktype == 'other':
        # insert 20 to 80 lines of code here; call this codeblock "Z"
        pass
    else:
        raise ValueError('unsupported block type')
    # some post-branching work ...
    return

and refactored them to

def perform_operation(self, *args, **kwargs):
    # some pre-branching work ...
    if self._blocktype == 'ham':
        self._perform_operation_ham(*args, **kwargs)
    elif self._blocktype == 'other_diagonal':
        self._perform_operation_otherdiag(*args, **kwargs)
    elif self._blocktype == 'other':
        self._perform_operation_other(*args, **kwargs)
    else:
        raise InvalidBlockTypeError()
    # some post-branching work ...
    return

def _perform_operation_ham(self, *args, **kwargs):
    # insert codeblock "X" here
    return

def _perform_operation_otherdiag(self, *args, **kwargs):
    # insert codeblock "Y" here
    return
    
def _perform_operation_other(self, *args, **kwargs):
    # insert codeblock "Z" here
    return
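The pattern above can be made concrete with a minimal, self-contained sketch. The class name, block-type strings, and stand-in bodies below are illustrative only; they are not pyGSTi's actual API, and the real helpers contain the original codeblocks "X", "Y", and "Z" rather than toy arithmetic.

```python
class InvalidBlockTypeError(ValueError):
    """Raised when a coefficient block has an unrecognized block type."""
    pass


class CoefficientBlockSketch:
    """Toy stand-in for LindbladCoefficientBlock, showing the dispatch pattern."""

    def __init__(self, blocktype):
        self._blocktype = blocktype

    def perform_operation(self, x):
        # some pre-branching work would go here ...
        if self._blocktype == 'ham':
            result = self._perform_operation_ham(x)
        elif self._blocktype == 'other_diagonal':
            result = self._perform_operation_otherdiag(x)
        elif self._blocktype == 'other':
            result = self._perform_operation_other(x)
        else:
            raise InvalidBlockTypeError(self._blocktype)
        # some post-branching work would go here ...
        return result

    def _perform_operation_ham(self, x):
        return x + 1      # stand-in for codeblock "X"

    def _perform_operation_otherdiag(self, x):
        return 2 * x      # stand-in for codeblock "Y"

    def _perform_operation_other(self, x):
        return x * x      # stand-in for codeblock "Z"
```

The caller-facing signature and branching structure are unchanged; only the branch bodies move into named helpers, which is what lets a subclass later override one branch at a time.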
    
    

…_deriv_wrt_params; apply input checking in caller function LindbladErrorgen.deriv_wrt_params.
@rileyjmurray rileyjmurray marked this pull request as ready for review January 23, 2026 05:39
@rileyjmurray rileyjmurray requested a review from a team as a code owner January 23, 2026 05:39
mx.sort_indices()
else:
# superops = _np.einsum("ik,akl,lj->aij", leftTrans, superops, rightTrans)
superops = _np.transpose(_np.tensordot(
Contributor
Are we sure that this ordering is always going to be the best ordering, performance-wise?

Contributor Author

Not at all! But the point of this PR is to be minimally invasive. In an ideal world we'd formally verify that the current refactor does not affect the input-output behavior of the existing functions.
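For context on the line under discussion: the commented-out einsum and a tensordot-plus-transpose composition compute the same contraction, so the question is purely about performance. Here is a hedged sketch checking one such equivalence numerically; the shapes and one particular tensordot ordering are assumptions for illustration, not necessarily the exact call in the PR.

```python
import numpy as np

rng = np.random.default_rng(0)
a, d = 3, 4                                   # illustrative sizes
leftTrans = rng.standard_normal((d, d))       # (i, k)
rightTrans = rng.standard_normal((d, d))      # (l, j)
superops = rng.standard_normal((a, d, d))     # (a, k, l)

# Reference: the einsum from the commented-out line.
ref = np.einsum("ik,akl,lj->aij", leftTrans, superops, rightTrans)

# One tensordot-based equivalent (a sketch, not necessarily the PR's exact code):
tmp = np.tensordot(leftTrans, superops, axes=(1, 1))   # contract k -> (i, a, l)
out = np.tensordot(tmp, rightTrans, axes=(2, 0))       # contract l -> (i, a, j)
out = np.transpose(out, (1, 0, 2))                     # reorder to (a, i, j)

assert np.allclose(ref, out)
```

Which formulation is fastest depends on the array sizes and memory layout, which is exactly the reviewer's concern; the check above only establishes that the two agree in output.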
