
Thanks for the question!

My guess is that it's not possible in general to get better asymptotic efficiency. Consider the case where f is a general linear transformation, i.e. f = lambda x: jnp.dot(A, x) for a given dense/unstructured A. Computing the Jacobian determinant of this function is exactly computing the determinant of A, and that costs O(d^3).
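As a sketch of why the linear case already forces the cubic cost (the specific A, d, and x below are made up for illustration, not taken from the question):

```python
import jax
import jax.numpy as jnp

# When f is a dense linear map x -> A @ x, its Jacobian at every point
# is A itself, so the Jacobian determinant is exactly det(A), which is
# an O(d^3) computation for unstructured A.
d = 4
A = jax.random.normal(jax.random.PRNGKey(0), (d, d))
f = lambda x: jnp.dot(A, x)

x = jnp.ones(d)
J = jax.jacfwd(f)(x)        # dense d x d Jacobian via d forward-mode JVPs
det_J = jnp.linalg.det(J)   # cubic-cost determinant

assert jnp.allclose(J, A)
assert jnp.allclose(det_J, jnp.linalg.det(A))
```

Any method that computes the Jacobian determinant of an arbitrary f must handle this case, so it can't beat the cost of a dense determinant in general.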

Intuitively, evaluating the determinant of the dense Jacobian is doing the right thing for the general case: we want to know the volume of a parallelepiped which is the image of an axis-aligned unit cube under the locally-linearized function. Finding the image of each standard basis vector is exactly what jacfwd does, as efficiently…
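To make the basis-vector picture concrete, here is a small sketch (the nonlinear f below is invented for illustration) showing that pushing each standard basis vector through jax.jvp recovers exactly the columns jacfwd computes:

```python
import jax
import jax.numpy as jnp

# A small nonlinear map R^3 -> R^3, made up for this example.
def f(x):
    return jnp.sin(x) * jnp.cumsum(x)

x = jnp.array([1.0, 2.0, 3.0])
d = x.shape[0]

# Push each standard basis vector through the locally-linearized f;
# the tangent outputs are the Jacobian's columns, i.e. the images of
# the unit cube's edges under the linearization.
cols = [jax.jvp(f, (x,), (jnp.eye(d)[:, j],))[1] for j in range(d)]
J = jnp.stack(cols, axis=1)

assert jnp.allclose(J, jax.jacfwd(f)(x))

# |det J| is the volume of the image parallelepiped.
volume = jnp.abs(jnp.linalg.det(J))
```

The d JVPs build the Jacobian one column at a time, and the determinant of the assembled matrix gives the local volume scaling.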

Answer selected by nalzok
Category: Q&A
3 participants