Add trajectory gradient optimization functions #297
Conversation
Be careful about spending too much time on this, we might get the projection step up soon when I'm back :P

I think this complements the projection step: here you try to find the shortest path between two k-space/gradient states, while projection is, well, about projecting an existing trajectory. So they can coexist, and it could also serve as a nice reference to compare against.
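For context, a minimal sketch of a fixed-duration variant of that "shortest path" idea: a 1D gradient waveform connecting two gradient states while accruing a prescribed k-space increment, found by minimizing slew energy under equality constraints. All names here (min_slew_waveform, g_start, delta_k) are hypothetical illustrations, not the PR's actual API, and the true minimum-time problem would additionally optimize over N:

```python
import numpy as np

def min_slew_waveform(N, dt, g_start, g_end, delta_k, gamma=42.576e6):
    """Fixed-duration 1D gradient waveform between two gradient states.

    Minimizes slew energy ||diff(g)||^2 subject to g[0] = g_start,
    g[-1] = g_end and gamma * dt * sum(g) = delta_k. A sketch only:
    amplitude and slew-rate limits are NOT enforced here.
    """
    D = np.diff(np.eye(N), axis=0)   # first-difference operator (slew, up to 1/dt)
    A = np.zeros((3, N))             # equality constraints A @ g = b
    A[0, 0] = 1.0                    # initial gradient state
    A[1, -1] = 1.0                   # final gradient state
    A[2, :] = gamma * dt             # net k-space increment (gamma in Hz/T)
    b = np.array([g_start, g_end, delta_k])
    # Solve the equality-constrained quadratic program via its KKT system
    KKT = np.block([[2.0 * D.T @ D, A.T], [A, np.zeros((3, 3))]])
    rhs = np.concatenate([np.zeros(N), b])
    return np.linalg.solve(KKT, rhs)[:N]

# Ramp from 0 to 10 mT/m while accruing 100 /m of k-space over 128 samples
g = min_slew_waveform(N=128, dt=10e-6, g_start=0.0, g_end=0.01, delta_k=100.0)
```

Hard amplitude and slew limits would turn this into an inequality-constrained QP, which is where a dedicated solver (or the projection step mentioned above) would come in.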
chaithyagr left a comment
Just doing some basic checks first, still not done, need to go for lunch :P
Parameters
----------
N: int
    Number of time points for the gradient waveform
Number of gradient samples? "Time points" doesn't make much sense to me.
it really is the length of the waveform to optimize
----------
N: int
    Number of time points for the gradient waveform
Deltakx: float
delta_kx?
it's all 1D stuff (as if we were only optimizing the x gradient)
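As a side note, the canonical closed-form answer to the 1D problem is the minimum-time trapezoidal (or triangular) lobe. A minimal sketch, assuming the hypothetical names delta_kx, gmax, smax from the discussion above and a lobe starting and ending at zero gradient; this is an illustration, not the PR's implementation:

```python
import numpy as np

GAMMA = 42.576e6  # Hz/T, proton gyromagnetic ratio

def min_time_trapezoid(delta_kx, gmax, smax, dt):
    """Shortest 1D gradient lobe (g=0 at both ends) reaching area delta_kx.

    delta_kx : target k-space increment along x, in 1/m
    gmax     : max gradient amplitude, T/m
    smax     : max slew rate, T/m/s
    dt       : raster time, s
    """
    area = abs(delta_kx) / GAMMA        # required gradient-time area, (T/m)*s
    if area <= gmax**2 / smax:          # a triangle suffices
        ramp = np.sqrt(area / smax)
        flat = 0.0
        peak = smax * ramp
    else:                               # trapezoid: ramp to gmax, hold, ramp down
        ramp = gmax / smax
        flat = area / gmax - ramp
        peak = gmax
    # Round durations up to the raster (this slightly changes the area;
    # a real implementation would rescale the amplitude to compensate)
    t_ramp = dt * np.ceil(ramp / dt)
    t_flat = dt * np.ceil(flat / dt)
    t = np.arange(0.0, 2 * t_ramp + t_flat, dt)
    g = peak * np.minimum.reduce([t / t_ramp, np.ones_like(t),
                                  (2 * t_ramp + t_flat - t) / t_ramp])
    return np.sign(delta_kx) * g
```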
I think even without complementing each other it's a good addition to gather multiple methods from the literature; it allows researchers to compare them, raising the standards (aka dropping sub-optimal methods and creating consensus).

(And btw, I started making an example in #236 to gather & compare all gradient manipulation methods; it would be nice to build on that.)
Changes proposed in this pull request: