Hi @stebos100 Thank you for reaching out. Yes, it is possible to use Clad with array inputs. In brief, the derivative type is the same as the value type. For a function taking an array parameter:

```cpp
double fn(double x[10]) { ... }
```

To differentiate the function:

```cpp
auto fn_grad = clad::gradient(fn);
double x[10], dx[10] = {0};
clad::array_ref<double> dx_ref(dx, 10);
fn_grad.execute(x, dx_ref); // dx_ref stores derivatives of the function's
                            // return value with respect to argument x
```

However, if the function takes its inputs as separate scalar parameters:

```cpp
double fn(double x1, double x2, double x3) { ... }
```

then you will need to manually specify one derivative output per input argument when calling `execute`. Please let us know if you have any questions.
Hi all! I was wondering if you could possibly help me.

I want to ask whether it is possible to use `f = clad::gradient` and `f.execute()` with vectors or arrays as inputs — that is, when we specify the inputs for the gradient function itself, can we simply provide an array or vector? For example:

```cpp
std::vector<double> inputs; // with some data
auto f = clad::gradient(func, inputs);
```

And when we call `f.execute()`, can we store the derivative outputs in an array or vector instead of extracting them manually? For instance, `f.execute(x1, x2, x3, &dx[0], &dx[1], &dx[2])` would be incredibly tedious with 600 inputs that need to be differentiated; instead, something similar to:

```cpp
std::vector<double> derivatives;
f.execute(inputs, &derivatives);
```

Looking forward to hearing from you all!