This repository was archived by the owner on Jul 10, 2025. It is now read-only.

Commit 14d3569: Clarify experimental API

1 parent 6803fc3

1 file changed, +1 -4 lines

rfcs/20201201-cpp-gradients.md
@@ -10,7 +10,7 @@
## Objective

-We propose performing gradient computation entirely in C++. This aligns with TensorFlow Core team’s vision of providing first-class C++ APIs for building ML models. We mainly focus on reverse-mode autodiff here and leave forward-mode AD as future work. A C API for this implementation is also left as future work but we imagine it to be a straightforward wrapper over the provided C++ APIs.
+We propose performing gradient computation entirely in C++. We mainly focus on reverse-mode autodiff here and leave forward-mode AD as future work. A C API for this implementation is also left as future work but we imagine it to be a straightforward wrapper over the provided C++ APIs. The APIs discussed here are experimental and hence do not have any backwards compatibility guarantees yet.

## Motivation
@@ -308,9 +308,6 @@ Status TapeOperation::Execute(absl::Span<AbstractTensorHandle*> retvals,
This way the same C++ gen\_ops code can be used to execute ops with/without a tape by simply wrapping the current execution context in a `TapeContext`.

-Note: This interface is subject to change.
-
-

**Some details on memory management**

`AbstractTensorHandle` provides `Ref` and `Unref` methods which can be used to manage its lifecycle. Gradient functions and the tape follow these guidelines for memory safety:
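For illustration, here is a rough sketch of the `TapeContext` wrapping described in the hunk above. The names (`Tape`, `TapeContext`, `GradientRegistry`, `AbstractContext`, `AbstractContextPtr`, `AbstractTensorHandle`, `ToId`, `ops::MatMul`) follow this RFC's experimental API, which carries no backwards compatibility guarantees; the exact signatures below are assumptions, not the final interface.

```cpp
// Sketch only: signatures follow the experimental API proposed in this RFC
// and are assumptions that may differ from the eventual implementation.
Status MatMulRecordedOnTape(AbstractContext* ctx,
                            absl::Span<AbstractTensorHandle* const> inputs,
                            absl::Span<AbstractTensorHandle*> outputs,
                            const GradientRegistry& registry) {
  // The tape records forward ops so their gradients can be computed later.
  Tape tape(/*persistent=*/false);
  tape.Watch(ToId(inputs[0]));
  tape.Watch(ToId(inputs[1]));

  // Wrapping the current execution context in a TapeContext is the only
  // change needed: the same generated ops code (ops::MatMul) executes with
  // recording here, and without recording if given the raw ctx instead.
  AbstractContextPtr tape_ctx(new TapeContext(ctx, &tape, registry));
  TF_RETURN_IF_ERROR(ops::MatMul(tape_ctx.get(), inputs, outputs, "matmul0",
                                 /*transpose_a=*/false,
                                 /*transpose_b=*/false));
  return Status::OK();
}
```

The point of this layering is that tape recording lives in the context rather than in each generated op, so op-building code does not need separate taped and untaped variants.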
