TextGrad is both a library and an optimizer algorithm. Currently, we support three optimizers:
- OPRO: [Large Language Models as Optimizers](https://arxiv.org/abs/2309.03409)
- TextGrad: [TextGrad: Automatic "Differentiation" via Text](https://arxiv.org/abs/2406.07496)
- OptoPrime: [Our proposed algorithm](https://arxiv.org/abs/2406.16218) -- uses the entire computational graph to perform parameter updates. It is 2-3x faster than TextGrad.
Using our framework, you can seamlessly switch between different optimizers:
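As an illustration, swappable optimizers can share a common `step` interface so the surrounding training loop never changes. The sketch below is hypothetical: the class names, the `step(param, feedback)` signature, and the placeholder update strings are stand-ins for illustration, not Trace's real API.

```python
class Optimizer:
    """Minimal pluggable-optimizer interface (illustrative, not Trace's API):
    each optimizer proposes a new value for a text parameter given feedback."""
    def step(self, param: str, feedback: str) -> str:
        raise NotImplementedError

class OPRO(Optimizer):
    def step(self, param: str, feedback: str) -> str:
        # OPRO-style: propose a new value from a history of scored candidates.
        return param + " [revised via OPRO]"

class TextGradOpt(Optimizer):
    def step(self, param: str, feedback: str) -> str:
        # TextGrad-style: back-propagate textual "gradients" node by node.
        return param + " [revised via textual gradients]"

class OptoPrime(Optimizer):
    def step(self, param: str, feedback: str) -> str:
        # OptoPrime-style: present the whole computation graph in one prompt.
        return param + " [revised via whole-graph prompt]"

def train_step(param: str, feedback: str, optimizer: Optimizer) -> str:
    # The loop is identical for every optimizer; only the object changes.
    return optimizer.step(param, feedback)

prompt = "Answer concisely."
for opt in (OPRO(), TextGradOpt(), OptoPrime()):
    print(train_step(prompt, "Answers are too long.", opt))
```

The point of the pattern is that switching algorithms is a one-line change at construction time; the rest of the workflow code stays untouched.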
The table evaluates the frameworks in the following aspects:
- Computation Graph: Whether the optimizer leverages the computation graph of the workflow.
- Code as Functions: Whether the framework allows users to write actual executable Python functions rather than requiring them to wrap code in strings.
- Library Support: Whether the framework has a library to support the optimizer.
- Speed: TextGrad is about 2-3x slower than OptoPrime (Trace). OPRO has no concept of a computational graph and is therefore very fast.
- Large Graph: OptoPrime (Trace) represents the entire computation graph in context, so it may run into context-length issues on graphs with hundreds of operations or more. TextGrad does not have this context-length issue, but it can be very slow on large graphs.
We provide a comparison to validate our implementation of TextGrad in Trace: