Commit ee7c1be

Fixing links and /learn/optimization/optimizers/ documentation (#8012)
* Added relative path to RAG tutorial page. Fixes #8009
* Corrected link to outdated notebook
* Added link to dspy.BootstrapFinetune Tutorial
1 parent c422518 commit ee7c1be

File tree

1 file changed: +3 −3 lines changed


docs/docs/learn/optimization/optimizers.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -47,7 +47,7 @@ These optimizers extend the signature by automatically generating and including
 
 3. [**`BootstrapFewShotWithRandomSearch`**](/deep-dive/optimizers/bootstrap-fewshot): Applies `BootstrapFewShot` several times with random search over generated demonstrations, and selects the best program over the optimization. Parameters mirror those of `BootstrapFewShot`, with the addition of `num_candidate_programs`, which specifies the number of random programs evaluated over the optimization, including candidates of the uncompiled program, `LabeledFewShot` optimized program, `BootstrapFewShot` compiled program with unshuffled examples and `num_candidate_programs` of `BootstrapFewShot` compiled programs with randomized example sets.
 
-4. **`KNNFewShot`**. Uses k-Nearest Neighbors algorithm to find the nearest training example demonstrations for a given input example. These nearest neighbor demonstrations are then used as the trainset for the BootstrapFewShot optimization process. See [this notebook](https://github.com/stanfordnlp/dspy/blob/main/examples/knn.ipynb) for an example.
+4. **`KNNFewShot`**. Uses k-Nearest Neighbors algorithm to find the nearest training example demonstrations for a given input example. These nearest neighbor demonstrations are then used as the trainset for the BootstrapFewShot optimization process. See [this notebook](https://github.com/stanfordnlp/dspy/blob/main/examples/outdated_v2.4_examples/knn.ipynb) for an example.
 
 
 ### Automatic Instruction Optimization
```
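The `KNNFewShot` entry in the hunk above selects the training examples nearest to a given input and hands them to `BootstrapFewShot` as the trainset. A minimal sketch of that selection idea in plain Python (the bag-of-words `vectorize`, `cosine`, and `knn_demos` helpers and the toy dataset are illustrative inventions, not DSPy's API — a real setup would use an embedding model as the vectorizer):

```python
import math
from collections import Counter

def vectorize(text):
    # Toy bag-of-words vector; a real KNNFewShot setup would use an
    # embedding model instead of word counts.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(count * b[word] for word, count in a.items() if word in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def knn_demos(trainset, query, k=2):
    # Rank training examples by similarity to the query; the top-k
    # become the demonstration pool for the downstream optimizer.
    query_vec = vectorize(query)
    ranked = sorted(trainset,
                    key=lambda ex: cosine(vectorize(ex["question"]), query_vec),
                    reverse=True)
    return ranked[:k]

trainset = [
    {"question": "what is the capital of france", "answer": "Paris"},
    {"question": "what is the capital of japan", "answer": "Tokyo"},
    {"question": "how do plants make food", "answer": "photosynthesis"},
]
demos = knn_demos(trainset, "what is the capital of italy", k=2)
```

With this query, the two capital-city examples outrank the unrelated one — the selection behavior `KNNFewShot` relies on before the `BootstrapFewShot` optimization process runs on the selected examples.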
````diff
@@ -147,14 +147,14 @@ optimized_program = teleprompter.compile(YOUR_PROGRAM_HERE, trainset=YOUR_TRAINS
 optimized_rag = tp.compile(RAG(), trainset=trainset, max_bootstrapped_demos=2, max_labeled_demos=2)
 ```
 
-For a complete RAG example that you can run, start this [tutorial](http://127.0.0.1:8000/quick-start/getting-started-01/). It improves the quality of a RAG system over a subset of StackExchange communities from 53% to 61%.
+For a complete RAG example that you can run, start this [tutorial](/tutorials/rag/). It improves the quality of a RAG system over a subset of StackExchange communities from 53% to 61%.
 
 === "Optimizing weights for Classification"
 This is a minimal but fully runnable example of setting up a `dspy.ChainOfThought` module that classifies
 short texts into one of 77 banking labels and then using `dspy.BootstrapFinetune` with 2000 text-label pairs
 from the `Banking77` to finetune the weights of GPT-4o-mini for this task. We use the variant
 `dspy.ChainOfThoughtWithHint`, which takes an optional `hint` at bootstrapping time, to maximize the utility of
-the training data. Naturally, hints are not available at test time.
+the training data. Naturally, hints are not available at test time. More can be found in this [tutorial](/tutorials/classification_finetuning/).
 
 <details><summary>Click to show dataset setup code.</summary>
 
````
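The `num_candidate_programs` behavior described in the first hunk — evaluate several candidate programs built from randomized demonstration sets and keep the best — can be caricatured in plain Python. This is an illustrative sketch, not DSPy's implementation: the `evaluate` metric, the `random_search` helper, and the toy label data are all stand-ins.

```python
import random

def evaluate(demos, devset):
    # Stand-in metric: a dev example counts as covered when some demo
    # shares its label. A real optimizer would run the compiled program
    # on the dev set and score its predictions.
    return sum(1 for ex in devset if ex["label"] in {d["label"] for d in demos})

def random_search(trainset, devset, num_candidate_programs=8, max_demos=2, seed=0):
    """Try several random demo subsets; return the best one and its score."""
    rng = random.Random(seed)
    best_demos, best_score = [], float("-inf")
    for _ in range(num_candidate_programs):
        candidate = rng.sample(trainset, min(max_demos, len(trainset)))
        score = evaluate(candidate, devset)
        if score > best_score:
            best_demos, best_score = candidate, score
    return best_demos, best_score

trainset = [{"label": "a"}, {"label": "b"}, {"label": "c"}]
devset = [{"label": "a"}, {"label": "b"}]
best, score = random_search(trainset, devset, num_candidate_programs=8, max_demos=2)
```

The actual optimizer's candidate pool is richer than this sketch suggests: per the text above, it also includes the uncompiled program, a `LabeledFewShot`-optimized program, and a `BootstrapFewShot` compilation with unshuffled examples.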
