
Commit a54a7c0

Fix broken image links in offline-evaluation notebook (#425)
## Problem

It appears that links got broken when this algos-and-libraries directory got moved without updating image links inside the notebooks

## Solution

Adjust paths

## Type of Change

- [x] Bug fix (non-breaking change which fixes an issue)
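For reference, each broken link was simply missing the `experimental/` path segment introduced when the directory moved. Below is a minimal, hypothetical sketch of how such a bulk path adjustment could be done over the notebook file; it is illustrative only and not necessarily how this commit was produced.

```python
# Hypothetical sketch: rewrite the stale asset URL prefix inside the notebook file.
# OLD/NEW prefixes are taken from the diff below; the script itself is an assumption.
from pathlib import Path

OLD = "examples/master/learn/algos-and-libraries/offline-evaluation/assets/"
NEW = "examples/master/learn/experimental/algos-and-libraries/offline-evaluation/assets/"

nb_path = Path("learn/experimental/algos-and-libraries/offline-evaluation/offline-evaluation.ipynb")
text = nb_path.read_text(encoding="utf-8")
nb_path.write_text(text.replace(OLD, NEW), encoding="utf-8")
```

Run once over the notebook, a substitution like this would cover the 15 image references touched in the diff below.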
1 parent 070e28a commit a54a7c0

1 file changed: +15 -15 lines

learn/experimental/algos-and-libraries/offline-evaluation/offline-evaluation.ipynb

Lines changed: 15 additions & 15 deletions
@@ -29,7 +29,7 @@
 "\n",
 "In this notebook, we go through the most common offline metrics. These can be divided again into order-unaware (e.g., Recall@k and Precision@k) and order-aware metrics (e.g., MRR, MAP, and NDCG@k).\n",
 "\n",
-"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/algos-and-libraries/offline-evaluation/assets/metrics_diagram.png\" alt=\"Drawing\" style=\"width:400px;\"/></div> </center>"
+"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/experimental/algos-and-libraries/offline-evaluation/assets/metrics_diagram.png\" alt=\"Drawing\" style=\"width:600px;\"/></div> </center>"
 ]
 },
 {
@@ -40,15 +40,15 @@
 "\n",
 "Suppose that we have a small eight-image dataset (in reality this number is more likely to be in the million+ range) and searches for *\"cat in the box\"*. The retrieval engine returns all $k = 8$ results. These can be represented by the following images:\n",
 "\n",
-"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/algos-and-libraries/offline-evaluation/assets/example.png\" alt=\"Drawing\" style=\"width:550px;\"/></div> </center>\n",
+"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/experimental/algos-and-libraries/offline-evaluation/assets/example.png\" alt=\"Drawing\" style=\"width:550px;\"/></div> </center>\n",
 "\n",
 "From the above, $4$ out of $8$ results are relevant, we call them *actual relevant results* as they show a cat inside a box (see results $\\#2$, $\\#4$, $\\#5$, and $\\#7$). \n",
 "\n",
 "The other results are not relevant because they show *only* cats (see results $\\#1$, $\\#6$), a box only (see result $\\#3$), or a *dog* inside a box (see result $\\#8$). These results do not correspond to the search query.\n",
 "\n",
 "We have highlighted in cyan these actual relevant results.\n",
 "\n",
-"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/algos-and-libraries/offline-evaluation/assets/example_highlighted.png\" alt=\"Drawing\" style=\"width:550px;\"/></div> </center>\n",
+"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/experimental/algos-and-libraries/offline-evaluation/assets/example_highlighted.png\" alt=\"Drawing\" style=\"width:550px;\"/></div> </center>\n",
 "\n",
 "It is this dataset that we will use throughout this notebook. We will learn how to use offline metrics and define them in *Python*."
 ]
@@ -85,7 +85,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/algos-and-libraries/offline-evaluation/assets/confusion_matrix.png\" alt=\"Drawing\" style=\"width: 400px;\"/></div> </center>"
+"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/experimental/algos-and-libraries/offline-evaluation/assets/confusion_matrix.png\" alt=\"Drawing\" style=\"width: 400px;\"/></div> </center>"
 ]
 },
 {
@@ -94,7 +94,7 @@
 "source": [
 "Let's clarify this by looking at our example of cat in the box. Suppose that the software predicts as *positive* the first $2$ results. In this case, we will have $1$ true positive, $3$ true negative, $1$ false positive, and $3$ false negative.\n",
 "\n",
-"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/algos-and-libraries/offline-evaluation/assets/example_condition.png\" alt=\"Drawing\" style=\"width: 500px;\"/></div> </center>"
+"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/experimental/algos-and-libraries/offline-evaluation/assets/example_condition.png\" alt=\"Drawing\" style=\"width: 500px;\"/></div> </center>"
 ]
 },
 {
@@ -124,7 +124,7 @@
 "\n",
 "Let's go back to our example with $N = 8$, $K = 1...N$, and $4$ actual relevant results - see them below in cyan.\n",
 " \n",
-"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/algos-and-libraries/offline-evaluation/assets/example_highlighted.png\" alt=\"Drawing\" style=\"width:550px;\"/></div> </center>"
+"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/experimental/algos-and-libraries/offline-evaluation/assets/example_highlighted.png\" alt=\"Drawing\" style=\"width:550px;\"/></div> </center>"
 ]
 },
 {
@@ -137,7 +137,7 @@
 "\n",
 "$$Recall@2 = \\frac{1}{1 + 3} = \\frac{1}{4} = 0.25$$\n",
 "\n",
-"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/algos-and-libraries/offline-evaluation/assets/recall-at-two.png\" alt=\"Drawing\" style=\"width: 450px;\"/></div> </center>"
+"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/experimental/algos-and-libraries/offline-evaluation/assets/recall-at-two.png\" alt=\"Drawing\" style=\"width: 450px;\"/></div> </center>"
 ]
 },
 {
@@ -258,7 +258,7 @@
 "\n",
 "The other queries can be *\"white cat\"* and *\"dark cat\"*. The search results are shown below, where the cyan highlighted images correspond to the actual relevant results based on the query.\n",
 "\n",
-"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/algos-and-libraries/offline-evaluation/assets/mrr.png\" alt=\"Drawing\" style=\"width: 900px;\"/></div> </center>"
+"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/experimental/algos-and-libraries/offline-evaluation/assets/mrr.png\" alt=\"Drawing\" style=\"width: 900px;\"/></div> </center>"
 ]
 },
 {
@@ -377,7 +377,7 @@
 "\n",
 "$$Precision@2 = \\frac{1}{1 + 1} = \\frac{1}{2} = 0.50$$\n",
 "\n",
-"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/algos-and-libraries/offline-evaluation/assets/precision-at-two.png\" alt=\"Drawing\" style=\"width: 450px;\"/></div> </center>"
+"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/experimental/algos-and-libraries/offline-evaluation/assets/precision-at-two.png\" alt=\"Drawing\" style=\"width: 450px;\"/></div> </center>"
 ]
 },
 {
@@ -423,7 +423,7 @@
 "\n",
 "Following what previously explained, $Precision@k$ and $rel_k$ will be the following:\n",
 "\n",
-"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/algos-and-libraries/offline-evaluation/assets/example_MAP.png\" alt=\"Drawing\" style=\"width: 450px;\"/></div> </center>"
+"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/experimental/algos-and-libraries/offline-evaluation/assets/example_MAP.png\" alt=\"Drawing\" style=\"width: 450px;\"/></div> </center>"
 ]
 },
 {
@@ -432,7 +432,7 @@
 "source": [
 "Given that $Precision@k$ is multiplied by $rel_k$, the formula can be simplified when $rel_k = 0$ as $Precision@k * rel_k = 0$.\n",
 "\n",
-"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/algos-and-libraries/offline-evaluation/assets/example_MAP2.png\" alt=\"Drawing\" style=\"width: 250px;\"/></div> </center>\n",
+"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/experimental/algos-and-libraries/offline-evaluation/assets/example_MAP2.png\" alt=\"Drawing\" style=\"width: 250px;\"/></div> </center>\n",
 "\n",
 "We then need to calculate $\\sum_{k = 1}^{n} (Precision@k * rel_k)$ for each query, and divide it by the *# of relevant results* to get the $AP@K_q$ score. The # of relevant results will be $4$ for query $\\#1$, $4$ for query $\\#2$, and $2$ for query $\\#3$."
 ]
@@ -543,7 +543,7 @@
 "source": [
 "The $CG@K$ metric is calculated as the sum of the top-$K$ relevance scores. Differently from before, the results from the user's search will not be divided into relevant and not relevant but will be rated from the less to the most relevant. We can use different colors to characterize the different scores: dark grey will be the color associated to the less relevant result $(0)$, cyan will be the color associated to the most relevant result $(4)$.\n",
 "\n",
-"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/algos-and-libraries/offline-evaluation/assets/relevance_score.png\" alt=\"Drawing\" style=\"width: 400px;\"/></div> </center>"
+"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/experimental/algos-and-libraries/offline-evaluation/assets/relevance_score.png\" alt=\"Drawing\" style=\"width: 400px;\"/></div> </center>"
 ]
 },
 {
@@ -552,7 +552,7 @@
 "source": [
 "Let's now consider another example with $1$ query (this time, the user will search for *\"***white*** cat in the box\"*), $K = 8$, $k = 1...K$, and let's include the relevance score. Based on judgement, and considering their position, $k$, the relevance score can be assigned as follows:\n",
 "\n",
-"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/algos-and-libraries/offline-evaluation/assets/ndcg_relevance.png\" alt=\"Drawing\" style=\"width: 450px;\"/></div> </center>\n",
+"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/experimental/algos-and-libraries/offline-evaluation/assets/ndcg_relevance.png\" alt=\"Drawing\" style=\"width: 450px;\"/></div> </center>\n",
 "\n",
 "We can then calculate the cumulative gain *up to* position $K$, or $CG@K$, by summing up the relevance scores up to $K$."
 ]
@@ -595,7 +595,7 @@
 "source": [
 "It's worth noting how this metric does not consider the position of the results. For example, if we swap the first two results so that the relevance for $k = 1$ equals $4$ and the relevance for $k = 2$ equals $0$, we can see that $CG@2$ is still $4$, even though having relevance $4$ as first result is often better than having it as the second.\n",
 "\n",
-"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/algos-and-libraries/offline-evaluation/assets/ndcg_relevance_two.png\" alt=\"Drawing\" style=\"width: 450px;\"/></div> </center>"
+"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/experimental/algos-and-libraries/offline-evaluation/assets/ndcg_relevance_two.png\" alt=\"Drawing\" style=\"width: 450px;\"/></div> </center>"
 ]
 },
 {
@@ -684,7 +684,7 @@
 "\n",
 "If we consider our example, ideally, we would have wanted our results to be sorted by relevance, as follows:\n",
 "\n",
-"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/algos-and-libraries/offline-evaluation/assets/ndcg_relevance_sorted.png\" alt=\"Drawing\" style=\"width: 500px;\"/></div> </center>"
+"<center><div> <img src=\"https://raw.githubusercontent.com/pinecone-io/examples/master/learn/experimental/algos-and-libraries/offline-evaluation/assets/ndcg_relevance_sorted.png\" alt=\"Drawing\" style=\"width: 500px;\"/></div> </center>"
 ]
 },
 {
