Commit 71ece5c

[Typo] Learning/QML: Make feature map correspond to given image (#4410)
_Reopening of PR #4407 due to Copilot review permission problems._

Tackles issue: #4406
1 parent 61b2050 commit 71ece5c

File tree

1 file changed (+2, -2 lines)


learning/courses/quantum-machine-learning/classical-ml-review.ipynb

Lines changed: 2 additions & 2 deletions
@@ -73,7 +73,7 @@
 "\n",
 "In this and the following subsection, the discussion focuses on mappings to higher dimensions. The point here is to explain the \"kernel trick\" in the context of mappings between spaces, and thus set the stage for what a quantum kernel is. The point is __not__ that higher dimensions in quantum wave functions solve all of our problems. As mentioned in the introduction, classical Gaussian feature maps are already infinite-dimensional. The dimensionality of data features is important, but high-dimensional quantum states are not sufficient for improvement over classical methods.\n",
 "\n",
-"Graphically, one can easily see how we can generalize the SVM approach to cases where the original data are not linearly separable, given the right mapping to higher dimensions. Looking at the two-dimensional data on the left, we can see that there is no linear decision boundary that can separate the two classes. However, we can consider adding a third feature to our feature space. If this new feature is - for instance - the product of the previous two features $x_1$ and $x_2$, then the data becomes linearly separable. This also means that we can run the support vector machine algorithm successfully now on this higher dimensional feature space.\n",
+"Graphically, one can easily see how we can generalize the SVM approach to cases where the original data are not linearly separable, given the right mapping to higher dimensions. Looking at the two-dimensional data on the left, we can see that there is no linear decision boundary that can separate the two classes. However, we can consider adding a third feature to our feature space. If this new feature is - for instance - the distance to the origin of the previous two features $x_1$ and $x_2$, then the data becomes linearly separable. This also means that we can run the support vector machine algorithm successfully now on this higher dimensional feature space.\n",
 "\n",
 "![A diagram showing a ring of one data type with a second data type filling in the middle of the ring. A second cell shows the data projected into 3D, as in a bowl shape. Now the data are linearly separable.](/learning/images/courses/quantum-machine-learning/classical-ml-review/qml-cr-background-2d-3d.avif)\n",
 "\n",
@@ -83,7 +83,7 @@
 "\\vec{x} = \\begin{pmatrix}x_1 \\\\ x_2 \\end{pmatrix}\n",
 "$$\n",
 "$$\n",
-"\\vec{\\Phi}(\\vec{x}) = \\begin{pmatrix}x_1 \\\\ x_2 \\\\ x_1 x_2\\end{pmatrix}\n",
+"\\vec{\\Phi}(\\vec{x}) = \\begin{pmatrix}x_1 \\\\ x_2 \\\\ {x_1}^2+{x_2}^2\\end{pmatrix}\n",
 "$$\n",
 "\n",
 "Some feature maps may map into very high dimensional spaces. In such cases, the high-dimensionality makes inner products more computationally expensive. We will return to that point below.\n",
