Navigate to `src/ex1_pca.py` and have a look at the `__main__` function:
Now we will implement the functions to perform a PCA transform and an inverse transform on our 2D array. First implement the function `pca_transform`:
4. Compute the mean vector over the features of the input matrix. The resulting mean vector should have the shape $(d, 1)$. (Hint: use `keepdims=True` in `numpy.mean` to keep the dimensions for easier subtraction.)
5. Center the data by subtracting the mean from the 2D image array.
6. Compute the covariance matrix of the centered data. (Hint: `numpy.cov`.)
7. Perform the eigendecomposition of the covariance matrix. (Hint: `numpy.linalg.eigh`.)
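Steps 4–7 can be sketched as follows. This is a minimal sketch that assumes the input is a `(d, n)` NumPy array with `d` features along the rows; the exact signature expected in `src/ex1_pca.py` may differ:

```python
import numpy as np

def pca_transform(data):
    """Sketch of steps 4-7 for a (d, n) input array."""
    # Step 4: mean over the features; keepdims=True gives shape (d, 1)
    mean = np.mean(data, axis=1, keepdims=True)
    # Step 5: center the data by subtracting the mean (broadcasts over columns)
    centered = data - mean
    # Step 6: covariance matrix of the centered data, shape (d, d)
    cov = np.cov(centered)
    # Step 7: eigendecomposition of the symmetric covariance matrix;
    # np.linalg.eigh returns eigenvalues in ascending order
    eigenvalues, eigenvectors = np.linalg.eigh(cov)
    return mean, centered, eigenvalues, eigenvectors
```

Note that `numpy.linalg.eigh` sorts eigenvalues in ascending order, so you may want to reorder them (and the corresponding eigenvector columns) by descending eigenvalue before selecting components.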
Next, implement the function `pca_inverse_transform`, which reconstructs the data from the top `n_comp` principal components following these steps:
10. Select the first `n_comp` components from the given eigenvectors.
11. Project the centered data onto the space spanned by the selected eigenvectors by multiplying the transposed selected eigenvectors with the centered data matrix; the result is the reduced data.
12. Reconstruct the data by projecting it back to the original space: multiply the selected eigenvectors with the reduced data. Don't forget to add the mean vector afterwards.
13. Return the reconstructed data.
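Steps 10–13 can be sketched like this, assuming the eigenvectors are passed as columns already sorted by descending eigenvalue; the argument names are illustrative, not the required signature:

```python
import numpy as np

def pca_inverse_transform(centered, eigenvectors, mean, n_comp):
    """Sketch of steps 10-13; eigenvector columns assumed sorted by
    descending eigenvalue."""
    # Step 10: select the first n_comp eigenvectors, shape (d, n_comp)
    components = eigenvectors[:, :n_comp]
    # Step 11: project the centered data onto the principal subspace
    reduced = components.T @ centered        # shape (n_comp, n)
    # Step 12: project back to the original space and re-add the mean
    reconstructed = components @ reduced + mean
    # Step 13: return the reconstructed data
    return reconstructed
```

A useful sanity check: with `n_comp` equal to the full dimension `d`, the eigenvector matrix is orthogonal, so the reconstruction should match the original data exactly.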
Now, before returning to the `__main__` function, we also want to calculate the explained variance associated with our principal components. For that, implement the `expl_var` function following these steps:
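As a reference point, the explained variance ratio of a principal component is its eigenvalue divided by the sum of all eigenvalues. A minimal sketch under that standard definition (the exact interface of `expl_var` is up to the exercise):

```python
import numpy as np

def expl_var(eigenvalues):
    """Fraction of the total variance explained by each component."""
    # Each eigenvalue measures the variance along its principal axis,
    # so normalizing by the total gives per-component ratios summing to 1.
    return eigenvalues / np.sum(eigenvalues)
```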