Commit 91e5637

Merge pull request #218 from tholoz/patch-1
Fixed markup typo
2 parents bba63fc + 650bfe7

1 file changed: 8 additions, 6 deletions


docs/comparing_clustering_algorithms.rst

Lines changed: 8 additions & 6 deletions
@@ -171,14 +171,16 @@ multiple different clusterings. This does not engender much confidence
 in any individual clustering that may result.
 
 So, in summary, here's how K-Means seems to stack up against out
-desiderata: \* **Don't be wrong!**: K-means is going to throw points
+desiderata:
+- **Don't be wrong!**: K-means is going to throw points
 into clusters whether they belong or not; it also assumes you clusters
-are globular. K-Means scores very poorly on this point. \* **Intuitive
-parameters**: If you have a good intuition for how many clusters the
+are globular. K-Means scores very poorly on this point.
+- **Intuitive parameters**: If you have a good intuition for how many clusters the
 dataset your exploring has then great, otherwise you might have a
-problem. \* **Stability**: Hopefully the clustering is stable for your
-data. Best to have many runs and check though. \* **Performance**: This
-is K-Means big win. It's a simple algorithm and with the right tricks
+problem.
+- **Stability**: Hopefully the clustering is stable for your
+data. Best to have many runs and check though.
+- **Performance**: This is K-Means big win. It's a simple algorithm and with the right tricks
 and optimizations can be made exceptionally efficient. There are few
 algorithms that can compete with K-Means for performance. If you have
 truly huge data then K-Means might be your only option.
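The "Stability" point in the diffed text suggests running K-Means many times and checking that the clusterings agree. A minimal sketch of what such a check could look like, assuming scikit-learn is available (the dataset and the choice of adjusted Rand index as the agreement measure are illustrative, not from the commit):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

# Synthetic globular data, the case K-Means handles best.
X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

# Run K-Means several times with different random seeds.
labelings = [
    KMeans(n_clusters=4, n_init=10, random_state=seed).fit_predict(X)
    for seed in range(5)
]

# Compare every pair of runs; an adjusted Rand index of 1.0 means the
# two clusterings are identical up to a relabeling of the clusters.
scores = [
    adjusted_rand_score(labelings[i], labelings[j])
    for i in range(len(labelings))
    for j in range(i + 1, len(labelings))
]
print(f"mean pairwise ARI over {len(scores)} pairs: {np.mean(scores):.3f}")
```

If the mean pairwise score drops well below 1.0, different seeds are producing genuinely different clusterings, which is the instability the text warns about.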
