
Commit cdcbda9
Merge pull request #79 from michaelaye/patch-2
Fixing a few typos
2 parents: be22c05 + 3ef01df

1 file changed: +5 −5 lines


docs/parameter_selection.rst (5 additions, 5 deletions)
@@ -15,7 +15,7 @@ Selecting ``min_cluster_size``
 
 The primary parameter to effect the resulting clustering is
 ``min_cluster_size``. Ideally this is a relatively intuitive parameter
-to select -- set it to the smallest size grouping that you sih to
+to select -- set it to the smallest size grouping that you wish to
 consider a cluster. It can have slightly non-obvious effects however.
 Let's consider the digits dataset from sklearn. We can project the data
 into two dimensions to visualize it via t-SNE.
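The hunk above refers to projecting the digits data into two dimensions with t-SNE. A minimal sketch of that projection, assuming scikit-learn is installed (the exact code in the docs may differ), might look like:

```python
# Sketch: project the 64-dimensional digits data down to 2D with t-SNE,
# as the docs describe, so it can be visualized.
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

digits = load_digits()  # 1797 samples, 64 features each
projection = TSNE(n_components=2, random_state=42).fit_transform(digits.data)
print(projection.shape)  # one 2D point per digit image
```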
@@ -31,7 +31,7 @@ into two dimensions to visualize it via t-SNE.
 .. image:: images/parameter_selection_3_1.png
 
 
-If we cluster this data in the full 64 dimensional space with hdbscan we
+If we cluster this data in the full 64 dimensional space with HDBSCAN\* we
 can see some effects from varying the ``min_cluster_size``.
 
 We start with a ``min_cluster_size`` of 15.
@@ -54,7 +54,7 @@ We start with a ``min_cluster_size`` of 15.
 Increasing the ``min_cluster_size`` to 30 reduces the number of
 clusters, merging some together. This is a result of HDBSCAN\*
 reoptimizing which flat clustering provides greater stability under a
-slightly different notion of what constitutes cluster.
+slightly different notion of what constitutes a cluster.
 
 .. code:: python
 
@@ -115,7 +115,7 @@ pruned out. Thus ``min_cluster_size`` does behave more closely to our
 intuitions, but only if we fix ``min_samples``. If you wish to explore
 different ``min_cluster_size`` settings with a fixed ``min_samples``
 value, especially for larger dataset sizes, you can cache the hard
-computation, and recompute onlythe relatively cheap flat cluster
+computation, and recompute only the relatively cheap flat cluster
 extraction using the ``memory`` parameter, which makes use of ``joblib``
 [link].
 
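The caching pattern this hunk describes can be sketched with ``joblib`` directly. The final commented line assumes the ``hdbscan`` package's ``memory`` parameter, as named in the docs:

```python
# Sketch of the joblib-backed caching the docs describe: a Memory object
# stores the result of the hard computation on disk, so repeated calls
# only redo the cheap part.
import tempfile
from joblib import Memory

cache_dir = tempfile.mkdtemp()
mem = Memory(location=cache_dir, verbose=0)

calls = []  # records how many times the body actually runs

@mem.cache
def hard_computation(x):
    calls.append(x)  # only appended on a real (non-cached) invocation
    return x * x

hard_computation(3)  # computed and written to the cache
hard_computation(3)  # served from the cache; the body does not run again
print(len(calls))    # -> 1

# With hdbscan itself the same object would be passed straight in, e.g.:
# clusterer = hdbscan.HDBSCAN(min_cluster_size=60, memory=mem)
```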

@@ -158,7 +158,7 @@ leaving the ``min_cluster_size`` at 60, but reducing ``min_samples`` to
 
 Now most points are clustered, and there are much fewer noise points.
 Steadily increasing ``min_samples`` will, as we saw in the examples
-above, make the clustering progressivly more conservative, culiminating
+above, make the clustering progressivly more conservative, culminating
 in the example above where ``min_samples`` was set to 60 and we had only
 two clusters with most points declared as noise.
 
