Commit 894bd92

stars to dashes
1 parent 9daab50 commit 894bd92

4 files changed, +31 -31 lines changed

4 files changed

+31
-31
lines changed

source/clustering.md

Lines changed: 9 additions & 9 deletions
@@ -39,16 +39,16 @@ including techniques to choose the number of clusters.
 
 By the end of the chapter, readers will be able to do the following:
 
-* Describe a situation in which clustering is an appropriate technique to use,
+- Describe a situation in which clustering is an appropriate technique to use,
   and what insight it might extract from the data.
-* Explain the K-means clustering algorithm.
-* Interpret the output of a K-means analysis.
-* Differentiate between clustering, classification, and regression.
-* Identify when it is necessary to scale variables before clustering, and do this using Python.
-* Perform K-means clustering in Python using `scikit-learn`.
-* Use the elbow method to choose the number of clusters for K-means.
-* Visualize the output of K-means clustering in Python using a colored scatter plot.
-* Describe advantages, limitations and assumptions of the K-means clustering algorithm.
+- Explain the K-means clustering algorithm.
+- Interpret the output of a K-means analysis.
+- Differentiate between clustering, classification, and regression.
+- Identify when it is necessary to scale variables before clustering, and do this using Python.
+- Perform K-means clustering in Python using `scikit-learn`.
+- Use the elbow method to choose the number of clusters for K-means.
+- Visualize the output of K-means clustering in Python using a colored scatter plot.
+- Describe advantages, limitations and assumptions of the K-means clustering algorithm.
 
 
 ## Clustering
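The objectives in this hunk reference scaling, K-means in `scikit-learn`, and the elbow method. A minimal sketch of that workflow, not part of the commit, using synthetic data and illustrative names:

```python
# Sketch of the K-means workflow the objectives describe: scale the
# variables, fit K-means for several K, and compare inertias (elbow method).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=1)
# Two well-separated synthetic blobs, so K=2 should be the "elbow".
data = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(8, 1, (50, 2))])

scaled = StandardScaler().fit_transform(data)  # scale before clustering

# Elbow method: total within-cluster sum of squares (inertia) for each K.
inertias = {}
for k in range(1, 6):
    km = KMeans(n_clusters=k, n_init=10, random_state=1).fit(scaled)
    inertias[k] = km.inertia_

# Final clustering with the chosen K.
labels = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(scaled)
print({k: round(v, 1) for k, v in inertias.items()})
```

The inertia values decrease as K grows; the elbow is where the decrease levels off sharply.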

source/inference.md

Lines changed: 10 additions & 10 deletions
@@ -36,16 +36,16 @@ populations and then introduce two common techniques in statistical inference:
 
 By the end of the chapter, readers will be able to do the following:
 
-* Describe real-world examples of questions that can be answered with statistical inference.
-* Define common population parameters (e.g., mean, proportion, standard deviation) that are often estimated using sampled data, and estimate these from a sample.
-* Define the following statistical sampling terms: population, sample, population parameter, point estimate, and sampling distribution.
-* Explain the difference between a population parameter and a sample point estimate.
-* Use Python to draw random samples from a finite population.
-* Use Python to create a sampling distribution from a finite population.
-* Describe how sample size influences the sampling distribution.
-* Define bootstrapping.
-* Use Python to create a bootstrap distribution to approximate a sampling distribution.
-* Contrast the bootstrap and sampling distributions.
+- Describe real-world examples of questions that can be answered with statistical inference.
+- Define common population parameters (e.g., mean, proportion, standard deviation) that are often estimated using sampled data, and estimate these from a sample.
+- Define the following statistical sampling terms: population, sample, population parameter, point estimate, and sampling distribution.
+- Explain the difference between a population parameter and a sample point estimate.
+- Use Python to draw random samples from a finite population.
+- Use Python to create a sampling distribution from a finite population.
+- Describe how sample size influences the sampling distribution.
+- Define bootstrapping.
+- Use Python to create a bootstrap distribution to approximate a sampling distribution.
+- Contrast the bootstrap and sampling distributions.
 
 +++
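The objectives in this hunk mention sampling from a finite population and building a bootstrap distribution. A minimal sketch of both, not part of the commit, with a synthetic population and illustrative names:

```python
# Sketch of sampling and bootstrapping as the objectives describe:
# draw one sample from a finite population, then resample it with
# replacement to approximate the sampling distribution of the mean.
import numpy as np

rng = np.random.default_rng(seed=7)
population = rng.normal(loc=50, scale=10, size=10_000)  # finite population
sample = rng.choice(population, size=40, replace=False)  # one random sample

point_estimate = sample.mean()  # sample point estimate of the population mean

# Bootstrap distribution: resample the *sample* with replacement many times.
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(2000)
])

# The bootstrap distribution centers near the point estimate, and its
# spread approximates the spread of the true sampling distribution.
print(round(point_estimate, 1), round(boot_means.mean(), 1))
```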

source/regression1.md

Lines changed: 8 additions & 8 deletions
@@ -51,15 +51,15 @@ however that is beyond the scope of this book.
 ## Chapter learning objectives
 By the end of the chapter, readers will be able to do the following:
 
-* Recognize situations where a regression analysis would be appropriate for making predictions.
-* Explain the K-nearest neighbors (K-NN) regression algorithm and describe how it differs from K-NN classification.
-* Interpret the output of a K-NN regression.
-* In a data set with two or more variables, perform K-nearest neighbors regression in Python.
-* Evaluate K-NN regression prediction quality in Python using the root mean squared prediction error (RMSPE).
-* Estimate the RMSPE in Python using cross-validation or a test set.
-* Choose the number of neighbors in K-nearest neighbors regression by minimizing estimated cross-validation RMSPE.
+- Recognize situations where a regression analysis would be appropriate for making predictions.
+- Explain the K-nearest neighbors (K-NN) regression algorithm and describe how it differs from K-NN classification.
+- Interpret the output of a K-NN regression.
+- In a data set with two or more variables, perform K-nearest neighbors regression in Python.
+- Evaluate K-NN regression prediction quality in Python using the root mean squared prediction error (RMSPE).
+- Estimate the RMSPE in Python using cross-validation or a test set.
+- Choose the number of neighbors in K-nearest neighbors regression by minimizing estimated cross-validation RMSPE.
 - Describe underfitting and overfitting, and relate it to the number of neighbors in K-nearest neighbors regression.
-* Describe the advantages and disadvantages of K-nearest neighbors regression.
+- Describe the advantages and disadvantages of K-nearest neighbors regression.
 
 +++
 
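The objectives in this hunk mention choosing the number of neighbors by minimizing cross-validated RMSPE and evaluating on a test set. A minimal sketch of that, not part of the commit, on synthetic data with illustrative names:

```python
# Sketch of K-NN regression with K chosen by cross-validated RMSPE.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=3)
X = rng.uniform(0, 10, size=(200, 1))
y = 2.0 * X.ravel() + rng.normal(0, 1, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=3
)

# Choose K by minimizing cross-validated RMSPE (scikit-learn maximizes
# scores, so RMSPE enters as its negation).
pipe = make_pipeline(StandardScaler(), KNeighborsRegressor())
grid = GridSearchCV(
    pipe,
    {"kneighborsregressor__n_neighbors": range(1, 31)},
    cv=5,
    scoring="neg_root_mean_squared_error",
)
grid.fit(X_train, y_train)

best_k = grid.best_params_["kneighborsregressor__n_neighbors"]
# Estimate prediction quality on the held-out test set.
test_rmspe = np.sqrt(np.mean((grid.predict(X_test) - y_test) ** 2))
```

Very small K tends to overfit (low training error, high RMSPE) and very large K underfits, which is why the cross-validated minimum lies in between.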

source/regression2.md

Lines changed: 4 additions & 4 deletions
@@ -38,10 +38,10 @@ predictor.
 ## Chapter learning objectives
 By the end of the chapter, readers will be able to do the following:
 
-* Use Python to fit simple and multivariable linear regression models on training data.
-* Evaluate the linear regression model on test data.
-* Compare and contrast predictions obtained from K-nearest neighbors regression to those obtained using linear regression from the same data set.
-* Describe how linear regression is affected by outliers and multicollinearity.
+- Use Python to fit simple and multivariable linear regression models on training data.
+- Evaluate the linear regression model on test data.
+- Compare and contrast predictions obtained from K-nearest neighbors regression to those obtained using linear regression from the same data set.
+- Describe how linear regression is affected by outliers and multicollinearity.
 
 +++
 
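The objectives in this hunk mention fitting a linear regression on training data and evaluating it on test data. A minimal sketch of that, not part of the commit, on synthetic data with illustrative names:

```python
# Sketch of fitting and evaluating a simple linear regression
# on a train/test split.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=5)
X = rng.uniform(0, 10, size=(150, 1))
y = 3.0 * X.ravel() + 4.0 + rng.normal(0, 1, size=150)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=5
)

lm = LinearRegression().fit(X_train, y_train)  # fit on training data only

# Evaluate on the held-out test set with RMSPE.
rmspe = np.sqrt(np.mean((lm.predict(X_test) - y_test) ** 2))
slope, intercept = lm.coef_[0], lm.intercept_
```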

0 commit comments
