
Commit ca1562b

Merge pull request #352 from UBC-DSCI/patch-figure-center
Added centring to figures where it was missing
2 parents bdc7eb5 + cc0debd commit ca1562b

13 files changed (+147, -10 lines)
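
The pattern this PR applies throughout is adding `fig.align = "center"` to knitr chunk headers so that the rendered figures are horizontally centred. As a minimal sketch of what a centred figure chunk looks like in an R Markdown source (the chunk label, data, and caption below are illustrative, not from the book):

```{r example-centred-plot, fig.height = 4, fig.width = 4.35, fig.align = "center", fig.cap = "An example centred figure."}
library(ggplot2)
# With fig.align = "center", knitr centres the figure in the rendered output
ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point()
```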

classification1.Rmd

Lines changed: 10 additions & 0 deletions
@@ -1373,3 +1373,13 @@ wkflw_plot <-
 
 wkflw_plot
 ```
+
+## Exercises
+
+Practice exercises for the material covered in this chapter
+can be found in the accompanying [worksheet](https://ubc-dsci.github.io/data-science-a-first-intro-worksheets/worksheet_06/worksheet_06.ipynb).
+The worksheet tries to provide automated feedback
+and help guide you through the problems.
+To make sure this functionality works as intended,
+please follow the instructions for computer setup needed to run the worksheets
+found in Chapter \@ref(move-to-your-own-machine).

classification2.Rmd

Lines changed: 10 additions & 0 deletions
@@ -1342,6 +1342,16 @@ fwd_sel_accuracies_plot <- accuracies |>
 fwd_sel_accuracies_plot
 ```
 
+## Exercises
+
+Practice exercises for the material covered in this chapter
+can be found in the accompanying [worksheet](https://ubc-dsci.github.io/data-science-a-first-intro-worksheets/worksheet_07/worksheet_07.ipynb).
+The worksheet tries to provide automated feedback
+and help guide you through the problems.
+To make sure this functionality works as intended,
+please follow the instructions for computer setup needed to run the worksheets
+found in Chapter \@ref(move-to-your-own-machine).
+
 ## Additional resources
 - The [`tidymodels` website](https://tidymodels.org/packages) is an excellent reference for more details on, and advanced usage of, the functions and packages in the past two chapters. Aside from that, it also has a [nice beginner's tutorial](https://www.tidymodels.org/start/) and [an extensive list of more advanced examples](https://www.tidymodels.org/learn/) that you can use to continue learning beyond the scope of this book. It's worth noting that the `tidymodels` package does a lot more than just classification, and so the examples on the website similarly go beyond classification as well. In the next two chapters, you'll learn about another kind of predictive modeling setting, so it might be worth visiting the website only after reading through those chapters.
 - [An Introduction to Statistical Learning](https://www.statlearning.com/) [-@james2013introduction] provides a great next stop in the process of learning about classification. Chapter 4 discusses additional basic techniques for classification that we do not cover, such as logistic regression, linear discriminant analysis, and naive Bayes. Chapter 5 goes into much more detail about cross-validation. Chapters 8 and 9 cover decision trees and support vector machines, two very popular but more advanced classification methods. Finally, Chapter 6 covers a number of methods for selecting predictor variables. Note that while this book is still a very accessible introductory text, it requires a bit more mathematical background than we require.

clustering.Rmd

Lines changed: 20 additions & 10 deletions
@@ -164,7 +164,7 @@ penguin_data
 Next, we can create a scatter plot using this data set
 to see if we can detect subtypes or groups in our data set.
 
-```{r 10-toy-example-plot, warning = FALSE, fig.height = 4, fig.width = 4.35, fig.cap = "Scatter plot of standardized bill length versus standardized flipper length."}
+```{r 10-toy-example-plot, warning = FALSE, fig.height = 4, fig.width = 4.35, fig.align = "center", fig.cap = "Scatter plot of standardized bill length versus standardized flipper length."}
 ggplot(data, aes(x = flipper_length_standardized,
                  y = bill_length_standardized)) +
   geom_point() +
@@ -198,7 +198,7 @@ This procedure will separate the data into groups;
 Figure \@ref(fig:10-toy-example-clustering) shows these groups
 denoted by colored scatter points.
 
-```{r 10-toy-example-clustering, echo = FALSE, warning = FALSE, fig.height = 4, fig.width = 5, fig.cap = "Scatter plot of standardized bill length versus standardized flipper length with colored groups."}
+```{r 10-toy-example-clustering, echo = FALSE, warning = FALSE, fig.height = 4, fig.width = 5, fig.align = "center", fig.cap = "Scatter plot of standardized bill length versus standardized flipper length with colored groups."}
 ggplot(data, aes(y = bill_length_standardized,
                  x = flipper_length_standardized, color = cluster)) +
   geom_point() +
@@ -256,7 +256,7 @@ in Figure \@ref(fig:10-toy-example-clus1-center).
 
 (ref:10-toy-example-clus1-center) Cluster 1 from the `penguin_data` data set example. Observations are in blue, with the cluster center highlighted in red.
 
-```{r 10-toy-example-clus1-center, echo = FALSE, warning = FALSE, fig.height = 4, fig.width = 4.35, fig.cap = "(ref:10-toy-example-clus1-center)"}
+```{r 10-toy-example-clus1-center, echo = FALSE, warning = FALSE, fig.height = 4, fig.width = 4.35, fig.align = "center", fig.cap = "(ref:10-toy-example-clus1-center)"}
 base <- ggplot(data, aes(x = flipper_length_standardized, y = bill_length_standardized)) +
   geom_point() +
   xlab("Flipper Length (standardized)") +
@@ -303,7 +303,7 @@ These distances are denoted by lines in Figure \@ref(fig:10-toy-example-clus1-di
 
 (ref:10-toy-example-clus1-dists) Cluster 1 from the `penguin_data` data set example. Observations are in blue, with the cluster center highlighted in red. The distances from the observations to the cluster center are represented as black lines.
 
-```{r 10-toy-example-clus1-dists, echo = FALSE, warning = FALSE, fig.height = 4, fig.width = 4.35, fig.cap = "(ref:10-toy-example-clus1-dists)"}
+```{r 10-toy-example-clus1-dists, echo = FALSE, warning = FALSE, fig.height = 4, fig.width = 4.35, fig.align = "center", fig.cap = "(ref:10-toy-example-clus1-dists)"}
 base <- ggplot(clus1) +
   geom_point(aes(y = bill_length_standardized,
                  x = flipper_length_standardized),
@@ -342,7 +342,7 @@ Figure \@ref(fig:10-toy-example-all-clus-dists).
 
 (ref:10-toy-example-all-clus-dists) All clusters from the `penguin_data` data set example. Observations are in orange, blue, and yellow with the cluster center highlighted in red. The distances from the observations to each of the respective cluster centers are represented as black lines.
 
-```{r 10-toy-example-all-clus-dists, echo = FALSE, warning = FALSE, fig.height = 4, fig.width = 5, fig.cap = "(ref:10-toy-example-all-clus-dists)"}
+```{r 10-toy-example-all-clus-dists, echo = FALSE, warning = FALSE, fig.height = 4, fig.width = 5, fig.align = "center", fig.cap = "(ref:10-toy-example-all-clus-dists)"}
 
 
 all_clusters_base <- data |>
@@ -408,7 +408,7 @@ and randomly assigning a roughly equal number of observations
 to each of the K clusters.
 An example random initialization is shown in Figure \@ref(fig:10-toy-kmeans-init).
 
-```{r 10-toy-kmeans-init, echo = FALSE, message = FALSE, warning = FALSE, fig.height = 4, fig.width = 4.35, fig.cap = "Random initialization of labels."}
+```{r 10-toy-kmeans-init, echo = FALSE, message = FALSE, warning = FALSE, fig.height = 4, fig.width = 4.35, fig.align = "center", fig.cap = "Random initialization of labels."}
 set.seed(14)
 penguin_data["label"] <- factor(sample(1:3, nrow(penguin_data), replace = TRUE))
 
@@ -439,7 +439,7 @@ and the right column depicts the reassignment of data to clusters.
 
 (ref:10-toy-kmeans-iter) First four iterations of K-means clustering on the `penguin_data` example data set. Each row corresponds to an iteration, where the left column depicts the center update, and the right column depicts the reassignment of data to clusters. Cluster centers are indicated by larger points that are outlined in black.
 
-```{r 10-toy-kmeans-iter, echo = FALSE, warning = FALSE, fig.height = 16, fig.width = 8, fig.cap = "(ref:10-toy-kmeans-iter)"}
+```{r 10-toy-kmeans-iter, echo = FALSE, warning = FALSE, fig.height = 16, fig.width = 8, fig.align = "center", fig.cap = "(ref:10-toy-kmeans-iter)"}
 list_plot_cntrs <- vector(mode = "list", length = 4)
 list_plot_lbls <- vector(mode = "list", length = 4)
 
@@ -546,7 +546,7 @@ These, however, are beyond the scope of this book.
 Unlike the classification and regression models we studied in previous chapters, K-means \index{K-means!restart,nstart} can get "stuck" in a bad solution.
 For example, Figure \@ref(fig:10-toy-kmeans-bad-init) illustrates an unlucky random initialization by K-means.
 
-```{r 10-toy-kmeans-bad-init, echo = FALSE, warning = FALSE, message = FALSE, fig.height = 4, fig.width = 4.35, fig.cap = "Random initialization of labels."}
+```{r 10-toy-kmeans-bad-init, echo = FALSE, warning = FALSE, message = FALSE, fig.height = 4, fig.width = 4.35, fig.align = "center", fig.cap = "Random initialization of labels."}
 penguin_data <- penguin_data |>
   mutate(label = as_factor(c(3L, 3L, 1L, 1L, 2L, 1L, 2L, 1L, 1L,
                              1L, 3L, 1L, 2L, 2L, 2L, 3L, 3L, 3L)))
@@ -567,7 +567,7 @@ Figure \@ref(fig:10-toy-kmeans-bad-iter) shows what the iterations of K-means wo
 
 (ref:10-toy-kmeans-bad-iter) First five iterations of K-means clustering on the `penguin_data` example data set with a poor random initialization. Each row corresponds to an iteration, where the left column depicts the center update, and the right column depicts the reassignment of data to clusters. Cluster centers are indicated by larger points that are outlined in black.
 
-```{r 10-toy-kmeans-bad-iter, echo = FALSE, warning = FALSE, fig.height = 20, fig.width = 8, fig.cap = "(ref:10-toy-kmeans-bad-iter)"}
+```{r 10-toy-kmeans-bad-iter, echo = FALSE, warning = FALSE, fig.height = 20, fig.width = 8, fig.align = "center", fig.cap = "(ref:10-toy-kmeans-bad-iter)"}
 list_plot_cntrs <- vector(mode = "list", length = 5)
 list_plot_lbls <- vector(mode = "list", length = 5)
 
@@ -959,7 +959,7 @@ but there is a trade-off that doing many clusterings
 could take a long time.
 So this is something that needs to be balanced.
 
-```{r 10-choose-k-nstart, fig.height = 4, fig.width = 4.35, message= F, warning = F, fig.cap = "A plot showing the total WSSD versus the number of clusters when K-means is run with 10 restarts."}
+```{r 10-choose-k-nstart, fig.height = 4, fig.width = 4.35, message= FALSE, warning = FALSE, fig.align = "center", fig.cap = "A plot showing the total WSSD versus the number of clusters when K-means is run with 10 restarts."}
 penguin_clust_ks <- tibble(k = 1:9) |>
   rowwise() |>
   mutate(penguin_clusts = list(kmeans(standardized_data, nstart = 10, k)),
@@ -978,5 +978,15 @@ elbow_plot <- ggplot(clustering_statistics, aes(x = k, y = tot.withinss)) +
 elbow_plot
 ```
 
+## Exercises
+
+Practice exercises for the material covered in this chapter
+can be found in the accompanying [worksheet](https://ubc-dsci.github.io/data-science-a-first-intro-worksheets/worksheet_10/worksheet_10.ipynb).
+The worksheet tries to provide automated feedback
+and help guide you through the problems.
+To make sure this functionality works as intended,
+please follow the instructions for computer setup needed to run the worksheets
+found in Chapter \@ref(move-to-your-own-machine).
+
 ## Additional resources
 - Chapter 10 of [An Introduction to Statistical Learning](https://www.statlearning.com/) [-@james2013introduction] provides a great next stop in the process of learning about clustering and unsupervised learning in general. In the realm of clustering specifically, it provides a great companion introduction to K-means, but also covers *hierarchical* clustering for when you expect there to be subgroups, and then subgroups within subgroups, etc. in your data. In the realm of more general unsupervised learning, it covers *principal components analysis (PCA)*, which is a very popular technique in scientific applications for reducing the number of predictors in a dataset.
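
For readers skimming the diff, the `10-choose-k-nstart` chunk above implements the standard elbow-plot workflow for choosing K. A self-contained sketch of that workflow follows; since the diff shows only part of the chunk, the synthetic `standardized_data` and the `broom::glance()` step are assumptions filled in for illustration:

```r
library(tidyverse)
library(broom)  # glance() summarizes a kmeans fit, including tot.withinss

# Synthetic stand-in for the book's standardized penguin measurements
set.seed(1)
standardized_data <- tibble(x = rnorm(50), y = rnorm(50)) |>
  mutate(across(everything(), ~ as.numeric(scale(.x))))

# Fit K-means for K = 1 to 9, with 10 random restarts per K (nstart = 10)
penguin_clust_ks <- tibble(k = 1:9) |>
  rowwise() |>
  mutate(penguin_clusts = list(kmeans(standardized_data, nstart = 10, k)),
         glanced = list(glance(penguin_clusts)))

# Extract the total within-cluster sum of squared distances (WSSD) per K
clustering_statistics <- penguin_clust_ks |>
  unnest(glanced)

# The "elbow" is where increasing K stops sharply reducing the total WSSD
elbow_plot <- ggplot(clustering_statistics, aes(x = k, y = tot.withinss)) +
  geom_point() +
  geom_line() +
  xlab("K") +
  ylab("Total within-cluster sum of squared distances")

elbow_plot
```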

inference.Rmd

Lines changed: 12 additions & 0 deletions
@@ -1158,6 +1158,18 @@ more. We have just scratched the surface of statistical inference; however, the
 material presented here will serve as the foundation for more advanced
 statistical techniques you may learn about in the future!
 
+## Exercises
+
+Practice exercises for the material covered in this chapter
+can be found in the two accompanying worksheets
+([first worksheet](https://ubc-dsci.github.io/data-science-a-first-intro-worksheets/worksheet_11/worksheet_11.ipynb)
+and [second worksheet](https://ubc-dsci.github.io/data-science-a-first-intro-worksheets/worksheet_12/worksheet_12.ipynb)).
+The worksheets try to provide automated feedback
+and help guide you through the problems.
+To make sure this functionality works as intended,
+please follow the instructions for computer setup needed to run the worksheets
+found in Chapter \@ref(move-to-your-own-machine).
+
 ## Additional resources
 
 - Chapters 7 to 10 of [Modern Dive](https://moderndive.com/) provide a great next step in learning about inference. In particular, Chapters 7 and 8 cover sampling and bootstrapping using `tidyverse` and `infer` in a slightly more in-depth manner than the present chapter. Chapters 9 and 10 take the next step beyond the scope of this chapter and begin to provide some of the initial mathematical underpinnings of inference and more advanced applications of the concept of inference in testing hypotheses and performing regression. This material offers a great starting point for getting more into the technical side of statistics.

intro.Rmd

Lines changed: 10 additions & 0 deletions
@@ -686,3 +686,13 @@ you about the different arguments and usage of functions that you have already l
 ```{r 01-help, echo = FALSE, message = FALSE, warning = FALSE, fig.cap = "The documentation for the `filter` function, including a high-level description, a list of arguments and their meanings, and more.", fig.retina = 2, out.width="100%"}
 knitr::include_graphics("img/help-filter.png")
 ```
+
+## Exercises
+
+Practice exercises for the material covered in this chapter
+can be found in the accompanying [worksheet](https://ubc-dsci.github.io/data-science-a-first-intro-worksheets/worksheet_01/worksheet_01.ipynb).
+The worksheet tries to provide automated feedback
+and help guide you through the problems.
+To make sure this functionality works as intended,
+please follow the instructions for computer setup needed to run the worksheets
+found in Chapter \@ref(move-to-your-own-machine).

preface-text.Rmd

Lines changed: 14 additions & 0 deletions
@@ -47,3 +47,17 @@ try out the example code that we include throughout the book.
 ```{r img-chapter-overview, echo = FALSE, message = FALSE, warning = FALSE, fig.cap = "Where are we going?", out.width="100%", fig.retina = 2, fig.align = "center"}
 knitr::include_graphics("img/chapter_overview.jpeg")
 ```
+
+Each chapter in the book has an accompanying worksheet that provides exercises
+to help you practice the concepts you will learn. We strongly recommend that you
+work through the worksheet when you finish reading each chapter
+before moving on to the next chapter. All of the worksheets
+are available at
+[https://ubc-dsci.github.io/data-science-a-first-intro-worksheets](https://ubc-dsci.github.io/data-science-a-first-intro-worksheets);
+the "Exercises" section at the end of each chapter points you to the right worksheet for that chapter.
+The worksheets are designed to provide automated feedback and help guide you through the problems.
+To make sure that functionality works as intended, make sure to follow the setup directions
+in Chapter \@ref(move-to-your-own-machine) regarding downloading the worksheets.
+
+
+

reading.Rmd

Lines changed: 10 additions & 0 deletions
@@ -1208,6 +1208,16 @@ to ask the Twitter API for more data
 for more examples of what is possible), just be mindful as usual about how much
 data you are requesting and how frequently you are making requests.
 
+## Exercises
+
+Practice exercises for the material covered in this chapter
+can be found in the accompanying [worksheet](https://ubc-dsci.github.io/data-science-a-first-intro-worksheets/worksheet_02/worksheet_02.ipynb).
+The worksheet tries to provide automated feedback
+and help guide you through the problems.
+To make sure this functionality works as intended,
+please follow the instructions for computer setup needed to run the worksheets
+found in Chapter \@ref(move-to-your-own-machine).
+
 ## Additional resources
 - The [`readr` page on the tidyverse website](https://readr.tidyverse.org/) is where you should look if you want to learn more about the functions in this chapter, the full set of arguments you can use, and other related functions. The site also provides a very nice cheat sheet that summarizes many of the data wrangling functions from this chapter.
 - Sometimes you might run into data in such poor shape that none of the reading functions we cover in this chapter works. In that case, you can consult the [data import chapter](https://r4ds.had.co.nz/data-import.html) from [R for Data Science](https://r4ds.had.co.nz/), which goes into a lot more detail about how R parses text from files into data frames.

regression1.Rmd

Lines changed: 10 additions & 0 deletions
@@ -831,3 +831,13 @@ regression has both strengths and weaknesses. Some are listed here:
 1. becomes very slow as the training data gets larger
 2. may not perform well with a large number of predictors
 3. may not predict well beyond the range of values input in your training data
+
+## Exercises
+
+Practice exercises for the material covered in this chapter
+can be found in the accompanying [worksheet](https://ubc-dsci.github.io/data-science-a-first-intro-worksheets/worksheet_08/worksheet_08.ipynb).
+The worksheet tries to provide automated feedback
+and help guide you through the problems.
+To make sure this functionality works as intended,
+please follow the instructions for computer setup needed to run the worksheets
+found in Chapter \@ref(move-to-your-own-machine).

regression2.Rmd

Lines changed: 10 additions & 0 deletions
@@ -859,6 +859,16 @@ These sides of regression are well beyond the scope of this book; but
 the material you have learned here should give you a foundation of knowledge
 that will serve you well when moving to more advanced books on the topic.
 
+## Exercises
+
+Practice exercises for the material covered in this chapter
+can be found in the accompanying [worksheet](https://ubc-dsci.github.io/data-science-a-first-intro-worksheets/worksheet_09/worksheet_09.ipynb).
+The worksheet tries to provide automated feedback
+and help guide you through the problems.
+To make sure this functionality works as intended,
+please follow the instructions for computer setup needed to run the worksheets
+found in Chapter \@ref(move-to-your-own-machine).
+
 ## Additional resources
 - The [`tidymodels` website](https://tidymodels.org/packages) is an excellent reference for more details on, and advanced usage of, the functions and packages in the past two chapters. Aside from that, it also has a [nice beginner's tutorial](https://www.tidymodels.org/start/) and [an extensive list of more advanced examples](https://www.tidymodels.org/learn/) that you can use to continue learning beyond the scope of this book.
 - [Modern Dive](https://moderndive.com/) is another textbook that uses the `tidyverse` / `tidymodels` framework. Chapter 6 complements the material in the current chapter well; it covers some slightly more advanced concepts than we do without getting mathematical. Give this chapter a read before moving on to the next reference. It is also worth noting that this book takes a more "explanatory" / "inferential" approach to regression in general (in Chapters 5, 6, and 10), which provides a nice complement to the predictive tack we take in the present book.

setup.Rmd

Lines changed: 11 additions & 0 deletions
@@ -11,6 +11,7 @@ By the end of the chapter, readers will be able to:
 
 - install the Git version control software
 - install and launch a local instance of JupyterLab with the R kernel
+- download the worksheets that accompany the chapters of this book from GitHub
 
 ## Installing software on your own computer
 
@@ -233,3 +234,13 @@ It is good practice to restart all the programs you used when installing this
 software stack before you proceed to doing your data analysis.
 This will ensure all the software and settings you put in place are
 correctly sourced. This includes JupyterLab, terminal or Anaconda Prompt.
+
+## Downloading the worksheets for this book
+
+The worksheets containing practice exercises for this book
+can be downloaded by visiting
+[https://github.com/UBC-DSCI/data-science-a-first-intro-worksheets](https://github.com/UBC-DSCI/data-science-a-first-intro-worksheets),
+clicking the green "Code" button, and then selecting "Download ZIP".
+The worksheets are contained within the compressed zip folder that will be downloaded.
+Once you unzip the downloaded file, you can open the folder and run each worksheet
+using Jupyter. See Chapter \@ref(getting-started-with-jupyter) for instructions on how to use Jupyter.
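
As a sketch of an alternative, scriptable route to the same download (not from the book; GitHub's `/archive/refs/heads/<branch>.zip` ZIP endpoint and the `main` branch name are assumptions here):

```r
# Download the worksheets repository as a ZIP archive and extract it.
# Assumption: the repository's default branch is named "main".
url <- paste0("https://github.com/UBC-DSCI/data-science-a-first-intro-worksheets",
              "/archive/refs/heads/main.zip")
download.file(url, destfile = "worksheets.zip", mode = "wb")
unzip("worksheets.zip", exdir = ".")

# The extracted folder contains the .ipynb worksheets,
# each of which can be opened and run in Jupyter
list.files("data-science-a-first-intro-worksheets-main", recursive = TRUE)
```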
