
Commit 875f5f5

Add the Getting Column Summaries article
1 parent 316acdc commit 875f5f5

2 files changed: +80 -0 lines


docs/_quarto.yml (1 addition & 0 deletions):

    @@ -68,6 +68,7 @@ website:
         - section: "Data Inspection"
           contents:
             - user-guide/preview.qmd
    +        - user-guide/col-summary-tbl.qmd

     html-table-processing: none

docs/user-guide/col-summary-tbl.qmd (79 additions & 0 deletions):
---
title: Getting Column Summaries
jupyter: python3
html-table-processing: none
---

```{python}
#| echo: false
#| output: false
import pointblank as pb
```

While previewing a table with [`preview()`](https://posit-dev.github.io/pointblank/reference/preview.html) is undoubtedly a good first step, sometimes you need more. This is where summarizing a table comes in. A column-by-column summary can quickly increase your understanding of a dataset, and it lets you catch anomalies in your data (e.g., the maximum value of a column could be far outside the realm of possibility).

Pointblank provides a function that makes it extremely easy to view column-level summaries in a single table: [`col_summary_tbl()`](https://posit-dev.github.io/pointblank/reference/col_summary_tbl.html). Just like [`preview()`](https://posit-dev.github.io/pointblank/reference/preview.html), it supports any table that Pointblank can use for validation. And no matter what the input data is, the resulting reporting table is consistent in its design and construction.

## Trying out `col_summary_tbl()`

The function only requires a table. Let's use the `small_table` dataset (a very simple table) to start us off:

```{python}
import pointblank as pb

small_table = pb.load_dataset(dataset="small_table", tbl_type="polars")
pb.col_summary_tbl(small_table)
```

The header provides the type of table we're looking at (`POLARS`, since this is a Polars DataFrame) and the table dimensions. The rest of the table focuses on the column-level summaries: each row represents a summary of a column in the `small_table` dataset. There's a lot of information in this summary table to digest. Some of it is intuitive, since this sort of table summarization isn't all that uncommon, but other aspects could give some pause. So we'll carefully wade through how to interpret this report.

## Data Categories in the Column Summary Table

On the left side of the table are icons of different colors. These represent the categories that columns fall into. There are only five categories, and each column belongs to exactly one. The mapping from letter marks to categories is:

- `N`: numeric
- `S`: string-based
- `D`: date/datetime
- `T/F`: boolean
- `O`: object

The numeric category (`N`) takes in data types such as floats and integers. The `S` category is for string-based columns. Date and datetime values are lumped into the `D` category. Boolean columns (`T/F`) have their own category and are *not* considered numeric (e.g., `0`/`1`). The `O` category is a catchall for all other types of columns. Given the disparity of these categories, and that we want them all in the same table, some statistical measures will be sensible for certain column categories but not for others. With that in mind, we'll explain how each category is represented in the column summary table.
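
To make the categorization concrete, here is a hypothetical sketch of a dtype-to-mark lookup. The function name, the dtype strings, and the mapping itself are illustrative assumptions, not Pointblank's actual internals:

```python
# Hypothetical mapping from (Polars-style) dtype names to the five
# category marks used in the column summary table. Illustrative only;
# the real categorization logic in Pointblank may differ.
CATEGORY_MARKS = {
    "Int64": "N", "Float64": "N",   # numeric
    "String": "S",                  # string-based
    "Date": "D", "Datetime": "D",   # date/datetime
    "Boolean": "T/F",               # boolean
}

def category_mark(dtype_name: str) -> str:
    # Anything not explicitly mapped falls into the "object" catchall.
    return CATEGORY_MARKS.get(dtype_name, "O")

print(category_mark("Float64"))  # N
print(category_mark("Boolean"))  # T/F
print(category_mark("List"))     # O
```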

## Numeric Data

Three columns in `small_table` are numeric: `a` (`Int64`), `c` (`Int64`), and `d` (`Float64`). The common measures of the missing count/proportion (`NA`) and the unique value count/proportion (`UQ`) are provided for numeric columns. For these two measures, the top number is the absolute count of missing (or unique) values. The bottom number is that count divided by the row count, which makes each proportion a value between `0` and `1` (bounds included).
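
As a plain-Python sketch of how the two stacked numbers in an `NA` or `UQ` cell relate (the column values are made up, and whether missing values count toward uniqueness is an assumption here):

```python
# Illustrative only: pair a count with its proportion of the row count,
# the way the NA and UQ cells do (count on top, proportion below).
column = [4.5, None, 3.1, 4.5, None, 9.8]

row_count = len(column)
missing_count = sum(1 for v in column if v is None)
unique_count = len({v for v in column if v is not None})

na_proportion = missing_count / row_count  # between 0 and 1, inclusive
uq_proportion = unique_count / row_count

print(missing_count, round(na_proportion, 2))  # 2 0.33
print(unique_count, round(uq_proportion, 2))   # 3 0.5
```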

The next two columns represent the mean (`Mean`) and the standard deviation (`SD`). The minimum (`Min`), the maximum (`Max`), and a set of quantiles occupy the next few columns (including `P5`, `Q1`, `Med` for the median, `Q3`, and `P95`). Finally, the interquartile range (`IQR`, computed as `Q3 - Q1`) is the last measure provided.
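
These measures can be reproduced on a plain Python list with the standard library's `statistics` module. The data is made up, and the exact quantile interpolation Pointblank uses is an assumption; the default `exclusive` method used here may differ from it:

```python
import statistics

# Illustrative only: the numeric measures reported in the summary table,
# computed on made-up data with the statistics module.
data = [2, 4, 4, 4, 5, 5, 7, 9]

mean = statistics.mean(data)   # Mean
sd = statistics.stdev(data)    # SD (sample standard deviation)

q1, med, q3 = statistics.quantiles(data, n=4)  # Q1, Med, Q3
p = statistics.quantiles(data, n=20)           # 19 cut points
p5, p95 = p[0], p[-1]                          # P5 and P95

iqr = q3 - q1                  # IQR = Q3 - Q1

print(mean, round(sd, 2))                             # 5 2.14
print(min(data), p5, q1, med, q3, p95, max(data), iqr)
```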

## String Data

String data is present in `small_table`, in columns `b` and `f`. The missing value (`NA`) and uniqueness (`UQ`) measures are accounted for here as well. The statistical measures are all based on string lengths: all strings in a column are converted to their lengths, and a subset of the stats values is presented. To avoid some understandable confusion when reading the table, the stats values in each of these cells are annotated with the text `"SL"`. It makes less sense to provide a full suite of quantile values here, so only the minimum (`Min`), median (`Med`), and maximum (`Max`) are provided.
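
A minimal sketch of the string-length idea, using made-up strings and the standard library:

```python
import statistics

# Illustrative only: string columns are summarized via string lengths.
strings = ["apple", "banana", "fig", "cherry", "date"]

lengths = [len(s) for s in strings]  # [5, 6, 3, 6, 4]

print(min(lengths))                  # 3  (Min, annotated "SL")
print(statistics.median(lengths))    # 5  (Med)
print(max(lengths))                  # 6  (Max)
```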

## Date/Datetime Data and Boolean Data

In the first two rows of our summary table we have summaries of the `date_time` and `date` columns. The summaries we provide for the date/datetime category (notice the green `D` to the left of the column names) are:

1. the missing count/proportion (`NA`)
2. the unique value count/proportion (`UQ`)
3. the minimum and maximum dates/datetimes
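
The three measures above can be sketched in plain Python with made-up dates (the handling of missing values here is an assumption):

```python
import datetime

# Illustrative only: for date/datetime columns, the range is just the
# earliest and latest non-missing values.
dates = [
    datetime.date(2016, 1, 4),
    None,
    datetime.date(2016, 1, 30),
    datetime.date(2016, 1, 11),
]

present = [d for d in dates if d is not None]
na_count = len(dates) - len(present)

print(na_count)      # 1
print(min(present))  # 2016-01-04
print(max(present))  # 2016-01-30
```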
One column, `e`, is of the `Boolean` type. Because columns of this type can only contain `True`, `False`, or missing values, we provide summary data for missingness (under `NA`) and for the proportions of `True` and `False` values (under `UQ`).
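
A sketch of those boolean summaries. Whether the `True`/`False` proportions are taken over all rows or only non-missing rows is an assumption; this sketch uses the non-missing rows:

```python
# Illustrative only: a boolean column reduces to missingness plus the
# proportions of True and False among the non-missing values.
column = [True, False, True, None, True, False]

na_count = sum(1 for v in column if v is None)
present = [v for v in column if v is not None]

true_prop = sum(present) / len(present)
false_prop = 1 - true_prop

print(na_count)             # 1
print(round(true_prop, 2))  # 0.6
print(round(false_prop, 2)) # 0.4
```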
