# Quantifying

Quantifying the Commons: Measuring the diversity of Openly Licensed and Public Domain Works

## Overview

This project seeks to quantify the size and diversity of the Creative Commons legal tools. We aim to track the collection of works (articles, images, publications, etc.) that are openly licensed or in the public domain. The project automates data collection from multiple data sources, processes the data, and generates reports.
### The three phases of generating a report

**1-Fetch**: This phase involves collecting data from a specific source using its API. Before writing any code, we plan the analyses we want to perform by asking meaningful questions about the data. We also consider API limitations (such as query limits) and design a query strategy to work within those constraints.
**Meaningful questions**

The reports generated by this project (and the data fetched and processed to support it) seek to be meaningful. We hope this project will provide data and analysis that helps inform discussions about the commons--the collection of works that are openly licensed or in the public domain.

The goal of this project is to help answer questions like:

- How has the world's use of the commons changed over time?
- How is the knowledge and culture of the commons distributed?
- What are the correlations between public domain dedication or licenses and region, language, domain/endeavor, etc.?
**Limitations of an API**

Some data sources provide APIs with certain limitations. A common limitation is a daily or hourly query limit, which restricts how many requests can be made in a given time period. To work around this, we carefully plan our queries, batch requests where possible, and schedule fetch jobs to stay within the allowed limits.
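As a minimal sketch of working within a query limit, the helper below spaces out requests so an assumed per-minute cap is respected. The function name, the `fetch_one` callback, and the limit value are illustrative assumptions, not the project's actual code:

```python
import time


def fetch_all(queries, fetch_one, max_per_minute=30):
    """Run fetch_one() for each query, pacing requests to respect the limit.

    Hypothetical sketch: the limit value and callback shape are assumptions.
    """
    interval = 60.0 / max_per_minute  # seconds between consecutive requests
    results = []
    for i, query in enumerate(queries):
        if i:  # pause between requests, not before the first one
            time.sleep(interval)
        results.append(fetch_one(query))
    return results
```

A real fetch script would also persist partial results so an interrupted run can resume without re-spending its query budget.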
**Headings of data in 1-fetch**

[Tool identifier](https://creativecommons.org/share-your-work/cclicenses/): A unique identifier used to distinguish each Creative Commons legal tool within the dataset. This helps ensure consistency when tracking tools across different data sources.

[SPDX identifier](https://spdx.org/licenses/): A standardized identifier maintained by the Software Package Data Exchange (SPDX) project. It provides a consistent way to reference licenses and improves interoperability across systems.
**2-Process**: In this phase, the fetched data is transformed into a structured and standardized format for analysis. The data is then analyzed and categorized based on defined criteria to extract insights that answer the meaningful questions identified during the fetch stage.
**3-Report**: This phase focuses on presenting the results of the analysis. We generate graphs and summaries that clearly show trends, patterns, and distributions in the data. These reports help communicate key insights about the size, diversity, and characteristics of openly licensed and public-domain works.
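One simple form such a report can take is a Markdown table, which GitHub renders directly. This is a sketch under that assumption, not the project's actual report generator:

```python
def render_markdown_table(counts):
    """Render a {license: count} mapping as a Markdown table, largest first."""
    lines = ["| License | Works |", "| --- | --- |"]
    for license_id, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        lines.append(f"| {license_id} | {n} |")
    return "\n".join(lines)
```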
### Automation scripts

For automating these steps, the project uses Python scripts to fetch, process, and report data. GitHub Actions is used to automatically run these scripts on a defined schedule and on code updates. It handles script execution, manages dependencies, and ensures the workflow runs consistently.
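A minimal GitHub Actions workflow for this pattern might look like the sketch below. The script path, schedule, and requirements file are assumptions for illustration, not the project's actual configuration:

```yaml
# Hypothetical workflow sketch; script and file names are assumptions.
name: Fetch data
on:
  schedule:
    - cron: "0 0 1 * *"   # monthly; adjust to match the quarterly phases
  push:
    branches: [main]
jobs:
  fetch:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: python scripts/1-fetch.py
```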
**Script assumptions**

- Execution schedule for each quarter:
  - 1-Fetch: first month, 1st half of second month
  - 2-Process: 2nd half of second month
  - 3-Report: third month
**Script requirements**

- Must be safe
  - Scripts must not make any changes with default options
  - The easiest way to run a script should also be the safest
  - Have options spelled out
- Must be timely
  - Scripts should complete within a maximum of 45 minutes
  - Scripts shouldn't take longer than 3 minutes with default options
    - That way there is a quick way to watch a run and confirm it executes without errors
    - Then later in production it can be run with longer options
- Must be idempotent (Idempotence: [Wikipedia](https://en.wikipedia.org/wiki/Idempotence))
  - This applies to both the data fetched and the data stored. If the data changes randomly, we can't draw meaningful conclusions.
- Balanced use of third-party libraries
  - Third-party libraries should be leveraged when they are:
    - API specific (google-api-python-client, internetarchive, etc.)
- File formats
  - CSV: the format is well supported (rendered on GitHub, etc.), easy to use, and the data used by the project is simple enough to avoid any shortcomings.
  - YAML: prioritizes human readability, which addresses the primary costs and risks associated with configuration files.
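One way to meet the "safe by default" and "timely by default" requirements above is a command line where writes and long runs must be opted into explicitly. The flag names below are illustrative assumptions, not the project's actual interface:

```python
import argparse


def parse_args(argv=None):
    """CLI sketch: the default invocation is read-only and small.

    Flag names are hypothetical; they illustrate safe-by-default options.
    """
    parser = argparse.ArgumentParser(
        description="Fetch data (dry run unless saving is enabled)"
    )
    parser.add_argument(
        "--enable-save",
        action="store_true",
        help="actually write fetched data to disk (off by default)",
    )
    parser.add_argument(
        "--limit",
        type=int,
        default=10,
        help="small default keeps a default run well under a few minutes",
    )
    return parser.parse_args(argv)
```

Run without arguments, the script fetches a small sample and writes nothing; production runs pass `--enable-save` and a larger `--limit`.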