# Download Analytics

The Download Analytics project allows you to extract download metrics for Python libraries published on [PyPI](https://pypi.org/) and [Anaconda](https://www.anaconda.com/).

The DataCebo team uses these scripts to report download counts for the libraries in the [SDV ecosystem](https://sdv.dev/) and other libraries.

## Overview
The Download Analytics project is a collection of scripts and tools to extract information
about OSS project downloads from different sources and to analyze them to produce user
engagement metrics.

### Data Sources
Currently, the download data is collected from the following distributions:
* [PyPI](https://pypi.org/): Information about project downloads from [PyPI](https://pypi.org/),
  obtained from the public BigQuery dataset and equivalent to the information shown on
  [pepy.tech](https://pepy.tech) and [ClickPy](https://clickpy.clickhouse.com/).
  - More information about the BigQuery dataset can be found in the [official PyPI documentation](https://packaging.python.org/en/latest/guides/analyzing-pypi-package-downloads/).

* [Anaconda](https://www.anaconda.com/): Information about conda package downloads for default and select Anaconda channels.
  - The conda package download data is provided by Anaconda, Inc. It includes package download counts
    starting from January 2017. More information about this dataset can be found in the [official README.md](https://github.com/anaconda/anaconda-package-data/blob/master/README.md).
  - Additional conda package downloads are retrieved using the public API provided by Anaconda, which returns the current number of downloads for each file served.
    - Anaconda API Endpoint: `https://api.anaconda.org/package/{username}/{package_name}`
      - Replace `{username}` with the Anaconda channel (e.g. `conda-forge`)
      - Replace `{package_name}` with the specific package (e.g. `sdv`) in that channel
    - For each file returned by the API endpoint, the current number of downloads is saved. Over time, a historical download record can be built.
  - Both of these sources are used to track Anaconda downloads because the package data for Anaconda does not match the download count shown on the website, due to missing download data. See: https://github.com/anaconda/anaconda-package-data/issues/45

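As a rough illustration of this second source, the helper below sums the per-file counts from an already-parsed API response. The `files` list and its `ndownloads` field follow the JSON shape returned by the anaconda.org API; the stub payload is made up for the example.

```python
def total_downloads(package_info):
    """Sum the current download count across every file of a package.

    `package_info` is the parsed JSON from
    https://api.anaconda.org/package/{username}/{package_name};
    each entry in its `files` list carries an `ndownloads` field.
    """
    return sum(f.get("ndownloads", 0) for f in package_info.get("files", []))

# Stubbed response (real payloads carry many more fields per file):
sample = {
    "files": [
        {"basename": "noarch/sdv-1.9.0-py_0.conda", "ndownloads": 1200},
        {"basename": "noarch/sdv-1.9.0-py_0.tar.bz2", "ndownloads": 800},
    ]
}
print(total_downloads(sample))  # 2000
```

In practice, `package_info` would come from an HTTP call to the endpoint (e.g. `requests.get(url).json()`). Because the API only reports point-in-time totals, repeated daily snapshots are what build the historical record described above.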
### Future Data Sources
In the future, we may expand the source distributions to include:
* [GitHub Releases](https://github.com/): Information about the project downloads from GitHub releases.

## Workflows

### Daily Collection
On a daily basis, this workflow collects download data from PyPI and Anaconda. The data is then published to Google Drive in CSV format (`pypi.csv`). In addition, it computes metrics for the PyPI downloads (see below).

#### Metrics
The PyPI download metrics are computed along several dimensions:

- **By Month**: The number of downloads per month.
- **By Version**: The number of downloads per version of the software, as determined by the software maintainers.
- **By Python Version**: The number of downloads per minor Python version (e.g. 3.8).
- **By Full Python Version**: The number of downloads per full Python version (e.g. 3.9.1).
- **And more!**

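As a sketch of how these groupings work, the snippet below counts toy download records along the first three dimensions. The record fields (`month`, `version`, `python_version`) are illustrative and not the exact `pypi.csv` schema.

```python
from collections import Counter

# Toy download records shaped roughly like the collected PyPI data.
downloads = [
    {"month": "2023-01", "version": "1.0.0", "python_version": "3.8.10"},
    {"month": "2023-01", "version": "1.0.0", "python_version": "3.9.1"},
    {"month": "2023-02", "version": "1.1.0", "python_version": "3.8.16"},
    {"month": "2023-02", "version": "1.1.0", "python_version": "3.10.4"},
]

# By Month: downloads bucketed per calendar month.
by_month = Counter(row["month"] for row in downloads)

# By Version: downloads per released version of the library.
by_version = Counter(row["version"] for row in downloads)

# By minor Python Version: truncate "3.8.10" -> "3.8" before counting;
# keeping the whole string instead gives the "Full Python Version" metric.
by_minor_python = Counter(
    ".".join(row["python_version"].split(".")[:2]) for row in downloads
)

print(by_month["2023-01"], by_version["1.1.0"], by_minor_python["3.8"])  # 2 2 2
```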
### Daily Summarize

On a daily basis, this workflow summarizes the PyPI download data from `pypi.csv` and calculates the downloads for each library.

The summarized data is uploaded to a GitHub repo:
- [Downloads_Summary.xlsx](https://github.com/sdv-dev/sdv-dev.github.io/blob/gatsby-home/assets/Downloads_Summary.xlsx)

#### SDV Calculation
Installing the main SDV library also installs all the other libraries as dependencies. To calculate SDV downloads, we use an exclusive download methodology:

1. Get download counts for `sdgym` and `sdv`.
2. Adjust `sdv` downloads by subtracting `sdgym` downloads (since `sdgym` depends on `sdv`).
3. Get download counts for the direct SDV dependencies: `rdt`, `copulas`, `ctgan`, `deepecho`, `sdmetrics`.
4. Adjust the downloads for each dependency by subtracting the `sdv` download count.
5. Ensure no download count goes negative by using `max(0, adjusted_count)` for each library.

This methodology prevents double-counting downloads while providing an accurate representation of SDV usage.

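The steps above can be sketched in a few lines. This is an illustration rather than the production script, and since the text does not say whether step 4 subtracts the raw or the already-adjusted `sdv` count, this sketch assumes the raw count.

```python
def exclusive_downloads(raw):
    """Apply the exclusive download methodology to raw per-library counts.

    `raw` maps library name -> total download count and must include
    `sdv`, `sdgym`, and the direct dependencies listed below.
    """
    adjusted = dict(raw)
    # Step 2: sdgym installs pull in sdv, so remove them from sdv's count.
    adjusted["sdv"] = max(0, raw["sdv"] - raw["sdgym"])
    # Steps 3-5: every sdv install also pulls in each direct dependency,
    # and counts are clamped so they never go negative.
    for dep in ("rdt", "copulas", "ctgan", "deepecho", "sdmetrics"):
        adjusted[dep] = max(0, raw[dep] - raw["sdv"])
    return adjusted

raw = {
    "sdv": 1000, "sdgym": 100, "rdt": 1500, "copulas": 1200,
    "ctgan": 1100, "deepecho": 900, "sdmetrics": 1300,
}
result = exclusive_downloads(raw)
# deepecho (900 - 1000) clamps to 0 instead of going negative.
print(result)
```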
## Resources
For more information about the configuration, workflows, and metrics, see the resources below.

|               | Document                                 | Description |
| ------------- | ---------------------------------------- | ----------- |
| :pilot:       | [WORKFLOWS](docs/WORKFLOWS.md)           | How to collect data and add new libraries to the GitHub Actions workflows. |
| :gear:        | [SETUP](docs/SETUP.md)                   | How to generate credentials to access BigQuery and Google Drive and add them to GitHub Actions. |
| :keyboard:    | [DEVELOPMENT](docs/DEVELOPMENT.md)       | How to install and run the scripts locally. Overview of the project implementation. |
| :floppy_disk: | [COLLECTED DATA](docs/COLLECTED_DATA.md) | Explanation of the data that is being collected. |

---

<div align="center">
  <a href="https://datacebo.com"><picture>
    <source media="(prefers-color-scheme: dark)" srcset="https://github.com/sdv-dev/SDV/blob/stable/docs/images/datacebo-logo-dark-mode.png">
    <img align="center" width=40% src="https://github.com/sdv-dev/SDV/blob/stable/docs/images/datacebo-logo.png"></img>
  </picture></a>
</div>
<br/>
<br/>

[The Synthetic Data Vault Project](https://sdv.dev) was first created at MIT's [Data to AI Lab](
https://dai.lids.mit.edu/) in 2016. After 4 years of research and traction with enterprise, we
created [DataCebo](https://datacebo.com) in 2020 with the goal of growing the project.
Today, DataCebo is the proud developer of SDV, the largest ecosystem for
synthetic data generation & evaluation. It is home to multiple libraries that support synthetic
data, including:

* 🔄 Data discovery & transformation. Reverse the transforms to reproduce realistic data.
* 🧠 Multiple machine learning models -- ranging from Copulas to Deep Learning -- to create tabular,
  multi-table and time series data.
* 📊 Measuring quality and privacy of synthetic data, and comparing different synthetic data generators.