
Commit 615287d

Bump version to 2.13.0 (#1044)
1 parent: d0cbd9f

20 files changed (+75, -75 lines)

.bumpversion.cfg

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,5 +1,5 @@
 [bumpversion]
-current_version = 2.12.1
+current_version = 2.13.0
 commit = False
 tag = False
 tag_name = {new_version}
```
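The `.bumpversion.cfg` change above is what drives the rest of this commit: the bumpversion tool rewrites every configured occurrence of `current_version` across the repository. A simplified sketch of that mechanic (not the actual bump2version implementation; `bump_minor` and `apply_bump` are hypothetical helpers):

```python
def bump_minor(version: str) -> str:
    """Next minor version with the patch reset, e.g. "2.12.1" -> "2.13.0"."""
    major, minor, _patch = (int(part) for part in version.split("."))
    return f"{major}.{minor + 1}.0"

def apply_bump(text: str, old: str, new: str) -> str:
    """Rewrite every occurrence of the old version string, as bumpversion does per file."""
    return text.replace(old, new)

old_version = "2.12.1"
new_version = bump_minor(old_version)  # "2.13.0"
cfg = "[bumpversion]\ncurrent_version = 2.12.1\n"
print(apply_bump(cfg, old_version, new_version))
```

Applied to each tracked file, this one substitution accounts for all 75 changed lines in the commit.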

CONTRIBUTING_COMMON_ERRORS.md

Lines changed: 3 additions & 3 deletions

```diff
@@ -13,9 +13,9 @@ Requirement already satisfied: pbr!=2.1.0,>=2.0.0 in ./.venv/lib/python3.7/site-
 Using legacy 'setup.py install' for python-Levenshtein, since package 'wheel' is not installed.
 Installing collected packages: awswrangler, python-Levenshtein
   Attempting uninstall: awswrangler
-    Found existing installation: awswrangler 2.12.1
-    Uninstalling awswrangler-2.12.1:
-      Successfully uninstalled awswrangler-2.12.1
+    Found existing installation: awswrangler 2.13.0
+    Uninstalling awswrangler-2.13.0:
+      Successfully uninstalled awswrangler-2.13.0
   Running setup.py develop for awswrangler
   Running setup.py install for python-Levenshtein ... error
   ERROR: Command errored out with exit status 1:
```

README.md

Lines changed: 14 additions & 14 deletions

````diff
@@ -8,7 +8,7 @@ Easy integration with Athena, Glue, Redshift, Timestream, QuickSight, Chime, Clo
 
 > An [AWS Professional Service](https://aws.amazon.com/professional-services/) open source initiative | [email protected]
 
-[![Release](https://img.shields.io/badge/release-2.12.1-brightgreen.svg)](https://pypi.org/project/awswrangler/)
+[![Release](https://img.shields.io/badge/release-2.13.0-brightgreen.svg)](https://pypi.org/project/awswrangler/)
 [![Python Version](https://img.shields.io/badge/python-3.6%20%7C%203.7%20%7C%203.8%20%7C%203.9-brightgreen.svg)](https://anaconda.org/conda-forge/awswrangler)
 [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
 [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
@@ -23,7 +23,7 @@ Easy integration with Athena, Glue, Redshift, Timestream, QuickSight, Chime, Clo
 | **[PyPi](https://pypi.org/project/awswrangler/)** | [![PyPI Downloads](https://pepy.tech/badge/awswrangler)](https://pypi.org/project/awswrangler/) | `pip install awswrangler` |
 | **[Conda](https://anaconda.org/conda-forge/awswrangler)** | [![Conda Downloads](https://img.shields.io/conda/dn/conda-forge/awswrangler.svg)](https://anaconda.org/conda-forge/awswrangler) | `conda install -c conda-forge awswrangler` |
 
-> ⚠️ **For platforms without PyArrow 3 support (e.g. [EMR](https://aws-data-wrangler.readthedocs.io/en/2.12.1/install.html#emr-cluster), [Glue PySpark Job](https://aws-data-wrangler.readthedocs.io/en/2.12.1/install.html#aws-glue-pyspark-jobs), MWAA):**<br>
+> ⚠️ **For platforms without PyArrow 3 support (e.g. [EMR](https://aws-data-wrangler.readthedocs.io/en/2.13.0/install.html#emr-cluster), [Glue PySpark Job](https://aws-data-wrangler.readthedocs.io/en/2.13.0/install.html#aws-glue-pyspark-jobs), MWAA):**<br>
 ➡️ `pip install pyarrow==2 awswrangler`
 
 Powered By [<img src="https://arrow.apache.org/img/arrow.png" width="200">](https://arrow.apache.org/powered_by/)
@@ -42,7 +42,7 @@ Powered By [<img src="https://arrow.apache.org/img/arrow.png" width="200">](http
 
 Installation command: `pip install awswrangler`
 
-> ⚠️ **For platforms without PyArrow 3 support (e.g. [EMR](https://aws-data-wrangler.readthedocs.io/en/2.12.1/install.html#emr-cluster), [Glue PySpark Job](https://aws-data-wrangler.readthedocs.io/en/2.12.1/install.html#aws-glue-pyspark-jobs), MWAA):**<br>
+> ⚠️ **For platforms without PyArrow 3 support (e.g. [EMR](https://aws-data-wrangler.readthedocs.io/en/2.13.0/install.html#emr-cluster), [Glue PySpark Job](https://aws-data-wrangler.readthedocs.io/en/2.13.0/install.html#aws-glue-pyspark-jobs), MWAA):**<br>
 ➡️`pip install pyarrow==2 awswrangler`
 
 ```py3
@@ -96,17 +96,17 @@ FROM "sampleDB"."sampleTable" ORDER BY time DESC LIMIT 3
 
 ## [Read The Docs](https://aws-data-wrangler.readthedocs.io/)
 
-- [**What is AWS Data Wrangler?**](https://aws-data-wrangler.readthedocs.io/en/2.12.1/what.html)
-- [**Install**](https://aws-data-wrangler.readthedocs.io/en/2.12.1/install.html)
-  - [PyPi (pip)](https://aws-data-wrangler.readthedocs.io/en/2.12.1/install.html#pypi-pip)
-  - [Conda](https://aws-data-wrangler.readthedocs.io/en/2.12.1/install.html#conda)
-  - [AWS Lambda Layer](https://aws-data-wrangler.readthedocs.io/en/2.12.1/install.html#aws-lambda-layer)
-  - [AWS Glue Python Shell Jobs](https://aws-data-wrangler.readthedocs.io/en/2.12.1/install.html#aws-glue-python-shell-jobs)
-  - [AWS Glue PySpark Jobs](https://aws-data-wrangler.readthedocs.io/en/2.12.1/install.html#aws-glue-pyspark-jobs)
-  - [Amazon SageMaker Notebook](https://aws-data-wrangler.readthedocs.io/en/2.12.1/install.html#amazon-sagemaker-notebook)
-  - [Amazon SageMaker Notebook Lifecycle](https://aws-data-wrangler.readthedocs.io/en/2.12.1/install.html#amazon-sagemaker-notebook-lifecycle)
-  - [EMR](https://aws-data-wrangler.readthedocs.io/en/2.12.1/install.html#emr)
-  - [From source](https://aws-data-wrangler.readthedocs.io/en/2.12.1/install.html#from-source)
+- [**What is AWS Data Wrangler?**](https://aws-data-wrangler.readthedocs.io/en/2.13.0/what.html)
+- [**Install**](https://aws-data-wrangler.readthedocs.io/en/2.13.0/install.html)
+  - [PyPi (pip)](https://aws-data-wrangler.readthedocs.io/en/2.13.0/install.html#pypi-pip)
+  - [Conda](https://aws-data-wrangler.readthedocs.io/en/2.13.0/install.html#conda)
+  - [AWS Lambda Layer](https://aws-data-wrangler.readthedocs.io/en/2.13.0/install.html#aws-lambda-layer)
+  - [AWS Glue Python Shell Jobs](https://aws-data-wrangler.readthedocs.io/en/2.13.0/install.html#aws-glue-python-shell-jobs)
+  - [AWS Glue PySpark Jobs](https://aws-data-wrangler.readthedocs.io/en/2.13.0/install.html#aws-glue-pyspark-jobs)
+  - [Amazon SageMaker Notebook](https://aws-data-wrangler.readthedocs.io/en/2.13.0/install.html#amazon-sagemaker-notebook)
+  - [Amazon SageMaker Notebook Lifecycle](https://aws-data-wrangler.readthedocs.io/en/2.13.0/install.html#amazon-sagemaker-notebook-lifecycle)
+  - [EMR](https://aws-data-wrangler.readthedocs.io/en/2.13.0/install.html#emr)
+  - [From source](https://aws-data-wrangler.readthedocs.io/en/2.13.0/install.html#from-source)
 - [**Tutorials**](https://github.com/awslabs/aws-data-wrangler/tree/main/tutorials)
   - [001 - Introduction](https://github.com/awslabs/aws-data-wrangler/blob/main/tutorials/001%20-%20Introduction.ipynb)
   - [002 - Sessions](https://github.com/awslabs/aws-data-wrangler/blob/main/tutorials/002%20-%20Sessions.ipynb)
````

awswrangler/__metadata__.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -7,5 +7,5 @@
 
 __title__: str = "awswrangler"
 __description__: str = "Pandas on AWS."
-__version__: str = "2.12.1"
+__version__: str = "2.13.0"
 __license__: str = "Apache License 2.0"
```
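The bumped `__version__` string above is what callers see at runtime as `awswrangler.__version__`. An aside for anyone gating features on it: compare versions numerically, not lexically, since plain string comparison misorders dotted versions. A minimal sketch (the `parse_version` helper is ours, not part of awswrangler):

```python
def parse_version(v: str) -> tuple:
    """Split a dotted version string into a numeric tuple for safe comparison."""
    return tuple(int(part) for part in v.split("."))

# Lexical comparison gets this wrong: "2.13.0" sorts before "2.9.0" as a string.
assert "2.13.0" < "2.9.0"
# Numeric tuples order correctly.
assert parse_version("2.13.0") > parse_version("2.9.0")
assert parse_version("2.13.0") > parse_version("2.12.1")  # this release follows 2.12.1
```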

awswrangler/athena/_read.py

Lines changed: 8 additions & 8 deletions

```diff
@@ -626,11 +626,11 @@ def read_sql_query(
 
     **Related tutorial:**
 
-    - `Amazon Athena <https://aws-data-wrangler.readthedocs.io/en/2.12.1/
+    - `Amazon Athena <https://aws-data-wrangler.readthedocs.io/en/2.13.0/
       tutorials/006%20-%20Amazon%20Athena.html>`_
-    - `Athena Cache <https://aws-data-wrangler.readthedocs.io/en/2.12.1/
+    - `Athena Cache <https://aws-data-wrangler.readthedocs.io/en/2.13.0/
       tutorials/019%20-%20Athena%20Cache.html>`_
-    - `Global Configurations <https://aws-data-wrangler.readthedocs.io/en/2.12.1/
+    - `Global Configurations <https://aws-data-wrangler.readthedocs.io/en/2.13.0/
       tutorials/021%20-%20Global%20Configurations.html>`_
 
     **There are two approaches to be defined through ctas_approach parameter:**
@@ -678,7 +678,7 @@ def read_sql_query(
     /athena.html#Athena.Client.get_query_execution>`_ .
 
     For a practical example check out the
-    `related tutorial <https://aws-data-wrangler.readthedocs.io/en/2.12.1/
+    `related tutorial <https://aws-data-wrangler.readthedocs.io/en/2.13.0/
     tutorials/024%20-%20Athena%20Query%20Metadata.html>`_!
 
 
@@ -911,11 +911,11 @@ def read_sql_table(
 
     **Related tutorial:**
 
-    - `Amazon Athena <https://aws-data-wrangler.readthedocs.io/en/2.12.1/
+    - `Amazon Athena <https://aws-data-wrangler.readthedocs.io/en/2.13.0/
       tutorials/006%20-%20Amazon%20Athena.html>`_
-    - `Athena Cache <https://aws-data-wrangler.readthedocs.io/en/2.12.1/
+    - `Athena Cache <https://aws-data-wrangler.readthedocs.io/en/2.13.0/
       tutorials/019%20-%20Athena%20Cache.html>`_
-    - `Global Configurations <https://aws-data-wrangler.readthedocs.io/en/2.12.1/
+    - `Global Configurations <https://aws-data-wrangler.readthedocs.io/en/2.13.0/
       tutorials/021%20-%20Global%20Configurations.html>`_
 
     **There are two approaches to be defined through ctas_approach parameter:**
@@ -960,7 +960,7 @@ def read_sql_table(
     /athena.html#Athena.Client.get_query_execution>`_ .
 
     For a practical example check out the
-    `related tutorial <https://aws-data-wrangler.readthedocs.io/en/2.12.1/
+    `related tutorial <https://aws-data-wrangler.readthedocs.io/en/2.13.0/
     tutorials/024%20-%20Athena%20Query%20Metadata.html>`_!
 
 
```

awswrangler/s3/_read_parquet.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -841,7 +841,7 @@ def read_parquet_table(
         This function MUST return a bool, True to read the partition or False to ignore it.
         Ignored if `dataset=False`.
         E.g ``lambda x: True if x["year"] == "2020" and x["month"] == "1" else False``
-        https://aws-data-wrangler.readthedocs.io/en/2.12.1/tutorials/023%20-%20Flexible%20Partitions%20Filter.html
+        https://aws-data-wrangler.readthedocs.io/en/2.13.0/tutorials/023%20-%20Flexible%20Partitions%20Filter.html
     columns : List[str], optional
         Names of columns to read from the file(s).
     validate_schema:
```

awswrangler/s3/_read_text.py

Lines changed: 3 additions & 3 deletions

```diff
@@ -241,7 +241,7 @@ def read_csv(
         This function MUST return a bool, True to read the partition or False to ignore it.
         Ignored if `dataset=False`.
         E.g ``lambda x: True if x["year"] == "2020" and x["month"] == "1" else False``
-        https://aws-data-wrangler.readthedocs.io/en/2.12.1/tutorials/023%20-%20Flexible%20Partitions%20Filter.html
+        https://aws-data-wrangler.readthedocs.io/en/2.13.0/tutorials/023%20-%20Flexible%20Partitions%20Filter.html
     pandas_kwargs :
         KEYWORD arguments forwarded to pandas.read_csv(). You can NOT pass `pandas_kwargs` explicit, just add valid
         Pandas arguments in the function call and Wrangler will accept it.
@@ -389,7 +389,7 @@ def read_fwf(
         This function MUST return a bool, True to read the partition or False to ignore it.
         Ignored if `dataset=False`.
         E.g ``lambda x: True if x["year"] == "2020" and x["month"] == "1" else False``
-        https://aws-data-wrangler.readthedocs.io/en/2.12.1/tutorials/023%20-%20Flexible%20Partitions%20Filter.html
+        https://aws-data-wrangler.readthedocs.io/en/2.13.0/tutorials/023%20-%20Flexible%20Partitions%20Filter.html
     pandas_kwargs:
         KEYWORD arguments forwarded to pandas.read_fwf(). You can NOT pass `pandas_kwargs` explicit, just add valid
         Pandas arguments in the function call and Wrangler will accept it.
@@ -541,7 +541,7 @@ def read_json(
         This function MUST return a bool, True to read the partition or False to ignore it.
         Ignored if `dataset=False`.
         E.g ``lambda x: True if x["year"] == "2020" and x["month"] == "1" else False``
-        https://aws-data-wrangler.readthedocs.io/en/2.12.1/tutorials/023%20-%20Flexible%20Partitions%20Filter.html
+        https://aws-data-wrangler.readthedocs.io/en/2.13.0/tutorials/023%20-%20Flexible%20Partitions%20Filter.html
     pandas_kwargs:
         KEYWORD arguments forwarded to pandas.read_json(). You can NOT pass `pandas_kwargs` explicit, just add valid
         Pandas arguments in the function call and Wrangler will accept it.
```

awswrangler/s3/_write_parquet.py

Lines changed: 3 additions & 3 deletions

```diff
@@ -300,18 +300,18 @@ def to_parquet(  # pylint: disable=too-many-arguments,too-many-locals,too-many-b
     concurrent_partitioning: bool
         If True will increase the parallelism level during the partitions writing. It will decrease the
         writing time and increase the memory usage.
-        https://aws-data-wrangler.readthedocs.io/en/2.12.1/tutorials/022%20-%20Writing%20Partitions%20Concurrently.html
+        https://aws-data-wrangler.readthedocs.io/en/2.13.0/tutorials/022%20-%20Writing%20Partitions%20Concurrently.html
     mode: str, optional
         ``append`` (Default), ``overwrite``, ``overwrite_partitions``. Only takes effect if dataset=True.
         For details check the related tutorial:
-        https://aws-data-wrangler.readthedocs.io/en/2.12.1/stubs/awswrangler.s3.to_parquet.html#awswrangler.s3.to_parquet
+        https://aws-data-wrangler.readthedocs.io/en/2.13.0/stubs/awswrangler.s3.to_parquet.html#awswrangler.s3.to_parquet
     catalog_versioning : bool
         If True and `mode="overwrite"`, creates an archived version of the table catalog before updating it.
     schema_evolution : bool
         If True allows schema evolution (new or missing columns), otherwise a exception will be raised. True by default.
         (Only considered if dataset=True and mode in ("append", "overwrite_partitions"))
         Related tutorial:
-        https://aws-data-wrangler.readthedocs.io/en/2.12.1/tutorials/014%20-%20Schema%20Evolution.html
+        https://aws-data-wrangler.readthedocs.io/en/2.13.0/tutorials/014%20-%20Schema%20Evolution.html
     database : str, optional
         Glue/Athena catalog: Database name.
     table : str, optional
```

awswrangler/s3/_write_text.py

Lines changed: 6 additions & 6 deletions

```diff
@@ -177,18 +177,18 @@ def to_csv(  # pylint: disable=too-many-arguments,too-many-locals,too-many-state
     concurrent_partitioning: bool
         If True will increase the parallelism level during the partitions writing. It will decrease the
         writing time and increase the memory usage.
-        https://aws-data-wrangler.readthedocs.io/en/2.12.1/tutorials/022%20-%20Writing%20Partitions%20Concurrently.html
+        https://aws-data-wrangler.readthedocs.io/en/2.13.0/tutorials/022%20-%20Writing%20Partitions%20Concurrently.html
     mode : str, optional
         ``append`` (Default), ``overwrite``, ``overwrite_partitions``. Only takes effect if dataset=True.
         For details check the related tutorial:
-        https://aws-data-wrangler.readthedocs.io/en/2.12.1/stubs/awswrangler.s3.to_parquet.html#awswrangler.s3.to_parquet
+        https://aws-data-wrangler.readthedocs.io/en/2.13.0/stubs/awswrangler.s3.to_parquet.html#awswrangler.s3.to_parquet
     catalog_versioning : bool
         If True and `mode="overwrite"`, creates an archived version of the table catalog before updating it.
     schema_evolution : bool
         If True allows schema evolution (new or missing columns), otherwise a exception will be raised.
         (Only considered if dataset=True and mode in ("append", "overwrite_partitions")). False by default.
         Related tutorial:
-        https://aws-data-wrangler.readthedocs.io/en/2.12.1/tutorials/014%20-%20Schema%20Evolution.html
+        https://aws-data-wrangler.readthedocs.io/en/2.13.0/tutorials/014%20-%20Schema%20Evolution.html
     database : str, optional
         Glue/Athena catalog: Database name.
     table : str, optional
@@ -750,18 +750,18 @@ def to_json(  # pylint: disable=too-many-arguments,too-many-locals,too-many-stat
     concurrent_partitioning: bool
         If True will increase the parallelism level during the partitions writing. It will decrease the
         writing time and increase the memory usage.
-        https://aws-data-wrangler.readthedocs.io/en/2.12.1/tutorials/022%20-%20Writing%20Partitions%20Concurrently.html
+        https://aws-data-wrangler.readthedocs.io/en/2.13.0/tutorials/022%20-%20Writing%20Partitions%20Concurrently.html
     mode : str, optional
         ``append`` (Default), ``overwrite``, ``overwrite_partitions``. Only takes effect if dataset=True.
         For details check the related tutorial:
-        https://aws-data-wrangler.readthedocs.io/en/2.12.1/stubs/awswrangler.s3.to_parquet.html#awswrangler.s3.to_parquet
+        https://aws-data-wrangler.readthedocs.io/en/2.13.0/stubs/awswrangler.s3.to_parquet.html#awswrangler.s3.to_parquet
     catalog_versioning : bool
         If True and `mode="overwrite"`, creates an archived version of the table catalog before updating it.
     schema_evolution : bool
         If True allows schema evolution (new or missing columns), otherwise a exception will be raised.
         (Only considered if dataset=True and mode in ("append", "overwrite_partitions"))
         Related tutorial:
-        https://aws-data-wrangler.readthedocs.io/en/2.12.1/tutorials/014%20-%20Schema%20Evolution.html
+        https://aws-data-wrangler.readthedocs.io/en/2.13.0/tutorials/014%20-%20Schema%20Evolution.html
     database : str, optional
         Glue/Athena catalog: Database name.
     table : str, optional
```

docs/source/install.rst

Lines changed: 3 additions & 3 deletions

```diff
@@ -120,7 +120,7 @@ Go to your Glue PySpark job and create a new *Job parameters* key/value:
 
 To install a specific version, set the value for above Job parameter as follows:
 
-* Value: ``cython==0.29.21,pg8000==1.21.0,pyarrow==2,pandas==1.3.0,awswrangler==2.12.1``
+* Value: ``cython==0.29.21,pg8000==1.21.0,pyarrow==2,pandas==1.3.0,awswrangler==2.13.0``
 
 .. note:: Pyarrow 3 is not currently supported in Glue PySpark Jobs, which is why a previous installation of pyarrow 2 is required.
 
@@ -139,7 +139,7 @@ Lambda zipped layers and Python wheels are stored in a publicly accessible S3 bu
 
 * Python wheel: ``awswrangler-<version>-py3-none-any.whl``
 
-For example: ``s3://aws-data-wrangler-public-artifacts/releases/2.12.1/awswrangler-layer-2.12.1-py3.8.zip``
+For example: ``s3://aws-data-wrangler-public-artifacts/releases/2.13.0/awswrangler-layer-2.13.0-py3.8.zip``
 
 Amazon SageMaker Notebook
 -------------------------
@@ -231,7 +231,7 @@ complement Big Data pipelines.
     sudo pip install pyarrow==2 awswrangler
 
 .. note:: Make sure to freeze the Wrangler version in the bootstrap for productive
-    environments (e.g. awswrangler==2.12.1)
+    environments (e.g. awswrangler==2.13.0)
 
 .. note:: Pyarrow 3 is not currently supported in the default EMR image, which is why a previous installation of pyarrow 2 is required.
 
```
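The layer example in the install docs follows a predictable naming scheme per release. A small sketch that assembles the URI for a given version (the `layer_s3_uri` helper is hypothetical; the bucket name and layout come from the docs above):

```python
BUCKET = "aws-data-wrangler-public-artifacts"

def layer_s3_uri(version: str, python_tag: str) -> str:
    """Assemble the public S3 URI of a release's Lambda layer zip."""
    return f"s3://{BUCKET}/releases/{version}/awswrangler-layer-{version}-py{python_tag}.zip"

print(layer_s3_uri("2.13.0", "3.8"))
# → s3://aws-data-wrangler-public-artifacts/releases/2.13.0/awswrangler-layer-2.13.0-py3.8.zip
```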

0 commit comments
