Commit 99bba9f

Pointing doc links to stable.
1 parent 99f92dd commit 99bba9f

10 files changed: +65 −64 lines

README.md

Lines changed: 42 additions & 41 deletions
@@ -1,4 +1,5 @@
 # AWS Data Wrangler
+
 *Pandas on AWS*
 
 ![AWS Data Wrangler](docs/source/_static/logo2.png?raw=true "AWS Data Wrangler")
@@ -69,17 +70,17 @@ wr.db.to_sql(df, engine, schema="test", name="my_table")
 
 ## [Read The Docs](https://aws-data-wrangler.readthedocs.io/)
 
-- [**What is AWS Data Wrangler?**](https://aws-data-wrangler.readthedocs.io/en/latest/what.html)
-- [**Install**](https://aws-data-wrangler.readthedocs.io/en/latest/install.html)
-  - [PyPi (pip)](https://aws-data-wrangler.readthedocs.io/en/latest/install.html#pypi-pip)
-  - [Conda](https://aws-data-wrangler.readthedocs.io/en/latest/install.html#conda)
-  - [AWS Lambda Layer](https://aws-data-wrangler.readthedocs.io/en/latest/install.html#aws-lambda-layer)
-  - [AWS Glue Python Shell Jobs](https://aws-data-wrangler.readthedocs.io/en/latest/install.html#aws-glue-python-shell-jobs)
-  - [AWS Glue PySpark Jobs](https://aws-data-wrangler.readthedocs.io/en/latest/install.html#aws-glue-pyspark-jobs)
-  - [Amazon SageMaker Notebook](https://aws-data-wrangler.readthedocs.io/en/latest/install.html#amazon-sagemaker-notebook)
-  - [Amazon SageMaker Notebook Lifecycle](https://aws-data-wrangler.readthedocs.io/en/latest/install.html#amazon-sagemaker-notebook-lifecycle)
-  - [EMR](https://aws-data-wrangler.readthedocs.io/en/latest/install.html#emr)
-  - [From source](https://aws-data-wrangler.readthedocs.io/en/latest/install.html#from-source)
+- [**What is AWS Data Wrangler?**](https://aws-data-wrangler.readthedocs.io/en/stable/what.html)
+- [**Install**](https://aws-data-wrangler.readthedocs.io/en/stable/install.html)
+  - [PyPi (pip)](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#pypi-pip)
+  - [Conda](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#conda)
+  - [AWS Lambda Layer](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#aws-lambda-layer)
+  - [AWS Glue Python Shell Jobs](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#aws-glue-python-shell-jobs)
+  - [AWS Glue PySpark Jobs](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#aws-glue-pyspark-jobs)
+  - [Amazon SageMaker Notebook](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#amazon-sagemaker-notebook)
+  - [Amazon SageMaker Notebook Lifecycle](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#amazon-sagemaker-notebook-lifecycle)
+  - [EMR](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#emr)
+  - [From source](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#from-source)
 - [**Tutorials**](https://github.com/awslabs/aws-data-wrangler/tree/master/tutorials)
   - [001 - Introduction](https://github.com/awslabs/aws-data-wrangler/blob/master/tutorials/001%20-%20Introduction.ipynb)
   - [002 - Sessions](https://github.com/awslabs/aws-data-wrangler/blob/master/tutorials/002%20-%20Sessions.ipynb)
@@ -106,15 +107,15 @@ wr.db.to_sql(df, engine, schema="test", name="my_table")
   - [023 - Flexible Partitions Filter](https://github.com/awslabs/aws-data-wrangler/blob/master/tutorials/023%20-%20Flexible%20Partitions%20Filter.ipynb)
   - [024 - Athena Query Metadata](https://github.com/awslabs/aws-data-wrangler/blob/master/tutorials/024%20-%20Athena%20Query%20Metadata.ipynb)
   - [025 - Redshift - Loading Parquet files with Spectrum](https://github.com/awslabs/aws-data-wrangler/blob/master/tutorials/025%20-%20Redshift%20-%20Loading%20Parquet%20files%20with%20Spectrum.ipynb)
-- [**API Reference**](https://aws-data-wrangler.readthedocs.io/en/latest/api.html)
-  - [Amazon S3](https://aws-data-wrangler.readthedocs.io/en/latest/api.html#amazon-s3)
-  - [AWS Glue Catalog](https://aws-data-wrangler.readthedocs.io/en/latest/api.html#aws-glue-catalog)
-  - [Amazon Athena](https://aws-data-wrangler.readthedocs.io/en/latest/api.html#amazon-athena)
-  - [Databases (Amazon Redshift, PostgreSQL, MySQL)](https://aws-data-wrangler.readthedocs.io/en/latest/api.html#databases-amazon-redshift-postgresql-mysql)
-  - [Amazon EMR](https://aws-data-wrangler.readthedocs.io/en/latest/api.html#amazon-emr)
-  - [Amazon CloudWatch Logs](https://aws-data-wrangler.readthedocs.io/en/latest/api.html#amazon-cloudwatch-logs)
-  - [Amazon QuickSight](https://aws-data-wrangler.readthedocs.io/en/latest/api.html#amazon-quicksight)
-  - [AWS STS](https://aws-data-wrangler.readthedocs.io/en/latest/api.html#aws-sts)
+- [**API Reference**](https://aws-data-wrangler.readthedocs.io/en/stable/api.html)
+  - [Amazon S3](https://aws-data-wrangler.readthedocs.io/en/stable/api.html#amazon-s3)
+  - [AWS Glue Catalog](https://aws-data-wrangler.readthedocs.io/en/stable/api.html#aws-glue-catalog)
+  - [Amazon Athena](https://aws-data-wrangler.readthedocs.io/en/stable/api.html#amazon-athena)
+  - [Databases (Amazon Redshift, PostgreSQL, MySQL)](https://aws-data-wrangler.readthedocs.io/en/stable/api.html#databases-amazon-redshift-postgresql-mysql)
+  - [Amazon EMR](https://aws-data-wrangler.readthedocs.io/en/stable/api.html#amazon-emr)
+  - [Amazon CloudWatch Logs](https://aws-data-wrangler.readthedocs.io/en/stable/api.html#amazon-cloudwatch-logs)
+  - [Amazon QuickSight](https://aws-data-wrangler.readthedocs.io/en/stable/api.html#amazon-quicksight)
+  - [AWS STS](https://aws-data-wrangler.readthedocs.io/en/stable/api.html#aws-sts)
 - [**License**](https://github.com/awslabs/aws-data-wrangler/blob/master/LICENSE.txt)
 - [**Contributing**](https://github.com/awslabs/aws-data-wrangler/blob/master/CONTRIBUTING.md)
 - [**Legacy Docs** (pre-1.0.0)](https://aws-data-wrangler.readthedocs.io/en/legacy/)
@@ -123,31 +124,31 @@ wr.db.to_sql(df, engine, schema="test", name="my_table")
 
 Please [send a Pull Request](https://github.com/awslabs/aws-data-wrangler/edit/master/README.md) with your resource reference and @githubhandle.
 
-* [Optimize Python ETL by extending Pandas with AWS Data Wrangler](https://aws.amazon.com/blogs/big-data/optimize-python-etl-by-extending-pandas-with-aws-data-wrangler/) [[@igorborgest](https://github.com/igorborgest)]
-* [Reading Parquet Files With AWS Lambda](https://aprakash.wordpress.com/2020/04/14/reading-parquet-files-with-aws-lambda/) [[@anand086](https://github.com/anand086)]
-* [Transform AWS CloudTrail data using AWS Data Wrangler](https://aprakash.wordpress.com/2020/09/17/transform-aws-cloudtrail-data-using-aws-data-wrangler/) [[@anand086](https://github.com/anand086)]
-* [Getting started on AWS Data Wrangler and Athena](https://medium.com/@dheerajsharmainampudi/getting-started-on-aws-data-wrangler-and-athena-7b446c834076) [[@dheerajsharma21](https://github.com/dheerajsharma21)]
-* [Simplifying Pandas integration with AWS data related services](https://medium.com/@bv_subhash/aws-data-wrangler-simplifying-pandas-integration-with-aws-data-related-services-2b3325c12188) [[@bvsubhash](https://github.com/bvsubhash)]
+- [Optimize Python ETL by extending Pandas with AWS Data Wrangler](https://aws.amazon.com/blogs/big-data/optimize-python-etl-by-extending-pandas-with-aws-data-wrangler/) [[@igorborgest](https://github.com/igorborgest)]
+- [Reading Parquet Files With AWS Lambda](https://aprakash.wordpress.com/2020/04/14/reading-parquet-files-with-aws-lambda/) [[@anand086](https://github.com/anand086)]
+- [Transform AWS CloudTrail data using AWS Data Wrangler](https://aprakash.wordpress.com/2020/09/17/transform-aws-cloudtrail-data-using-aws-data-wrangler/) [[@anand086](https://github.com/anand086)]
+- [Getting started on AWS Data Wrangler and Athena](https://medium.com/@dheerajsharmainampudi/getting-started-on-aws-data-wrangler-and-athena-7b446c834076) [[@dheerajsharma21](https://github.com/dheerajsharma21)]
+- [Simplifying Pandas integration with AWS data related services](https://medium.com/@bv_subhash/aws-data-wrangler-simplifying-pandas-integration-with-aws-data-related-services-2b3325c12188) [[@bvsubhash](https://github.com/bvsubhash)]
 
 ## Who uses AWS Data Wrangler?
 
 Knowing which companies are using this library is important to help prioritize the project internally.
 
 Please [send a Pull Request](https://github.com/awslabs/aws-data-wrangler/edit/master/README.md) with your company name and @githubhandle if you may.
 
-* [Amazon](https://www.amazon.com/)
-* [AWS](https://aws.amazon.com/)
-* [Cepsa](https://cepsa.com) [[@alvaropc](https://github.com/alvaropc)]
-* [Cognitivo](https://www.cognitivo.ai/) [[@msantino](https://github.com/msantino)]
-* [Digio](https://www.digio.com.br/) [[@afonsomy](https://github.com/afonsomy)]
-* [DNX](https://www.dnx.solutions/) [[@DNXLabs](https://github.com/DNXLabs)]
-* [Funcional Health Tech](https://www.funcionalcorp.com.br/) [[@webysther](https://github.com/webysther)]
-* [LINE TV](https://www.linetv.tw/) [[@bryanyang0528](https://github.com/bryanyang0528)]
-* [M4U](https://www.m4u.com.br/) [[@Thiago-Dantas](https://github.com/Thiago-Dantas)]
-* [OKRA Technologies](https://okra.ai) [[@JPFrancoia](https://github.com/JPFrancoia), [@schot](https://github.com/schot)]
-* [Pier](https://www.pier.digital/) [[@flaviomax](https://github.com/flaviomax)]
-* [Pismo](https://www.pismo.io/) [[@msantino](https://github.com/msantino)]
-* [Serasa Experian](https://www.serasaexperian.com.br/) [[@andre-marcos-perez](https://github.com/andre-marcos-perez)]
-* [Shipwell](https://shipwell.com/) [[@zacharycarter](https://github.com/zacharycarter)]
-* [Thinkbumblebee](https://www.thinkbumblebee.com/) [[@dheerajsharma21]](https://github.com/dheerajsharma21)
-* [Zillow](https://www.zillow.com/) [[@nicholas-miles]](https://github.com/nicholas-miles)
+- [Amazon](https://www.amazon.com/)
+- [AWS](https://aws.amazon.com/)
+- [Cepsa](https://cepsa.com) [[@alvaropc](https://github.com/alvaropc)]
+- [Cognitivo](https://www.cognitivo.ai/) [[@msantino](https://github.com/msantino)]
+- [Digio](https://www.digio.com.br/) [[@afonsomy](https://github.com/afonsomy)]
+- [DNX](https://www.dnx.solutions/) [[@DNXLabs](https://github.com/DNXLabs)]
+- [Funcional Health Tech](https://www.funcionalcorp.com.br/) [[@webysther](https://github.com/webysther)]
+- [LINE TV](https://www.linetv.tw/) [[@bryanyang0528](https://github.com/bryanyang0528)]
+- [M4U](https://www.m4u.com.br/) [[@Thiago-Dantas](https://github.com/Thiago-Dantas)]
+- [OKRA Technologies](https://okra.ai) [[@JPFrancoia](https://github.com/JPFrancoia), [@schot](https://github.com/schot)]
+- [Pier](https://www.pier.digital/) [[@flaviomax](https://github.com/flaviomax)]
+- [Pismo](https://www.pismo.io/) [[@msantino](https://github.com/msantino)]
+- [Serasa Experian](https://www.serasaexperian.com.br/) [[@andre-marcos-perez](https://github.com/andre-marcos-perez)]
+- [Shipwell](https://shipwell.com/) [[@zacharycarter](https://github.com/zacharycarter)]
+- [Thinkbumblebee](https://www.thinkbumblebee.com/) [[@dheerajsharma21]](https://github.com/dheerajsharma21)
+- [Zillow](https://www.zillow.com/) [[@nicholas-miles]](https://github.com/nicholas-miles)

awswrangler/s3/_write_parquet.py

Lines changed: 1 addition & 1 deletion
@@ -286,7 +286,7 @@ def to_parquet(  # pylint: disable=too-many-arguments,too-many-locals
     mode: str, optional
         ``append`` (Default), ``overwrite``, ``overwrite_partitions``. Only takes effect if dataset=True.
         For details check the related tutorial:
-        https://aws-data-wrangler.readthedocs.io/en/latest/stubs/awswrangler.s3.to_parquet.html#awswrangler.s3.to_parquet
+        https://aws-data-wrangler.readthedocs.io/en/stable/stubs/awswrangler.s3.to_parquet.html#awswrangler.s3.to_parquet
     catalog_versioning : bool
         If True and `mode="overwrite"`, creates an archived version of the table catalog before updating it.
     schema_evolution : bool
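For context, the `mode` behaviour documented in this hunk can be sketched as below. The bucket path and column names are hypothetical, and the `wr.s3.to_parquet` call is left commented out because it needs AWS credentials and a real bucket:

```python
import pandas as pd

# Hypothetical example data; "s3://my-bucket/dataset/" below is a placeholder.
df = pd.DataFrame({"id": [1, 2], "value": ["foo", "boo"]})

# import awswrangler as wr
# wr.s3.to_parquet(                     # illustrative only: requires AWS credentials
#     df=df,
#     path="s3://my-bucket/dataset/",
#     dataset=True,                     # mode only takes effect with dataset=True
#     mode="overwrite_partitions",      # or "append" (default) / "overwrite"
#     partition_cols=["id"],
# )

print(df.shape)
```

With `overwrite_partitions`, only the partitions present in `df` would be replaced; `append` adds files alongside the existing ones and `overwrite` wipes the whole prefix first.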

awswrangler/s3/_write_text.py

Lines changed: 1 addition & 1 deletion
@@ -156,7 +156,7 @@ def to_csv(  # pylint: disable=too-many-arguments,too-many-locals
     mode : str, optional
         ``append`` (Default), ``overwrite``, ``overwrite_partitions``. Only takes effect if dataset=True.
         For details check the related tutorial:
-        https://aws-data-wrangler.readthedocs.io/en/latest/stubs/awswrangler.s3.to_parquet.html#awswrangler.s3.to_parquet
+        https://aws-data-wrangler.readthedocs.io/en/stable/stubs/awswrangler.s3.to_parquet.html#awswrangler.s3.to_parquet
     catalog_versioning : bool
         If True and `mode="overwrite"`, creates an archived version of the table catalog before updating it.
     database : str, optional

docs/source/what.rst

Lines changed: 1 addition & 1 deletion
@@ -5,4 +5,4 @@ An `AWS Professional Service <https://aws.amazon.com/professional-services>`_ `o
 
 Built on top of other open-source projects like `Pandas <https://github.com/pandas-dev/pandas>`_, `Apache Arrow <https://github.com/apache/arrow>`_, `Boto3 <https://github.com/boto/boto3>`_, `SQLAlchemy <https://github.com/sqlalchemy/sqlalchemy>`_, `Psycopg2 <https://github.com/psycopg/psycopg2>`_ and `PyMySQL <https://github.com/PyMySQL/PyMySQL>`_, it offers abstracted functions to execute usual ETL tasks like load/unload data from **Data Lakes**, **Data Warehouses** and **Databases**.
 
-Check our `tutorials <https://github.com/awslabs/aws-data-wrangler/tree/master/tutorials>`_ or the `list of functionalities <https://aws-data-wrangler.readthedocs.io/en/latest/api.html>`_.
+Check our `tutorials <https://github.com/awslabs/aws-data-wrangler/tree/master/tutorials>`_ or the `list of functionalities <https://aws-data-wrangler.readthedocs.io/en/stable/api.html>`_.

tutorials/001 - Introduction.ipynb

Lines changed: 10 additions & 10 deletions
@@ -19,7 +19,7 @@
 "\n",
 "Built on top of other open-source projects like [Pandas](https://github.com/pandas-dev/pandas), [Apache Arrow](https://github.com/apache/arrow), [Boto3](https://github.com/boto/boto3), [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy), [Psycopg2](https://github.com/psycopg/psycopg2) and [PyMySQL](https://github.com/PyMySQL/PyMySQL), it offers abstracted functions to execute usual ETL tasks like load/unload data from **Data Lakes**, **Data Warehouses** and **Databases**.\n",
 "\n",
-"Check our [list of functionalities](https://aws-data-wrangler.readthedocs.io/en/latest/api.html)."
+"Check our [list of functionalities](https://aws-data-wrangler.readthedocs.io/en/stable/api.html)."
 ]
 },
 {
@@ -30,15 +30,15 @@
 "\n",
 "The Wrangler runs almost anywhere over Python 3.6, 3.7 and 3.8, so there are several different ways to install it in the desired enviroment.\n",
 "\n",
-" - [PyPi (pip)](https://aws-data-wrangler.readthedocs.io/en/latest/install.html#pypi-pip)\n",
-" - [Conda](https://aws-data-wrangler.readthedocs.io/en/latest/install.html#conda)\n",
-" - [AWS Lambda Layer](https://aws-data-wrangler.readthedocs.io/en/latest/install.html#aws-lambda-layer)\n",
-" - [AWS Glue Python Shell Jobs](https://aws-data-wrangler.readthedocs.io/en/latest/install.html#aws-glue-python-shell-jobs)\n",
-" - [AWS Glue PySpark Jobs](https://aws-data-wrangler.readthedocs.io/en/latest/install.html#aws-glue-pyspark-jobs)\n",
-" - [Amazon SageMaker Notebook](https://aws-data-wrangler.readthedocs.io/en/latest/install.html#amazon-sagemaker-notebook)\n",
-" - [Amazon SageMaker Notebook Lifecycle](https://aws-data-wrangler.readthedocs.io/en/latest/install.html#amazon-sagemaker-notebook-lifecycle)\n",
-" - [EMR Cluster](https://aws-data-wrangler.readthedocs.io/en/latest/install.html#emr-cluster)\n",
-" - [From source](https://aws-data-wrangler.readthedocs.io/en/latest/install.html#from-source)\n",
+" - [PyPi (pip)](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#pypi-pip)\n",
+" - [Conda](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#conda)\n",
+" - [AWS Lambda Layer](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#aws-lambda-layer)\n",
+" - [AWS Glue Python Shell Jobs](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#aws-glue-python-shell-jobs)\n",
+" - [AWS Glue PySpark Jobs](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#aws-glue-pyspark-jobs)\n",
+" - [Amazon SageMaker Notebook](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#amazon-sagemaker-notebook)\n",
+" - [Amazon SageMaker Notebook Lifecycle](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#amazon-sagemaker-notebook-lifecycle)\n",
+" - [EMR Cluster](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#emr-cluster)\n",
+" - [From source](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#from-source)\n",
 "\n",
 "Some good practices for most of the above methods are:\n",
 " - Use new and individual Virtual Environments for each project ([venv](https://docs.python.org/3/library/venv.html))\n",

tutorials/007 - Redshift, MySQL, PostgreSQL.ipynb

Lines changed: 5 additions & 5 deletions
@@ -10,9 +10,9 @@
 "\n",
 "[Wrangler](https://github.com/awslabs/aws-data-wrangler)'s Database module (`wr.db.*`) has two mainly functions that tries to follow the Pandas conventions, but add more data type consistency.\n",
 "\n",
-"- [wr.db.to_sql()](https://aws-data-wrangler.readthedocs.io/en/latest/stubs/awswrangler.db.to_sql.html#awswrangler.db.to_sql)\n",
+"- [wr.db.to_sql()](https://aws-data-wrangler.readthedocs.io/en/stable/stubs/awswrangler.db.to_sql.html#awswrangler.db.to_sql)\n",
 "\n",
-"- [wr.db.read_sql_query()](https://aws-data-wrangler.readthedocs.io/en/latest/stubs/awswrangler.db.read_sql_query.html#awswrangler.db.read_sql_query)"
+"- [wr.db.read_sql_query()](https://aws-data-wrangler.readthedocs.io/en/stable/stubs/awswrangler.db.read_sql_query.html#awswrangler.db.read_sql_query)"
 ]
 },
 {
@@ -38,11 +38,11 @@
 "\n",
 "The Wrangler offers basically three diffent ways to create a SQLAlchemy engine.\n",
 "\n",
-"1 - [wr.catalog.get_engine()](https://aws-data-wrangler.readthedocs.io/en/latest/stubs/awswrangler.catalog.get_engine.html#awswrangler.catalog.get_engine): Get the engine from a Glue Catalog Connection.\n",
+"1 - [wr.catalog.get_engine()](https://aws-data-wrangler.readthedocs.io/en/stable/stubs/awswrangler.catalog.get_engine.html#awswrangler.catalog.get_engine): Get the engine from a Glue Catalog Connection.\n",
 "\n",
-"2 - [wr.db.get_engine()](https://aws-data-wrangler.readthedocs.io/en/latest/stubs/awswrangler.db.get_engine.html#awswrangler.db.get_engine): Get the engine from primitives values (host, user, password, etc).\n",
+"2 - [wr.db.get_engine()](https://aws-data-wrangler.readthedocs.io/en/stable/stubs/awswrangler.db.get_engine.html#awswrangler.db.get_engine): Get the engine from primitives values (host, user, password, etc).\n",
 "\n",
-"3 - [wr.db.get_redshift_temp_engine()](https://aws-data-wrangler.readthedocs.io/en/latest/stubs/awswrangler.db.get_redshift_temp_engine.html#awswrangler.db.get_redshift_temp_engine): Get redshift engine with temporary credentials. "
+"3 - [wr.db.get_redshift_temp_engine()](https://aws-data-wrangler.readthedocs.io/en/stable/stubs/awswrangler.db.get_redshift_temp_engine.html#awswrangler.db.get_redshift_temp_engine): Get redshift engine with temporary credentials. "
 ]
 },
 {
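The workflow this notebook describes can be sketched as follows. The Glue connection name, host, and credentials are placeholders, and the Wrangler calls are left commented because they need a reachable database and AWS credentials:

```python
import pandas as pd

df = pd.DataFrame({"name": ["foo", "boo"], "value": [1, 2]})

# import awswrangler as wr
#
# Any one of the three engine factories linked above would work here:
# engine = wr.catalog.get_engine("my-glue-connection")        # from a Glue Catalog Connection
# engine = wr.db.get_engine(db_type="postgresql",             # or from primitive values
#                           host="...", port=5432,
#                           database="postgres",
#                           user="test", password="...")
#
# wr.db.to_sql(df, engine, schema="test", name="my_table")    # write the frame
# df2 = wr.db.read_sql_query("SELECT * FROM test.my_table",   # read it back
#                            con=engine)

print(len(df))
```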

tutorials/014 - Schema Evolution.ipynb

Lines changed: 2 additions & 2 deletions
@@ -10,8 +10,8 @@
 "\n",
 "Wrangler support new **columns** on Parquet Dataset through:\n",
 "\n",
-"- [wr.s3.to_parquet()](https://aws-data-wrangler.readthedocs.io/en/latest/stubs/awswrangler.s3.to_parquet.html#awswrangler.s3.to_parquet)\n",
-"- [wr.s3.store_parquet_metadata()](https://aws-data-wrangler.readthedocs.io/en/latest/stubs/awswrangler.s3.store_parquet_metadata.html#awswrangler.s3.store_parquet_metadata) i.e. \"Crawler\""
+"- [wr.s3.to_parquet()](https://aws-data-wrangler.readthedocs.io/en/stable/stubs/awswrangler.s3.to_parquet.html#awswrangler.s3.to_parquet)\n",
+"- [wr.s3.store_parquet_metadata()](https://aws-data-wrangler.readthedocs.io/en/stable/stubs/awswrangler.s3.store_parquet_metadata.html#awswrangler.s3.store_parquet_metadata) i.e. \"Crawler\""
 ]
 },
 {
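A minimal sketch of the schema-evolution flow this notebook covers: write a dataset, then append a frame carrying an extra column. The path, database, and table names are hypothetical, and the `wr.s3.to_parquet` calls are commented out since they need AWS access:

```python
import pandas as pd

# First write has two columns; the appended frame adds a new "date" column.
df1 = pd.DataFrame({"id": [1, 2], "value": ["foo", "boo"]})
df2 = pd.DataFrame({"id": [3], "value": ["bar"], "date": ["2020-01-03"]})

# import awswrangler as wr
# path = "s3://my-bucket/dataset/"                  # placeholder path
# wr.s3.to_parquet(df1, path=path, dataset=True,
#                  database="default", table="my_table")
# wr.s3.to_parquet(df2, path=path, dataset=True,    # append with a new column:
#                  database="default", table="my_table",
#                  mode="append")                   # the catalog schema gains "date"

print(sorted(set(df1.columns) | set(df2.columns)))
```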

tutorials/021 - Global Configurations.ipynb

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@
 "- **Environment variables**\n",
 "- **wr.config**\n",
 "\n",
-"*P.S. Check the [function API doc](https://aws-data-wrangler.readthedocs.io/en/latest/api.html) to see if your function has some argument that can be configured through Global configurations.*"
+"*P.S. Check the [function API doc](https://aws-data-wrangler.readthedocs.io/en/stable/api.html) to see if your function has some argument that can be configured through Global configurations.*"
 ]
 },
 {
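The two configuration mechanisms this notebook lists can be sketched like so. The `WR_DATABASE` / `wr.config.database` names follow the tutorial's pattern but should be treated as illustrative; the Wrangler lines are commented because they require the library and AWS access:

```python
import os

# Mechanism 1: environment variable, picked up when Wrangler is imported.
os.environ["WR_DATABASE"] = "default"

# Mechanism 2 (equivalent, set programmatically after import):
# import awswrangler as wr
# wr.config.database = "default"
#
# Functions whose argument is configurable can then omit it, e.g.:
# wr.athena.read_sql_query("SELECT 1 AS x")   # no database= argument needed

print(os.environ["WR_DATABASE"])
```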
