This repository was archived by the owner on Nov 8, 2022. It is now read-only.

Commit 66863f8: Update S3 links

Author: Peter Izsak (committed)
1 parent: b9d7df0

19 files changed, +47 -47 lines changed
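The change is mechanical: every link that pointed at the old S3 bucket base URL https://s3-us-west-2.amazonaws.com/nlp-architect-data now points at the CloudFront distribution https://d2zs9tzlek599f.cloudfront.net, with the path after the base kept intact. As a minimal sketch of how such a bulk rewrite could be scripted (illustrative only; this script is not part of the commit):

```python
# Illustrative sketch, not part of the commit: replace the old S3 base URL
# with the new CloudFront base URL in every text file under a checkout.
from pathlib import Path

OLD_BASE = "https://s3-us-west-2.amazonaws.com/nlp-architect-data"
NEW_BASE = "https://d2zs9tzlek599f.cloudfront.net"

def rewrite_links(repo_root: str = ".") -> int:
    """Rewrite OLD_BASE to NEW_BASE in place; return the number of files changed."""
    changed = 0
    for path in Path(repo_root).rglob("*"):
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binary or unreadable files
        if OLD_BASE in text:
            path.write_text(text.replace(OLD_BASE, NEW_BASE), encoding="utf-8")
            changed += 1
    return changed

if __name__ == "__main__":
    print(f"rewrote {rewrite_links()} files")
```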

docs-source/source/model_zoo.rst
Lines changed: 11 additions & 11 deletions

@@ -27,27 +27,27 @@ NLP Architect Model Zoo
      - Links
    * - :doc:`Sparse GNMT <sparse_gnmt>`
      - 90% sparse GNMT model and a 2x2 block sparse translating German to English trained on Europarl-v7 [#]_ , Common Crawl and News Commentary 11 datasets
-     - | `model <https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/sparse_gnmt/gnmt_sparse.zip>`_
-       | `2x2 block sparse model <https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/sparse_gnmt/gnmt_blocksparse2x2.zip>`_
+     - | `model <https://d2zs9tzlek599f.cloudfront.net/models/sparse_gnmt/gnmt_sparse.zip>`_
+       | `2x2 block sparse model <https://d2zs9tzlek599f.cloudfront.net/models/sparse_gnmt/gnmt_blocksparse2x2.zip>`_
    * - :doc:`Intent Extraction <intent>`
      - A :py:class:`MultiTaskIntentModel <nlp_architect.models.intent_extraction.MultiTaskIntentModel>` intent extraction and slot tagging model, trained on SNIPS NLU dataset
-     - | `model <https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/intent/model.h5>`_
-       | `params <https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/intent/model_info.dat>`_
+     - | `model <https://d2zs9tzlek599f.cloudfront.net/models/intent/model.h5>`_
+       | `params <https://d2zs9tzlek599f.cloudfront.net/models/intent/model_info.dat>`_
    * - :doc:`Named Entity Recognition <ner_crf>`
      - A :py:class:`NERCRF <nlp_architect.models.ner_crf.NERCRF>` model trained on CoNLL 2003 dataset
-     - | `model <https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/ner/model.h5>`_
-       | `params <https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/ner/model_info.dat>`_
+     - | `model <https://d2zs9tzlek599f.cloudfront.net/models/ner/model.h5>`_
+       | `params <https://d2zs9tzlek599f.cloudfront.net/models/ner/model_info.dat>`_
    * - :doc:`Dependency parser <bist_parser>`
      - Graph-based dependency parser using BiLSTM feature extractors
-     - `model <https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/dep_parse/bist-pretrained.zip>`_
+     - `model <https://d2zs9tzlek599f.cloudfront.net/models/dep_parse/bist-pretrained.zip>`_
    * - :doc:`Machine comprehension <reading_comprehension>`
      - Match LSTM model trained on SQuAD dataset
-     - | `model <https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/mrc/mrc_model.zip>`_
-       | `data <https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/mrc/mrc_data.zip>`_
+     - | `model <https://d2zs9tzlek599f.cloudfront.net/models/mrc/mrc_model.zip>`_
+       | `data <https://d2zs9tzlek599f.cloudfront.net/models/mrc/mrc_data.zip>`_
    * - :doc:`Word chunker <chunker>`
      - A word chunker model trained on CoNLL 2000 dataset
-     - | `model <https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/chunker/model.h5>`_
-       | `params <https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/chunker/model_info.dat.params>`_
+     - | `model <https://d2zs9tzlek599f.cloudfront.net/models/chunker/model.h5>`_
+       | `params <https://d2zs9tzlek599f.cloudfront.net/models/chunker/model_info.dat.params>`_

 References
 ----------

docs-source/source/publications.rst
Lines changed: 1 addition & 1 deletion

@@ -53,4 +53,4 @@ Demos
 -----
 - NeurIPS 2018:

-  - Unsupervised Aspect-based Sentiment Analysis (`video <https://s3-us-west-2.amazonaws.com/nlp-architect-data/content/absa_kingsman_demo.mp4>`_)
+  - Unsupervised Aspect-based Sentiment Analysis (`video <https://d2zs9tzlek599f.cloudfront.net/content/absa_kingsman_demo.mp4>`_)

docs-source/source/sparse_gnmt.rst
Lines changed: 3 additions & 3 deletions

@@ -151,7 +151,7 @@ Run inference using our pre-trained models:
 .. code-block:: bash

     # Download pre-trained model zip file, e.g. gnmt_sparse.zip
-    wget https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/sparse_gnmt/gnmt_sparse.zip
+    wget https://d2zs9tzlek599f.cloudfront.net/models/sparse_gnmt/gnmt_sparse.zip

     # Unzip checkpoint + vocabulary files
     unzip gnmt_sparse.zip -d /tmp/gnmt_sparse_checkpoint

@@ -386,5 +386,5 @@ References
 .. _`Shared Task`: http://www.statmt.org/wmt16/translation-task.html
 .. _`Model Zoo`: http://nlp_architect.nervanasys.com/model_zoo.html
 .. _`TensorFlow API`: https://www.tensorflow.org/api_docs/python/tf/quantize
-.. _`Sparse`: https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/sparse_gnmt/gnmt_sparse.zip
-.. _`2x2 Block Sparse`: https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/sparse_gnmt/gnmt_blocksparse2x2.zip
+.. _`Sparse`: https://d2zs9tzlek599f.cloudfront.net/models/sparse_gnmt/gnmt_sparse.zip
+.. _`2x2 Block Sparse`: https://d2zs9tzlek599f.cloudfront.net/models/sparse_gnmt/gnmt_blocksparse2x2.zip

docs-source/source/term_set_expansion.rst
Lines changed: 5 additions & 5 deletions

@@ -84,19 +84,19 @@ size, min_count, window and hs hyperparameters. Please refer to the np2vec modul
     --corpus_format txt


-A `pretrained model <https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/term_set/enwiki-20171201_pretrained_set_expansion.txt.tar.gz>`__
+A `pretrained model <https://d2zs9tzlek599f.cloudfront.net/models/term_set/enwiki-20171201_pretrained_set_expansion.txt.tar.gz>`__
 on English Wikipedia dump (``enwiki-20171201-pages-articles-multistream.xml.bz2``) is available under
 Apache 2.0 license. It has been trained with hyperparameters values
-recommended above. Full English Wikipedia `raw corpus <https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/term_set/enwiki-20171201.txt.gz>`_ and
-`marked corpus <https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/term_set/enwiki-20171201_spacy_marked.txt.tar.gz>`_
+recommended above. Full English Wikipedia `raw corpus <https://d2zs9tzlek599f.cloudfront.net/models/term_set/enwiki-20171201.txt.gz>`_ and
+`marked corpus <https://d2zs9tzlek599f.cloudfront.net/models/term_set/enwiki-20171201_spacy_marked.txt.tar.gz>`_
 are also available under the
 `Creative Commons Attribution-Share-Alike 3.0 License <https://creativecommons.org/licenses/by-sa/3.0/>`__.

-A `pretrained model with grouping <https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/term_set/enwiki-20171201_grouping_pretrained_set_expansion.tar.gz>`__
+A `pretrained model with grouping <https://d2zs9tzlek599f.cloudfront.net/models/term_set/enwiki-20171201_grouping_pretrained_set_expansion.tar.gz>`__
 on the same English Wikipedia dump is also
 available under
 Apache 2.0 license. It has been trained with hyperparameters values
-recommended above. `Marked corpus <https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/term_set/enwiki-20171201_grouping_marked.txt.tar.gz>`_
+recommended above. `Marked corpus <https://d2zs9tzlek599f.cloudfront.net/models/term_set/enwiki-20171201_grouping_marked.txt.tar.gz>`_
 is also available under the
 `Creative Commons Attribution-Share-Alike 3.0 License <https://creativecommons.org/licenses/by-sa/3.0/>`__.

docs-source/source/trend_analysis.rst
Lines changed: 1 addition & 1 deletion

@@ -37,7 +37,7 @@ In this stage, the algorithm will also train a W2V model on the joint corpora to
 In the second stage the topic lists are being compared and analyzed.
 Finally the UI reads the analysis data and generates automatic reports for extracted topics, “Hot” and “Cold” trends, and topic clustering in 2D space.

-The noun phrase extraction module is using a pre-trained `model <https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/chunker/model.h5>`__ which is available under the Apache 2.0 license.
+The noun phrase extraction module is using a pre-trained `model <https://d2zs9tzlek599f.cloudfront.net/models/chunker/model.h5>`__ which is available under the Apache 2.0 license.

 Flow diagram
 ============

examples/reading_comprehension/match_lstm_mrc/machine_comprehension_api.py
Lines changed: 4 additions & 4 deletions

@@ -85,13 +85,13 @@ def download_model(self):
         if self.prompt is True:
             license_prompt(
                 "mrc_data",
-                "https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/mrc"
+                "https://d2zs9tzlek599f.cloudfront.net/models/mrc"
                 "/mrc_data.zip",
                 self.data_dir,
             )
             license_prompt(
                 "mrc_model",
-                "https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/mrc"
+                "https://d2zs9tzlek599f.cloudfront.net/models/mrc"
                 "/mrc_model.zip",
                 self.model_dir,
             )

@@ -100,12 +100,12 @@ def download_model(self):
         makedirs(self.data_dir, exist_ok=True)
         makedirs(self.model_dir, exist_ok=True)
         download_unlicensed_file(
-            "https://s3-us-west-2.amazonaws.com/nlp-architect-data" "/models/mrc/",
+            "https://d2zs9tzlek599f.cloudfront.net" "/models/mrc/",
             "mrc_data.zip",
             data_zipfile,
         )
         download_unlicensed_file(
-            "https://s3-us-west-2.amazonaws.com/nlp-architect-data" "/models/mrc/",
+            "https://d2zs9tzlek599f.cloudfront.net" "/models/mrc/",
             "mrc_model.zip",
             model_zipfile,
         )
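Note that the URL arguments in these calls rely on Python's implicit concatenation of adjacent string literals: the base URL and the path segment are separate literals that the parser joins into a single string, so the rewrite only needs to touch the first literal. A quick illustration:

```python
# Adjacent string literals are concatenated at parse time, so splitting
# a long URL across two literals changes nothing at runtime.
url = "https://d2zs9tzlek599f.cloudfront.net" "/models/mrc/"
assert url == "https://d2zs9tzlek599f.cloudfront.net/models/mrc/"
```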

examples/sparse_gnmt/README.md
Lines changed: 3 additions & 3 deletions

@@ -32,8 +32,8 @@ You can use these models to [Run Inference with Pre-Trained Model](#run-inferenc
 | Model | Sparsity | BLEU| Non-Zero Parameters | Data Type |
 |----------------------------|:--------:|:----:|:-------------------:|:---------:|
 | Baseline | 0% | 29.9 | ~210M | Float32 |
-| [Sparse](https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/sparse_gnmt/gnmt_sparse.zip) | 90% | 28.4 | ~22M | Float32 |
-| [2x2 Block Sparse](https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/sparse_gnmt/gnmt_blocksparse2x2.zip) | 90% | 27.8 | ~22M | Float32 |
+| [Sparse](https://d2zs9tzlek599f.cloudfront.net/models/sparse_gnmt/gnmt_sparse.zip) | 90% | 28.4 | ~22M | Float32 |
+| [2x2 Block Sparse](https://d2zs9tzlek599f.cloudfront.net/models/sparse_gnmt/gnmt_blocksparse2x2.zip) | 90% | 27.8 | ~22M | Float32 |
 | Quantized Sparse | 90% | 28.4 | ~22M | Integer8 |
 | Quantized 2x2 Block Sparse | 90% | 27.6 | ~22M | Integer8 |

@@ -92,7 +92,7 @@ Follow these instructions in order to use our pre-trained models:
 ```

 # Download pre-trained model zip file, e.g. gnmt_sparse.zip
-wget https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/sparse_gnmt/gnmt_sparse.zip
+wget https://d2zs9tzlek599f.cloudfront.net/models/sparse_gnmt/gnmt_sparse.zip

 # Unzip checkpoint + vocabulary files
 unzip gnmt_sparse.zip -d /tmp/gnmt_sparse_checkpoint

nlp_architect/api/intent_extraction_api.py
Lines changed: 2 additions & 2 deletions

@@ -73,12 +73,12 @@ def _download_pretrained_model(prompt=True):
         if agreed is False:
             sys.exit(0)
         download_unlicensed_file(
-            "https://s3-us-west-2.amazonaws.com/nlp-architect-data" "/models/intent/",
+            "https://d2zs9tzlek599f.cloudfront.net" "/models/intent/",
             "model_info.dat",
             IntentExtractionApi.pretrained_model_info,
         )
         download_unlicensed_file(
-            "https://s3-us-west-2.amazonaws.com/nlp-architect-data" "/models/intent/",
+            "https://d2zs9tzlek599f.cloudfront.net" "/models/intent/",
             "model.h5",
             IntentExtractionApi.pretrained_model,
         )

nlp_architect/api/ner_api.py
Lines changed: 2 additions & 2 deletions

@@ -72,12 +72,12 @@ def _download_pretrained_model(self, prompt=True):
         if agreed is False:
             sys.exit(0)
         download_unlicensed_file(
-            "https://s3-us-west-2.amazonaws.com/nlp-architect-data" "/models/ner/",
+            "https://d2zs9tzlek599f.cloudfront.net" "/models/ner/",
             "model_v4.h5",
             self.pretrained_model,
         )
         download_unlicensed_file(
-            "https://s3-us-west-2.amazonaws.com/nlp-architect-data" "/models/ner/",
+            "https://d2zs9tzlek599f.cloudfront.net" "/models/ner/",
             "model_info_v4.dat",
             self.pretrained_model_info,
         )

nlp_architect/models/absa/utils.py
Lines changed: 1 addition & 1 deletion

@@ -37,7 +37,7 @@ def _download_pretrained_rerank_model(rerank_model_full_path):
     makedirs(rerank_model_dir, exist_ok=True)
     print("dowloading pre-trained reranking model..")
     download_unlicensed_file(
-        "https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/" "absa/",
+        "https://d2zs9tzlek599f.cloudfront.net/models/" "absa/",
         "rerank_model.h5",
         rerank_model_full_path,
     )
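Every one of these call sites uses the same helper signature visible in the diffs: a base URL, a remote file name, and a local destination path. As a rough mental model only, a minimal stand-in assuming that signature (this is not NLP Architect's actual implementation):

```python
# Minimal stand-in for download_unlicensed_file, assuming the
# (url_base, filename, destination) signature seen in the diffs above.
import urllib.request

def download_unlicensed_file(url_base: str, filename: str, destination: str) -> None:
    """Fetch url_base + filename and write the bytes to destination."""
    with urllib.request.urlopen(url_base + filename) as response:
        with open(destination, "wb") as out:
            out.write(response.read())

# Usage mirroring the call in nlp_architect/models/absa/utils.py:
# download_unlicensed_file(
#     "https://d2zs9tzlek599f.cloudfront.net/models/" "absa/",
#     "rerank_model.h5",
#     "/tmp/rerank_model.h5",
# )
```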
