Commit 4655090

let -> const
1 parent f723dfd commit 4655090
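The rationale behind this commit — the pipeline handles in the docs are created once and never reassigned, so `const` documents that intent — can be sketched with a minimal standalone example (the `handle` object below is a hypothetical stand-in, not from the docs):

```javascript
// Minimal sketch of the let -> const rationale: a `const` binding signals that
// a handle is never rebound, and makes accidental reassignment a runtime
// TypeError instead of a silent rebind.

const handle = { run: (text) => text.toUpperCase() }; // stand-in for a pipeline handle

let reassignmentError = null;
try {
  // A strict-mode function body that reassigns a `const` binding throws when called.
  new Function('"use strict"; const x = 1; x = 2;')();
} catch (e) {
  reassignmentError = e;
}

console.log(handle.run('ok'));                       // "OK"
console.log(reassignmentError instanceof TypeError); // true
```

Calling the handle still works exactly as before; only rebinding the name is ruled out, which is why the change is purely mechanical across the file.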

File tree

1 file changed (+13 −13 lines)


docs/source/pipelines.md

Lines changed: 13 additions & 13 deletions
@@ -16,7 +16,7 @@ Start by creating an instance of `pipeline()` and specifying a task you want to
 ```javascript
 import { pipeline } from '@huggingface/transformers';
 
-let classifier = await pipeline('sentiment-analysis');
+const classifier = await pipeline('sentiment-analysis');
 ```
 
 When running for the first time, the `pipeline` will download and cache the default pretrained model associated with the task. This can take a while, but subsequent calls will be much faster.
@@ -30,14 +30,14 @@ By default, models will be downloaded from the [Hugging Face Hub](https://huggin
 You can now use the classifier on your target text by calling it as a function:
 
 ```javascript
-let result = await classifier('I love transformers!');
+const result = await classifier('I love transformers!');
 // [{'label': 'POSITIVE', 'score': 0.9998}]
 ```
 
 If you have multiple inputs, you can pass them as an array:
 
 ```javascript
-let result = await classifier(['I love transformers!', 'I hate transformers!']);
+const result = await classifier(['I love transformers!', 'I hate transformers!']);
 // [{'label': 'POSITIVE', 'score': 0.9998}, {'label': 'NEGATIVE', 'score': 0.9982}]
 ```
 
@@ -46,9 +46,9 @@ You can also specify a different model to use for the pipeline by passing it as
 <!-- TODO: REPLACE 'nlptown/bert-base-multilingual-uncased-sentiment' with 'nlptown/bert-base-multilingual-uncased-sentiment'-->
 
 ```javascript
-let reviewer = await pipeline('sentiment-analysis', 'Xenova/bert-base-multilingual-uncased-sentiment');
+const reviewer = await pipeline('sentiment-analysis', 'Xenova/bert-base-multilingual-uncased-sentiment');
 
-let result = await reviewer('The Shawshank Redemption is a true masterpiece of cinema.');
+const result = await reviewer('The Shawshank Redemption is a true masterpiece of cinema.');
 // [{label: '5 stars', score: 0.8167929649353027}]
 ```
 
@@ -59,10 +59,10 @@ The `pipeline()` function is a great way to quickly use a pretrained model for i
 <!-- TODO: Replace 'Xenova/whisper-small.en' with 'openai/whisper-small.en' -->
 ```javascript
 // Allocate a pipeline for Automatic Speech Recognition
-let transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-small.en');
+const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-small.en');
 
 // Transcribe an audio file, loaded from a URL.
-let result = await transcriber('https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac');
+const result = await transcriber('https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac');
 // {text: ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'}
 ```
 
@@ -86,7 +86,7 @@ You can also specify which revision of the model to use, by passing a `revision`
 Since the Hugging Face Hub uses a git-based versioning system, you can use any valid git revision specifier (e.g., branch name or commit hash)
 
 ```javascript
-let transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-tiny.en', {
+const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-tiny.en', {
     revision: 'output_attentions',
 });
 ```
@@ -100,17 +100,17 @@ Many pipelines have additional options that you can specify. For example, when u
 <!-- TODO: Replace 'Xenova/nllb-200-distilled-600M' with 'facebook/nllb-200-distilled-600M' -->
 ```javascript
 // Allocation a pipeline for translation
-let translator = await pipeline('translation', 'Xenova/nllb-200-distilled-600M');
+const translator = await pipeline('translation', 'Xenova/nllb-200-distilled-600M');
 
 // Translate from English to Greek
-let result = await translator('I like to walk my dog.', {
+const result = await translator('I like to walk my dog.', {
     src_lang: 'eng_Latn',
     tgt_lang: 'ell_Grek'
 });
 // [ { translation_text: 'Μου αρέσει να περπατάω το σκυλί μου.' } ]
 
 // Translate back to English
-let result2 = await translator(result[0].translation_text, {
+const result2 = await translator(result[0].translation_text, {
     src_lang: 'ell_Grek',
     tgt_lang: 'eng_Latn'
 });
@@ -125,8 +125,8 @@ For example, to generate a poem using `LaMini-Flan-T5-783M`, you can do:
 
 ```javascript
 // Allocate a pipeline for text2text-generation
-let poet = await pipeline('text2text-generation', 'Xenova/LaMini-Flan-T5-783M');
-let result = await poet('Write me a love poem about cheese.', {
+const poet = await pipeline('text2text-generation', 'Xenova/LaMini-Flan-T5-783M');
+const result = await poet('Write me a love poem about cheese.', {
     max_new_tokens: 200,
     temperature: 0.9,
     repetition_penalty: 2.0,
