articles/ai-studio/how-to/deploy-models-jamba.md
15 additions & 27 deletions
@@ -25,14 +25,11 @@ The Jamba-Instruct model is AI21's production-grade Mamba-based large language m
 ::: zone pivot="programming-language-python"
 
-
 ## Jamba-Instruct chat models
-
-
 The Jamba Instruct model is AI21's production-grade Mamba-based large language model (LLM) which leverages AI21's hybrid Mamba-Transformer architecture. It's an instruction-tuned version of AI21's hybrid structured state space model (SSM) transformer Jamba model. The Jamba Instruct model is built for reliable commercial use with respect to quality and performance.
 
 You can learn more about the models in their respective model card:
 Response: As of now, it's estimated that there are about 7,000 languages spoken around the world. However, this number can vary as some languages become extinct and new ones develop. It's also important to note that the number of speakers can greatly vary between languages, with some having millions of speakers and others only a few hundred.
-Model: Jamba-Instruct
+Model: AI21-Jamba-Instruct
 Usage:
 Prompt tokens: 19
 Total tokens: 91
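The sample output above (response text, model name, token counts) is what the article prints after a chat-completions call. As a rough sketch of producing that summary, here is a standard-library-only helper; the endpoint path, auth header, and payload shape are illustrative assumptions, not the article's exact SDK code:

```python
import json
import urllib.request

def format_usage(model, prompt_tokens, completion_tokens, total_tokens):
    """Render a usage summary in the same shape as the article's sample output."""
    return (
        f"Model: {model}\n"
        "Usage:\n"
        f"  Prompt tokens: {prompt_tokens}\n"
        f"  Completion tokens: {completion_tokens}\n"
        f"  Total tokens: {total_tokens}"
    )

def chat_completion(endpoint, api_key, messages):
    """POST a chat request to a serverless endpoint (URL and auth are placeholders)."""
    req = urllib.request.Request(
        url=f"{endpoint}/chat/completions",
        data=json.dumps({"messages": messages}).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

A real deployment would use the language-specific client shown in each pivot; this only mirrors the printed fields (`Model:`, `Prompt tokens:`, `Total tokens:`).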
@@ -279,14 +276,11 @@ except HttpResponseError as ex:
 ::: zone pivot="programming-language-javascript"
 
-
 ## Jamba-Instruct chat models
-
-
 The Jamba Instruct model is AI21's production-grade Mamba-based large language model (LLM) which leverages AI21's hybrid Mamba-Transformer architecture. It's an instruction-tuned version of AI21's hybrid structured state space model (SSM) transformer Jamba model. The Jamba Instruct model is built for reliable commercial use with respect to quality and performance.
 
 You can learn more about the models in their respective model card:
 Response: As of now, it's estimated that there are about 7,000 languages spoken around the world. However, this number can vary as some languages become extinct and new ones develop. It's also important to note that the number of speakers can greatly vary between languages, with some having millions of speakers and others only a few hundred.
-Model: Jamba-Instruct
+Model: AI21-Jamba-Instruct
 Usage:
 Prompt tokens: 19
 Total tokens: 91
@@ -554,14 +548,11 @@ catch (error) {
 ::: zone pivot="programming-language-csharp"
 
-
 ## Jamba-Instruct chat models
-
-
 The Jamba Instruct model is AI21's production-grade Mamba-based large language model (LLM) which leverages AI21's hybrid Mamba-Transformer architecture. It's an instruction-tuned version of AI21's hybrid structured state space model (SSM) transformer Jamba model. The Jamba Instruct model is built for reliable commercial use with respect to quality and performance.
 
 You can learn more about the models in their respective model card:
 Response: As of now, it's estimated that there are about 7,000 languages spoken around the world. However, this number can vary as some languages become extinct and new ones develop. It's also important to note that the number of speakers can greatly vary between languages, with some having millions of speakers and others only a few hundred.
-Model: Jamba-Instruct
+Model: AI21-Jamba-Instruct

 The Jamba Instruct model is AI21's production-grade Mamba-based large language model (LLM) which leverages AI21's hybrid Mamba-Transformer architecture. It's an instruction-tuned version of AI21's hybrid structured state space model (SSM) transformer Jamba model. The Jamba Instruct model is built for reliable commercial use with respect to quality and performance.
 
 You can learn more about the models in their respective model card:
articles/ai-studio/how-to/deploy-models-mistral-nemo.md
0 additions & 8 deletions
@@ -21,8 +21,6 @@ Mistral AI offers two categories of models. Premium models including [Mistral La
 ::: zone pivot="programming-language-python"
 
-
 ## Mistral Nemo chat model
-
 Mistral Nemo is a cutting-edge Language Model (LLM) boasting state-of-the-art reasoning, world knowledge, and coding capabilities within its size category.
 
 Mistral Nemo is a 12B model, making it a powerful drop-in replacement for any system using Mistral 7B, which it supersedes. It supports a context length of 128K, and it accepts only text inputs and generates text outputs.
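Since Mistral Nemo accepts only text and caps context at 128K tokens, callers eventually have to trim long chat histories to fit the budget. A hedged sketch of one way to do that; whitespace splitting stands in for a real tokenizer, and the 128K figure is the only detail taken from the text:

```python
def trim_history(messages, max_tokens=128_000):
    """Drop the oldest non-system messages until the rough token count fits.

    Token counting by whitespace split is a crude stand-in for the model's
    actual tokenizer; 128_000 approximates the stated 128K context length.
    """
    def count(msgs):
        return sum(len(m["content"].split()) for m in msgs)

    msgs = list(messages)
    while len(msgs) > 1 and count(msgs) > max_tokens:
        # Preserve a leading system message if one is present.
        drop_at = 1 if msgs[0].get("role") == "system" else 0
        if drop_at >= len(msgs) - 1:
            break  # nothing left to drop but the final turn
        del msgs[drop_at]
    return msgs
```

A production client would measure with the model's tokenizer and might summarize dropped turns instead of discarding them outright.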
@@ -461,8 +459,6 @@ except HttpResponseError as ex:
 ::: zone pivot="programming-language-javascript"
 
-
 ## Mistral Nemo chat model
-
 Mistral Nemo is a cutting-edge Language Model (LLM) boasting state-of-the-art reasoning, world knowledge, and coding capabilities within its size category.
 
 Mistral Nemo is a 12B model, making it a powerful drop-in replacement for any system using Mistral 7B, which it supersedes. It supports a context length of 128K, and it accepts only text inputs and generates text outputs.
@@ -920,8 +916,6 @@ catch (error) {
 ::: zone pivot="programming-language-csharp"
 
-
 ## Mistral Nemo chat model
-
 Mistral Nemo is a cutting-edge Language Model (LLM) boasting state-of-the-art reasoning, world knowledge, and coding capabilities within its size category.
 
 Mistral Nemo is a 12B model, making it a powerful drop-in replacement for any system using Mistral 7B, which it supersedes. It supports a context length of 128K, and it accepts only text inputs and generates text outputs.
 Mistral Nemo is a cutting-edge Language Model (LLM) boasting state-of-the-art reasoning, world knowledge, and coding capabilities within its size category.
 
 Mistral Nemo is a 12B model, making it a powerful drop-in replacement for any system using Mistral 7B, which it supersedes. It supports a context length of 128K, and it accepts only text inputs and generates text outputs.