Source file: src/content/docs/aws/services/bedrock.md
---
title: "Bedrock"
description: Get started with Bedrock on LocalStack
tags: ["Ultimate"]
---

## Introduction

Bedrock is a fully managed service provided by Amazon Web Services (AWS) that makes foundation models from various LLM providers accessible via an API.

LocalStack allows you to use the Bedrock APIs to test and develop AI-powered applications in your local environment.
The supported APIs are available on our [API Coverage Page](), which provides information on the extent of Bedrock's integration with LocalStack.

## Getting started

This way you avoid long wait times when switching between models on demand with
You can view all available foundation models using the [`ListFoundationModels`](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListFoundationModels.html) API.
This will show you which models are available on AWS Bedrock.

:::note
The actual model that will be used for emulation will differ from the ones defined in this list.
You can define the used model with `DEFAULT_BEDROCK_MODEL`.
:::

Run the following command:

```bash
awslocal bedrock list-foundation-models
```
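The JSON returned by `list-foundation-models` can be post-processed with ordinary local tooling. The snippet below is a hedged sketch that filters model IDs out of a small, hypothetical sample response (the sample content and file path are assumptions, not LocalStack output); in practice you would feed the real command output in instead:

```bash
# A small, hypothetical sample of a ListFoundationModels response;
# in practice, capture `awslocal bedrock list-foundation-models` output instead.
cat > /tmp/models.json <<'EOF'
{"modelSummaries": [
  {"modelId": "meta.llama3-8b-instruct-v1:0", "providerName": "Meta"},
  {"modelId": "amazon.titan-text-lite-v1", "providerName": "Amazon"}
]}
EOF

# Extract just the model IDs (equivalent to --query 'modelSummaries[*].modelId')
python3 -c "import json; [print(m['modelId']) for m in json.load(open('/tmp/models.json'))['modelSummaries']]"
```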

### Invoke a model

However, the actual model will be defined by the `DEFAULT_BEDROCK_MODEL` environment variable.

```bash
    "text": "You'\''re a chatbot that can only say '\''Hello!'\''"
}]'
```
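The command tail above closes a shell-quoted system-prompt payload (the start of the invocation is elided in this excerpt). A hedged way to sanity-check such a payload before invoking is to write the unescaped JSON to a file and parse it locally; the file name here is an assumption for illustration:

```bash
# Hypothetical: write the system-prompt array used above (with the shell
# quoting removed) to a file and confirm it parses as valid JSON.
cat > /tmp/system_prompt.json <<'EOF'
[{
    "text": "You're a chatbot that can only say 'Hello!'"
}]
EOF

# Pretty-prints the payload on success, fails loudly on malformed JSON.
python3 -m json.tool /tmp/system_prompt.json
```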
### Model Invocation Batch Processing

Bedrock offers the feature to handle large batches of model invocation requests defined in S3 buckets using the [`CreateModelInvocationJob`](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_CreateModelInvocationJob.html) API.

First, you need to create a `JSONL` file named `batch_input.jsonl` that contains all your prompts:

```json
{"prompt": "Tell me a quick fact about Vienna.", "max_tokens": 50, "temperature": 0.5}
{"prompt": "Tell me a quick fact about Zurich.", "max_tokens": 50, "temperature": 0.5}
{"prompt": "Tell me a quick fact about Las Vegas.", "max_tokens": 50, "temperature": 0.5}
```
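Since every line of a `JSONL` file must be a standalone JSON document, a quick local sanity check can catch formatting mistakes before the upload. This sketch recreates the file shown above (under `/tmp`, a path chosen here for illustration) and counts the lines that parse:

```bash
# Recreate the batch file shown above and verify every line parses as JSON.
cat > /tmp/batch_input.jsonl <<'EOF'
{"prompt": "Tell me a quick fact about Vienna.", "max_tokens": 50, "temperature": 0.5}
{"prompt": "Tell me a quick fact about Zurich.", "max_tokens": 50, "temperature": 0.5}
{"prompt": "Tell me a quick fact about Las Vegas.", "max_tokens": 50, "temperature": 0.5}
EOF

# json.loads raises on any malformed line, so this only prints on a clean file.
python3 -c "import json; print(sum(1 for l in open('/tmp/batch_input.jsonl') if json.loads(l) is not None), 'valid lines')"
```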

Then, you need to define buckets for the input as well as the output, and upload the file to the input bucket:

```bash
awslocal s3 mb s3://in-bucket
awslocal s3 cp batch_input.jsonl s3://in-bucket
awslocal s3 mb s3://out-bucket
```

Afterwards, you can run the invocation job like this:
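The excerpt ends before the job command itself. Based on the `CreateModelInvocationJob` API, the job's data configs reference the two buckets created above; the sketch below builds and validates those config documents locally (the job name, role ARN, model ID, and the commented invocation are assumptions, not taken from the original text):

```bash
# Hedged sketch: the S3 data configs a create-model-invocation-job call references.
# The invocation itself (assumed, elided from this excerpt) would look roughly like:
#   awslocal bedrock create-model-invocation-job \
#     --job-name my-batch-job \
#     --model-id meta.llama3-8b-instruct-v1:0 \
#     --role-arn arn:aws:iam::000000000000:role/bedrock-batch \
#     --input-data-config file:///tmp/input-config.json \
#     --output-data-config file:///tmp/output-config.json
cat > /tmp/input-config.json <<'EOF'
{"s3InputDataConfig": {"s3Uri": "s3://in-bucket/batch_input.jsonl"}}
EOF
cat > /tmp/output-config.json <<'EOF'
{"s3OutputDataConfig": {"s3Uri": "s3://out-bucket/"}}
EOF

# Confirm both config documents are well-formed JSON before submitting the job.
python3 -m json.tool /tmp/input-config.json > /dev/null \
  && python3 -m json.tool /tmp/output-config.json > /dev/null \
  && echo "configs valid"
```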