Bedrock is a fully managed service provided by Amazon Web Services (AWS) that makes foundation models from various LLM providers accessible via an API.
LocalStack allows you to use the Bedrock APIs to test and develop AI-powered applications in your local environment.
The supported APIs are available on our [API Coverage section](#api-coverage), which provides information on the extent of Bedrock's integration with LocalStack.
## Getting started
You can view all available foundation models using the [`ListFoundationModels`](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListFoundationModels.html) API.
This will show you which models are available on AWS Bedrock.
:::note
The actual model that will be used for emulation will differ from the ones defined in this list.
You can define the used model with the `DEFAULT_BEDROCK_MODEL` environment variable.
:::
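A minimal sketch of pinning the emulation model via that variable, assuming you start LocalStack through its CLI; the model name is a placeholder, not a verified default:

```bash
# Illustrative only: pin the model LocalStack uses for Bedrock emulation.
# Replace <model-name> with the model you want to emulate with.
DEFAULT_BEDROCK_MODEL=<model-name> localstack start
```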
Run the following command:
```bash
awslocal bedrock list-foundation-models
```
### Invoke a model
However, the actual model will be defined by the `DEFAULT_BEDROCK_MODEL` environment variable.

```bash
    "text": "You'\''re a chatbot that can only say '\''Hello!'\''"
}]'
```
### Model Invocation Batch Processing
Bedrock offers the feature to handle large batches of model invocation requests defined in S3 buckets using the [`CreateModelInvocationJob`](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_CreateModelInvocationJob.html) API.
First, you need to create a `JSONL` file that contains all your prompts:
```bash
cat batch_input.jsonl
{"prompt": "Tell me a quick fact about Vienna.", "max_tokens": 50, "temperature": 0.5}
{"prompt": "Tell me a quick fact about Zurich.", "max_tokens": 50, "temperature": 0.5}
{"prompt": "Tell me a quick fact about Las Vegas.", "max_tokens": 50, "temperature": 0.5}
```
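If you would rather generate the file than write it by hand, the three prompts above can be produced with a small shell loop (a sketch; the filename and prompt fields match the example above):

```bash
# Generate batch_input.jsonl with the same prompts as above,
# one JSON object per line (JSONL).
for city in Vienna Zurich "Las Vegas"; do
  printf '{"prompt": "Tell me a quick fact about %s.", "max_tokens": 50, "temperature": 0.5}\n' "$city"
done > batch_input.jsonl
```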
Then, you need to define buckets for the input as well as the output, and upload the file to the input bucket:
```bash
awslocal s3 mb s3://in-bucket
make_bucket: in-bucket
awslocal s3 cp batch_input.jsonl s3://in-bucket
upload: ./batch_input.jsonl to s3://in-bucket/batch_input.jsonl
awslocal s3 mb s3://out-bucket
make_bucket: out-bucket
```
Afterwards, you can run the invocation job like this: