# SwiftBedrockService

Work in progress; feel free to open an issue, but do not use this in your projects.

## How to add a new model family?

As an example, we will add the Llama 3 70B Instruct model from the Meta family, with model ID:

`meta.llama3-70b-instruct-v1:0`

### 1. Create a BedrockModel instance

Create a `BedrockModel` instance from the model ID and the modality (`LlamaText`, defined in [step 3](#3-add-the-modality-textmodality-or-imagemodality)).

```swift
extension BedrockModel {
    public static let llama3_70b_instruct: BedrockModel = BedrockModel(
        id: "meta.llama3-70b-instruct-v1:0",
        modality: LlamaText()
    )
}
```

### 2. Create family-specific request and response structs

Create a struct that reflects exactly how the request body of an invokeModel call to this family should look, as in the JSON example below. Add a public initializer with the parameters `prompt`, `maxTokens`, and `temperature` to conform to the `BedrockBodyCodable` protocol.

```json
{
    "prompt": "\(prompt)",
    "temperature": 1,
    "top_p": 0.9,
    "max_tokens": 200,
    "stop": ["END"]
}
```

```swift
public struct LlamaRequestBody: BedrockBodyCodable {
    let prompt: String
    let max_gen_len: Int
    let temperature: Double
    let top_p: Double

    public init(prompt: String, maxTokens: Int = 512, temperature: Double = 0.5) {
        // Wrap the raw prompt in the Llama instruct chat template.
        self.prompt =
            "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\(prompt)<|eot_id|><|start_header_id|>assistant<|end_header_id|>"
        self.max_gen_len = maxTokens
        self.temperature = temperature
        self.top_p = 0.9
    }
}
```
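
To check that the struct encodes to the documented shape, you can round-trip it through `JSONEncoder` (a quick sketch; it assumes `BedrockBodyCodable` refines Swift's `Encodable`):

```swift
import Foundation

// Encode a sample request body and print the JSON, so it can be
// compared against the expected request shape shown above.
let body = LlamaRequestBody(prompt: "Hello", maxTokens: 200, temperature: 1)
let data = try! JSONEncoder().encode(body)
print(String(data: data, encoding: .utf8)!)
```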

Do the same for the response, adding the `getTextCompletion` method to extract the completion from the response body and to conform to the `ContainsTextCompletion` protocol.

```json
{
    "generation": "\n\n<response>",
    "prompt_token_count": int,
    "generation_token_count": int,
    "stop_reason": string
}
```

```swift
struct LlamaResponseBody: ContainsTextCompletion {
    let generation: String
    let prompt_token_count: Int
    let generation_token_count: Int
    let stop_reason: String

    public func getTextCompletion() throws -> TextCompletion {
        TextCompletion(generation)
    }
}
```
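
You can sanity-check the decoding with a hand-written payload (a sketch; it assumes `ContainsTextCompletion` refines Swift's `Decodable`):

```swift
import Foundation

// Decode a hand-written response payload and pull out the completion.
let json = """
    {
        "generation": "Hello there!",
        "prompt_token_count": 12,
        "generation_token_count": 6,
        "stop_reason": "stop"
    }
    """
let response = try! JSONDecoder().decode(LlamaResponseBody.self, from: Data(json.utf8))
let completion = try! response.getTextCompletion()
print(completion)
```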

### 3. Add the Modality (TextModality or ImageModality)

For text generation, create a struct conforming to `TextModality`. Use the request body and response body you created in [the previous step](#2-create-family-specific-request-and-response-structs).

```swift
import Foundation

struct LlamaText: TextModality {
    func getName() -> String { "Llama Text Generation" }

    func getTextRequestBody(prompt: String, maxTokens: Int, temperature: Double) throws -> BedrockBodyCodable {
        LlamaRequestBody(prompt: prompt, maxTokens: maxTokens, temperature: temperature)
    }

    func getTextResponseBody(from data: Data) throws -> ContainsTextCompletion {
        let decoder = JSONDecoder()
        return try decoder.decode(LlamaResponseBody.self, from: data)
    }
}
```
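
With the modality in place, the family-specific types are only reachable through protocol types, which is what keeps `BedrockModel` family-agnostic. A usage sketch:

```swift
// The modality hands back the family-specific request body behind
// the protocol type, so callers never name LlamaRequestBody directly.
let modality = LlamaText()
let requestBody = try! modality.getTextRequestBody(prompt: "Hello", maxTokens: 512, temperature: 0.5)
```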

### 4. Optionally, create a BedrockModel initializer for your newly implemented models

```swift
extension BedrockModel {
    init?(_ id: String) {
        switch id {
        case "meta.llama3-70b-instruct-v1:0": self = .llama3_70b_instruct
        // ...
        default:
            return nil
        }
    }
}
```
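
This lets callers resolve a model from its raw identifier (a sketch; it assumes the `id` passed in step 1 is exposed as a property):

```swift
// Resolve a BedrockModel from its raw model identifier.
if let model = BedrockModel("meta.llama3-70b-instruct-v1:0") {
    print(model.id)  // "meta.llama3-70b-instruct-v1:0"
}
```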

## How to add a new model?

If you want to add a model whose request and response structure is already implemented, you can skip a few steps. Simply create a typealias for the modality that matches the structure and use it to create a BedrockModel instance.

```swift
typealias ClaudeNewModel = AnthropicText

extension BedrockModel {
    public static let instant: BedrockModel = BedrockModel(
        id: "anthropic.claude-new-model",
        modality: ClaudeNewModel()
    )
}
```

Note that the model will not automatically be included in the BedrockModel initializer that creates an instance from a raw string value. Consider creating a custom initializer that includes your models, as shown below.
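
For example, the initializer from step 4 could gain a case for the new model (a sketch; `.instant` is the property defined above):

```swift
extension BedrockModel {
    init?(_ id: String) {
        switch id {
        case "meta.llama3-70b-instruct-v1:0": self = .llama3_70b_instruct
        case "anthropic.claude-new-model": self = .instant
        default:
            return nil
        }
    }
}
```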

# Swift FM Playground

Welcome to the Swift Foundation Model (FM) Playground, an example app to explore how to use **Amazon Bedrock** with the AWS SDK for Swift.

> 🚨 **Important:** This application is for educational purposes and not intended for production use.

## Overview

> 🚧 Under construction 🚧

## Prerequisites

> 🚧 Under construction 🚧

## Running the Application

> 🚧 Under construction 🚧

## Accessing the Application

To access the application, open `http://localhost:3000` in your web browser.

## Stopping the Application

To halt the application, you will need to stop both the backend and frontend processes.

### Stopping the Frontend

In the terminal where the frontend is running, press `Ctrl + C` to terminate the process.

### Stopping the Backend

Similarly, in the backend terminal, use the `Ctrl + C` shortcut to stop the server.