---
title: ServiceStack.Swift client library rewritten for Swift 6
summary: ServiceStack.Swift has been rewritten to take advantage of Swift 6 features, now dependency-free.
tags: [api,service-reference,swift]
author: Demis Bellot
image: https://images.unsplash.com/photo-1534972195531-d756b9bfa9f2?crop=entropy&fit=crop&h=1000&w=2000
draft: true
---

As part of the release of [AI Server](/posts/ai-server) we've upgraded all generic service client libraries
to support multiple file uploads with API requests, so they can take advantage of AI Server APIs
that accept file uploads like [Image to Image](https://docs.servicestack.net/ai-server/image-to-image),
[Speech to Text](https://docs.servicestack.net/ai-server/speech-to-text) or its
[FFmpeg Image](https://docs.servicestack.net/ai-server/transform/image) and
[Video Transforms](https://docs.servicestack.net/ai-server/transform/video).

## ServiceStack.Swift rewritten for Swift 6

[ServiceStack.Swift](https://github.com/ServiceStack/ServiceStack.Swift) received the biggest upgrade: it was
rewritten to take advantage of Swift 6 features, with Swift's native **async/await** concurrency replacing the
previous [PromiseKit](https://github.com/mxcl/PromiseKit) dependency - making it dependency-free.

For example, you can request a [Speech to Text](https://docs.servicestack.net/ai-server/speech-to-text)
transcription by sending an audio file to the `SpeechToText` API using the new `postFilesWithRequest` method:

### Calling AI Server to transcribe an Audio Recording

```swift
let client = JsonServiceClient(baseUrl: "https://openai.servicestack.net")
client.bearerToken = apiKey

let request = SpeechToText()
request.refId = "uniqueUserIdForRequest"

let files = [UploadFile(fileName:"audio.mp3", data:mp3Data, fieldName:"audio")]

let response: GenerationResponse = try client.postFilesWithRequest(
    request:request, files:files)

Inspect.printDump(response)
```
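
Besides dumping the full response, you'll typically want the transcribed text itself. A minimal sketch of reading it,
assuming the generated `GenerationResponse` DTO exposes a `textOutputs` collection whose items carry an optional
`text` property, matching AI Server's documented response DTOs:

```swift
// Sketch only: assumes GenerationResponse is generated with
// `textOutputs:[TextOutput]` where each TextOutput has `text:String?`.
if let transcript = response.textOutputs.first?.text {
    print("Transcription: \(transcript)")
}
```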

### Async Upload Files with API Example

Alternatively, use the new `postFilesWithRequestAsync` method to call the API asynchronously with
[Swift 6 Concurrency](https://docs.swift.org/swift-book/documentation/the-swift-programming-language/concurrency/)'s
**async/await** feature:

```swift
let response: GenerationResponse = try await client.postFilesWithRequestAsync(
    request:request, files:files)

Inspect.printDump(response)
```
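
If you're calling this from a synchronous context, a standard Swift `Task` bridges into the async world - a minimal
sketch reusing the `client`, `request` and `files` from the example above:

```swift
// Bridge from synchronous code into Swift Concurrency with a Task,
// handling any thrown error instead of propagating it.
Task {
    do {
        let response: GenerationResponse = try await client.postFilesWithRequestAsync(
            request: request, files: files)
        Inspect.printDump(response)
    } catch {
        print("Transcription request failed: \(error)")
    }
}
```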

### Sending typed OpenAI Chat Requests to Ollama with Swift

Even if you're not running AI Server, you can still use its typed DTOs to call any
OpenAI Chat compatible API, like a self-hosted [Ollama](https://ollama.com) API.

To call it from Swift, include the `ServiceStack` package in your project's `Package.swift`:

```swift
dependencies: [
    .package(url: "https://github.com/ServiceStack/ServiceStack.Swift.git",
        Version(6,0,0)..<Version(7,0,0)),
],
```
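
You'll also want the library listed in your target's dependencies - a sketch assuming a hypothetical target named
`App` and the package's `ServiceStack` library product:

```swift
.target(
    name: "App", // hypothetical target name
    dependencies: [
        // product/package names assumed from the repository referenced above
        .product(name: "ServiceStack", package: "ServiceStack.Swift")
    ]),
```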

Download AI Server's Swift DTOs:

:::copy
npx get-dtos swift https://openai.servicestack.net
:::

You'll then be able to call Ollama by sending the OpenAI Chat compatible `OpenAiChatCompletion`
Request DTO with the `JsonServiceClient`:

```swift
import Foundation
import ServiceStack

let ollamaBaseUrl = "http://localhost:11434"
let client = JsonServiceClient(baseUrl:ollamaBaseUrl)

let request = OpenAiChatCompletion()
request.model = "mixtral:8x22b"
let msg = OpenAiMessage()
msg.role = "user"
msg.content = "What's the capital of France?"
request.messages = [msg]
request.max_tokens = 50

let result:OpenAiChatResponse = try await client.postAsync(
    "/v1/chat/completions", request:request)
```
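
The response follows the standard OpenAI Chat shape, so reading the model's reply looks something like the sketch
below, assuming the generated DTOs expose the usual `choices` collection with an optional nested `message`:

```swift
// Sketch only: assumes OpenAiChatResponse exposes `choices` whose items
// carry an optional assistant `message` with optional `content`.
if let reply = result.choices.first?.message?.content {
    print(reply) // e.g. "The capital of France is Paris."
}
```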