
Commit 4daf0b3

Merge commit (2 parents: 3b1d83b + 37b8b79)

3 files changed (+33, −32 lines)


src/blog/tanstack-ai-alpha-2.md

Lines changed: 14 additions & 13 deletions
@@ -23,13 +23,13 @@ So we split the monolith.

Instead of:

```ts
-import { openai } from "@tanstack/ai-openai"
+import { openai } from '@tanstack/ai-openai'
```

You now have:

```ts
-import { openaiText, openaiImage, openaiVideo } from "@tanstack/ai-openai"
+import { openaiText, openaiImage, openaiVideo } from '@tanstack/ai-openai'
```

### Why This Matters
@@ -66,7 +66,7 @@ Before:

```ts
chat({
  adapter: openai(),
-  model: "gpt-4",
+  model: 'gpt-4',
  // now you get typesafety...
})
```
@@ -75,7 +75,7 @@ After:

```ts
chat({
-  adapter: openaiText("gpt-4"),
+  adapter: openaiText('gpt-4'),
  // immediately get typesafety
})
```
@@ -90,14 +90,14 @@ Quick terminology:

- **Adapter**: TanStack AI's interface to that provider
- **Model**: The specific model (GPT-4, Claude, etc.)

-The old `providerOptions` were tied to the *model*, not the provider. Changing from `gpt-4` to `gpt-3.5-turbo` changes those options. So we renamed them:
+The old `providerOptions` were tied to the _model_, not the provider. Changing from `gpt-4` to `gpt-3.5-turbo` changes those options. So we renamed them:

```ts
chat({
-  adapter: openaiText("gpt-4"),
+  adapter: openaiText('gpt-4'),
  modelOptions: {
-    text: {}
-  }
+    text: {},
+  },
})
```
@@ -108,19 +108,19 @@ Settings like `temperature` work across providers. Our other modalities already

```ts
generateImage({
  adapter,
-  numberOfImages: 3
+  numberOfImages: 3,
})
```

So we brought chat in line:

```ts
chat({
-  adapter: openaiText("gpt-4"),
+  adapter: openaiText('gpt-4'),
  modelOptions: {
-    text: {}
+    text: {},
  },
-  temperature: 0.6
+  temperature: 0.6,
})
```
@@ -149,6 +149,7 @@ chat({

**Standard Schema support.** We're dropping the Zod constraint for tools and structured outputs. Bring your own schema validation library.

**On the roadmap:**
+
- Middleware
- Tool hardening
- Headless UI library for AI components
@@ -166,4 +167,4 @@ We're confident in this direction. We think you'll like it too.

---

-*Curious how we got here? Read [The `ai()` Function That Almost Was](/blog/tanstack-ai-the-ai-function-postmortem)—a post-mortem on the API we loved, built, and had to kill.*
+_Curious how we got here? Read [The `ai()` Function That Almost Was](/blog/tanstack-ai-the-ai-function-postmortem)—a post-mortem on the API we loved, built, and had to kill._

src/blog/tanstack-ai-the-ai-function-postmortem.md

Lines changed: 15 additions & 15 deletions
@@ -1,5 +1,5 @@
---
-title: "The ai() Function That Almost Was"
+title: 'The ai() Function That Almost Was'
published: 2025-12-26
authors:
  - Alem Tuzlak
@@ -12,24 +12,24 @@ We spent eight days building an API we had to kill. Here's what happened.

One function to rule them all. One function to control all adapters. One function to make it all typesafe.

```ts
-import { ai } from "@tanstack/ai"
-import { openaiText, openaiImage, openaiSummarize } from "@tanstack/ai-openai"
+import { ai } from '@tanstack/ai'
+import { openaiText, openaiImage, openaiSummarize } from '@tanstack/ai-openai'

// text generation
ai({
-  adapter: openaiText("gpt-4"),
+  adapter: openaiText('gpt-4'),
  // ... text options
})

// image generation
ai({
-  adapter: openaiImage("dall-e-3"),
+  adapter: openaiImage('dall-e-3'),
  // ... image options
})

// summarization
ai({
-  adapter: openaiSummarize("gpt-4"),
+  adapter: openaiSummarize('gpt-4'),
  // ... summary options
})
```
@@ -72,7 +72,7 @@ To match the theme, we called it `aiOptions`. It would constrain everything to t

```ts
const opts = aiOptions({
-  adapter: openaiText("gpt-4")
+  adapter: openaiText('gpt-4'),
})

ai(opts)
@@ -118,7 +118,7 @@ When we finally asked the LLMs directly what they thought of the API, they were

We used agents to do the implementation work. That hid the struggle from us.

-If we'd been writing the code by hand, we would have *felt* the challenge of wrestling with the types. That probably would have stopped the idea early.
+If we'd been writing the code by hand, we would have _felt_ the challenge of wrestling with the types. That probably would have stopped the idea early.

LLMs won't bark when you tell them to do crazy stuff. They won't criticize your designs unless you ask them to. They just try. And try. And eventually produce something that technically works but shouldn't exist.
@@ -141,8 +141,8 @@ Before landing on separate functions, we tried one more thing: an adapter with s

```ts
const adapter = openai()
-adapter.image("model")
-adapter.text("model")
+adapter.image('model')
+adapter.text('model')
```

Looks nicer. Feels more unified. Same problem—still bundles everything.
@@ -154,12 +154,12 @@ We could have done custom bundling in TanStack Start to strip unused parts, but

Separate functions. `chat()`, `generateImage()`, `generateSpeech()`, `generateTranscription()`.

```ts
-import { chat } from "@tanstack/ai"
-import { openaiText } from "@tanstack/ai-openai"
+import { chat } from '@tanstack/ai'
+import { openaiText } from '@tanstack/ai-openai'

chat({
-  adapter: openaiText("gpt-4"),
-  temperature: 0.6
+  adapter: openaiText('gpt-4'),
+  temperature: 0.6,
})
```
@@ -185,4 +185,4 @@ We loved the `ai()` API. We built it. We had to kill it. That's how it goes some

---

-*Ready to try what we shipped instead? Read [TanStack AI Alpha 2: Every Modality, Better APIs, Smaller Bundles](/blog/tanstack-ai-alpha-2).*
+_Ready to try what we shipped instead? Read [TanStack AI Alpha 2: Every Modality, Better APIs, Smaller Bundles](/blog/tanstack-ai-alpha-2)._

src/blog/tanstack-ai-why-we-split-the-adapters.md

Lines changed: 4 additions & 4 deletions
@@ -1,5 +1,5 @@
---
-title: "Why We Split the Adapters"
+title: 'Why We Split the Adapters'
published: 2026-01-02
authors:
  - Alem Tuzlak
@@ -69,8 +69,8 @@ One idea was to create an adapter with sub-properties:

```ts
const adapter = openai()
-adapter.image("model")
-adapter.text("model")
+adapter.image('model')
+adapter.text('model')
```

Looks nicer. Feels more split. Same problem—it still bundles everything.
@@ -87,4 +87,4 @@ Out of all the possible outcomes, this one is the best. We're confident in the d

---

-*See it in action: [TanStack AI Alpha 2: Every Modality, Better APIs, Smaller Bundles](/blog/tanstack-ai-alpha-2)*
+_See it in action: [TanStack AI Alpha 2: Every Modality, Better APIs, Smaller Bundles](/blog/tanstack-ai-alpha-2)_
