Commit cc2c962 ("More polishing")
1 parent a4993f7

1 file changed: NEWS.md (+34, -25 lines)
@@ -10,26 +10,34 @@
 
   Additionally ellmer now converts `NULL` to `NA` for `type_boolean()`,
   `type_integer()`, `type_number()`, and `type_string()` (#445), and does a
-  better job for arrays when `required = FALSE` (#384).
+  better job with arrays when `required = FALSE` (#384).
 
-* `chat_` functions no longer take a turns object, instead use
-  `Chat$set_turns()` (#427). `Chat$tokens()` has been renamed to
-  `Chat$get_tokens()` and returns a data frame of tokens, correctly aligned to
-  the individual turn. The print method now uses this to show how many
-  input/output tokens used by each turn (#354).
+* `chat_` functions no longer have a `turn` argument. If you need to set the
+  turns, you can now use `Chat$set_turns()` (#427). Additionally,
+  `Chat$tokens()` has been renamed to `Chat$get_tokens()` and returns a data
+  frame of tokens, correctly aligned to the individual turn. The print method
+  now uses this to show how many input/output tokens were used by each turn
+  (#354).
 
 ## New features
 
-* `batch_chat()` and `batch_chat_structured()` allow you to submit multiple
-  chats to OpenAI and Anthropic's batched interfaces. These only guarantee a
-  response within 24 hours, but are 50% of the price of regular requests (#143).
+* Two new interfaces help you do multiple chats with a single function call:
 
-* `parallel_chat()` and `parallel_chat_structured()` allow you to submit multiple
-  chats in parallel (#143). This is experimental because I'm not 100% sure that
+  * `batch_chat()` and `batch_chat_structured()` allow you to submit multiple
+    chats to OpenAI and Anthropic's batched interfaces. These only guarantee a
+    response within 24 hours, but are 50% of the price of regular requests
+    (#143).
+
+  * `parallel_chat()` and `parallel_chat_structured()` work with any provider
+    and allow you to submit multiple chats in parallel (#143). This doesn't give
+    you any cost savings, but it can be much, much faster.
+
+  This new family of functions is experimental because I'm not 100% sure that
   the shape of the user interface is correct, particularly as it pertains to
   handling errors.
 
 * `google_upload()` lets you upload files to Google Gemini or Vertex AI (#310).
+  This allows you to work with videos, PDFs, and other large files with Gemini.
 
 * `models_google_gemini()`, `models_anthropic()`, `models_openai()`,
   `models_aws_bedrock()`, `models_ollama()` and `models_vllm()`, list available
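As a reading aid for the turn/token changes in the hunk above, here is a minimal R sketch. `Chat$set_turns()` and `Chat$get_tokens()` are the names given in the changelog; the use of `Chat$get_turns()` to read turns back, and the default `chat_openai()` arguments, are assumptions for illustration.

```r
library(ellmer)

chat <- chat_openai()
chat$chat("What is the capital of France?")

# Turns are no longer passed to the chat_*() constructors; set them on the
# object instead, e.g. to restore a previously saved conversation.
chat2 <- chat_openai()
chat2$set_turns(chat$get_turns())

# get_tokens() replaces the old tokens() method and returns a data frame with
# one row per turn, so input/output token counts line up with the history.
chat$get_tokens()
```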
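The batch and parallel helpers above could be used roughly as sketched below. The changelog names the functions but not their arguments, so `prompts` and `path` here are assumptions; the family is explicitly experimental, and the real interface may differ.

```r
library(ellmer)

chat <- chat_openai()
prompts <- c(
  "Summarise the plot of Hamlet in one sentence.",
  "Summarise the plot of Macbeth in one sentence."
)

# Batched requests: roughly half the price, but only guaranteed within 24 hours,
# so results are tied to a file that can be re-read in a later session.
# results <- batch_chat(chat, prompts, path = "shakespeare-batch.json")

# Parallel requests: no cost savings, works with any provider, usually much faster.
# results <- parallel_chat(chat, prompts)
```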
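`google_upload()` is described above as the way to get large files (videos, PDFs) in front of Gemini. A hedged sketch follows; passing the uploaded object alongside a prompt mirrors ellmer's existing content-object pattern, but the exact return value of `google_upload()` is an assumption here.

```r
library(ellmer)

chat <- chat_google_gemini()

# Upload a large file once, then refer to it in a prompt.
# report <- google_upload("quarterly-report.pdf")
# chat$chat("Summarise the key findings of this report.", report)
```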
@@ -45,8 +53,8 @@
   makes it easier to interpolate from prompts stored in the `inst/prompts`
   directory inside a package (#164).
 
-* `chat_azure()`, `chat_claude()`, `chat_openai()`, and `chat_gemini()` now
-  take a `params` argument that coupled with the `params()` helpers, makes it
+* `chat_anthropic()`, `chat_azure()`, `chat_openai()`, and `chat_gemini()` now
+  take a `params` argument, that coupled with the `params()` helper, makes it
   easy to specify common model parameters (like `seed` and `temperature`)
   across providers. Support for other providers will grow as you request it
   (#280).
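A short illustration of the `params` argument from this hunk. `seed` and `temperature` are the parameters the changelog itself mentions; treating `params()` as accepting them directly is an assumption about its signature.

```r
library(ellmer)

# The same style of params() call works across providers that support it.
chat_oa <- chat_openai(params = params(temperature = 0.2, seed = 123))
chat_an <- chat_anthropic(params = params(temperature = 0.2))
```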
@@ -73,11 +81,11 @@
 
   * `chat_azure_openai()` replaces `chat_azure()`.
   * `chat_aws_bedrock()` replaces `chat_bedrock()`.
-  * `chat_anthropic()` replaces `chat_claude()`.
+  * `chat_anthropic()` replaces `chat_anthropic()`.
   * `chat_google_gemini()` replaces `chat_gemini()`.
 
 * We have updated the default model for a couple of providers:
-  * `chat_claude()` uses Sonnet 3.7 (which it also now displays) (#336).
+  * `chat_anthropic()` uses Sonnet 3.7 (which it also now displays) (#336).
   * `chat_openai()` uses GPT-4.1 (#512)
 
 ## Developer tooling
@@ -137,40 +145,41 @@
   come from ellmer (#341). The default timeout has been increased to
   5 minutes (#451, #321).
 
-* `chat_claude()` now supports the thinking content type (#396), and
+* `chat_anthropic()` now supports the thinking content type (#396), and
   `content_image_url()` (#347). It gains a `beta_header` argument to opt-in
   to beta features (#339). It (along with `chat_bedrock()`) no longer chokes
   after receiving an output that consists only of whitespace (#376).
-  Finally, `chat_claude(max_tokens =)` is now deprecated in favour of
-  `chat_claude(params = )` (#280).
+  Finally, `chat_anthropic(max_tokens =)` is now deprecated in favour of
+  `chat_anthropic(params = )` (#280).
 
-* `chat_gemini()` can now handle responses that include citation metadata
-  (#358). It uses `GEMINI_API_KEY` if set (@t-kalinowski, #513), can
+* `chat_google_gemini()` and `chat_google_vertex()` gain more ways to
+  authenticate. They can use `GEMINI_API_KEY` if set (@t-kalinowski, #513),
   authenticate with Google default application credentials (including service
   accounts, etc) (#317, @atheriel) and use viewer-based credentials when
   running on Posit Connect (#320, @atheriel). Authentication with default
-  application credentials requires the `gargle` package.
+  application credentials requires the {gargle} package. They can now also
+  handle responses that include citation metadata (#358).
 
 * `chat_ollama()` now works with `tool()` definitions with optional arguments
   or empty properties (#342, #348, @gadenbuie), and now accepts `api_key` and
   consults the `OLLAMA_API_KEY` environment variable. This is not needed for
   local usage, but enables bearer-token authentication when Ollama is running
   behind a reverse proxy (#501, @gadenbuie).
 
-* `chat_openai(seed =)` is now deprecated in favour of
-  `chat_openai(params = )` (#280).
+* `chat_openai(seed =)` is now deprecated in favour of `chat_openai(params = )`
+  (#280).
 
 * `create_tool_def()` can now use any Chat instance (#118, @pedrobtz).
 
-* `live_browser()` now requires `{shinychat}` v0.2.0 or later which provides
+* `live_browser()` now requires {shinychat} v0.2.0 or later which provides
   access to the app that powers `live_browser()` via `shinychat::chat_app()`,
   as well as a Shiny module for easily including a chat interface for an ellmer
   `Chat` object in your Shiny apps (#397, @gadenbuie). It now initializes the
   UI with the messages from the chat turns, rather than replaying the turns
   server-side (#381).
 
 * `Provider` gains `name` and `model` fields (#406). These are now reported when
-  you print a chat object and used in `token_usage()`.
+  you print a chat object and are used in `token_usage()`.
 
 # ellmer 0.1.1
 
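The deprecations in this hunk all point in the same direction: provider-specific tuning arguments move into `params()`. A before/after sketch, with placeholder values:

```r
library(ellmer)

# Deprecated spellings:
# chat_anthropic(max_tokens = 1000)
# chat_openai(seed = 123)

# Preferred spellings:
chat_an <- chat_anthropic(params = params(max_tokens = 1000))
chat_oa <- chat_openai(params = params(seed = 123))
```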
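A sketch of the new `chat_ollama()` bearer-token support mentioned above, for the reverse-proxy case. The `model` and `base_url` arguments are assumptions about how a remote Ollama endpoint would be addressed; for local use no key is needed.

```r
library(ellmer)

# Behind a reverse proxy, supply a token via api_key or OLLAMA_API_KEY.
Sys.setenv(OLLAMA_API_KEY = "my-secret-token")
chat <- chat_ollama(model = "llama3.2", base_url = "https://ollama.example.com")
```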
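Since `Provider` now records `name` and `model`, both the chat print method and `token_usage()` can report them. A minimal sketch, assuming `token_usage()` is called with no arguments as in earlier releases:

```r
library(ellmer)

chat <- chat_openai()
chat$chat("Hello!")

# The print method now shows provider name, model, and per-turn token counts;
# token_usage() aggregates usage across all chats in the session.
print(chat)
token_usage()
```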