Commit a4993f7: Polish news

1 parent 3ac0405

1 file changed: NEWS.md (44 additions, 47 deletions)

@@ -2,15 +2,15 @@
 
 ## Breaking changes
 
-* We have made a number of refinements to the way the ellmer converts JSON
+* We have made a number of refinements to the way ellmer converts JSON
   to R data structures. These are breaking changes, although we don't expect
-  them to affect much code in the wild. Mostly important tools are now invoked
+  them to affect much code in the wild. Most importantly, tools are now invoked
   with their inputs coerced to standard R data structures (#461); opt-out
   by setting `convert = FALSE` in `tool()`.
 
-  We now now converts `NULL` to `NA` for `type_boolean()`, `type_integer()`,
-  `type_number()`, and `type_string()` (#445), and do a better job with
-  for arrays with `required = FALSE` (#384).
+  Additionally ellmer now converts `NULL` to `NA` for `type_boolean()`,
+  `type_integer()`, `type_number()`, and `type_string()` (#445), and does a
+  better job for arrays when `required = FALSE` (#384).
 
 * `chat_` functions no longer take a turns object, instead use
   `Chat$set_turns()` (#427). `Chat$tokens()` has been renamed to
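
To make the conversion change concrete, here is a minimal sketch of a tool defined under the new default, written against the `tool(fun, description, ...)` convention used in this release; the tool, model name, and prompt are illustrative and not taken from the NEWS file.

```r
library(ellmer)

# Under the new default (convert = TRUE), the tool body receives plain R values:
# `days` arrives as an integer, and per the entry above an omitted optional
# `urgent` arrives as NA rather than NULL. Set convert = FALSE to opt out and
# receive the raw JSON-derived values instead.
schedule_reminder <- tool(
  function(days, urgent) {
    due <- Sys.Date() + days
    if (isTRUE(urgent)) paste("URGENT: due on", due) else paste("Due on", due)
  },
  "Schedule a reminder a given number of days from today.",
  days = type_integer("How many days from now the reminder is due."),
  urgent = type_boolean("Whether the reminder is urgent.", required = FALSE)
)

chat <- chat_openai(model = "gpt-4.1")
chat$register_tool(schedule_reminder)
chat$chat("Remind me to renew my passport in 30 days.")
```
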
@@ -29,6 +29,8 @@
   the shape of the user interface is correct, particularly as it pertains to
   handling errors.
 
+* `google_upload()` lets you upload files to Google Gemini or Vertex AI (#310).
+
 * `models_google_gemini()`, `models_anthropic()`, `models_openai()`,
   `models_aws_bedrock()`, `models_ollama()` and `models_vllm()`, list available
   models for Google Gemini, Anthropic, OpenAI, AWS Bedrock, Ollama, and VLLM
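
These helpers can be called directly. A quick sketch; the exact columns of the returned listing are not spelled out in this entry beyond the known token prices mentioned in its continuation below.

```r
library(ellmer)

# List the models ellmer knows about for a given provider. Where known, the
# listing includes per-million-token prices.
models_openai()
models_anthropic()
models_google_gemini()
```
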
@@ -37,8 +39,6 @@
   Where possible (currently for Gemini, Anthropic, and OpenAI) we include
   known token prices (per million tokens).
 
-* `google_upload()` lets you upload files to Google Gemini or Vertex AI (#310).
-
 * `interpolate()` and friends are now vectorised so you can generate multiple
   prompts for (e.g.) a data frame of inputs. They also now return a specially
   classed object with a custom print method (#445). New `interpolate_package()`
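
A short sketch of the vectorised behaviour; the template and data frame are invented for illustration, and the `{{ }}` syntax is ellmer's existing interpolation convention.

```r
library(ellmer)

books <- data.frame(title = c("Dune", "Neuromancer", "The Dispossessed"))

# One template, several prompts: values supplied to interpolate() are filled in
# across the template, returning a classed vector with a custom print method.
prompts <- interpolate(
  "Summarise the novel {{title}} in one sentence.",
  title = books$title
)
prompts
```
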
@@ -47,13 +47,13 @@
 
 * `chat_azure()`, `chat_claude()`, `chat_openai()`, and `chat_gemini()` now
   take a `params` argument that coupled with the `params()` helpers, makes it
-  easy to specify common model paramaters (like `seed` and `temperature`)
+  easy to specify common model parameters (like `seed` and `temperature`)
   across providers. Support for other providers will grow as you request it
   (#280).
 
 * ellmer now tracks the cost of input and output tokens. The cost is displayed
   when you print a `Chat` object, in `tokens_usage()`, and with
-  `Chat$get_cost()`. You can also request costs in `$parallel_extract_data()`.
+  `Chat$get_cost()`. You can also request costs in `parallel_chat_structured()`.
   We do our best to accurately compute the cost, but you should treat it as an
   estimate rather than the exact price. Unfortunately LLM providers currently
   make it very difficult to figure out exactly how much your queries cost (#203).
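
A sketch combining these two entries; the model, settings, and prompt are placeholders.

```r
library(ellmer)

# params() provides a provider-agnostic way to set common options.
chat <- chat_openai(
  model = "gpt-4.1",
  params = params(temperature = 0.2, seed = 1014)
)
chat$chat("In one sentence, why does a fixed seed make sampling reproducible?")

# Printing the chat shows the estimated cost; it is also available directly.
chat
chat$get_cost()
```
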
@@ -66,7 +66,7 @@
   (#359, @s-spavound).
 * `chat_mistral()` for models hosted at <https://mistral.ai> (#319).
 * `chat_portkey()` and `models_portkey()` for models hosted at
-  <https://portkey.ai> (#363, @maciekbanas).
+  <https://portkey.ai> (#363, @maciekbanas).
 
 * We also renamed (with deprecation) a few functions to make the naming
   scheme more consistent (#382, @gadenbuie):

@@ -80,10 +80,28 @@
 * `chat_claude()` uses Sonnet 3.7 (which it also now displays) (#336).
 * `chat_openai()` uses GPT-4.1 (#512)
 
-## Streaming/async
+## Developer tooling
 
-* `echo = "output"` replaces the now-deprecated `echo = "text"` option in
-  `Chat$chat()`. When using `echo = "output"`, additional output, such as tool
+* New `Chat$get_provider()` lets you access the underlying provider object
+  (#202).
+
+* `Chat$chat_async()` and `Chat$stream_async()` gain a `tool_mode` argument to
+  decide between `"sequential"` and `"concurrent"` tool calling. This is an
+  advanced feature that primarily affects asynchronous tools (#488, @gadenbuie).
+
+* `Chat$stream()` and `Chat$stream_async()` gain support for streaming the
+  additional content types generated during a tool call with a new `stream`
+  argument. When `stream = "content"` is set, the streaming response yields
+  `Content` objects, including the `ContentToolRequest` and `ContentToolResult`
+  objects used to request and return tool calls (#400, @gadenbuie).
+
+* New `Chat$on_tool_request()` and `$on_tool_result()` methods allow you to
+  register callbacks to run on a tool request or tool result. These callbacks
+  can be used to implement custom logging or other actions when tools are
+  called, without modifying the tool function (#493, @gadenbuie).
+
+* `Chat$chat(echo = "output")` replaces the now-deprecated `echo = "text"`
+  option. When using `echo = "output"`, additional output, such as tool
   requests and results, are shown as they occur. When `echo = "none"`, tool
   call failures are emitted as warnings (#366, @gadenbuie).
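
A sketch of a few of these hooks used together; the tool and prompts are invented for illustration, and `coro::loop()` is one way to drain the stream (an assumption, since the entries do not show usage).

```r
library(ellmer)

chat <- chat_openai(model = "gpt-4.1")
chat$register_tool(tool(
  function() format(Sys.time(), tz = "UTC"),
  "Get the current time in UTC."
))

# Inspect the underlying provider object.
chat$get_provider()

# echo = "output" prints tool requests and results as they happen.
chat$chat("What time is it in UTC right now?", echo = "output")

# stream = "content" yields Content objects (including ContentToolRequest and
# ContentToolResult) instead of plain text chunks.
stream <- chat$stream("And what time is it in Kathmandu?", stream = "content")
coro::loop(for (chunk in stream) print(chunk))
```
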

@@ -96,52 +114,33 @@
   `ContentToolResult` no longer has an `id` property, instead the tool call
   ID can be retrieved from `request@id`.
 
+  They also include the error condition in the `error` property when a tool call
+  fails (#421, @gadenbuie).
+
 * `ContentToolRequest` gains a `tool` property that includes the `tool()`
   definition when a request is matched to a tool by ellmer (#423, @gadenbuie).
 
-* `ContentToolResult` objects now include the error condition in the `error`
-  property when a tool call fails (#421, @gadenbuie).
-
-* `$stream()` and `$stream_async()` gain support for streaming the additional
-  content types generated during a tool call with a new `stream` argument. When
-  `stream = "content"` is set, the streaming response yields `Content` objects,
-  including the `ContentToolRequest` and `ContentToolResult` objects used to
-  request and return tool calls (#400, @gadenbuie).
-
-* New `Chat$on_tool_request()` and `$on_tool_result()` methods allow you to
-  register callbacks to run on a tool request or tool result. These callbacks
-  can be used to implement custom logging or other actions when tools are
-  called, without modifying the tool function (#493, @gadenbuie).
+* `tool()` gains an `.annotations` argument that can be created with the
+  `tool_annotations()` helper. Tool annotations are described in the
+  [Model Context Protocol](https://modelcontextprotocol.io/introduction) and can
+  be used to describe the tool to clients. (#402, @gadenbuie)
 
 * New `tool_reject()` function can be used to reject a tool request with an
   explanation for the rejection reason. `tool_reject()` can be called within a
   tool function or in a `Chat$on_tool_request()` callback. In the latter case,
   rejecting a tool call will ensure that the tool function is not evaluated
-  (#490 #493, @gadenbuie).
-
-* `$chat_async()` and `$stream_async()` gain a `tool_mode` argument to decide
-  between `"sequential"` and `"concurrent"` tool calling. This is an advanced
-  feature that primarily affects asynchronous tools (#488, @gadenbuie).
-
-* Added a Shiny app example in `vignette("streaming-async")` showcasing
-  asynchronous streaming with `{ellmer}` and `{shinychat}` (#131, @gadenbuie,
-  @adisarid).
-
-* `tool()` gains an `.annotations` argument that can be created with the
-  `tool_annotations()` helper. Tool annotations are described in the
-  [Model Context Protocol](https://modelcontextprotocol.io/introduction) and can
-  be used to describe the tool to clients. (#402, @gadenbuie)
+  (#490, #493, @gadenbuie).
 
 ## Minor improvements and bug fixes
 
 * All requests now set a custom User-Agent that identifies that the requests
-  comes from ellmer (#341). The default timeout has been increased to
+  come from ellmer (#341). The default timeout has been increased to
   5 minutes (#451, #321).
 
 * `chat_claude()` now supports the thinking content type (#396), and
-  `content_image_url()` (#347). It gains gains `beta_header` argument to
-  opt-in to beta features (#339). It (along with `chat_bedrock()`) no longer
-  chokes after receiving an output that consists only of whitespace (#376).
+  `content_image_url()` (#347). It gains a `beta_header` argument to opt-in
+  to beta features (#339). It (along with `chat_bedrock()`) no longer chokes
+  after receiving an output that consists only of whitespace (#376).
   Finally, `chat_claude(max_tokens =)` is now deprecated in favour of
   `chat_claude(params = )` (#280).
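
A sketch tying the callback, rejection, and annotation entries together; the file-reading tool and its policy are invented, the callbacks are assumed to receive the `ContentToolRequest`/`ContentToolResult` objects described above, and the `tool_annotations()` field names follow the MCP hints rather than a documented signature.

```r
library(ellmer)

read_file <- tool(
  function(path) paste(readLines(path), collapse = "\n"),
  "Read a text file and return its contents.",
  path = type_string("Path to the file, relative to the project root."),
  .annotations = tool_annotations(title = "Read file", read_only_hint = TRUE)
)

chat <- chat_openai(model = "gpt-4.1")
chat$register_tool(read_file)

# Log every tool call and reject attempts to read outside the project.
chat$on_tool_request(function(request) {
  message("Tool requested: ", request@name)
  if (startsWith(request@arguments$path, "/")) {
    tool_reject("Only paths inside the project may be read.")
  }
})

# The result's `error` property (see above) carries the condition on failure.
chat$on_tool_result(function(result) {
  message("Tool call errored: ", !is.null(result@error))
})

chat$chat("Read DESCRIPTION and tell me this package's Title field.")
```
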

@@ -161,13 +160,11 @@
 * `chat_openai(seed =)` is now deprecated in favour of
   `chat_openai(params = )` (#280).
 
-* `Chat$get_provider()` lets you access the underlying provider object (#202).
-
 * `create_tool_def()` can now use any Chat instance (#118, @pedrobtz).
 
 * `live_browser()` now requires `{shinychat}` v0.2.0 or later which provides
   access to the app that powers `live_browser()` via `shinychat::chat_app()`,
-  as well as Shiny module for easily including a chat interface for an ellmer
+  as well as a Shiny module for easily including a chat interface for an ellmer
   `Chat` object in your Shiny apps (#397, @gadenbuie). It now initializes the
   UI with the messages from the chat turns, rather than replaying the turns
   server-side (#381).
