 # -*- mode: org; -*-
 
-* 0.9.8 2025-03-11
+* 0.9.8 2025-03-13
 
 Version 0.9.8 adds support for new Gemini, Anthropic, OpenAI,
 Perplexity, and DeepSeek models, introduces LLM tool use/function
@@ -13,6 +13,15 @@ feature and control of LLM "reasoning" content.
 - ~gemini-pro~ has been removed from the list of Gemini models, as
   this model is no longer supported by the Gemini API.
 
+- Sending an active region in Org mode will now apply Org
+  mode-specific rules to the text, such as branching context.
+
+- The following obsolete variables and functions have been removed:
+  - ~gptel-send-menu~: Use ~gptel-menu~ instead.
+  - ~gptel-host~: Use ~gptel-make-openai~ instead.
+  - ~gptel-playback~: Use ~gptel-stream~ instead.
+  - ~gptel--debug~: Use ~gptel-log-level~ instead.
+
 ** New models and backends
 
 - Add support for several new Gemini models including
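The migration away from the removed ~gptel-host~ variable (noted in the hunk above) can be sketched with ~gptel-make-openai~. This is a minimal sketch: the backend name, host, environment variable, and model name are hypothetical placeholders, not defaults.

#+begin_src emacs-lisp
;; Sketch: registering an OpenAI-compatible backend, replacing the
;; removed ~gptel-host~ variable.  "MyProvider", the :host value, the
;; env var, and the model name are illustrative placeholders.
(gptel-make-openai "MyProvider"
  :host "api.example.com"            ; was: (setq gptel-host "api.example.com")
  :key (getenv "MYPROVIDER_API_KEY") ; or a function/string
  :stream t                          ; request streaming responses
  :models '(my-model-large))         ; models offered by this backend
#+end_src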
@@ -35,7 +44,7 @@ feature and control of LLM "reasoning" content.
   support for the DeepSeek API, including support for handling
   "reasoning" output.
 
-** Notable new features and UI changes
+** New features and UI changes
 
 - ~gptel-rewrite~ now supports iterating on responses.
 
@@ -121,9 +130,10 @@ feature and control of LLM "reasoning" content.
 - (Org mode) Org property drawers are now stripped from the prompt
   text before sending queries. You can control this behavior or
   specify additional Org elements to ignore via
-  ~gptel-org-ignore-elements~.
+  ~gptel-org-ignore-elements~. (For more complex pre-processing you
+  can use ~gptel-prompt-filter-hook~.)
 
-** Bug fixes
+** Notable bug fixes
 
 - Fix response mix-up when running concurrent requests in Org mode
   buffers.
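The prompt pre-processing described in the hunk above can be sketched as follows. The element symbols passed to ~gptel-org-ignore-elements~ and the assumption that ~gptel-prompt-filter-hook~ functions run in a temporary buffer containing the outgoing prompt are inferred from the entry above, not verified defaults.

#+begin_src emacs-lisp
;; Sketch: ignore additional Org elements when building the prompt.
;; The element symbols here are assumed examples.
(setq gptel-org-ignore-elements '(property-drawer keyword))

;; Sketch: custom pre-processing, assuming hook functions run in a
;; temporary buffer containing the prompt text before it is sent.
(defun my/strip-org-comment-lines ()
  "Remove Org comment lines from the outgoing prompt."
  (goto-char (point-min))
  (flush-lines "^[ \t]*# "))

(add-hook 'gptel-prompt-filter-hook #'my/strip-org-comment-lines)
#+end_src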
@@ -145,58 +155,54 @@ model/backend configuration.
 
 - Add support for Anthropic's Claude 3.5 Haiku.
 
-- Add support for xAI (contributed by @WuuBoLin) .
+- Add support for xAI.
 
-- Add support for Novita AI (contributed by @jasonhp) .
+- Add support for Novita AI.
 
-** Notable new features and UI changes
+** New features and UI changes
 
 - gptel's directives (see ~gptel-directives~) can now be dynamic, and
   include more than the system message. You can "pre-fill" a
   conversation with canned user/LLM messages. Directives can now be
   functions that dynamically generate the system message and
   conversation history based on the current context. This paves the
   way for fully flexible task-specific templates, which the UI does
-  not yet support in full. This design was suggested by
-  @meain. (#375)
+  not yet support in full.
 
 - gptel's rewrite interface has been reworked. If using a streaming
   endpoint, the rewritten text is streamed in as a preview placed over
   the original. In all cases, clicking on the preview brings up a
   dispatch you can use to easily diff, ediff, merge, accept or reject
   the changes (4ae9c1b2), and you can configure gptel to run one of
-  these actions automatically. See the README for examples. This
-  design was suggested by @meain. (#375)
+  these actions automatically. See the README for examples.
 
 - ~gptel-abort~, used to cancel requests in progress, now works across
-  the board, including when not using Curl or with ~gptel-rewrite~
-  (7277c00).
+  the board, including when not using Curl or with ~gptel-rewrite~.
 
-- The ~gptel-request~ API now explicitly supports streaming responses
-  (7277c00) , making it easy to write your own helpers or features with
+- The ~gptel-request~ API now explicitly supports streaming
+  responses, making it easy to write your own helpers or features with
   streaming support. The API also supports ~gptel-abort~ to stop and
   clean up responses.
 
 - You can now unset the system message -- different from setting it to
   an empty string. gptel will also automatically disable the system
-  message when using models that don't support it (0a2c07a) .
+  message when using models that don't support it.
 
 - Support for including PDFs with requests to Anthropic models has
   been added. (These queries are cached, so you pay only 10% of the
   token cost of the PDF in follow-up queries.) Note that document
   support (PDFs etc) for Gemini models has been available since
-  v0.9.5. (0f173ba, #459)
+  v0.9.5.
 
 - When defining a gptel model or backend, you can specify arbitrary
   parameters to be sent with each request. This includes the (many)
   API options across all APIs that gptel does not yet provide explicit
-  support for (bcbbe67e). This feature was suggested by @tillydray
-  (#471).
+  support for.
 
 - New transient command option to easily remove all included context
-  chunks (a844612), suggested by @metachip and @gavinhughes .
+  chunks.
 
-** Bug fixes
+** Notable bug fixes
 - Pressing ~RET~ on included files in the context inspector buffer now
   pops up the file correctly.
 - API keys are stripped of whitespace before sending.
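The streaming support in the ~gptel-request~ API described above can be sketched as follows. This is a minimal sketch: the prompt string is arbitrary, and the assumption (based on the entry above) is that with ~:stream t~ the callback is invoked repeatedly with partial response strings, and with a non-string value when the request finishes or fails.

#+begin_src emacs-lisp
;; Sketch: a streaming request via ~gptel-request~.  Assumes the
;; callback receives text chunks as strings while streaming, and a
;; non-string sentinel on completion or error.
(gptel-request
    "Summarize the changes in gptel 0.9.8."
  :stream t
  :callback (lambda (response info)
              (if (stringp response)
                  (message "chunk: %s" response)  ; partial text
                (message "request done: %S" response))))
#+end_src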