- gptel’s default ChatGPT backend has been removed.
  =gptel-backend= and =gptel-model= now default to nil, and there are no registered backends out of the box. However, gptel remains usable without configuration: if =gptel-send= is called without a backend set, the ChatGPT backend is created on the fly and used.
- The models =gpt-41-copilot=, =gpt-5= and =claude-opus-41= have been removed from the default list of GitHub Copilot models. These models are no longer available in the GitHub Copilot API.
- xAI backend: Add support for =grok-4-1-fast-reasoning=, =grok-4-1-fast-non-reasoning=, =grok-4-fast-reasoning= and =grok-4-fast-non-reasoning=.
- GitHub Copilot backend: Add support for =gpt-5.1-codex=, =gpt-5.1-codex-mini=, =claude-sonnet-4.6= and =gemini-3.1-pro-preview=.
- Gemini backend: Add support for =gemini-3.1-flash-lite-preview=; add a deprecation notice for =gemini-3-pro-preview=.
- When using =setopt= or the customize interface, =gptel-backend= can now be specified as a list instead of an opaque object. See its documentation for details.
- When using =gptel-send=, tool calls that require confirmation can now be examined in full in a dedicated inspection buffer, where they are displayed as Elisp forms. The tool name and tool call arguments can also be modified in place now. These modifications must be in place; deleting tool calls or adding new ones in the inspection buffer is not supported.
- New hooks =gptel-pre-tool-call-functions= and =gptel-post-tool-call-functions= run before and after each tool call, respectively. These hooks receive details of the (planned or finished) tool call and provide fine-grained control over it. They work with =gptel-send=, including when invoked from gptel’s Transient menu or from Elisp. =gptel-pre-tool-call-functions= can be used to modify tool call arguments, short-circuit the call and provide the results, block the tool call but continue the request with a message for the LLM, or stop the request entirely. =gptel-post-tool-call-functions= can be used to modify tool call results, block the tool call but continue the request, or stop the request entirely.
- New variable =gptel-bedrock-aws-cli-command= to set the path to the AWS CLI command for the Bedrock backend. Defaults to “aws”.
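  If the AWS CLI is not on Emacs’ =exec-path=, you can point gptel at it directly (the path below is illustrative):

  #+begin_src emacs-lisp
  ;; Illustrative path; adjust to where your AWS CLI is installed.
  (setq gptel-bedrock-aws-cli-command "/usr/local/bin/aws")
  #+end_src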
- =gptel-backend= can now be set from customize buffers, such as those produced by =M-x customize-group ⮐ gptel=. Previously =gptel-backend= was displayed in a read-only way, and could even break the display of the customize buffer depending on its value.
- Breaking change to the =gptel-request= API: Tool call arguments are passed to =gptel-request= callbacks as a plist, not a list. The plist keys are the function argument names as specified in the tool definition. This does not affect =gptel-send= or (to my best knowledge) any of the packages using gptel.

  Example: The previous behavior was

  (funcall callback `(tool-call ,web-search ("emacs" 10) ,tool-cb))

  where =web-search= is a =gptel-tool=, and =callback= and =tool-cb= are the =gptel-request= and tool callback respectively. The new behavior is:

  (funcall callback `(tool-call ,web-search (:query "emacs" :count 10) ,tool-cb))

  where =:query= and =:count= correspond to the arguments =query= and =count= in the definition of =web-search=. Note that this is a bug fix: this is how the API was documented and supposed to work in the first place.
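  For illustration, a =gptel-request= callback can now destructure this form and read arguments by key. A sketch, reusing the hypothetical =web-search= tool and its =:query=/=:count= arguments from the example above:

  #+begin_src emacs-lisp
  ;; Sketch of a `gptel-request' callback handling the new tool-call
  ;; form.  The tool and argument names are illustrative.
  (defun my/gptel-callback (response info)
    (pcase response
      (`(tool-call ,tool ,args ,tool-cb)
       ;; ARGS is a plist keyed by the argument names from the tool
       ;; definition, e.g. (:query "emacs" :count 10).
       (message "Tool call with query %S and count %S"
                (plist-get args :query)
                (plist-get args :count)))
      ((pred stringp) (message "Response: %s" response))))
  #+end_src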
- The models =gpt-5-codex=, =o3=, =o3-mini=, =o4-mini=, =claude-3.5-sonnet=, =claude-3.7-sonnet=, =claude-3.7-sonnet-thought=, =claude-opus-4= and =gemini-2.0-flash-001= have been removed from the default list of GitHub Copilot models. These models are no longer available in the GitHub Copilot API.
- =gptel-track-media= now only controls whether links to media files are tracked in chat buffers. Previously it also controlled whether media files added to the context explicitly via =gptel-add-file= were sent. This is considered a bug and has now been fixed.
- GitHub Copilot backend: Add support for =gpt-5.2=, =gpt-5.2-codex=, =gpt-41-copilot=, =claude-opus-4.5=, =claude-opus-4.6=, =gemini-3-pro-preview= and =gemini-3-flash-preview=.
- Anthropic backend: Add support for =claude-opus-4-6= and =claude-sonnet-4-6=.
- Bedrock backend: Add support for =claude-opus-4-5=, =claude-opus-4-6=, =claude-sonnet-4-6= and =nova-2-lite=.
- Add support for =gemini-3.1-pro-preview=, =gemini-3-pro-preview= and =gemini-3-flash-preview=.
- Add support for =gpt-5.1=.
- Running =gptel-add= in Ibuffer now adds marked buffers or the buffer at point to gptel’s context, and running =gptel-add= with a negative prefix-arg removes them. This is similar to its behavior in Dired. To add the literal contents of the Ibuffer to the context, you can select a text region first.
- When redirecting LLM responses to the kill ring or echo area, gptel now omits tool call results, as these tend to be very noisy. Kill ring redirection now correctly captures the full response from the LLM, including pre- and post-tool-call text.
- =gptel-rewrite= now supports tool calling. If =gptel-tools= is non-nil the LLM can, for instance, read files to fetch more context for the rewrite action.
- If a preset has been applied in a gptel chat buffer, saving the buffer to a file causes the preset to be recorded along with the other metadata (model, backend, tools etc.). This makes it possible to associate any collection of gptel settings/preferences with the chat file, and not just the few properties that gptel writes to the file otherwise. But resuming this chat with the preset settings applied requires that the preset be defined, so the chat file will be less self-contained.
- =gptel-send= now works in Vterm buffers in a limited way. Responses will be inserted into Vterm buffers, but without streaming. The respond-in-place option to overwrite queries with responses in Vterm buffers is supported as well, but might be buggy if your shell prompt is “rich” and has many dynamic elements. Support for =gptel-send= in Term/Ansi-Term and Eat buffers is not yet available but planned.
- Function-valued system messages/directives are now evaluated in the buffer from which the gptel request is sent, so they can use the context of the current buffer correctly. (Previously they were evaluated in a temporary buffer used to construct the query, leading to unexpected behavior.)
- When using OpenAI-compatible APIs (such as DeepSeek), models that call tools within their “reasoning” phase are now correctly handled by gptel.
- The models =gpt-4-copilot= and =o1= have been removed from the default list of GitHub Copilot models. These models are no longer available in the GitHub Copilot API.
- Link handling in gptel chat buffers has changed, hopefully for the better. When =gptel-track-media= is non-nil, gptel follows links in the prompt and includes their contents with queries. Previously, links to files had to be placed “standalone”, surrounded by blank lines, for the files to be included in the prompt. This limitation has been removed: all supported links in the prompt will be followed now.

  The “standalone” limitation was imposed to make included links stand out visually and avoid accidental inclusions, but in practice users were often confused about whether a link would be sent. gptel now prominently annotates links that will be followed and sent (see below), so it should be visually obvious when links will be followed. You can revert to the old behavior by customizing gptel, see below.
- The model =claude-3-sonnet-20240229= has been removed from the default list of Anthropic models. This model is no longer available in the Anthropic API.
- The models =gemini-1.5-flash-8b=, =gemini-1.5-flash=, =gemini-1.5-pro-latest=, =gemini-2.0-flash-thinking-exp-01-21=, =gemini-2.0-flash-lite-preview-02-05=, =gemini-2.5-flash-lite-preview-06-17=, =gemini-2.5-pro-preview-06-05=, =gemini-2.5-pro-preview-05-06=, =gemini-2.5-flash-preview-05-20=, =gemini-2.5-pro-preview-03-25= and =gemini-2.5-pro-exp-03-25= have been removed from the default list of Gemini models. These models are either no longer available, or have been superseded by their stable, non-preview versions. If required, you can add these models back to the Gemini backend in your personal configuration:

  (push 'gemini-2.5-pro-preview-05-06 (gptel-backend-models (gptel-get-backend "Gemini")))
- GitHub Copilot backend: Add support for =gpt-5-codex=, =claude-sonnet-4.5= and =claude-haiku-4.5=.
- Add support for =claude-sonnet-4-5-20250929= and =claude-haiku-4-5-20251001=.
- Add support for =gemini-pro-latest=, =gemini-flash-latest= and =gemini-flash-lite-latest=. These models point to the latest Gemini models of the corresponding type.
- Add support for =gemini-2.5-flash-preview-09-2025= and =gemini-2.5-flash-lite-preview-09-2025=.
- New minor mode =gptel-highlight-mode= to highlight LLM responses and more. An oft-requested feature: gptel can now highlight responses by decorating the (left) margin or fringe, and by applying a face to the response region. To use it, just turn on =gptel-highlight-mode= in any buffer (not just dedicated chat buffers). You can customize the type of decoration via =gptel-highlight-methods=, which see.
- Link annotations: When =gptel-track-media= is enabled in gptel chat buffers, gptel follows (Markdown/Org) links to files in the prompt and includes these files with queries. However, it was not clear whether a link type was supported and would be included, making this feature unreliable and difficult to use. Now all links in the prompt are explicitly annotated in real time in gptel buffers. Links that will not be sent are marked as such, and the link tooltip explains why. Links that will be sent are explicitly indicated as well.
- New user options =gptel-markdown-validate-link= and =gptel-org-validate-link=: These control whether links in Markdown/Org buffers are followed and their sources included in gptel’s prompt. Their value should be a function that determines whether a link is considered valid for inclusion with the gptel query. By default they allow all links, but they can be customized to require “standalone” link placement, which was gptel’s past behavior.
- gptel preset specifications can now modify the current values of gptel options instead of replacing them, allowing better composition of presets with your Emacs environment and with each other. For example, it is common to want a preset to add LLM tools to the existing set in =gptel-tools=. To this end, gptel adds a small, declarative DSL for use in preset definitions. For example, you can now use

  (gptel-make-preset 'websearch :tools '(:append ("search_web" "read_url")))

  to add to the current list in =gptel-tools= instead of replacing it. See the documentation of =gptel-make-preset= for more details.
- You can now apply a preset from gptel’s menu using =completing-read= instead of the menu. This is bound to =@= in the presets menu, so that =@ @= in gptel’s menu will bring up the =completing-read= prompter. This is an interim solution to the problem of the gptel presets menu not scaling well to more than about 25 presets. This menu is intended to be redesigned eventually.
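  Combining these features, a preset definition in your init file might look like the following sketch (the preset name, system message and tool names are illustrative; consult =gptel-make-preset='s documentation for the full set of keys):

  #+begin_src emacs-lisp
  ;; Sketch: a preset that sets a system message and appends tools to
  ;; the current `gptel-tools' instead of replacing them.
  (gptel-make-preset 'websearch
    :system "You are a careful research assistant."
    :tools '(:append ("search_web" "read_url")))
  #+end_src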
- Tool result and reasoning blocks are now folded by default in Markdown and text buffers. You can cycle their folded state by pressing =TAB= with the cursor on the opening or closing line containing the code fences.
- =gptel-request= is now a standalone library, independent of gptel and its UI. This is intended
  - to provide a clean separation between =gptel-request= (the LLM querying library) and =gptel= (the LLM interaction UI),
  - to make it simpler to create alternative UIs for gptel, wherein the package author may simply =(require 'gptel-request)= to access the =gptel-request= API, and
  - to make it so gptel does not need to be loaded to use =gptel-request=.

  The =gptel-request= feature does not provide any response handling, and expects the user to provide a response callback. If you want to reuse =gptel-send='s response handler you can =(require 'gptel)=. For logistical reasons, the =gptel-request= library will continue to be shipped with =gptel=.
- New user option =gptel-context=: This variable can be used to specify additional context sources for gptel queries, usually files or buffers. It serves the longstanding requests for buffer-local context specification, as well as context specification in gptel presets and programmatic gptel use. As always, in a preset definition this corresponds to the key named after the variable with the “gptel-” prefix stripped:

  (gptel-make-preset 'with-docs :context '("./README.md" "./README" "./README.org"))

  Each entry in =gptel-context= is a file path or a buffer object, but other kinds of specification are possible. See its documentation for details.
- =gptel-mcp-connect= can now start MCP servers synchronously. This is useful for scripting purposes, when MCP tools need to be available before performing other actions. One common use is starting MCP servers when applying a gptel preset.
- “gitignored” files are omitted by default when adding directories to gptel’s context. This setting can be controlled via the user option =gptel-context-restrict-to-project-files=. (This only applies to directories; individual files specified via =gptel-add-file= will always be added to the context.)
- =gptel-make-bedrock= now checks for the =AWS_BEARER_TOKEN_BEDROCK= environment variable and uses it for Bedrock API-key-based authentication if present. See https://docs.aws.amazon.com/bedrock/latest/userguide/api-keys.html.
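  If your environment doesn’t already export that variable, it can also be supplied from within Emacs (the token value below is a placeholder; Bedrock API keys are generated in the AWS console):

  #+begin_src emacs-lisp
  ;; Placeholder token value, for illustration only.
  (setenv "AWS_BEARER_TOKEN_BEDROCK" "your-bedrock-api-key")
  #+end_src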
- The suffix =-latest= has been dropped from Grok model names, as it is no longer required. So the models =grok-3-latest=, =grok-3-mini-latest= have been renamed to just =grok-3=, =grok-3-mini= and so on.
- The models =gemini-exp-1206=, =gemini-2.5-pro-preview-03-25=, =gemini-2.5-pro-preview-05-06= and =gemini-2.5-flash-preview-04-17= have been removed from the default list of Gemini models. The first one is no longer available, and the others are superseded by their stable, non-preview versions. If required, you can add these models back to the Gemini backend in your personal configuration:

  (push 'gemini-2.5-pro-preview-03-25 (gptel-backend-models (gptel-get-backend "Gemini")))
- Add support for =grok-code-fast-1=.
- Add support for =gpt-5=, =gpt-5-mini= and =gpt-5-nano=.
- Add support for =claude-opus-4-1-20250805=.
- Add support for =gemini-2.5-pro=, =gemini-2.5-flash= and =gemini-2.5-flash-lite-preview-06-17=.
- Add support for Open WebUI. Open WebUI provides an OpenAI-compatible API, so the “support” is just a new section of the README with instructions.
- Add support for Moonshot (Kimi), in a similar sense.
- Add support for the AI/ML API, in a similar sense.
- Add support for =grok-4=.
- =gptel-rewrite= no longer pops up a Transient menu. Instead, it reads a rewrite instruction and starts the rewrite immediately. This is intended to reduce the friction of using =gptel-rewrite=. You can still bring up the Transient menu by pressing =M-RET= instead of =RET= when supplying the rewrite instruction. If no region is selected and there are pending rewrites, the rewrite menu is displayed.
- =gptel-rewrite= will now produce more refined merge conflicts when using the merge action. It works by feeding the original and rewritten text to git (when it is available).
- New command =gptel-gh-login= to authenticate with GitHub Copilot. The authentication step happens automatically when you use gptel, so invoking it manually is not required. But you can use this command to change accounts or refresh your login if required.
- gptel now supports handling reasoning/thinking blocks in responses from xAI’s Grok models. This is controlled by =gptel-include-reasoning=, in the same way as for other APIs.
- When including a file in the context, the abbreviated full path of the file is now included instead of the basename. Specifically, =/home/user/path/to/file= is included as =~/path/to/file=. This is to provide additional context for LLM actions, including tool use in subsequent conversation turns. This applies to context included via =gptel-add= or as a link in a buffer.
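  The abbreviation shown matches what Emacs’ built-in =abbreviate-file-name= produces (whether gptel calls exactly this function is an implementation detail):

  #+begin_src emacs-lisp
  ;; Assuming /home/user is your home directory:
  (abbreviate-file-name "/home/user/path/to/file")
  ;; ⇒ "~/path/to/file"
  #+end_src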
gptel-requestcan now take an optional schema argument to constrain LLM output to the specified JSON schema. The JSON schema can be provided as- an elisp object, a nested plist structure.
- A JSON schema serialized to a string.
- A shorthand object/array description, described in the manual (and
the documentation of
gptel--dispatch-schema-type.)
This feature works with all major backends: OpenAI, Anthropic, Gemini, llama-cpp and Ollama. It is presently supported by some but not all “OpenAI-compatible API” providers.
Note that this is only available via the
gptel-requestAPI, and currently unsupported bygptel-send. - gptel’s log buffer and logging settings are now accessible from
gptel’s Transient menu. To see these turn on the full interface by
setting
gptel-expert-commands. - Presets: You can now specify
:request-params(API-specific request parameters) in a preset. - From the dry-run inspector buffer, you can now copy the Curl command for the request. Like when continuing the query, the request is constructed from the contents of the buffer, which is editable.
- gptel now handles Ollama models that return both reasoning content and tool calls in a single request.
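  The structured output support described above might be used along the following lines. This is a sketch: the prompt, field names and nested-plist schema shape are illustrative; see the manual for the supported schema forms.

  #+begin_src emacs-lisp
  ;; Sketch: constrain the response to a JSON object with two fields.
  (gptel-request "Name one Emacs package and its author."
    :schema '(:type object
              :properties (:package (:type string)
                           :author  (:type string))
              :required ["package" "author"])
    :callback (lambda (response info)
                (when (stringp response)
                  (message "LLM JSON: %s" response))))
  #+end_src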
- The “Prompt from minibuffer” option in gptel’s Transient menu behaves slightly differently now. If a region is active in the buffer, it can optionally be included in the prompt. The keybinding to toggle this is displayed during the minibuffer read. Additionally, when reading a prompt or instructions from the minibuffer you can switch to a dedicated composition buffer via =C-c C-e=.
- =gptel-org-branching-context= is now a global variable. It was buffer-local by default in past releases.
- The following models have been removed from the default ChatGPT backend:
  - =o1-preview=: use =o1= instead.
  - =gpt-4-turbo-preview=: use =gpt-4o= or =gpt-4-turbo= instead.
  - =gpt-4-32k=, =gpt-4-0125-preview= and =gpt-4-1106-preview=: use =gpt-4o= or =gpt-4= instead.

  Alternatively, you can add these models back to the backend in your personal configuration:

  (push 'gpt-4-turbo-preview (gptel-backend-models (gptel-get-backend "ChatGPT")))
- Only relevant if you use =gptel-request= in your elisp code; interactive gptel usage is unaffected: =gptel-request= now takes a new, optional =:transforms= argument. Any prompt modifications (like adding context to requests) must now be specified via this argument. See the definition of =gptel-send= for an example.
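  For instance, a programmatic request can opt in to the standard prompt transformations along these lines (a sketch; the prompt and callback are illustrative, and the definition of =gptel-send= remains the canonical reference):

  #+begin_src emacs-lisp
  ;; Sketch: pass the default transform list explicitly, as the
  ;; interactive commands do.
  (gptel-request "What does this buffer do?"
    :transforms gptel-prompt-transform-functions
    :callback (lambda (response info)
                (when (stringp response)
                  (message "gptel: %s" response))))
  #+end_src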
- Add support for =gpt-4.1=, =gpt-4.1-mini=, =gpt-4.1-nano=, =o3= and =o4-mini=.
- Add support for =gemini-2.5-pro-exp-03-25=, =gemini-2.5-flash-preview-04-17=, =gemini-2.5-pro-preview-05-06= and =gemini-2.5-pro-preview-06-05=.
- Add support for =claude-sonnet-4-20250514= and =claude-opus-4-20250514=.
- Add support for AWS Bedrock models. You can create an AWS Bedrock gptel backend with =gptel-make-bedrock=. Please note: AWS Bedrock support requires Curl 8.9.0 or higher.
- You can now create an xAI backend with =gptel-make-xai=. (xAI was supported before, but the model configuration is now handled for you by this function.)
- Add support for GitHub Copilot Chat. See the README and =gptel-make-gh-copilot=. Please note: this is only the chat component of GitHub Copilot. Copilot’s =completion-at-point= (tab-completion) functionality is not supported by gptel.
- Add support for SambaNova. This is an OpenAI-compatible API, so you can create a backend with =gptel-make-openai=; see the README for details.
- Add support for Mistral Le Chat. This is an OpenAI-compatible API, so you can create a backend with =gptel-make-openai=; see the README for details.
- gptel now supports handling reasoning/thinking blocks in responses from Gemini models. This is controlled by =gptel-include-reasoning=, in the same way as for other APIs.
- The new option =gptel-curl-extra-args= can be used to specify extra arguments to the Curl command used for the request. This is the global version of the backend-specific =:curl-args= slot, which can be used to specify Curl arguments when using a specific backend.
- Tools now run in the buffer from which the request originates. This can be significant when tools read or manipulate Emacs’ state.
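  For example, to route all gptel requests through a proxy (the proxy URL is illustrative):

  #+begin_src emacs-lisp
  ;; Extra arguments passed verbatim to Curl for every request.
  (setq gptel-curl-extra-args '("--proxy" "socks5://localhost:9050"))
  #+end_src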
- gptel can access MCP server tools by integrating with the mcp.el package, available at https://github.com/lizqwerscott/mcp.el and on MELPA. To help with the integration, two new commands are provided: =gptel-mcp-connect= and =gptel-mcp-disconnect=. You can use these to start MCP servers selectively and add tools to gptel. These commands are also available from gptel’s tools menu. They are currently not autoloaded by gptel; to access them, require the =gptel-integrations= feature.
- You can now define “presets”, which are a bundle of gptel options, such as the backend, model, system message, included tools, temperature and so on. This set of options can be applied together, making it easy to switch between different tasks using gptel. From gptel’s transient menu, you can save the current configuration as a preset or apply another one. Presets can be applied globally, buffer-locally or for the next request only. To persist presets across Emacs sessions, define presets in your configuration using =gptel-make-preset=.
- When using =gptel-send= from anywhere in Emacs, you can now include a “cookie” of the form =@preset-name= in the prompt text to apply that preset before sending. The preset is applied for that request only. This is an easy way to specify models, tools, system messages (etc.) on the fly. In chat buffers the preset cookie is fontified and available for completion via =completion-at-point=.
- For scripting purposes, a =gptel-with-preset= macro is provided to create an environment with a preset applied.
- Links to plain-text files in chat buffers can be followed, and their contents included with the request. Using Org or Markdown links is an easy, intuitive, persistent and buffer-local way to specify context. To enable this behavior, turn on =gptel-track-media=. This is a pre-existing option that also controls whether image/document links are followed and sent (when the model supports it).
- A new hook =gptel-prompt-transform-functions= is provided for arbitrary transformations of the prompt prior to sending a request. This hook runs in a temporary buffer containing the text to be sent. Any aspect of the request (the text, destination, request parameters, response handling preferences) can be modified buffer-locally here. These hook functions can be asynchronous.
- The user option =gptel-use-curl= can now be used to specify a Curl path.
- The current kill can be added to gptel’s context. To enable this, turn on =gptel-expert-commands= and use gptel’s transient menu.
- The tools menu (=gptel-tools=) has been redesigned. It now displays tool categories and associated tools in two columns, and it should scale better to any number of tools. As a bonus, the new menu requires half as many keystrokes as before to enable individual tools or toggle categories.
- Fix more Org markup conversion edge cases involving nested Markdown delimiters.
Version 0.9.8 adds support for new Gemini, Anthropic, OpenAI, Perplexity and DeepSeek models; introduces LLM tool use/function calling and a redesign of gptel-menu; and includes new customization hooks, dry-run options, refined settings, improvements to the rewrite feature, and control of LLM “reasoning” content.
- =gemini-pro= has been removed from the list of Gemini models, as this model is no longer supported by the Gemini API.
- Sending an active region in Org mode will now apply Org mode-specific rules to the text, such as branching context.
- The following obsolete variables and functions have been removed:
  - =gptel-send-menu=: Use =gptel-menu= instead.
  - =gptel-host=: Use =gptel-make-openai= instead.
  - =gptel-playback=: Use =gptel-stream= instead.
  - =gptel--debug=: Use =gptel-log-level= instead.
- Add support for several new Gemini models, including =gemini-2.0-flash=, =gemini-2.0-pro-exp= and =gemini-2.0-flash-thinking-exp=, among others.
- Add support for the Anthropic model =claude-3-7-sonnet-20250219=, including its “reasoning” output.
- Add support for OpenAI’s =o1=, =o3-mini= and =gpt-4.5-preview= models.
- Add support for Perplexity. While gptel supported Perplexity in earlier releases by reusing its OpenAI support, there is now first-class support for the Perplexity API, including citations.
- Add support for DeepSeek. While gptel supported DeepSeek in earlier releases by reusing its OpenAI support, there is now first-class support for the DeepSeek API, including support for handling “reasoning” output.
- =gptel-rewrite= now supports iterating on responses.
- gptel supports the ability to simulate/dry-run requests so you can see exactly what will be sent. This payload preview can now be edited in place and the request continued.
- Directories can now be added to gptel’s global context. Doing so will add all files in the directory recursively.
- “Oneshot” settings: when using gptel’s Transient menus, request parameters, directives and tools can now be set for the next request only in addition to globally across the Emacs session and buffer-locally. This is useful for making one-off requests with different settings.
- =gptel-mode= can now be used in all modes derived from =text-mode=.
- gptel now tries to handle LLM responses that are in mixed Org/Markdown markup correctly.
- Add =gptel-org-convert-response= to toggle the automatic conversion of (possibly) Markdown-formatted LLM responses to Org markup where appropriate.
- You can now look up registered gptel backends using the =gptel-get-backend= function. This is intended to make scripting and configuring gptel easier. =gptel-get-backend= is a generalized variable, so you can (un)set backends with =setf=.
- Tool use: gptel now supports LLM tool use, or function calling. Essentially, you can equip the LLM with capabilities (such as filesystem access, web search, control of Emacs or introspection of Emacs’ state, and more) that it can use to perform tasks for you. gptel runs these tools using argument values provided by the LLMs. This requires specifying tools, which are elisp functions with plain-text descriptions of their arguments and results. gptel does not include any tools out of the box yet.
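  For illustration, a simple tool definition might look like this sketch using =gptel-make-tool=, gptel’s tool constructor (the tool name and behavior are illustrative):

  #+begin_src emacs-lisp
  ;; Sketch: a tool the LLM can call to read an Emacs buffer.
  (gptel-make-tool
   :name "read_buffer"
   :description "Return the contents of an Emacs buffer."
   :args (list '(:name "buffer"
                 :type string
                 :description "The name of the buffer to read."))
   :category "emacs"
   :function (lambda (buffer)
               (if (buffer-live-p (get-buffer buffer))
                   (with-current-buffer buffer (buffer-string))
                 (format "Error: no buffer named %s" buffer))))
  #+end_src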
- You can look up registered gptel tools using the =gptel-get-tool= function. This is intended to make scripting and configuring gptel easier. =gptel-get-tool= is a generalized variable, so you can (un)set tools with =setf=.
- New hooks for customization:
  - =gptel-prompt-filter-hook= runs in a temporary buffer containing the text to be sent, before the full query is created. It can be used for arbitrary text transformations of the source text.
  - =gptel-post-request-hook= runs after the request is sent, and (possibly) before any response is received. This is intended for preparatory/reset code.
  - =gptel-post-rewrite-hook= runs after a =gptel-rewrite= request is successfully and fully received.
- =gptel-menu= has been redesigned. It now shows a verbose description of what will be sent and where the output will go. This is intended to provide clarity on gptel’s default prompting behavior, as well as the effect of the various prompt/response redirections it provides. Incompatible combinations of options are now disallowed.
- The spacing between the end of the prompt and the beginning of the response in buffers is now customizable via =gptel-response-separator=, and can be any string.
- =gptel-context-remove-all= is now an interactive command.
- gptel now handles “reasoning” content produced by LLMs. Some LLMs include in their response a “thinking” or “reasoning” section. This text improves the quality of the LLM’s final output, but may not be interesting to you by itself. The new user option =gptel-include-reasoning= controls whether and how gptel displays this content.
- (Anthropic API only) Some LLM backends can cache content sent to them by gptel, so that only the newly included part of the text needs to be processed on subsequent conversation turns. This results in faster and significantly cheaper processing. The new user option =gptel-cache= can be used to specify caching preferences for prompts, the system message and/or tool definitions. This is supported only by the Anthropic API right now.
- (Org mode) Org property drawers are now stripped from the prompt text before sending queries. You can control this behavior or specify additional Org elements to ignore via =gptel-org-ignore-elements=. (For more complex pre-processing you can use =gptel-prompt-filter-hook=.)
- Fix response mix-up when running concurrent requests in Org mode buffers.
- gptel now works around an Org fontification bug where streaming responses in Org mode buffers sometimes caused source code blocks to remain unfontified.
Version 0.9.7 adds dynamic directives, a better rewrite interface, streaming support to the gptel request API, and more flexible model/backend configuration.
- =gptel-rewrite-menu= has been obsoleted. Use =gptel-rewrite= instead.
- Add support for OpenAI’s =o1-preview= and =o1-mini=.
- Add support for Anthropic’s Claude 3.5 Haiku.
- Add support for xAI.
- Add support for Novita AI.
- gptel’s directives (see =gptel-directives=) can now be dynamic, and include more than the system message. You can “pre-fill” a conversation with canned user/LLM messages. Directives can now be functions that dynamically generate the system message and conversation history based on the current context. This paves the way for fully flexible, task-specific templates, which the UI does not yet support in full.
- gptel’s rewrite interface has been reworked. If using a streaming endpoint, the rewritten text is streamed in as a preview placed over the original. In all cases, clicking on the preview brings up a dispatch you can use to easily diff, ediff, merge, accept or reject the changes (4ae9c1b2), and you can configure gptel to run one of these actions automatically. See the README for examples.
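  A dynamic directive might be defined along these lines (a sketch; the directive name and message contents are illustrative, and the exact return convention is described in =gptel-directives='s documentation):

  #+begin_src emacs-lisp
  ;; Sketch: a directive function returning the system message
  ;; followed by canned alternating user/LLM messages.
  (add-to-list
   'gptel-directives
   `(commit-review
     . ,(lambda ()
          (list "You are a terse code reviewer."
                "Review my commits one at a time."        ; canned user message
                "Understood. Show me the first commit.")))) ; canned LLM reply
  #+end_src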
- =gptel-abort=, used to cancel requests in progress, now works across the board, including when not using Curl or with =gptel-rewrite=.
- The =gptel-request= API now explicitly supports streaming responses, making it easy to write your own helpers or features with streaming support. The API also supports =gptel-abort= to stop and clean up responses.
- You can now unset the system message, which is different from setting it to an empty string. gptel will also automatically disable the system message when using models that don’t support it.
- Support for including PDFs with requests to Anthropic models has been added. (These queries are cached, so you pay only 10% of the token cost of the PDF in follow-up queries.) Note that document support (PDFs etc) for Gemini models has been available since v0.9.5.
- When defining a gptel model or backend, you can specify arbitrary parameters to be sent with each request. This includes the (many) API options across all APIs that gptel does not yet provide explicit support for.
- New transient command option to easily remove all included context chunks.
- Pressing =RET= on included files in the context inspector buffer now pops up the file correctly.
- API keys are stripped of whitespace before sending.
- Multiple UI, backend and prompt construction bugs have been fixed.