Commit 4c5f7e6 (1 parent: 2835d93)

gptel: Bump version to v0.9.8

* gptel.el: Update version and package description.

1 file changed: +14 -9 lines

gptel.el

Lines changed: 14 additions & 9 deletions
@@ -3,7 +3,7 @@
 ;; Copyright (C) 2023 Karthik Chikmagalur
 
 ;; Author: Karthik Chikmagalur <[email protected]>
-;; Version: 0.9.7
+;; Version: 0.9.8
 ;; Package-Requires: ((emacs "27.1") (transient "0.7.4") (compat "29.1.4.1"))
 ;; Keywords: convenience, tools
 ;; URL: https://github.com/karthink/gptel
@@ -36,7 +36,7 @@
 ;;
 ;; - The services ChatGPT, Azure, Gemini, Anthropic AI, Anyscale, Together.ai,
 ;;   Perplexity, Anyscale, OpenRouter, Groq, PrivateGPT, DeepSeek, Cerebras,
-;;   Github Models, xAI and Kagi (FastGPT & Summarizer).
+;;   Github Models, Novita AI, xAI and Kagi (FastGPT & Summarizer).
 ;; - Local models via Ollama, Llama.cpp, Llamafiles or GPT4All
 ;;
 ;; Additionally, any LLM service (local or remote) that provides an
@@ -51,6 +51,7 @@
 ;; - Supports conversations and multiple independent sessions.
 ;; - Supports tool-use to equip LLMs with agentic capabilities.
 ;; - Supports multi-modal models (send images, documents).
+;; - Supports "reasoning" content in LLM responses.
 ;; - Save chats as regular Markdown/Org/Text files and resume them later.
 ;; - You can go back and edit your previous prompts or LLM responses when
 ;;   continuing a conversation. These will be fed back to the model.
@@ -70,9 +71,11 @@
 ;; - For Gemini: define a gptel-backend with `gptel-make-gemini', which see.
 ;; - For Anthropic (Claude): define a gptel-backend with `gptel-make-anthropic',
 ;;   which see.
-;; - For Together.ai, Anyscale, Perplexity, Groq, OpenRouter, DeepSeek, Cerebras or
+;; - For Together.ai, Anyscale, Groq, OpenRouter, DeepSeek, Cerebras or
 ;;   Github Models: define a gptel-backend with `gptel-make-openai', which see.
 ;; - For PrivateGPT: define a backend with `gptel-make-privategpt', which see.
+;; - For Perplexity: define a backend with `gptel-make-perplexity', which see.
+;; - For Deepseek: define a backend with `gptel-make-deepseek', which see.
 ;; - For Kagi: define a gptel-backend with `gptel-make-kagi', which see.
 ;;
 ;; For local models using Ollama, Llama.cpp or GPT4All:
@@ -125,11 +128,12 @@
 ;; Include more context with requests:
 ;;
 ;; If you want to provide the LLM with more context, you can add arbitrary
-;; regions, buffers or files to the query with `gptel-add'. To add text or
-;; media files, call `gptel-add' in Dired or use the dedicated `gptel-add-file'.
+;; regions, buffers, files or directories to the query with `gptel-add'. To add
+;; text or media files, call `gptel-add' in Dired or use the dedicated
+;; `gptel-add-file'.
 ;;
-;; You can also add context from gptel's menu instead (gptel-send with a prefix
-;; arg), as well as examine or modify context.
+;; You can also add context from gptel's menu instead (`gptel-send' with a
+;; prefix arg), as well as examine or modify context.
 ;;
 ;; When context is available, gptel will include it with each LLM query.
 ;;
@@ -156,8 +160,9 @@
 ;; will always use these settings, allowing you to create mostly reproducible
 ;; LLM chat notebooks.
 ;;
-;; Finally, gptel offers a general purpose API for writing LLM ineractions
-;; that suit your workflow, see `gptel-request'.
+;; Finally, gptel offers a general purpose API for writing LLM ineractions that
+;; suit your workflow. See `gptel-request', and `gptel-fsm' for more advanced
+;; usage.
 
 ;;; Code:
 (declare-function markdown-mode "markdown-mode")
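The last hunk points readers at `gptel-request' as the general-purpose entry point. A hedged sketch of what a non-interactive call looks like — the prompt text is illustrative, and only the documented `:callback` convention (response string or nil, plus an info plist) is assumed:

```elisp
;; Sketch only: a one-off query through `gptel-request', bypassing the
;; chat UI.  Requires a configured backend and API key.
(gptel-request
 "Summarize this commit message in one sentence."
 :callback (lambda (response info)
             (if response
                 (message "gptel: %s" response)
               (message "gptel request failed: %s"
                        (plist-get info :status)))))
```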
