|
3 | 3 | ;; Copyright (C) 2023 Karthik Chikmagalur |
4 | 4 |
|
5 | 5 | ;; Author: Karthik Chikmagalur <[email protected]> |
6 | | -;; Version: 0.9.7 |
| 6 | +;; Version: 0.9.8 |
7 | 7 | ;; Package-Requires: ((emacs "27.1") (transient "0.7.4") (compat "29.1.4.1")) |
8 | 8 | ;; Keywords: convenience, tools |
9 | 9 | ;; URL: https://github.com/karthink/gptel |
|
36 | 36 | ;; |
37 | 37 | ;; - The services ChatGPT, Azure, Gemini, Anthropic AI, Anyscale, Together.ai, |
38 | 38 | ;; Perplexity, OpenRouter, Groq, PrivateGPT, DeepSeek, Cerebras, |
39 | | -;; Github Models, xAI and Kagi (FastGPT & Summarizer). |
| 39 | +;; Github Models, Novita AI, xAI and Kagi (FastGPT & Summarizer). |
40 | 40 | ;; - Local models via Ollama, Llama.cpp, Llamafiles or GPT4All |
41 | 41 | ;; |
42 | 42 | ;; Additionally, any LLM service (local or remote) that provides an |
|
51 | 51 | ;; - Supports conversations and multiple independent sessions. |
52 | 52 | ;; - Supports tool-use to equip LLMs with agentic capabilities. |
53 | 53 | ;; - Supports multi-modal models (send images, documents). |
| 54 | +;; - Supports "reasoning" content in LLM responses. |
54 | 55 | ;; - Save chats as regular Markdown/Org/Text files and resume them later. |
55 | 56 | ;; - You can go back and edit your previous prompts or LLM responses when |
56 | 57 | ;; continuing a conversation. These will be fed back to the model. |
|
70 | 71 | ;; - For Gemini: define a gptel-backend with `gptel-make-gemini', which see. |
71 | 72 | ;; - For Anthropic (Claude): define a gptel-backend with `gptel-make-anthropic', |
72 | 73 | ;; which see. |
73 | | -;; - For Together.ai, Anyscale, Perplexity, Groq, OpenRouter, DeepSeek, Cerebras or |
| 74 | +;; - For Together.ai, Anyscale, Groq, OpenRouter, Cerebras or |
74 | 75 | ;; Github Models: define a gptel-backend with `gptel-make-openai', which see. |
75 | 76 | ;; - For PrivateGPT: define a backend with `gptel-make-privategpt', which see. |
| 77 | +;; - For Perplexity: define a backend with `gptel-make-perplexity', which see. |
| 78 | +;; - For DeepSeek: define a backend with `gptel-make-deepseek', which see. |
76 | 79 | ;; - For Kagi: define a gptel-backend with `gptel-make-kagi', which see. |
77 | 80 | ;; |
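As a sketch of the backend setup described above: an OpenAI-compatible provider is registered with `gptel-make-openai'. The host, endpoint, model name and key below are illustrative values for Groq, not guaranteed defaults; check the provider's documentation before use.

```elisp
;; Register an OpenAI-compatible backend (here: Groq) with gptel.
;; :key can be a string, a function returning a string, or a symbol.
(gptel-make-openai "Groq"
  :host "api.groq.com"                        ; provider API host (assumed)
  :endpoint "/openai/v1/chat/completions"     ; chat-completions path (assumed)
  :stream t                                   ; enable streaming responses
  :key "your-api-key"                         ; replace with your real key
  :models '(llama-3.1-70b-versatile))         ; model names vary by provider

;; Optionally make it the default backend and model:
;; (setq gptel-backend (gptel-get-backend "Groq")
;;       gptel-model 'llama-3.1-70b-versatile)
```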
78 | 81 | ;; For local models using Ollama, Llama.cpp or GPT4All: |
|
125 | 128 | ;; Include more context with requests: |
126 | 129 | ;; |
127 | 130 | ;; If you want to provide the LLM with more context, you can add arbitrary |
128 | | -;; regions, buffers or files to the query with `gptel-add'. To add text or |
129 | | -;; media files, call `gptel-add' in Dired or use the dedicated `gptel-add-file'. |
| 131 | +;; regions, buffers, files or directories to the query with `gptel-add'. To add |
| 132 | +;; text or media files, call `gptel-add' in Dired or use the dedicated |
| 133 | +;; `gptel-add-file'. |
130 | 134 | ;; |
131 | | -;; You can also add context from gptel's menu instead (gptel-send with a prefix |
132 | | -;; arg), as well as examine or modify context. |
| 135 | +;; You can also add context from gptel's menu instead (`gptel-send' with a |
| 136 | +;; prefix arg), as well as examine or modify context. |
133 | 137 | ;; |
134 | 138 | ;; When context is available, gptel will include it with each LLM query. |
135 | 139 | ;; |
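The context workflow above boils down to a few interactive commands; a quick reference, using only the command names mentioned in this Commentary:

```elisp
;; M-x gptel-add        ; add the region, buffer, or Dired files to the context
;; M-x gptel-add-file   ; add a text or media file to the context
;; C-u M-x gptel-send   ; open gptel's menu; examine or modify the context here
```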
|
156 | 160 | ;; will always use these settings, allowing you to create mostly reproducible |
157 | 161 | ;; LLM chat notebooks. |
158 | 162 | ;; |
159 | | -;; Finally, gptel offers a general purpose API for writing LLM ineractions |
160 | | -;; that suit your workflow, see `gptel-request'. |
| 163 | +;; Finally, gptel offers a general-purpose API for writing LLM interactions that |
| 164 | +;; suit your workflow. See `gptel-request', and `gptel-fsm' for more advanced |
| 165 | +;; usage. |
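A minimal sketch of the `gptel-request' API mentioned above, assuming a backend has already been configured; the prompt text is arbitrary and the callback signature follows gptel's documented (RESPONSE INFO) convention:

```elisp
;; Fire off a one-shot query and handle the response asynchronously.
(gptel-request "Why is the sky blue?"
  :callback
  (lambda (response info)
    (if response
        (message "LLM response: %s" response)
      ;; On failure, RESPONSE is nil and INFO holds error details.
      (message "gptel-request failed: %s" (plist-get info :status)))))
```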
161 | 166 |
|
162 | 167 | ;;; Code: |
163 | 168 | (declare-function markdown-mode "markdown-mode") |
|