## Latest News

-GPT4All v3.9.0 was released on February 4th. Changes include:
-
-* **LocalDocs Fix:** LocalDocs no longer shows an error on later messages with reasoning models.
-* **DeepSeek Fix:** DeepSeek-R1 reasoning (in 'think' tags) no longer appears in chat names and follow-up questions.
-* **Windows ARM Improvements:**
-  * Graphical artifacts on some SoCs have been fixed.
-  * A crash when adding a collection of PDFs to LocalDocs has been fixed.
-* **Template Parser Fixes:** Chat templates containing an unclosed comment no longer freeze GPT4All.
-* **New Models:** OLMoE and Granite MoE models are now supported.
-
-GPT4All v3.8.0 was released on January 30th. Changes include:
-
-* **Native DeepSeek-R1-Distill Support:** GPT4All now has robust support for the DeepSeek-R1 family of distillations.
-  * Several model variants are now available on the downloads page.
-  * Reasoning (wrapped in "think" tags) is displayed similarly to the Reasoner model.
-  * The DeepSeek-R1 Qwen pretokenizer is now supported, resolving the loading failure in previous versions.
-  * The model is now configured with a GPT4All-compatible prompt template by default.
-* **Chat Templating Overhaul:** The template parser has been *completely* replaced with one that has much better compatibility with common models.
-* **Code Interpreter Fixes:**
-  * An issue present in v3.7.0 that prevented the code interpreter from logging a single string has been fixed.
-  * The UI no longer freezes while the code interpreter is running a computation.
-* **Local Server Fixes:**
-  * An issue present since v3.5.0 that prevented the server from using LocalDocs after the first request has been fixed.
-  * System messages are now correctly hidden from the message history.
+GPT4All v3.10.0 was released on February 24th. Changes include:
+
+* **Remote Models:**
+  * The Add Model page now has a dedicated tab for remote model providers.
+  * Groq, OpenAI, and Mistral remote models are now easier to configure.
+* **CUDA Compatibility:** GPUs with CUDA compute capability 5.0, such as the GTX 750, are now supported by the CUDA backend.
+* **New Model:** The non-MoE Granite model is now supported.
+* **Translation Updates:**
+  * The Italian translation has been updated.
+  * The Simplified Chinese translation has been significantly improved.
+* **Better Chat Templates:** The default chat templates for OLMoE 7B 0924/0125 and Granite 3.1 3B/8B have been improved.
+* **Whitespace Fixes:** DeepSeek-R1-based models now have better whitespace behavior in their output.
+* **Crash Fixes:** Several issues that could potentially cause GPT4All to crash have been fixed.