Commit 48edec2

update README with markdown editor screenshot
1 parent 2387742 commit 48edec2

File tree

1 file changed: +4 −0 lines changed


README.md

Lines changed: 4 additions & 0 deletions
@@ -6,6 +6,8 @@
 
 ![Inquisitive](/images/inquisitive-screenshot.png)
 
+![Inquisitive](/images/inquisitive-markdown-editor.png)
+
 ## Features
 
 * Upload files of various formats and store them in a vector database for Retrieval-Augmented Generation (RAG)
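The RAG flow named in the bullet above — store uploaded text as vectors, then retrieve the closest match for a question — can be sketched in a few lines. This is a toy illustration, not the project's actual implementation: it uses a bag-of-words "embedding" and an in-memory list where a real setup would use a sentence-embedding model and a vector database.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real setup would use an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """Minimal in-memory stand-in for a vector database."""
    def __init__(self):
        self.docs = []  # list of (text, vector) pairs

    def add(self, text: str):
        self.docs.append((text, embed(text)))

    def query(self, question: str, k: int = 1):
        # Rank stored documents by similarity to the question vector.
        qv = embed(question)
        ranked = sorted(self.docs, key=lambda d: cosine(qv, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = VectorStore()
store.add("Inquisitive stores uploaded notes as vectors for retrieval")
store.add("Streamlit renders the chat interface")
print(store.query("how are notes stored?", k=1)[0])
```

The retrieved chunk would then be prepended to the LLM prompt as context, which is the "augmented" half of RAG.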
@@ -224,6 +226,8 @@ In the beginning, I wanted something simpler which could be built over a weekend
 
 * **Adding notes** - After feeding my notes to Inquisitive, retrieving relevant information was quick enough (`/notes`). It is still early to comment on this feature, but being able to view notes in markdown, edit them, refresh the updated data in the vector DB on edit, and see all the reference notes in the sidebar made searching and organizing notes a lot more convenient. My only gripe is the lack of a good markdown editor: currently I'm using Streamlit's built-in text-area component for adding notes, which could be better.
 
+* Edit: Added experimental support for a markdown editor based on [easymde](https://github.com/Ionaru/easy-markdown-editor)
+
 * **Discussion/QnA session with LLM** - The quality of this depends a lot on the model. Models with 7B+ parameters give really good results, provided the machine has a dedicated GPU. On CPU-only machines, models with 1-2B parameters respond fairly quickly, but the answers are mostly irrelevant, below the mark, and prone to heavy hallucination. So CPU-only machines can use Inquisitive mainly for organizing the knowledge base and searching efficiently within it, not for meaningful discussion/QnA. People can try out multiple models and see what works best for them. I tried the LLM with the following setups.
 
 * *Intel-based Mac (no dedicated GPU)* - Any model greater than 1.5B parameters was hardly usable. I had a relatively good experience with `deepseek-r1:1.5b` compared to other models in a similar range, but it was still not up to the mark. In such cases I'd prefer not to use discussion mode and just use `inquisitive` for searching and organizing the personal knowledge base.
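The easymde editor mentioned in the added line is a browser-side JavaScript widget, so one plausible way to show it from a Streamlit app is to render an HTML snippet through Streamlit's HTML component. The helper below is a hypothetical sketch, not the repository's code; the unpkg CDN URLs and the element id are assumptions.

```python
import html

# Assumed CDN locations for easymde's bundled assets.
EASYMDE_CSS = "https://unpkg.com/easymde/dist/easymde.min.css"
EASYMDE_JS = "https://unpkg.com/easymde/dist/easymde.min.js"

def easymde_html(initial_text: str = "") -> str:
    """Build an HTML fragment that mounts easymde on a textarea.

    Hypothetical helper for illustration; in a Streamlit app the returned
    string would be passed to st.components.v1.html(...).
    """
    return f"""
<link rel="stylesheet" href="{EASYMDE_CSS}">
<script src="{EASYMDE_JS}"></script>
<textarea id="note">{html.escape(initial_text)}</textarea>
<script>
  // EasyMDE is the global exposed by easymde.min.js.
  const editor = new EasyMDE({{ element: document.getElementById("note") }});
</script>
"""

snippet = easymde_html("# My note")
```

In the app this would be rendered with something like `st.components.v1.html(easymde_html(note_text), height=400)`; getting the edited text back into Python requires a bidirectional custom component, which is presumably why the support is labeled experimental.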
