
Commit 4fa76f8 (1 parent: a04188a)

📝 Docs: add detailed docstring for the `_generate_chatbot_content` function and add a chatbot case study to the README with chatbot information

File tree

2 files changed (+48 / −4 lines)


README.md

Lines changed: 32 additions & 0 deletions
@@ -210,6 +210,35 @@ This advanced case study demonstrates the application of VueGen in a real-world
> [!NOTE]
> The EMP case study is available online as [HTML][emp-html-demo] and [Streamlit][emp-st-demo] reports.

**3. ChatBot Component**

This case study highlights VueGen’s capability to embed a chatbot component into a report subsection,
enabling interactive conversations inside the report.

Two API modes are supported:
- **Ollama-style streaming chat completion**

  If a `model` parameter is specified in the config file, VueGen assumes the chatbot is using Ollama’s
  [/api/chat endpoint][ollama_chat]. Messages are handled as chat history, and the assistant’s responses
  are streamed in real time for a smooth and responsive experience. This mode supports LLMs such as
  `llama3`, `deepseek`, or `mistral`.

  > [!TIP]
  > See [Ollama’s website][ollama] for more details.
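As a rough illustration of this streaming mode (not VueGen’s actual implementation), the sketch below assembles a reply from Ollama-style newline-delimited JSON chunks. The helper names are hypothetical; the request and chunk shapes follow Ollama’s `/api/chat` documentation, and `api_url` stands in for the endpoint taken from the config file:

```python
import json
import urllib.request


def collect_stream(lines):
    """Assemble the assistant's reply from Ollama-style streaming output:
    one JSON object per line, each carrying a piece of message.content."""
    parts = []
    for raw in lines:
        if not raw:
            continue
        chunk = json.loads(raw)
        parts.append(chunk.get("message", {}).get("content", ""))
        if chunk.get("done"):  # Ollama marks the final chunk with done=true
            break
    return "".join(parts)


def chat(api_url, model, history, prompt):
    """POST the chat history plus the new user prompt and stream the answer.
    `api_url` would come from the config file, e.g. a local Ollama server's
    /api/chat endpoint."""
    messages = history + [{"role": "user", "content": prompt}]
    payload = json.dumps({"model": model, "messages": messages, "stream": True}).encode()
    req = urllib.request.Request(
        api_url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return collect_stream(line.decode("utf-8") for line in resp)
```

In a Streamlit report, the chunks would typically be written incrementally (e.g. with `st.write_stream`) instead of being joined at the end.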
- **Standard prompt-response API**

  If no `model` is provided, VueGen uses a simpler prompt-response flow. A single prompt is sent to an
  endpoint, and a structured JSON object is expected in return. Currently, the response can include:

  - `text`: the main textual reply
  - `links`: a list of source URLs (optional)
  - `HTML content`: an HTML snippet with a Pyvis network visualization (optional)

  This response structure is currently customized for an internal knowledge graph assistant, but VueGen
  is being actively developed to support more flexible and general-purpose response formats in future
  releases.
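The standard mode’s response handling can be sketched as a small dispatcher. The field names follow the list above; `html_content` as the key for the HTML snippet is an assumption for illustration, not VueGen’s actual schema:

```python
def render_plan(response):
    """Map a standard-mode JSON response onto an ordered list of
    (kind, value) render steps: the text reply first, then any
    source links, then the optional HTML snippet."""
    plan = [("text", response.get("text", ""))]
    for url in response.get("links") or []:
        plan.append(("link", url))
    html = response.get("html_content")  # assumed key for the Pyvis HTML snippet
    if html:
        plan.append(("html", html))
    return plan
```

Each step would then map to a Streamlit call, e.g. markdown rendering for the text and links and an embedded HTML component for the Pyvis snippet.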
> [!NOTE]
> You can see a [configuration file example][config-chatbot] for the chatbot component in the
> `docs/example_config_files` folder.
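For orientation only, a chatbot entry in a VueGen config file might look roughly like the sketch below. The key names are illustrative guesses based on the fields mentioned in this commit (title, model, API endpoint, headers, caption); the linked `Chatbot_example_config.yaml` is the authoritative example.

```yaml
# Illustrative sketch only — key names are assumptions, not VueGen's schema.
sections:
  - title: Chatbot section
    subsections:
      - title: Ollama chatbot
        components:
          - component_type: chatbot
            title: Llama3 assistant
            api_url: http://localhost:11434/api/chat  # Ollama chat endpoint
            model: llama3  # presence of `model` selects the streaming mode
            caption: Ask questions about the report
```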
## Web application deployment

Once a Streamlit report is generated, it can be deployed as a web application to make it accessible online. There are multiple ways to achieve this:
@@ -269,6 +298,9 @@ We appreciate your feedback! If you have any comments, suggestions, or run into
[ci-docs]: https://github.com/Multiomics-Analytics-Group/vuegen/actions/workflows/docs.yml
[emp-html-demo]: https://multiomics-analytics-group.github.io/vuegen/
[emp-st-demo]: https://earth-microbiome-vuegen-demo.streamlit.app/
[ollama_chat]: https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-chat-completion
[ollama]: https://ollama.com/
[config-chatbot]: https://github.com/Multiomics-Analytics-Group/vuegen/blob/main/docs/example_config_files/Chatbot_example_config.yaml
[issues]: https://github.com/Multiomics-Analytics-Group/vuegen/issues
[pulls]: https://github.com/Multiomics-Analytics-Group/vuegen/pulls
[vuegen-preprint]: https://doi.org/10.1101/2025.03.05.641152

src/vuegen/streamlit_reportview.py

Lines changed: 16 additions & 4 deletions
@@ -758,7 +758,7 @@ def _generate_markdown_content(self, markdown) -> List[str]:

  def _generate_html_content(self, html) -> List[str]:
      """
-     Generate content for an HTML component in a Streamlit app.
+     Generate content for an HTML component.

      Parameters
      ----------
@@ -818,7 +818,8 @@ def _generate_html_content(self, html) -> List[str]:

  def _generate_apicall_content(self, apicall) -> List[str]:
      """
-     Generate content for a Markdown component.
+     Generate content for an API component. This method handles the API call and formats
+     the response for display in the Streamlit app.

      Parameters
      ----------
@@ -862,12 +863,23 @@ def _generate_apicall_content(self, apicall) -> List[str]:

  def _generate_chatbot_content(self, chatbot) -> List[str]:
      """
-     Generate content for a ChatBot component, supporting both standard and Ollama-style APIs.
+     Generate content to render a ChatBot component, supporting standard and Ollama-style
+     streaming APIs.
+
+     This method builds and returns a list of strings, which are later executed to create
+     the chatbot interface in a Streamlit app. It includes user input handling, API
+     interaction logic, response parsing, and conditional rendering of text, source links,
+     and HTML subgraphs.
+
+     The method distinguishes between two chatbot modes:
+     - **Ollama-style streaming API**: identified by the presence of `chatbot.model`. Uses
+       streaming JSON chunks from the server to render the response in real time.
+     - **Standard API**: assumes a simple POST request with a prompt and a full JSON
+       response containing text and other optional fields such as links and HTML graphs.

      Parameters
      ----------
      chatbot : ChatBot
-         The ChatBot component to generate content for.
+         The ChatBot component to generate content for, containing configuration such as
+         title, model, API endpoint, headers, and caption.

      Returns
      -------
