* ♻️ Refactor: modify APICall class signature to add method and request_body as attributes and modify the scripts accordingly
* 🐛 Fix: parse request body to make it compatible with requests
* 📝 Docs: update apicall config example
* 📝 Docs: update README with link to Dockerfiles and image version to run VueGen with Docker
* ♻️ Change chatbot class signature and make_api_request method from APICall class

  Define a dynamic_request_body argument on make_api_request to handle the chatbot component instead of the static request_body from the config. Also, adapt the chatbot class signature to these changes.

* ♻️ Refactor(streamlit_report.py): modify _generate_chatbot_content to handle a JSON file retrieved from the BKGR API
* ♻️ Refactor: set model as an optional parameter for the chatbot class and modify the _generate_chatbot_content function to handle both Ollama-style and standard API chatbots

  Also, fix a bug in the _format_text function to recognize the caption type

* 🩹 Apply black formatting
* 📝 Docs: add a detailed docstring for the _generate_chatbot_content function and add a chatbot case study with chatbot information to the README
* 📝 Docs: add links to configs and folders for the case studies
* 📝 Docs: update README to add info about folder structure, the `--quarto_checks` argument, and tinytex installation for PDF reports
* 🩹 Apply black formatting
README.md: 82 additions & 3 deletions
@@ -83,9 +83,18 @@ quarto check

> [!TIP]
> If quarto is not installed, you can download the command-line interface from the [Quarto website][quarto-cli] for your operating system.

For PDF reports, you need to have a LaTeX distribution installed. This can be done with quarto using the following command:

```bash
quarto install tinytex
```

> [!TIP]
> Also, you can add the `--quarto_checks` argument to the VueGen command to check and install the required dependencies automatically.
### Docker

If you prefer not to install VueGen on your system, a pre-configured Docker container is available. It includes all dependencies, ensuring a fully reproducible execution environment. See the [Execution section](#execution) for details on running VueGen with Docker. The official Docker images are available at [quay.io/dtu_biosustain_dsp/vuegen][vuegen-docker-quay]. The Dockerfiles to build the images are available [here][docker-folder].

### Nextflow and nf-core
@@ -94,7 +103,7 @@ VueGen is also available as a [nf-core][nfcore] module, customised for compatibi

## Execution

> [!IMPORTANT]
> Here we use the `Earth_microbiome_vuegen_demo_notebook` [directory][emp-dir] and the `Earth_microbiome_vuegen_demo_notebook.yaml` [configuration file][emp-config] as examples, which are available in the `docs/example_data` and `docs/example_config_files` folders, respectively. Make sure to clone this repository to access these contents, or use your own directory and configuration file.

Run VueGen using a directory with the following command:

> By default, the `streamlit_autorun` argument is set to False, but you can use it in case you want to automatically run the streamlit app.
### Folder structure

Your input directory must follow a **nested folder structure**, where first-level folders are treated as **sections** and second-level folders as **subsections**, containing the components (plots, tables, networks, Markdown text, and HTML files).

Here is an example layout:

```
report_folder/
├── section1/
│   └── subsection1/
│       ├── table.csv
│       ├── image1.png
│       └── chart.json
├── section2/
│   ├── subsection1/
│   │   ├── summary_table.xls
│   │   └── network_plot.graphml
│   └── subsection2/
│       ├── report.html
│       └── summary.md
```

> [!WARNING]
> VueGen currently requires each section to contain at least one subsection folder. Defining only sections (with no subsections) or using deeper nesting levels (i.e., sub-subsections) will result in errors. In upcoming releases, we plan to support more flexible directory structures.

The titles for sections, subsections, and components are extracted from the corresponding folder and file names, and afterward, users can add descriptions, captions, and other details to the configuration file. Component types are inferred from the file extensions and names. The order of sections, subsections, and components can be defined using numerical suffixes in folder and file names.
It's also possible to provide a configuration file instead of a directory:
If a configuration file is given, users can specify titles and descriptions for sections and subsections, as well as component paths and required attributes, such as file format and delimiter for dataframes, plot types, and other details.

The current report types supported by VueGen are:

- Streamlit
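To illustrate the shape of such a file, here is a minimal configuration sketch. The key names below are illustrative assumptions, not the exact schema; refer to the example files in `docs/example_config_files` for the authoritative format:

```yaml
# Hypothetical sketch of a VueGen configuration file
report:
  title: Example report
  description: A short description of the report.
sections:
  - title: Section 1
    subsections:
      - title: Subsection 1
        components:
          - file_path: section1/subsection1/table.csv
            delimiter: ","
```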
@@ -130,7 +168,7 @@ Instead of installing VueGen locally, you can run it directly from a Docker cont
@@ -201,6 +239,9 @@ This introductory case study uses a predefined directory with plots, dataframes,
🔗 [![Open in Colab][colab_badge]][colab_link_intro_demo]

> [!NOTE]
> The [configuration file][predef-dir-config] is available in the `docs/example_config_files` folder, and the [directory][predef-dir] with example data is in the `docs/example_data` folder.
**2. Earth Microbiome Project Data**

This advanced case study demonstrates the application of VueGen in a real-world scenario using data from the [Earth Microbiome Project (EMP)][emp]. The EMP is an initiative to characterize global microbial taxonomic and functional diversity. The notebook processes the EMP data, creates plots, dataframes, and other components, and organizes outputs within a directory to produce reports. Report content and structure can be adapted by modifying the configuration file. Each report consists of sections on exploratory data analysis, metagenomics, and network analysis.

@@ -209,6 +250,36 @@ This advanced case study demonstrates the application of VueGen in a real-world

> [!NOTE]
> The EMP case study is available online as [HTML][emp-html-demo] and [Streamlit][emp-st-demo] reports.
> The [configuration file][emp-config] is available in the `docs/example_config_files` folder, and the [directory][emp-dir] with example data is in the `docs/example_data` folder.
**3. ChatBot Component**

This case study highlights VueGen’s capability to embed a chatbot component into a report subsection, enabling interactive conversations inside the report.

Two API modes are supported:

- **Ollama-style streaming chat completion**

  If a `model` parameter is specified in the config file, VueGen assumes the chatbot is using Ollama’s [/api/chat endpoint][ollama_chat]. Messages are handled as chat history, and the assistant responses are streamed in real time for a smooth and responsive experience. This mode supports LLMs such as `llama3`, `deepseek`, or `mistral`.

  > [!TIP]
  > See [Ollama’s website][ollama] for more details.
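The chat-history handling in this mode can be sketched as follows. `build_chat_payload` is a hypothetical helper, not part of VueGen; it only mirrors the request body shape documented for Ollama's `/api/chat` endpoint:

```python
def build_chat_payload(model: str, history: list[dict], prompt: str) -> dict:
    """Append the new user prompt to the chat history and build an /api/chat request body."""
    messages = history + [{"role": "user", "content": prompt}]
    return {"model": model, "messages": messages, "stream": True}


# Streaming the reply would then look roughly like this (requires a running
# Ollama server on the default port; shown for illustration only):
#
# import json, requests
# with requests.post("http://localhost:11434/api/chat",
#                    json=build_chat_payload("llama3", [], "Hello"),
#                    stream=True) as r:
#     for line in r.iter_lines():
#         if line:
#             chunk = json.loads(line)
#             print(chunk["message"]["content"], end="")
```

Because the full message list is resent on every turn, the assistant keeps the conversational context across turns.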
- **Standard prompt-response API**

  If no `model` is provided, VueGen uses a simpler prompt-response flow. A single prompt is sent to an endpoint, and a structured JSON object is expected in return. Currently, the response can include:

  - `text`: the main textual reply
  - `links`: a list of source URLs (optional)
  - `HTML content`: an HTML snippet with a Pyvis network visualization (optional)

This response structure is currently customized for an internal knowledge graph assistant, but VueGen is being actively developed to support more flexible and general-purpose response formats in future releases.

> [!NOTE]
> You can see a [configuration file example][config-chatbot] for the chatbot component in the `docs/example_config_files` folder.
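A response in the prompt-response mode could be unpacked along these lines. The helper and the `"html"` key name are illustrative assumptions (the exact field names depend on the endpoint), following the three fields listed above:

```python
def unpack_chatbot_response(response: dict):
    """Split a prompt-response JSON object into (text, links, html)."""
    text = response.get("text", "")      # main textual reply
    links = response.get("links") or []  # optional list of source URLs
    html = response.get("html")          # assumed key for the optional Pyvis HTML snippet
    return text, links, html
```

Missing optional fields fall back to an empty list or `None`, so the report component can render whatever subset the endpoint returns.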
## Web application deployment
@@ -261,13 +332,17 @@ We appreciate your feedback! If you have any comments, suggestions, or run into