
Commit 89c17e4

v1.5.0
Signed-off-by: bigcat88 <[email protected]>
1 parent d830a6f commit 89c17e4

File tree

- CHANGELOG.md
- Dockerfile
- README.md
- appinfo/info.xml
- ex_app/lib/main.py

5 files changed: +36 −4 lines changed

CHANGELOG.md

Lines changed: 16 additions & 0 deletions
@@ -2,6 +2,22 @@
 
 All notable changes to this project will be documented in this file.
 
+## [1.5.0 - 2025-05-19]
+
+Visionatrix service has been updated from version `2.3.0` to `2.4.1`.
+
+### Added
+
+- New **ACE-Step Audio music flow**.
+- Enabling/disabling ComfyUI `Smart Memory` should now be done from the **Settings -> Workers** UI page. `Smart Memory` is now enabled **by default**.
+- Enabling/disabling the ComfyUI `save metadata` option should now be done from the **Settings -> Settings** UI page. `save metadata` is now disabled **by default**.
+- Changing ComfyUI **cache** settings (`lru`, `none`) should now be done from the **Settings -> Workers** UI page.
+- UI option to enable processing `VAE` on the CPU, located on the **Settings -> Workers** page.
+
+### Changed
+
+- CUDA: Updated from `12.6` to `12.8`.
+
 ## [1.4.1 - 2025-05-08]
 
 ### Added

Dockerfile

Lines changed: 1 addition & 1 deletion
@@ -59,7 +59,7 @@ RUN --mount=type=cache,target=/root/.cache/pip \
     elif [ "$BUILD_TYPE" = "cpu" ]; then \
         venv/bin/python -m pip install torch==2.7.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu; \
     else \
-        venv/bin/python -m pip install torch==2.7.0 torchvision torchaudio; \
+        venv/bin/python -m pip install torch==2.7.0 torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu128; \
     fi
 
 COPY ex_app/lib/exclude_nodes.py ex_app/lib/exclude_flows.py /ex_app/lib/
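
The default (non-CPU) build now pins the PyTorch wheels to the CUDA 12.8 index, matching the CUDA `12.6` → `12.8` bump in the changelog. A minimal build sketch, assuming `BUILD_TYPE` is a Docker build argument and that any value other than `cpu` falls through to the CUDA branch; the argument name comes from the hunk above, while the accepted values and the image tag are illustrative assumptions:

```sh
# Hypothetical invocations: BUILD_TYPE is taken from the hunk above; the non-"cpu" value
# and the "visionatrix:local" tag are illustrative assumptions, not taken from this repo.
docker build --build-arg BUILD_TYPE=cuda -t visionatrix:local .      # non-"cpu" -> else branch -> cu128 wheels
docker build --build-arg BUILD_TYPE=cpu  -t visionatrix:local-cpu .  # CPU-only wheels from the cpu index
```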

README.md

Lines changed: 16 additions & 0 deletions
@@ -63,6 +63,22 @@ We offer two types of installation:
 
 For more information, please refer to the original [Visionatrix documentation](https://visionatrix.github.io/VixFlowsDocs/).
 
+## Freeing GPU Memory After Execution
+
+By default, models remain resident in GPU memory after each task. To control this behavior, we’ve added two options on the **Settings → Workers** page:
+
+1. **Smart Memory** (enable/disable)
+2. **Cache Type** (select caching strategy)
+
+If you need GPU memory to be released after task execution, we recommend:
+
+1. **For systems with > 64 GB RAM**
+   Disable **Smart Memory**. This will offload GPU memory to your system RAM, freeing up VRAM.
+
+2. **For systems with ≤ 64 GB RAM**
+   a. Disable **Smart Memory**.
+   b. Set **Cache Type** to **None**.
+
 ## Questions
 
 Feel free to ask questions or report issues.
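
The new "Freeing GPU Memory After Execution" section above is entirely UI-driven, so there is nothing to run inside Nextcloud itself. If you want to confirm that VRAM is actually released after a task, one generic check on the Docker host (not part of this commit, and assuming an NVIDIA GPU with the driver's `nvidia-smi` utility installed) is to watch the memory counters while a flow finishes:

```sh
# Generic verification on the host running the Visionatrix container: refresh every second
# and watch "Memory-Usage" drop after a task completes once Smart Memory is disabled
# (and, on low-RAM systems, Cache Type is set to None).
watch -n 1 nvidia-smi
```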

appinfo/info.xml

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@ Share your custom workflows, big or small. For guidance on creating AI workflows
 
 📚 For more details, visit the [repository](https://github.com/cloud-py-api/visionatrix) readme.
 ]]></description>
-	<version>1.4.1</version>
+	<version>1.5.0</version>
 	<licence>MIT</licence>
 	<author mail="[email protected]" homepage="https://github.com/andrey18106">Andrey Borysenko</author>
 	<author mail="[email protected]" homepage="https://github.com/bigcat88">Alexander Piskun</author>
@@ -56,7 +56,7 @@ Share your custom workflows, big or small. For guidance on creating AI workflows
 	<docker-install>
 		<registry>ghcr.io</registry>
 		<image>cloud-py-api/visionatrix</image>
-		<image-tag>2.3.0</image-tag>
+		<image-tag>2.4.1</image-tag>
 	</docker-install>
 	<routes>
 		<route>
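
The `<docker-install>` block above tells Nextcloud's AppAPI which backend image to deploy, so the `2.3.0` → `2.4.1` bump means a fresh pull of the Visionatrix image. For a manual check, the fully qualified reference assembled from the registry, image, and tag in the hunk would be (AppAPI normally handles this pull for you, so this is only a verification sketch):

```sh
# Manual pull of the backend image referenced in appinfo/info.xml
# (registry + image + image-tag from the hunk above; AppAPI normally does this automatically).
docker pull ghcr.io/cloud-py-api/visionatrix:2.4.1
```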

ex_app/lib/main.py

Lines changed: 1 addition & 1 deletion
@@ -393,7 +393,7 @@ def start_visionatrix() -> None:
     # Run worker in background and redirect output to worker.log
     worker_log = open("worker.log", "wb")
     subprocess.Popen(
-        [visionatrix_python, "-m", "visionatrix", "run", "--mode=WORKER", "--disable-smart-memory"],
+        [visionatrix_python, "-m", "visionatrix", "run", "--mode=WORKER"],
        stdout=worker_log,
        stderr=subprocess.STDOUT,
    )
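
With this change the worker no longer hardcodes `--disable-smart-memory`; smart memory is now controlled from the **Settings -> Workers** UI instead. A minimal sketch of the old behavior reproduced by hand, assuming `visionatrix run` still accepts the flag removed here (it did in `2.3.0`; whether `2.4.1` keeps it is an assumption):

```sh
# Pre-1.5.0 behavior reproduced manually: start the worker with smart memory disabled.
# "venv/bin/python" stands in for the `visionatrix_python` interpreter used in main.py;
# --disable-smart-memory still being accepted by 2.4.1 is an assumption (2.3.0 accepted it).
venv/bin/python -m visionatrix run --mode=WORKER --disable-smart-memory
```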
