Commit 7f3f8f6

Deployed 27db4e0 to 2.0 with MkDocs 1.6.0 and mike 2.1.3
1 parent 27db4e0 commit 7f3f8f6

File tree

5 files changed: +161 additions, -85 deletions


2.0/migration/index.html

Lines changed: 25 additions & 0 deletions
@@ -753,6 +753,15 @@
 </span>
 </a>
 
+</li>
+
+<li class="md-nav__item">
+<a href="#migration-from-custom-backend-versions" class="md-nav__link">
+<span class="md-ellipsis">
+Migration from Custom Backend Versions
+</span>
+</a>
+
 </li>
 
 </ul>
@@ -3569,6 +3578,15 @@
 </span>
 </a>
 
+</li>
+
+<li class="md-nav__item">
+<a href="#migration-from-custom-backend-versions" class="md-nav__link">
+<span class="md-ellipsis">
+Migration from Custom Backend Versions
+</span>
+</a>
+
 </li>
 
 </ul>
@@ -3712,6 +3730,13 @@ <h3 id="migration-from-llama-box">Migration from llama-box</h3>
 <p class="admonition-title">Note</p>
 <p>Distributed inference across multiple workers is currently not supported with custom inference backends.</p>
 </div>
+<h3 id="migration-from-custom-backend-versions">Migration from Custom Backend Versions</h3>
+<p>If you were using a custom backend version in GPUStack versions prior to v2.0.0, please note that those versions relied on Python virtual environments, which are <strong>no longer supported</strong> as of v2.0.0. All inference backends now run in containerized environments.</p>
+<p>To continue using your models, you’ll need to <strong>recreate them</strong> using one of the following approaches:</p>
+<p><strong>Option 1 - Use a Built-in Backend Version:</strong></p>
+<p>GPUStack v2.0.0+ provides multiple pre-configured versions of built-in inference backends. Recreate your model deployment and select the built-in backend version that best matches your model’s requirements.</p>
+<p><strong>Option 2 - Add a Custom Version to a Built-in Backend:</strong></p>
+<p>If none of the built-in versions meet your needs, you can extend a built-in inference backend by adding a custom version. For detailed instructions, refer to this guide: <a href="../user-guide/inference-backend-management/#example-add-a-custom-version-to-the-built-in-vllm-inference-backend">Add a Custom Version to the Built-in vLLM Inference Backend</a>.</p>
 
 
 
2.0/search/search_index.json

Lines changed: 1 addition & 1 deletion
Large diffs are not rendered by default.
