
Commit bda2c63

reluctantfuturist authored and iamemilio committed

docs: update documentation links (llamastack#3459)

# What does this PR do?
* Updates documentation links from readthedocs to llamastack.github.io

## Test Plan
* Manual testing

1 parent b569164 · commit bda2c63

21 files changed: +975 −971 lines

.github/ISSUE_TEMPLATE/config.yml
Lines changed: 2 additions & 2 deletions

@@ -2,10 +2,10 @@ blank_issues_enabled: false
 contact_links:
   - name: Have you read the docs?
-    url: https://llama-stack.readthedocs.io/en/latest/index.html
+    url: https://llamastack.github.io/latest/providers/external/index.html
     about: Much help can be found in the docs
   - name: Start a discussion
-    url: https://github.com/meta-llama/llama-stack/discussions/new
+    url: https://github.com/llamastack/llama-stack/discussions/new/
     about: Start a discussion on a topic
   - name: Chat on Discord
     url: https://discord.gg/llama-stack

CONTRIBUTING.md
Lines changed: 2 additions & 2 deletions

@@ -187,7 +187,7 @@ Note that the provider "description" field will be used to generate the provider
 ### Building the Documentation

-If you are making changes to the documentation at [https://llama-stack.readthedocs.io/en/latest/](https://llama-stack.readthedocs.io/en/latest/), you can use the following command to build the documentation and preview your changes. You will need [Sphinx](https://www.sphinx-doc.org/en/master/) and the readthedocs theme.
+If you are making changes to the documentation at [https://llamastack.github.io/latest/](https://llamastack.github.io/latest/), you can use the following command to build the documentation and preview your changes. You will need [Sphinx](https://www.sphinx-doc.org/en/master/) and the readthedocs theme.

 ```bash
 # This rebuilds the documentation pages.
@@ -205,4 +205,4 @@ If you modify or add new API endpoints, update the API documentation accordingly
 uv run ./docs/openapi_generator/run_openapi_generator.sh
 ```

-The generated API documentation will be available in `docs/_static/`. Make sure to review the changes before committing.
+The generated API documentation will be available in `docs/_static/`. Make sure to review the changes before committing.

README.md
Lines changed: 11 additions & 11 deletions

@@ -7,7 +7,7 @@
 [![Unit Tests](https://github.com/meta-llama/llama-stack/actions/workflows/unit-tests.yml/badge.svg?branch=main)](https://github.com/meta-llama/llama-stack/actions/workflows/unit-tests.yml?query=branch%3Amain)
 [![Integration Tests](https://github.com/meta-llama/llama-stack/actions/workflows/integration-tests.yml/badge.svg?branch=main)](https://github.com/meta-llama/llama-stack/actions/workflows/integration-tests.yml?query=branch%3Amain)

-[**Quick Start**](https://llama-stack.readthedocs.io/en/latest/getting_started/index.html) | [**Documentation**](https://llama-stack.readthedocs.io/en/latest/index.html) | [**Colab Notebook**](./docs/getting_started.ipynb) | [**Discord**](https://discord.gg/llama-stack)
+[**Quick Start**](https://llamastack.github.io/latest/getting_started/index.html) | [**Documentation**](https://llamastack.github.io/latest/index.html) | [**Colab Notebook**](./docs/getting_started.ipynb) | [**Discord**](https://discord.gg/llama-stack)

 ### ✨🎉 Llama 4 Support 🎉✨

@@ -109,7 +109,7 @@ By reducing friction and complexity, Llama Stack empowers developers to focus on

 ### API Providers
 Here is a list of the various API providers and available distributions that can help developers get started easily with Llama Stack.
-Please checkout for [full list](https://llama-stack.readthedocs.io/en/latest/providers/index.html)
+Please checkout for [full list](https://llamastack.github.io/latest/providers/index.html)

 | API Provider Builder | Environments | Agents | Inference | VectorIO | Safety | Telemetry | Post Training | Eval | DatasetIO |
 |:--------------------:|:------------:|:------:|:---------:|:--------:|:------:|:---------:|:-------------:|:----:|:--------:|
@@ -140,7 +140,7 @@ Please checkout for [full list](https://llama-stack.readthedocs.io/en/latest/pro
 | NVIDIA NEMO | Hosted | ||| | ||||
 | NVIDIA | Hosted | | | | | ||||

-> **Note**: Additional providers are available through external packages. See [External Providers](https://llama-stack.readthedocs.io/en/latest/providers/external.html) documentation.
+> **Note**: Additional providers are available through external packages. See [External Providers](https://llamastack.github.io/latest/providers/external/index.html) documentation.

 ### Distributions

@@ -149,24 +149,24 @@ Here are some of the distributions we support:

 | **Distribution** | **Llama Stack Docker** | Start This Distribution |
 |:---------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------:|
-| Starter Distribution | [llamastack/distribution-starter](https://hub.docker.com/repository/docker/llamastack/distribution-starter/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/distributions/self_hosted_distro/starter.html) |
-| Meta Reference | [llamastack/distribution-meta-reference-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/distributions/self_hosted_distro/meta-reference-gpu.html) |
+| Starter Distribution | [llamastack/distribution-starter](https://hub.docker.com/repository/docker/llamastack/distribution-starter/general) | [Guide](https://llamastack.github.io/latest/distributions/self_hosted_distro/starter.html) |
+| Meta Reference | [llamastack/distribution-meta-reference-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-gpu/general) | [Guide](https://llamastack.github.io/latest/distributions/self_hosted_distro/meta-reference-gpu.html) |
 | PostgreSQL | [llamastack/distribution-postgres-demo](https://hub.docker.com/repository/docker/llamastack/distribution-postgres-demo/general) | |

 ### Documentation

-Please checkout our [Documentation](https://llama-stack.readthedocs.io/en/latest/index.html) page for more details.
+Please checkout our [Documentation](https://llamastack.github.io/latest/index.html) page for more details.

 * CLI references
-  * [llama (server-side) CLI Reference](https://llama-stack.readthedocs.io/en/latest/references/llama_cli_reference/index.html): Guide for using the `llama` CLI to work with Llama models (download, study prompts), and building/starting a Llama Stack distribution.
-  * [llama (client-side) CLI Reference](https://llama-stack.readthedocs.io/en/latest/references/llama_stack_client_cli_reference.html): Guide for using the `llama-stack-client` CLI, which allows you to query information about the distribution.
+  * [llama (server-side) CLI Reference](https://llamastack.github.io/latest/references/llama_cli_reference/index.html): Guide for using the `llama` CLI to work with Llama models (download, study prompts), and building/starting a Llama Stack distribution.
+  * [llama (client-side) CLI Reference](https://llamastack.github.io/latest/references/llama_stack_client_cli_reference.html): Guide for using the `llama-stack-client` CLI, which allows you to query information about the distribution.
 * Getting Started
-  * [Quick guide to start a Llama Stack server](https://llama-stack.readthedocs.io/en/latest/getting_started/index.html).
+  * [Quick guide to start a Llama Stack server](https://llamastack.github.io/latest/getting_started/index.html).
   * [Jupyter notebook](./docs/getting_started.ipynb) to walk-through how to use simple text and vision inference llama_stack_client APIs
   * The complete Llama Stack lesson [Colab notebook](https://colab.research.google.com/drive/1dtVmxotBsI4cGZQNsJRYPrLiDeT0Wnwt) of the new [Llama 3.2 course on Deeplearning.ai](https://learn.deeplearning.ai/courses/introducing-multimodal-llama-3-2/lesson/8/llama-stack).
   * A [Zero-to-Hero Guide](https://github.com/meta-llama/llama-stack/tree/main/docs/zero_to_hero_guide) that guide you through all the key components of llama stack with code samples.
 * [Contributing](CONTRIBUTING.md)
-  * [Adding a new API Provider](https://llama-stack.readthedocs.io/en/latest/contributing/new_api_provider.html) to walk-through how to add a new API provider.
+  * [Adding a new API Provider](https://llamastack.github.io/latest/contributing/new_api_provider.html) to walk-through how to add a new API provider.

 ### Llama Stack Client SDKs

@@ -193,4 +193,4 @@ Thanks to all of our amazing contributors!

 <a href="https://github.com/meta-llama/llama-stack/graphs/contributors">
   <img src="https://contrib.rocks/image?repo=meta-llama/llama-stack" />
-</a>
+</a>

docs/README.md
Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 # Llama Stack Documentation

-Here's a collection of comprehensive guides, examples, and resources for building AI applications with Llama Stack. For the complete documentation, visit our [ReadTheDocs page](https://llama-stack.readthedocs.io/en/latest/index.html).
+Here's a collection of comprehensive guides, examples, and resources for building AI applications with Llama Stack. For the complete documentation, visit our [Github page](https://llamastack.github.io/latest/getting_started/index.html).

 ## Render locally
docs/getting_started.ipynb
Lines changed: 24 additions & 25 deletions

@@ -11,11 +11,11 @@
 "\n",
 "# Llama Stack - Building AI Applications\n",
 "\n",
-"<img src=\"https://llama-stack.readthedocs.io/en/latest/_images/llama-stack.png\" alt=\"drawing\" width=\"500\"/>\n",
+"<img src=\"https://llamastack.github.io/latest/_images/llama-stack.png\" alt=\"drawing\" width=\"500\"/>\n",
 "\n",
 "[Llama Stack](https://github.com/meta-llama/llama-stack) defines and standardizes the set of core building blocks needed to bring generative AI applications to market. These building blocks are presented in the form of interoperable APIs with a broad set of Service Providers providing their implementations.\n",
 "\n",
-"Read more about the project here: https://llama-stack.readthedocs.io/en/latest/index.html\n",
+"Read more about the project here: https://llamastack.github.io/latest/getting_started/index.html\n",
 "\n",
 "In this guide, we will showcase how you can build LLM-powered agentic applications using Llama Stack.\n",
 "\n",
@@ -75,7 +75,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 1,
+"execution_count": null,
 "id": "J2kGed0R5PSf",
 "metadata": {
 "colab": {
@@ -113,17 +113,17 @@
 }
 ],
 "source": [
-"import os \n",
+"import os\n",
 "import subprocess\n",
 "import time\n",
 "\n",
-"!pip install uv \n",
+"!pip install uv\n",
 "\n",
 "if \"UV_SYSTEM_PYTHON\" in os.environ:\n",
 "    del os.environ[\"UV_SYSTEM_PYTHON\"]\n",
 "\n",
 "# this command installs all the dependencies needed for the llama stack server with the together inference provider\n",
-"!uv run --with llama-stack llama stack build --distro together --image-type venv \n",
+"!uv run --with llama-stack llama stack build --distro together --image-type venv\n",
 "\n",
 "def run_llama_stack_server_background():\n",
 "    log_file = open(\"llama_stack_server.log\", \"w\")\n",
@@ -134,19 +134,19 @@
 "        stderr=log_file,\n",
 "        text=True\n",
 "    )\n",
-"    \n",
+"\n",
 "    print(f\"Starting Llama Stack server with PID: {process.pid}\")\n",
 "    return process\n",
 "\n",
 "def wait_for_server_to_start():\n",
 "    import requests\n",
 "    from requests.exceptions import ConnectionError\n",
 "    import time\n",
-"    \n",
+"\n",
 "    url = \"http://0.0.0.0:8321/v1/health\"\n",
 "    max_retries = 30\n",
 "    retry_interval = 1\n",
-"    \n",
+"\n",
 "    print(\"Waiting for server to start\", end=\"\")\n",
 "    for _ in range(max_retries):\n",
 "        try:\n",
@@ -157,12 +157,12 @@
 "        except ConnectionError:\n",
 "            print(\".\", end=\"\", flush=True)\n",
 "            time.sleep(retry_interval)\n",
-"    \n",
+"\n",
 "    print(\"\\nServer failed to start after\", max_retries * retry_interval, \"seconds\")\n",
 "    return False\n",
 "\n",
 "\n",
-"# use this helper if needed to kill the server \n",
+"# use this helper if needed to kill the server\n",
 "def kill_llama_stack_server():\n",
 "    # Kill any existing llama stack server processes\n",
 "    os.system(\"ps aux | grep -v grep | grep llama_stack.core.server.server | awk '{print $2}' | xargs kill -9\")\n"
@@ -242,7 +242,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 4,
+"execution_count": null,
 "id": "E1UFuJC570Tk",
 "metadata": {
 "colab": {
@@ -407,9 +407,9 @@
 "from llama_stack_client import LlamaStackClient\n",
 "\n",
 "client = LlamaStackClient(\n",
-"    base_url=\"http://0.0.0.0:8321\", \n",
+"    base_url=\"http://0.0.0.0:8321\",\n",
 "    provider_data = {\n",
-"        \"tavily_search_api_key\": os.environ['TAVILY_SEARCH_API_KEY'], \n",
+"        \"tavily_search_api_key\": os.environ['TAVILY_SEARCH_API_KEY'],\n",
 "        \"together_api_key\": os.environ['TOGETHER_API_KEY']\n",
 "    }\n",
 ")"
@@ -1177,7 +1177,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 13,
+"execution_count": null,
 "id": "WS8Gu5b0APHs",
 "metadata": {
 "colab": {
@@ -1207,7 +1207,7 @@
 "from termcolor import cprint\n",
 "\n",
 "agent = Agent(\n",
-"    client, \n",
+"    client,\n",
 "    model=\"meta-llama/Llama-3.3-70B-Instruct\",\n",
 "    instructions=\"You are a helpful assistant. Use websearch tool to help answer questions.\",\n",
 "    tools=[\"builtin::websearch\"],\n",
@@ -1249,7 +1249,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 14,
+"execution_count": null,
 "id": "GvLWltzZCNkg",
 "metadata": {
 "colab": {
@@ -1367,7 +1367,7 @@
 "    chunk_size_in_tokens=512,\n",
 ")\n",
 "rag_agent = Agent(\n",
-"    client, \n",
+"    client,\n",
 "    model=model_id,\n",
 "    instructions=\"You are a helpful assistant\",\n",
 "    tools = [\n",
@@ -2154,7 +2154,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 21,
+"execution_count": null,
 "id": "vttLbj_YO01f",
 "metadata": {
 "colab": {
@@ -2217,7 +2217,7 @@
 "from termcolor import cprint\n",
 "\n",
 "agent = Agent(\n",
-"    client, \n",
+"    client,\n",
 "    model=model_id,\n",
 "    instructions=\"You are a helpful assistant\",\n",
 "    tools=[\"mcp::filesystem\"],\n",
@@ -2283,7 +2283,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 22,
+"execution_count": null,
 "id": "4iCO59kP20Zs",
 "metadata": {
 "colab": {
@@ -2317,7 +2317,7 @@
 "from llama_stack_client import Agent, AgentEventLogger\n",
 "\n",
 "agent = Agent(\n",
-"    client, \n",
+"    client,\n",
 "    model=\"meta-llama/Llama-3.3-70B-Instruct\",\n",
 "    instructions=\"You are a helpful assistant. Use web_search tool to answer the questions.\",\n",
 "    tools=[\"builtin::websearch\"],\n",
@@ -2846,7 +2846,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 29,
+"execution_count": null,
 "id": "44e05e16",
 "metadata": {},
 "outputs": [
@@ -2880,8 +2880,7 @@
 "!curl -O https://raw.githubusercontent.com/meta-llama/llama-models/refs/heads/main/Llama_Repo.jpeg\n",
 "\n",
 "from IPython.display import Image\n",
-"Image(\"Llama_Repo.jpeg\", width=256, height=256)\n",
-"\n"
+"Image(\"Llama_Repo.jpeg\", width=256, height=256)\n"
 ]
 },
 {
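Aside from the link updates, the notebook hunks above touch a pair of server helpers whose overall pattern is worth noting: launch the Llama Stack server as a background subprocess with its output redirected to a log file, then poll a health endpoint until it responds. A minimal standard-library sketch of that pattern follows; the function names, command, URL, and retry counts here are illustrative placeholders, not the notebook's exact code (which uses `requests` and hardcodes `http://0.0.0.0:8321/v1/health`):

```python
import subprocess
import time
import urllib.request
from urllib.error import URLError


def run_server_background(cmd, log_path="server.log"):
    """Launch a long-running command as a background subprocess, logging to a file."""
    log_file = open(log_path, "w")
    # The process keeps running after this function returns; stdout/stderr go to the log.
    return subprocess.Popen(cmd, stdout=log_file, stderr=log_file, text=True)


def wait_for_server(url, max_retries=30, retry_interval=1.0):
    """Poll `url` until it returns HTTP 200; give up after max_retries attempts."""
    for _ in range(max_retries):
        try:
            if urllib.request.urlopen(url, timeout=2).status == 200:
                return True
        except (URLError, OSError):
            # Server not accepting connections yet; back off and retry.
            time.sleep(retry_interval)
    return False
```

The retry loop is the important part: immediately after `Popen` returns, the server process exists but is usually not yet listening, so a bounded poll against the health endpoint is what separates "started" from "ready".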
