
Commit fb587fe

All tips and important blocks to Starlight built-ins (#19885)
* All tips and important blocks to Starlight built-ins
* These all need to be MDX for the Aside tag to work.
1 parent 7513a19 commit fb587fe

File tree

29 files changed: +124 −84 lines


docs/docs/getting_started/concepts.md renamed to docs/docs/getting_started/concepts.mdx

Lines changed: 5 additions & 4 deletions
@@ -62,7 +62,8 @@ A query engine is an end-to-end flow that allows you to ask questions over your
 [**Chat Engines**](/python/framework/module_guides/deploying/chat_engines):
 A chat engine is an end-to-end flow for having a conversation with your data (multiple back-and-forth instead of a single question-and-answer).

-!!! tip
-    * Tell me how to [customize things](/python/framework/getting_started/faq)
-    * Continue learning with our [understanding LlamaIndex](/python/framework/understanding) guide
-    * Ready to dig deep? Check out the [component guides](/python/framework/module_guides)
+<Aside type="tip">
+* Tell me how to [customize things](/python/framework/getting_started/faq)
+* Continue learning with our [understanding LlamaIndex](/python/framework/understanding) guide
+* Ready to dig deep? Check out the [component guides](/python/framework/module_guides)
+</Aside>
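The hunks in this commit apply a purely mechanical rewrite: each MkDocs `!!! tip` admonition becomes a Starlight `<Aside type="tip">` … `</Aside>` block, with the body's four-space indent stripped. A hypothetical sketch of that transformation in Python (the commit does not say whether it was scripted; the function name and the generic type handling are illustrative):

```python
import re


def convert_admonitions(text: str) -> str:
    """Rewrite MkDocs '!!! kind' admonitions as Starlight <Aside> blocks.

    Assumes the admonition body is the run of 4-space-indented lines
    immediately after the marker, as in the diffs above. Non-"tip" kinds
    (e.g. "important") may need remapping to a Starlight built-in type;
    this sketch passes the kind through unchanged.
    """
    lines = text.splitlines()
    out = []
    i = 0
    while i < len(lines):
        m = re.match(r"^!!! (\w+)\s*$", lines[i])
        if m:
            out.append(f'<Aside type="{m.group(1)}">')
            i += 1
            # Collect the indented body and strip its leading 4 spaces.
            while i < len(lines) and lines[i].startswith("    "):
                out.append(lines[i][4:])
                i += 1
            out.append("</Aside>")
        else:
            out.append(lines[i])
            i += 1
    return "\n".join(out)
```

Because `<Aside>` is a JSX component, the output is only valid once the file is renamed from `.md` to `.mdx`, which is exactly what every file in this commit does.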

docs/docs/getting_started/faq.md renamed to docs/docs/getting_started/faq.mdx

Lines changed: 3 additions & 2 deletions
@@ -4,8 +4,9 @@ sidebar:
 ---
 # Frequently Asked Questions (FAQ)

-!!! tip
-    If you haven't already, [install LlamaIndex](/python/framework/getting_started/installation) and complete the [starter tutorial](/python/framework/getting_started/starter_example). If you run into terms you don't recognize, check out the [high-level concepts](/python/framework/getting_started/concepts).
+<Aside type="tip">
+If you haven't already, [install LlamaIndex](/python/framework/getting_started/installation) and complete the [starter tutorial](/python/framework/getting_started/starter_example). If you run into terms you don't recognize, check out the [high-level concepts](/python/framework/getting_started/concepts).
+</Aside>

 In this section, we start with the code you wrote for the [starter example](/python/framework/getting_started/starter_example) and show you the most common ways you might want to customize it for your use case:

docs/docs/getting_started/installation.md renamed to docs/docs/getting_started/installation.mdx

Lines changed: 3 additions & 2 deletions
@@ -33,8 +33,9 @@ This is a starter bundle of packages, containing
 By default, we use the OpenAI `gpt-3.5-turbo` model for text generation and `text-embedding-ada-002` for retrieval and embeddings. In order to use this, you must have an OPENAI_API_KEY set up as an environment variable.
 You can obtain an API key by logging into your OpenAI account and [creating a new API key](https://platform.openai.com/account/api-keys).

-!!! tip
-    You can also [use one of many other available LLMs](/python/framework/module_guides/models/llms/usage_custom). You may need additional environment keys + tokens setup depending on the LLM provider.
+<Aside type="tip">
+You can also [use one of many other available LLMs](/python/framework/module_guides/models/llms/usage_custom). You may need additional environment keys + tokens setup depending on the LLM provider.
+</Aside>

 [Check out our OpenAI Starter Example](/python/framework/getting_started/starter_example)
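The installation page above requires `OPENAI_API_KEY` to be present as an environment variable. A small guard at the top of a script can fail fast with a readable message instead of an opaque authentication error on the first API call. This helper is illustrative, not part of LlamaIndex:

```python
import os


def require_env(name: str) -> str:
    """Return the value of an environment variable, or fail loudly."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; export it before running the example.")
    return value


# e.g. key = require_env("OPENAI_API_KEY")
```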

docs/docs/getting_started/starter_example.md renamed to docs/docs/getting_started/starter_example.mdx

Lines changed: 16 additions & 11 deletions
@@ -6,12 +6,14 @@ sidebar:

 This tutorial will show you how to get started building agents with LlamaIndex. We'll start with a basic example and then show how to add RAG (Retrieval-Augmented Generation) capabilities.

-!!! tip
-    Make sure you've followed the [installation](/python/framework/getting_started/installation) steps first.
+<Aside type="tip">
+Make sure you've followed the [installation](/python/framework/getting_started/installation) steps first.
+</Aside>

-!!! tip
-    Want to use local models?
-    If you want to do our starter tutorial using only local models, [check out this tutorial instead](/python/framework/getting_started/starter_example_local).
+<Aside type="tip">
+Want to use local models?
+If you want to do our starter tutorial using only local models, [check out this tutorial instead](/python/framework/getting_started/starter_example_local).
+</Aside>

 ## Set your OpenAI API key

@@ -25,8 +27,9 @@ export OPENAI_API_KEY=XXXXX
 set OPENAI_API_KEY=XXXXX
 ```

-!!! tip
-    If you are using an OpenAI-Compatible API, you can use the `OpenAILike` LLM class. You can find more information in the [OpenAILike LLM](https://docs.llamaindex.ai/en/stable/api_reference/llms/openai_like/) integration and [OpenAILike Embeddings](https://docs.llamaindex.ai/en/stable/api_reference/embeddings/openai_like/) integration.
+<Aside type="tip">
+If you are using an OpenAI-Compatible API, you can use the `OpenAILike` LLM class. You can find more information in the [OpenAILike LLM](https://docs.llamaindex.ai/en/stable/api_reference/llms/openai_like/) integration and [OpenAILike Embeddings](https://docs.llamaindex.ai/en/stable/api_reference/embeddings/openai_like/) integration.
+</Aside>

 ## Basic Agent Example

@@ -72,8 +75,9 @@ What happened is:
 - The agent selected the `multiply` tool and wrote the arguments to the tool
 - The agent received the result from the tool and interpolated it into the final response

-!!! tip
-    As you can see, we are using `async` python functions. Many LLMs and models support async calls, and using async code is recommended to improve performance of your application. To learn more about async code and python, we recommend this [short section on async + python](/python/framework/getting_started/async_python).
+<Aside type="tip">
+As you can see, we are using `async` python functions. Many LLMs and models support async calls, and using async code is recommended to improve performance of your application. To learn more about async code and python, we recommend this [short section on async + python](/python/framework/getting_started/async_python).
+</Aside>

 ## Adding Chat History

@@ -177,8 +181,9 @@ index = load_index_from_storage(storage_context)
 query_engine = index.as_query_engine()
 ```

-!!! tip
-    If you used a [vector store integration](/python/framework/module_guides/storing/vector_stores) besides the default, chances are you can just reload from the vector store:
+<Aside type="tip">
+If you used a [vector store integration](/python/framework/module_guides/storing/vector_stores) besides the default, chances are you can just reload from the vector store:
+</Aside>

 ```python
 index = VectorStoreIndex.from_vector_store(vector_store)

docs/docs/getting_started/starter_example_local.md renamed to docs/docs/getting_started/starter_example_local.mdx

Lines changed: 9 additions & 6 deletions
@@ -9,8 +9,9 @@ This tutorial will show you how to get started building agents with LlamaIndex.

 We will use [`BAAI/bge-base-en-v1.5`](https://huggingface.co/BAAI/bge-base-en-v1.5) as our embedding model and `llama3.1 8B` served through `Ollama`.

-!!! tip
-    Make sure you've followed the [installation](/python/framework/getting_started/installation) steps first.
+<Aside type="tip">
+Make sure you've followed the [installation](/python/framework/getting_started/installation) steps first.
+</Aside>

 ## Setup

@@ -86,8 +87,9 @@ What happened is:
 - The agent selected the `multiply` tool and wrote the arguments to the tool
 - The agent received the result from the tool and interpolated it into the final response

-!!! tip
-    As you can see, we are using `async` python functions. Many LLMs and models support async calls, and using async code is recommended to improve performance of your application. To learn more about async code and python, we recommend this [short section on async + python](/python/framework/getting_started/async_python).
+<Aside type="tip">
+As you can see, we are using `async` python functions. Many LLMs and models support async calls, and using async code is recommended to improve performance of your application. To learn more about async code and python, we recommend this [short section on async + python](/python/framework/getting_started/async_python).
+</Aside>

 ## Adding Chat History

@@ -216,8 +218,9 @@ query_engine = index.as_query_engine(
 )
 ```

-!!! tip
-    If you used a [vector store integration](/python/framework/module_guides/storing/vector_stores) besides the default, chances are you can just reload from the vector store:
+<Aside type="tip">
+If you used a [vector store integration](/python/framework/module_guides/storing/vector_stores) besides the default, chances are you can just reload from the vector store:
+</Aside>

 ```python
 index = VectorStoreIndex.from_vector_store(

docs/docs/module_guides/deploying/agents/index.md renamed to docs/docs/module_guides/deploying/agents/index.mdx

Lines changed: 3 additions & 2 deletions
@@ -48,8 +48,9 @@ The `FunctionAgent` is a type of agent that uses an LLM provider's function/tool

 You can visit [the agents guide](/python/framework/understanding/agent) to learn more about agents and their capabilities.

-!!! tip
-    Some models might not support streaming LLM output. While streaming is enabled by default, if you encounter an error, you can always set `FunctionAgent(..., streaming=False)` to disable streaming.
+<Aside type="tip">
+Some models might not support streaming LLM output. While streaming is enabled by default, if you encounter an error, you can always set `FunctionAgent(..., streaming=False)` to disable streaming.
+</Aside>

 ## Tools

docs/docs/module_guides/deploying/agents/memory.md renamed to docs/docs/module_guides/deploying/agents/memory.mdx

Lines changed: 3 additions & 2 deletions
@@ -168,8 +168,9 @@ As the memory is used, the short-term memory will fill up. Once the short-term m

 When memory is retrieved, the short-term and long-term memories are merged together. The `Memory` object will ensure that the short-term memory + long-term memory content is less than or equal to the `token_limit`. If it is longer, the `.truncate()` method will be called on the memory blocks, using the `priority` to determine the truncation order.

-!!! tip
-    By default, tokens are counted using tiktoken. To customize this, you can set the `tokenizer_fn` argument to a custom callable that given a string, returns a list. The length of the list is then used to determine the token count.
+<Aside type="tip">
+By default, tokens are counted using tiktoken. To customize this, you can set the `tokenizer_fn` argument to a custom callable that given a string, returns a list. The length of the list is then used to determine the token count.
+</Aside>

 Once the memory has collected enough information, we might see something like this from the memory:
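The memory tip above says `tokenizer_fn` can be any callable that takes a string and returns a list whose length is used as the token count. A minimal whitespace-based stand-in makes the contract concrete (real deployments would usually keep the default tiktoken counting; the `Memory` usage in the comment is hypothetical, since the exact constructor signature is not shown in this diff):

```python
import re


def whitespace_tokenizer(text: str) -> list[str]:
    # Crude approximation: one "token" per whitespace-delimited chunk.
    # Only the length of the returned list matters to the Memory object.
    return re.findall(r"\S+", text)


# Hypothetical usage (requires llama_index; signature assumed):
# memory = Memory(..., tokenizer_fn=whitespace_tokenizer)
```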

docs/docs/module_guides/deploying/chat_engines/index.md renamed to docs/docs/module_guides/deploying/chat_engines/index.mdx

Lines changed: 3 additions & 2 deletions
@@ -9,8 +9,9 @@ Think ChatGPT, but augmented with your knowledge base.
 Conceptually, it is a **stateful** analogy of a [Query Engine](/python/framework/module_guides/deploying/query_engine).
 By keeping track of the conversation history, it can answer questions with past context in mind.

-!!! tip
-    If you want to ask standalone question over your data (i.e. without keeping track of conversation history), use [Query Engine](/python/framework/module_guides/deploying/query_engine) instead.
+<Aside type="tip">
+If you want to ask standalone question over your data (i.e. without keeping track of conversation history), use [Query Engine](/python/framework/module_guides/deploying/query_engine) instead.
+</Aside>

 ## Usage Pattern

docs/docs/module_guides/deploying/chat_engines/usage_pattern.md renamed to docs/docs/module_guides/deploying/chat_engines/usage_pattern.mdx

Lines changed: 3 additions & 2 deletions
@@ -8,8 +8,9 @@ Build a chat engine from index:
 chat_engine = index.as_chat_engine()
 ```

-!!! tip
-    To learn how to build an index, see [Indexing](/python/framework/module_guides/indexing/index_guide)
+<Aside type="tip">
+To learn how to build an index, see [Indexing](/python/framework/module_guides/indexing/index_guide)
+</Aside>

 Have a conversation with your data:

docs/docs/module_guides/deploying/query_engine/index.md renamed to docs/docs/module_guides/deploying/query_engine/index.mdx

Lines changed: 3 additions & 2 deletions
@@ -8,8 +8,9 @@ A query engine takes in a natural language query, and returns a rich response.
 It is most often (but not always) built on one or many [indexes](/python/framework/module_guides/indexing) via [retrievers](/python/framework/module_guides/querying/retriever).
 You can compose multiple query engines to achieve more advanced capability.

-!!! tip
-    If you want to have a conversation with your data (multiple back-and-forth instead of a single question & answer), take a look at [Chat Engine](/python/framework/module_guides/deploying/chat_engines)
+<Aside type="tip">
+If you want to have a conversation with your data (multiple back-and-forth instead of a single question & answer), take a look at [Chat Engine](/python/framework/module_guides/deploying/chat_engines)
+</Aside>

 ## Usage Pattern
