
Commit 498f995

Merge pull request #270717 from wmwxwa/patch-8: Move related concepts pt 2.md
2 parents 7a527b2 + 1a76823

File tree

1 file changed (+16 −16 lines)


articles/cosmos-db/vector-database.md

Lines changed: 16 additions & 16 deletions
@@ -62,6 +62,22 @@ An embedding is a special format of data representation that machine learning mo
 
 Vector search is a method that helps you find similar items based on their data characteristics rather than by exact matches on a property field. This technique is useful in applications such as searching for similar text, finding related images, making recommendations, or even detecting anomalies. It works by taking the vector representations (lists of numbers) of your data, which you created with a machine learning model through an embeddings API, such as [Azure OpenAI Embeddings](../ai-services/openai/how-to/embeddings.md) or [Hugging Face on Azure](https://azure.microsoft.com/solutions/hugging-face-on-azure). It then measures the distance between the data vectors and your query vector. The data vectors that are closest to your query vector are the ones that are most similar semantically. Using a native vector search feature offers an efficient way to store, index, and search high-dimensional vector data directly alongside other application data. This approach removes the need to migrate your data to costlier alternative vector databases and provides seamless integration with your AI-driven applications.
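The distance comparison the paragraph above describes can be sketched with plain cosine similarity. This is a toy in-memory example: a real system uses the database's native vector index, and the three-dimensional "embeddings" below are invented for illustration.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|); higher means more similar.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def vector_search(query_vec, docs, top_k=2):
    # Rank stored (id, vector) pairs by similarity to the query vector.
    scored = [(doc_id, cosine_similarity(query_vec, vec)) for doc_id, vec in docs]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]

# Hypothetical document embeddings, made up for this sketch.
docs = [
    ("doc-a", [0.9, 0.1, 0.0]),
    ("doc-b", [0.0, 1.0, 0.2]),
    ("doc-c", [0.8, 0.2, 0.1]),
]
results = vector_search([1.0, 0.0, 0.0], docs)  # doc-a and doc-c rank closest
```

In a production embedding space the vectors have hundreds or thousands of dimensions, which is why a native index (rather than a linear scan like this one) matters.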

+### Prompts and prompt engineering
+
+A prompt refers to specific text or information that serves as an instruction to an LLM, or as contextual data that the LLM can build upon. A prompt can take various forms, such as a question, a statement, or even a code snippet. Prompts can serve as:
+
+- Instructions: provide directives to the LLM
+- Primary content: gives information to the LLM for processing
+- Examples: help condition the model to a particular task or process
+- Cues: direct the LLM's output in the right direction
+- Supporting content: represents supplemental information the LLM can use to generate output
+
+The process of creating good prompts for a scenario is called prompt engineering. For more information about prompts and best practices for prompt engineering, see Azure OpenAI Service [prompt engineering techniques](../ai-services/openai/concepts/advanced-prompt-engineering.md).
+
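The prompt parts listed above can be illustrated by assembling them into a single prompt string. This is a schematic sketch: the part names follow the list above, while the helper function and the sample strings are invented, and real prompt layouts vary by model and task.

```python
def build_prompt(instructions, primary_content, examples, cue, supporting_content=""):
    # Concatenate the prompt parts in a conventional order; skip empty parts.
    sections = [
        f"Instructions: {instructions}",
        f"Context: {primary_content}",
        "Examples:\n" + "\n".join(examples),
        f"Supporting content: {supporting_content}" if supporting_content else "",
        cue,  # the cue steers how the model starts its output
    ]
    return "\n\n".join(s for s in sections if s)

prompt = build_prompt(
    instructions="Summarize the article in one sentence.",
    primary_content="Vector search finds similar items by comparing embeddings.",
    examples=["Article: ... -> Summary: ..."],
    cue="Summary:",
)
```

Placing the cue last means the model's completion naturally continues from it, which is the usual reason cues sit at the end of a prompt.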
+### Tokens
+
+Tokens are small chunks of text generated by splitting the input text into smaller segments. These segments can be words or groups of characters, varying in length from a single character to an entire word. For instance, the word hamburger is divided into tokens such as ham, bur, and ger, while a short, common word like pear is a single token. LLMs like ChatGPT, GPT-3.5, or GPT-4 break words into tokens for processing.
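The hamburger/pear example above can be mimicked with a toy greedy longest-match tokenizer over a tiny invented vocabulary. Real LLM tokenizers use byte-pair encodings learned from large corpora; this sketch only illustrates the idea of splitting text into sub-word chunks.

```python
def tokenize(text, vocab):
    # Greedy longest-match: at each position, take the longest vocabulary
    # entry that matches; fall back to a single character otherwise.
    tokens, i = [], 0
    while i < len(text):
        match = next((text[i:i + n] for n in range(len(text) - i, 0, -1)
                      if text[i:i + n] in vocab), text[i])
        tokens.append(match)
        i += len(match)
    return tokens

# Vocabulary invented so the example matches the article's text.
vocab = {"pear", "ham", "bur", "ger"}
tokenize("hamburger", vocab)  # ["ham", "bur", "ger"]
tokenize("pear", vocab)       # ["pear"]
```

Token counts matter in practice because model context windows and billing are both measured in tokens, not characters or words.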
 
 Here are multiple ways to implement RAG on your data by using our vector database functionalities:
 
 ## How to implement integrated vector database functionalities
@@ -124,22 +140,6 @@ A simple RAG pattern using Azure Cosmos DB for NoSQL could be:
 
 The RAG pattern, with prompt engineering, serves the purpose of enhancing response quality by offering more contextual information to the model. RAG enables the model to apply a broader knowledge base by incorporating relevant external sources into the generation process, resulting in more comprehensive and informed responses. For more information on "grounding" LLMs, see [grounding LLMs](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/grounding-llms/ba-p/3843857).
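The retrieve-then-ground flow this paragraph describes can be sketched end to end. Everything here is a stand-in: the in-memory store, its made-up embeddings, and the `retrieve` and `rag_prompt` helpers are invented for illustration; a real system would query the database's vector index and send the resulting prompt to a chat completions model.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# A tiny in-memory "vector store": (passage, made-up embedding) pairs.
STORE = [
    ("Vector search compares embeddings to find similar items.", [0.9, 0.1]),
    ("Tokens are small chunks of text produced by the tokenizer.", [0.1, 0.9]),
]

def retrieve(query_vec, top_k=1):
    # Retrieval step of RAG: pick the passages closest to the query vector.
    ranked = sorted(STORE, key=lambda e: cosine(query_vec, e[1]), reverse=True)
    return [passage for passage, _ in ranked[:top_k]]

def rag_prompt(question, query_vec):
    # Grounding step: stuff the retrieved context into the prompt, so the
    # model answers from external data rather than its training set alone.
    context = "\n".join(retrieve(query_vec))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = rag_prompt("How does vector search work?", [1.0, 0.0])
```

The grounding instruction in the prompt is what keeps the model anchored to the retrieved passages, which is the behavior the linked "grounding LLMs" article discusses.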

-### Prompts and prompt engineering
-
-A prompt refers to specific text or information that serves as an instruction to an LLM, or as contextual data that the LLM can build upon. A prompt can take various forms, such as a question, a statement, or even a code snippet. Prompts can serve as:
-
-- Instructions: provide directives to the LLM
-- Primary content: gives information to the LLM for processing
-- Examples: help condition the model to a particular task or process
-- Cues: direct the LLM's output in the right direction
-- Supporting content: represents supplemental information the LLM can use to generate output
-
-The process of creating good prompts for a scenario is called prompt engineering. For more information about prompts and best practices for prompt engineering, see Azure OpenAI Service [prompt engineering techniques](../ai-services/openai/concepts/advanced-prompt-engineering.md).
-
-### Tokens
-
-Tokens are small chunks of text generated by splitting the input text into smaller segments. These segments can be words or groups of characters, varying in length from a single character to an entire word. For instance, the word hamburger is divided into tokens such as ham, bur, and ger, while a short, common word like pear is a single token. LLMs like ChatGPT, GPT-3.5, or GPT-4 break words into tokens for processing.
 
 ## Related content
 
 - [Azure Cosmos DB for MongoDB vCore Integrated Vector Database](mongodb/vcore/vector-search.md)
