Commit 63631f4

add an include

1 parent bcc4b27

File tree

5 files changed: +5 -29 lines changed

articles/app-service/includes/tutorial-ai-slm/faq.md

Lines changed: 1 addition & 1 deletion

@@ -16,7 +16,7 @@ For example, the [Phi-3 mini model with a 4K context length from Hugging Face](h

 ### How use my own SLM sidecar?

-The sample respository contains a sample SLM container that you can use as a sidecar. It runs a FastAPI application that listens on port 8000, as specified in its [Dockerfile](https://github.com/Azure-Samples/ai-slm-in-app-service-sidecar/blob/main/bring_your_own_slm/src/phi-3-sidecar/Dockerfile). The application uses [ONNX Runtime](https://onnxruntime.ai/docs/) to load the Phi-3 model, then forwards the HTTP POST data to the model and streams the response from the model back to the client. For more information, see [model_api.py](https://github.com/Azure-Samples/ai-slm-in-app-service-sidecar/blob/main/src/phi-3-sidecar/model_api.py).
+The sample repository contains a sample SLM container that you can use as a sidecar. It runs a FastAPI application that listens on port 8000, as specified in its [Dockerfile](https://github.com/Azure-Samples/ai-slm-in-app-service-sidecar/blob/main/bring_your_own_slm/src/phi-3-sidecar/Dockerfile). The application uses [ONNX Runtime](https://onnxruntime.ai/docs/) to load the Phi-3 model, then forwards the HTTP POST data to the model and streams the response from the model back to the client. For more information, see [model_api.py](https://github.com/Azure-Samples/ai-slm-in-app-service-sidecar/blob/main/src/phi-3-sidecar/model_api.py).

 To build the sidecar image yourself, you need to install Docker Desktop locally on your machine.

articles/app-service/tutorial-ai-slm-dotnet.md

Lines changed: 1 addition & 7 deletions

@@ -11,13 +11,7 @@ ms.topic: tutorial

 This tutorial guides you through deploying a ASP.NET Core chatbot application integrated with the Phi-3 sidecar extension on Azure App Service. By following the steps, you'll learn how to set up a scalable web app, add an AI-powered sidecar for enhanced conversational capabilities, and test the chatbot's functionality.

-Hosting your own small language model (SLM) offers several advantages:
-
-- By hosting the model yourself, you maintain full control over your data. This ensures sensitive information is not exposed to third-party services, which is critical for industries with strict compliance requirements.
-- Self-hosted models can be fine-tuned to meet specific use cases or domain-specific requirements.
-- Hosting the model close to your application or users minimizes network latency, resulting in faster response times and a better user experience.
-- You can scale the deployment based on your specific needs and have full control over resource allocation, ensuring optimal performance for your application.
-- Hosting your own model allows for greater flexibility in experimenting with new features, architectures, or integrations without being constrained by third-party service limitations.
+[!INCLUDE [advantages](includes/tutorial-ai-slm/advantages.md)]

 ## Prerequisites

articles/app-service/tutorial-ai-slm-expressjs.md

Lines changed: 1 addition & 7 deletions

@@ -11,13 +11,7 @@ ms.topic: tutorial

 This tutorial guides you through deploying a Express.js-based chatbot application integrated with the Phi-3 sidecar extension on Azure App Service. By following the steps, you'll learn how to set up a scalable web app, add an AI-powered sidecar for enhanced conversational capabilities, and test the chatbot's functionality.

-Hosting your own small language model (SLM) offers several advantages:
-
-- By hosting the model yourself, you maintain full control over your data. This ensures sensitive information is not exposed to third-party services, which is critical for industries with strict compliance requirements.
-- Self-hosted models can be fine-tuned to meet specific use cases or domain-specific requirements.
-- Hosting the model close to your application or users minimizes network latency, resulting in faster response times and a better user experience.
-- You can scale the deployment based on your specific needs and have full control over resource allocation, ensuring optimal performance for your application.
-- Hosting your own model allows for greater flexibility in experimenting with new features, architectures, or integrations without being constrained by third-party service limitations.
+[!INCLUDE [advantages](includes/tutorial-ai-slm/advantages.md)]

 ## Prerequisites

articles/app-service/tutorial-ai-slm-fastapi.md

Lines changed: 1 addition & 7 deletions

@@ -10,13 +10,7 @@ ms.topic: tutorial

 # Tutorial: Run chatbot in App Service with a Phi-3 sidecar extension (FastAPI)

 This tutorial guides you through deploying a FastAPI-based chatbot application integrated with the Phi-3 sidecar extension on Azure App Service. By following the steps, you'll learn how to set up a scalable web app, add an AI-powered sidecar for enhanced conversational capabilities, and test the chatbot's functionality.

-Hosting your own small language model (SLM) offers several advantages:
-
-- By hosting the model yourself, you maintain full control over your data. This ensures sensitive information is not exposed to third-party services, which is critical for industries with strict compliance requirements.
-- Self-hosted models can be fine-tuned to meet specific use cases or domain-specific requirements.
-- Hosting the model close to your application or users minimizes network latency, resulting in faster response times and a better user experience.
-- You can scale the deployment based on your specific needs and have full control over resource allocation, ensuring optimal performance for your application.
-- Hosting your own model allows for greater flexibility in experimenting with new features, architectures, or integrations without being constrained by third-party service limitations.
+[!INCLUDE [advantages](includes/tutorial-ai-slm/advantages.md)]

 ## Prerequisites

articles/app-service/tutorial-ai-slm-spring-boot.md

Lines changed: 1 addition & 7 deletions

@@ -11,13 +11,7 @@ ms.topic: tutorial

 This tutorial guides you through deploying a Spring Boot-based chatbot application integrated with the Phi-3 sidecar extension on Azure App Service. By following the steps, you'll learn how to set up a scalable web app, add an AI-powered sidecar for enhanced conversational capabilities, and test the chatbot's functionality.

-Hosting your own small language model (SLM) offers several advantages:
-
-- By hosting the model yourself, you maintain full control over your data. This ensures sensitive information is not exposed to third-party services, which is critical for industries with strict compliance requirements.
-- Self-hosted models can be fine-tuned to meet specific use cases or domain-specific requirements.
-- Hosting the model close to your application or users minimizes network latency, resulting in faster response times and a better user experience.
-- You can scale the deployment based on your specific needs and have full control over resource allocation, ensuring optimal performance for your application.
-- Hosting your own model allows for greater flexibility in experimenting with new features, architectures, or integrations without being constrained by third-party service limitations.
+[!INCLUDE [advantages](includes/tutorial-ai-slm/advantages.md)]

 ## Prerequisites
