articles/ai-foundry/foundry-local/concepts/foundry-local-architecture.md (+2 -4)
@@ -39,7 +39,7 @@ The Foundry Local architecture consists of these main components:
 The Foundry Local Service includes an OpenAI-compatible REST server that provides a standard interface for working with the inference engine. It's also possible to manage models over REST. Developers use this API to send requests, run models, and get results programmatically.

-- **Endpoint**: The endpoint is *dynamically allocated* when the service starts. You can find the endpoint by running the `foundry service status` command. When using Foundry Local in your applications, we recommend using the SDK that automatically handles the endpoint for you. For more details on how to use the Foundry Local SDK, read the [Integrated inferencing SDKs with Foundry Local](../how-to/how-to-integrate-with-inference-sdks.md) article.
+- **Endpoint**: The endpoint is _dynamically allocated_ when the service starts. You can find the endpoint by running the `foundry service status` command. When using Foundry Local in your applications, we recommend using the SDK that automatically handles the endpoint for you. For more details on how to use the Foundry Local SDK, read the [Integrated inferencing SDKs with Foundry Local](../how-to/how-to-integrate-with-inference-sdks.md) article.
 - **Use Cases**:
   - Connect Foundry Local to your custom applications
   - Execute models through HTTP requests
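
For context on the hunk above: because the server is OpenAI-compatible, a plain HTTP request works once you know the dynamically allocated endpoint. Below is a minimal Python sketch, not the documented API surface; the endpoint URL and model alias are placeholder assumptions, to be replaced with the values reported by `foundry service status`.

```python
# Minimal sketch: call the OpenAI-compatible chat-completions route on a
# locally running Foundry Local service. The port is dynamically allocated,
# so the URL below is a placeholder; replace it with the endpoint shown by
# `foundry service status`. The model alias is also a placeholder.
import requests

ENDPOINT = "http://localhost:5273/v1"  # placeholder, not a fixed port
MODEL = "phi-3.5-mini"                 # placeholder model alias

resp = requests.post(
    f"{ENDPOINT}/chat/completions",
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "Say hello from Foundry Local."}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```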
@@ -111,9 +111,7 @@ The Foundry CLI is a powerful tool for managing models, the inference engine, an
 #### Inferencing SDK integration

-Foundry Local supports integration with various SDKs, such as the OpenAI SDK, enabling developers to use familiar programming interfaces to interact with the local inference engine.
-
-- **Supported SDKs**: Python, JavaScript, C#, and more.
+Foundry Local supports integration with various SDKs in most languages, such as the OpenAI SDK, enabling developers to use familiar programming interfaces to interact with the local inference engine.

 > [!TIP]
 > To learn more about integrating with inferencing SDKs, read [Integrate inferencing SDKs with Foundry Local](../how-to/how-to-integrate-with-inference-sdks.md).
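
To illustrate the SDK integration this hunk describes: an existing OpenAI client library can be pointed at the local service by changing only its base URL. A sketch with the OpenAI Python SDK follows; the endpoint and model alias are placeholders, and it assumes the local service does not validate the API key, so a dummy value is passed.

```python
# Sketch: reuse the standard OpenAI Python SDK against the local service.
# Assumptions: the endpoint and model alias below are placeholders, and the
# local service ignores the API key, so any non-empty string works.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:5273/v1",  # placeholder; see `foundry service status`
    api_key="not-needed-for-local-use",   # assumed ignored by the local service
)

completion = client.chat.completions.create(
    model="phi-3.5-mini",  # placeholder model alias
    messages=[{"role": "user", "content": "What does SDK integration give me?"}],
)
print(completion.choices[0].message.content)
```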
articles/ai-foundry/foundry-local/reference/reference-best-practice.md (+1 -1)
@@ -43,7 +43,7 @@ Foundry Local is designed for on-device inference and *not* distributed, contain
 | --- | --- | --- |
 | Slow inference | CPU-only model with large parameter count | Use GPU-optimized model variants when available |
 | Model download failures | Network connectivity issues | Check your internet connection and run `foundry cache list` to verify cache status |
-| The service fails to start | Port conflicts or permission issues | Try `foundry service restart` or report an issue with logs using `foundry zip-logs` |
+| The service fails to start | Port conflicts or permission issues | Try `foundry service restart` or [report an issue](https://github.com/microsoft/Foundry-Local/issues) with logs using `foundry zip-logs` |
articles/ai-foundry/foundry-local/what-is-foundry-local.md (+2 -3)

-Foundry Local is an on-device AI inference solution offering performance, privacy, customization, and cost advantages. It integrates seamlessly into your existing workflows and applications through an intuitive CLI and REST API.
+Foundry Local is an on-device AI inference solution offering performance, privacy, customization, and cost advantages. It integrates seamlessly into your existing workflows and applications through an intuitive CLI, SDK, and REST API.

 ## Key features
@@ -28,7 +28,7 @@ Foundry Local is an on-device AI inference solution offering performance, privac
 - **Cost Efficiency**: Eliminate recurring cloud service costs by using your existing hardware, making AI more accessible.

-- **Seamless Integration**: Connect with your applications through API endpoints or the CLI, with easy scaling to Azure AI Foundry as your needs grow.
+- **Seamless Integration**: Connect with your applications through an SDK, API endpoints, or the CLI, with easy scaling to Azure AI Foundry as your needs grow.

 ## Use cases
@@ -52,4 +52,3 @@ Install and run your first model by following the [Get started with Foundry Loca
 - [Get started with Foundry Local](get-started.md)
 - [How to compile Hugging Face models to run on Foundry Local](how-to/how-to-compile-hugging-face-models.md)
-