Commit 35bc29d

Update reference-best-practice.md
New Issue re: device-wide install
1 parent 3001f7a commit 35bc29d

1 file changed (+2 −1)

articles/ai-foundry/foundry-local/reference/reference-best-practice.md

Lines changed: 2 additions & 1 deletion
@@ -47,6 +47,7 @@ Foundry Local is designed for on-device inference and *not* distributed, contain
 | Model download failures | Network connectivity issues | Check your internet connection and run `foundry cache list` to verify cache status |
 | The service fails to start | Port conflicts or permission issues | Try `foundry service restart` or [report an issue](https://github.com/microsoft/Foundry-Local/issues) with logs using `foundry zip-logs` |
 | Qualcomm NPU error (`Qnn error code 5005: "Failed to load from EpContext model. qnn_backend_manager."`) | Qualcomm NPU error | Under investigation |
+| `winget install Microsoft.FoundryLocal --scope machine` fails with “The current system configuration does not support the installation of this package.” | Winget blocks MSIX machine-scope installs due to an OS bug when using provisioning APIs from a packaged context | Use `Add-AppxProvisionedPackage` instead. Download the `.msix` and its dependency, then run in **elevated** PowerShell: `Add-AppxProvisionedPackage -Online -PackagePath .\FoundryLocal.msix -DependencyPackagePath .\VcLibs.appx -SkipLicense`. This installs Foundry Local for all users. |
 
 ### Improving performance
 
@@ -56,4 +57,4 @@ If you experience slow inference, consider the following strategies:
 - Use GPU acceleration when available
 - Identify bottlenecks by monitoring memory usage during inference.
 - Try more quantized model variants (like INT8 instead of FP16)
-- Adjust batch sizes for non-interactive workloads
+- Adjust batch sizes for non-interactive workloads
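
For context, here is the new row's workaround in expanded form, as a minimal PowerShell sketch. It assumes the `.msix` and its `VcLibs.appx` dependency have already been downloaded to the current directory; the file names are the placeholders used in the table row, not actual release asset names.

```powershell
# Machine-wide install workaround for the winget --scope machine failure.
# FoundryLocal.msix and VcLibs.appx are placeholder names taken from the
# table row above -- substitute the files you actually downloaded.

# Add-AppxProvisionedPackage requires administrator rights, so fail early
# if this session is not elevated.
$identity  = [Security.Principal.WindowsIdentity]::GetCurrent()
$principal = [Security.Principal.WindowsPrincipal]::new($identity)
if (-not $principal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)) {
    throw "Run this from an elevated PowerShell session."
}

# Provision the package device-wide. -SkipLicense bypasses the per-package
# license check; provisioned packages are registered for each user at
# their next sign-in.
Add-AppxProvisionedPackage -Online `
    -PackagePath .\FoundryLocal.msix `
    -DependencyPackagePath .\VcLibs.appx `
    -SkipLicense
```

Provisioned packages are registered per user at sign-in, so after signing back in, running `foundry cache list` from a regular session is a quick smoke test that the install took effect.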

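On the "monitor memory usage during inference" bullet in the second hunk: a rough, assumption-laden way to watch the service's memory while it handles requests is sketched below. The process name is a placeholder; check Task Manager or `Get-Process` for whatever the Foundry Local service actually runs as on your machine.

```powershell
# Poll the inference service's working set every few seconds while you
# send requests to it. "Inference.Service.Agent" is an assumed process
# name -- replace it with the real one from Get-Process.
while ($true) {
    Get-Process -Name 'Inference.Service.Agent' -ErrorAction SilentlyContinue |
        Select-Object Name,
            @{ Name = 'WorkingSetMB'; Expression = { [math]::Round($_.WorkingSet64 / 1MB) } }
    Start-Sleep -Seconds 5
}
```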