Commit e540657

Success Stories page initial stage
1 parent 98baab7 commit e540657

File tree

1 file changed: +93 -21 lines changed


docs/source/success-stories.md

Lines changed: 93 additions & 21 deletions
@@ -6,51 +6,123 @@ Discover how organizations are leveraging ExecuTorch to deploy AI models at scal

---

## Featured Success Stories

::::{grid} 1
:gutter: 3

:::{grid-item-card} **Meta's Family of Apps**
:class-header: bg-primary text-white

**Industry:** Social Media & Messaging
**Hardware:** Android & iOS Devices
**Impact:** Billions of users, latency reduction

Powers Instagram, WhatsApp, Facebook, and Messenger with real-time on-device AI for content ranking, recommendations, and privacy-preserving features at scale.

[Read Blog →](https://engineering.fb.com/2025/07/28/android/executorch-on-device-ml-meta-family-of-apps/)
:::

:::{grid-item-card} **Meta Quest & Ray-Ban Smart Glasses**
:class-header: bg-success text-white

**Industry:** AR/VR & Wearables
**Hardware:** Quest 3, Ray-Ban Meta Smart Glasses, Meta Ray-Ban Display

Enables immersive mixed reality with real-time computer vision, hand tracking, voice commands, and translation on power-constrained wearable devices.
:::

:::{grid-item-card} **Liquid AI: Efficient, Flexible On-Device Intelligence**
:class-header: bg-info text-white

**Industry:** Artificial Intelligence / Edge Computing
**Hardware:** CPU via PyTorch ExecuTorch
**Impact:** 2× faster inference, lower latency, seamless multimodal deployment

Liquid AI builds foundation models that make AI work where the cloud can't. For its LFM2 series, the team uses PyTorch ExecuTorch within the LEAP Edge SDK to deploy high-performance multimodal models efficiently across devices. ExecuTorch provides the flexibility to support custom architectures and processing pipelines while reducing inference latency through graph optimization and caching. Together, they enable faster, more efficient, privacy-preserving AI that runs entirely on the edge.

[Read Blog →](https://www.liquid.ai/blog/how-liquid-ai-uses-executorch-to-power-efficient-flexible-on-device-intelligence)
:::

:::{grid-item-card} **PrivateMind: Complete Privacy with On-Device AI**
:class-header: bg-warning text-white

**Industry:** Privacy & Personal Computing
**Hardware:** iOS & Android Devices
**Impact:** 100% on-device processing

PrivateMind delivers a fully private AI assistant using ExecuTorch's `.pte` format (see the export sketch below this grid). Built with React Native ExecuTorch, it supports Llama, Qwen, Phi-4, and custom models with offline speech-to-text and PDF chat capabilities.

[Visit →](https://privatemind.swmansion.com)
:::

:::{grid-item-card} **NimbleEdge: On-Device Agentic AI Platform**
:class-header: bg-danger text-white

**Industry:** AI Infrastructure
**Hardware:** iOS & Android Devices
**Impact:** 30% higher TPS on iOS, faster time-to-market with Qwen/Gemma models

NimbleEdge integrated ExecuTorch with its open-source DeliteAI platform to enable agentic workflows orchestrated in Python on mobile devices. The extensible ExecuTorch ecosystem let the team implement on-device optimization techniques that leverage contextual sparsity. ExecuTorch significantly accelerated the release of "NimbleEdge AI" for iOS, enabling models such as Qwen 2.5 with tool-calling support and achieving up to 30% higher transactions per second.

[Visit →](https://nimbleedge.com) | [Blog →](https://www.nimbleedge.com/blog/meet-nimbleedge-ai-the-first-truly-private-on-device-assistant) | [iOS App →](https://apps.apple.com/in/app/nimbleedge-ai/id6746237456)
:::

::::
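
All of the products above ship models as ahead-of-time exported ExecuTorch `.pte` programs. As a rough orientation only (a minimal sketch with a made-up toy model, not code from any of the teams above, and export APIs can shift between ExecuTorch releases), a CPU export through the XNNPACK backend looks roughly like this:

```python
import torch

from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge_transform_and_lower


class TinyClassifier(torch.nn.Module):
    """Toy stand-in for a real on-device model."""

    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(128, 64),
            torch.nn.ReLU(),
            torch.nn.Linear(64, 10),
        )

    def forward(self, x):
        return self.net(x)


model = TinyClassifier().eval()
example_inputs = (torch.randn(1, 128),)

# Capture the graph with torch.export, delegate supported ops to the XNNPACK
# CPU backend, and serialize the result as a .pte program for the runtime.
exported = torch.export.export(model, example_inputs)
program = to_edge_transform_and_lower(
    exported, partitioner=[XnnpackPartitioner()]
).to_executorch()

with open("tiny_classifier.pte", "wb") as f:
    f.write(program.buffer)
```

On device, the resulting `.pte` file is loaded and executed by the ExecuTorch runtime (directly, or through wrappers such as React Native ExecuTorch) with no Python dependency.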

---

## Featured Ecosystem Integrations and Interoperability

::::{grid} 2 2 3 3
:gutter: 2

:::{grid-item-card} **Hugging Face Transformers**
:class-header: bg-secondary text-white

Popular Hugging Face models export directly to the ExecuTorch format for on-device deployment via optimum-executorch (see the sketch below this grid).

[Learn More →](https://github.com/huggingface/optimum-executorch/)
:::

:::{grid-item-card} **React Native ExecuTorch**
:class-header: bg-secondary text-white

Declarative toolkit for running AI models and LLMs in React Native apps with privacy-first, on-device execution.

[Explore →](https://docs.swmansion.com/react-native-executorch/) | [Blog →](https://expo.dev/blog/how-to-run-ai-models-with-react-native-executorch)
:::

:::{grid-item-card} **torchao**
:class-header: bg-secondary text-white

PyTorch-native quantization and optimization library for preparing efficient models for ExecuTorch deployment (see the quantization sketch below this grid).

[Blog →](https://pytorch.org/blog/torchao-quantized-models-and-quantization-recipes-now-available-on-huggingface-hub/) | [Qwen Example →](https://huggingface.co/pytorch/Qwen3-4B-INT8-INT4) | [Phi Example →](https://huggingface.co/pytorch/Phi-4-mini-instruct-INT8-INT4)
:::

:::{grid-item-card} **Unsloth**
:class-header: bg-secondary text-white

Speed up LLM fine-tuning with reduced VRAM usage, then deploy the tuned model efficiently with ExecuTorch.

[Example Model →](https://huggingface.co/metascroy/Llama-3.2-1B-Instruct-int8-int4)
:::

::::
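
The Hugging Face card above links to optimum-executorch, which wraps the export flow for Transformers models. A minimal sketch, assuming the `ExecuTorchModelForCausalLM` entry point, the `recipe` argument, and the `text_generation` helper described in the optimum-executorch repository (the model id is only an example, and argument names may differ between releases):

```python
from optimum.executorch import ExecuTorchModelForCausalLM
from transformers import AutoTokenizer

model_id = "HuggingFaceTB/SmolLM2-135M-Instruct"  # example checkpoint; any supported causal LM works

# Export the Hugging Face checkpoint to an ExecuTorch program lowered via XNNPACK.
model = ExecuTorchModelForCausalLM.from_pretrained(model_id, recipe="xnnpack")

# Quick on-host sanity check of the exported program.
tokenizer = AutoTokenizer.from_pretrained(model_id)
print(
    model.text_generation(
        tokenizer=tokenizer,
        prompt="On-device inference matters because",
        max_seq_len=64,
    )
)
```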
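
The INT8-INT4 checkpoints linked from the torchao card pair int8 dynamic activations with int4 weights. A hedged sketch of applying that scheme with torchao before export; the config class name is an assumption based on recent torchao releases (these APIs have been renamed over time), and the module is a placeholder:

```python
import torch
from torchao.quantization import Int8DynamicActivationInt4WeightConfig, quantize_

# Placeholder module; in practice this is the eager-mode model you intend to export.
model = torch.nn.Sequential(
    torch.nn.Linear(256, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 64),
).eval()

# Quantize in place: int8 dynamic activations with int4 weights, matching the
# INT8-INT4 scheme of the checkpoints linked above.
quantize_(model, Int8DynamicActivationInt4WeightConfig())

# The quantized module then goes through the usual torch.export -> ExecuTorch flow.
exported = torch.export.export(model, (torch.randn(1, 256),))
```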

---

## Featured Demos

- **Text and Multimodal LLM demo mobile apps** - Text (Llama, Qwen3, Phi-4) and multimodal (Gemma 3, Voxtral) mobile demo apps. [Try →](https://github.com/meta-pytorch/executorch-examples/tree/main/llm)

- **Voxtral** - Deploy an audio-text-input LLM on CPU (via XNNPACK) and on CUDA. [Try →](https://github.com/pytorch/executorch/blob/main/examples/models/voxtral/README.md)

- **LoRA adapters** - Export two LoRA adapters that share a single foundation weight file, saving memory and disk space. [Try →](https://github.com/meta-pytorch/executorch-examples/tree/main/program-data-separation/cpp/lora_example)

- **OpenVINO from Intel** - Deploy [Yolo12](https://github.com/pytorch/executorch/tree/main/examples/models/yolo12), [Llama](https://github.com/pytorch/executorch/tree/main/examples/openvino/llama), and [Stable Diffusion](https://github.com/pytorch/executorch/tree/main/examples/openvino/stable_diffusion) with [OpenVINO from Intel](https://www.intel.com/content/www/us/en/developer/articles/community/optimizing-executorch-on-ai-pcs.html).

- **Demo title** - Brief description of the demo [Try →](#)

*Want to showcase your demo? [Submit here →](https://github.com/pytorch/executorch/issues)*
