Releases: mlcommons/mobile_app_open
MLPerf Mobile App v5.0.4
Description
MLPerf Mobile Inference Benchmark is an open-source benchmark suite for measuring how fast mobile devices (e.g. phones, laptops) can run AI tasks. The benchmark is supported by the MLPerf Mobile App, which currently supports Android and iOS. Please see the MLPerf Mobile Inference benchmark paper for a detailed description of the benchmarks along with the motivation and guiding principles behind the benchmark suite.
Release Notes
🚀 New Features
- Added support for Samsung:
  - Exynos 2600
- Added support for Qualcomm:
  - Snapdragon 8 Elite Gen 5
  - Snapdragon 8 Gen 5
  - Snapdragon 8s Gen 4
  - Snapdragon 7 Gen 4
  - Snapdragon 6 Gen 4
- Added a warning dialog for side-loading local resources, clearly informing users when manual installation is required.
- Added an upgrade prompt to notify users when a newer app version is available.
🐛 Bug Fixes
- Fixed a crash caused by permission denial in the Android Play Store version.
- Disabled the Stable Diffusion benchmark for the Pixel backend to prevent issues.
⚙️ CI / Build & Infrastructure
- Android builds are now supported on macOS.
- Updated iOS CI workflow to use the macOS 14 runner.
- Added a merge_group trigger to support merge queue workflows.
- Updated iOS build compatibility for Xcode 16.4+.
- General stability, compatibility, and build system improvements.
Full Changelog: v5.0.2...v5.0.4
Supported OSs
The MLPerf Mobile app supports Android 11 (API level 30) and above, as well as iOS 13.1 or later.
Supported SOCs
Mediatek
- Dimensity 9000 series (9000/9000+/9200/9200+/9300/9300+/9400)
- Dimensity 8000 series (8000/8020/8050/8100/8200/8300)
Qualcomm
- Snapdragon 8 Elite Gen 5
- Snapdragon 8 Elite
- Snapdragon 8 Gen 5
- Snapdragon 8 Gen 3
- Snapdragon 8s Gen 4
- Snapdragon 8s Gen 3
- Snapdragon 8 Gen 2
- Snapdragon 7 Gen 4
- Snapdragon 7 Gen 3
- Snapdragon 7s Gen 3
- Snapdragon 6 Gen 4
- Snapdragon 4 Gen 2
- Default fallback for all other Snapdragon mobile platforms
Samsung
- Exynos 2600
- Exynos 2500
- Exynos 2400
- Exynos 2300
- Exynos 2200
- Exynos 2100
Google Pixel
- Pixel 10/9/8/7/6 and Pixel 10/9/8/7/6 Pro (Tensor G5/G4/G3/G2/G1 SoC)
The MLPerf Mobile App will also run on a host of other devices via our default path, which uses TensorFlow Lite on Android devices.
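For readers unfamiliar with this fallback path, the sketch below shows roughly what running a model through the plain TensorFlow Lite interpreter looks like. It is illustrative only: the model filename is a placeholder, and the app itself drives TFLite through its native backends rather than Python.

```python
# Minimal sketch of the TensorFlow Lite fallback path (illustrative only;
# "mobilenet_v4.tflite" is a placeholder, not a file shipped with the app).
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="mobilenet_v4.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy image matching the model's expected input shape and dtype.
shape = input_details[0]["shape"]   # e.g. [1, 224, 224, 3]
dtype = input_details[0]["dtype"]
dummy = np.zeros(shape, dtype=dtype)

# Quantized models expose (scale, zero_point) via input_details[0]["quantization"];
# real inputs would be quantized accordingly before being set.
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])
print(scores.shape)
```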
Installation instructions
- Allow installation of unknown apps in Settings > Apps > Special Access.
- Download the MLPerf Mobile APK.
- Find the APK in 'Downloads' or file browser.
- Tap the APK file. Approve installation when prompted.
- Confirm 'Install'.
- Once installed, tap 'Open' to launch MLPerf Mobile.
MLPerf Mobile App v5.0.2
Description
MLPerf Mobile Inference Benchmark is an open-source benchmark suite for measuring how fast mobile devices (e.g. phones, laptops) can run AI tasks. The benchmark is supported by the MLPerf Mobile App, which currently supports Android and iOS. Please see the MLPerf Mobile Inference benchmark paper for a detailed description of the benchmarks along with the motivation and guiding principles behind the benchmark suite.
Release Notes
- Added support for Exynos 2500
Full Changelog: v5.0.1...v5.0.2
Supported OSs
The MLPerf Mobile app supports Android 11 (API level 30) and above, as well as iOS 13.1 or later.
Supported SOCs
Mediatek
- Dimensity 9000 series (9000/9000+/9200/9200+/9300/9300+/9400)
- Dimensity 8000 series (8000/8020/8050/8100/8200/8300)
Qualcomm
- Snapdragon 8 Elite
- Snapdragon 8 Gen 3
- Snapdragon 8s Gen 3
- Snapdragon 8 Gen 2
- Snapdragon 7 Gen 3
- Snapdragon 7s Gen 3
- Snapdragon 4 Gen 2
- Default fallback for all other Snapdragon mobile platforms
Samsung
- Exynos 2500
- Exynos 2400
- Exynos 2300
- Exynos 2200
- Exynos 2100
Google Pixel
- Pixel 10/9/8/7/6 and Pixel 10/9/8/7/6 Pro (Tensor G5/G4/G3/G2/G1 SoC)
The MLPerf Mobile App will also run on a host of other devices via our default path, which uses TensorFlow Lite on Android devices.
Installation instructions
- Allow installation of unknown apps in Settings > Apps > Special Access.
- Download the MLPerf Mobile APK.
- Find the APK in 'Downloads' or file browser.
- Tap the APK file. Approve installation when prompted.
- Confirm 'Install'.
- Once installed, tap 'Open' to launch MLPerf Mobile.
MLPerf Mobile App v5.0.1
Description
MLPerf Mobile Inference Benchmark is an open-source benchmark suite for measuring how fast mobile devices (e.g. phones, laptops) can run AI tasks. The benchmark is supported by the MLPerf Mobile App, which currently supports Android and iOS. Please see the MLPerf Mobile Inference benchmark paper for a detailed description of the benchmarks along with the motivation and guiding principles behind the benchmark suite.
Release Notes
This release includes important updates to device compatibility, backends, app configuration, and the UI.
✨ Features & Improvements
• Added support for the Mediatek mt6991 backend.
• Enhanced device compatibility details for the MLPerf app.
• Improved registration flow and introduced a simple resource flow.
• Made quick mode the default run mode.
• Increased cooldown slider max value to 30 minutes.
• Updated about, licensing, and privacy information.
• iOS app renamed to MLPerf Mobile and updated for Xcode 16.2.
• Increased Android targetSdkVersion to 35.
• Added support for models with different quantization.
• Improved resource UI and task screen usability: scrollable task descriptions, overflow fixes, and smoother iOS overscroll.
• Added a download button for missing resources.
🛠️ CI/CD & Testing
• Updated CI to use full commit SHAs in GitHub Actions.
• Integrated Samsung Galaxy Tab S10 Plus for MTK tests and Galaxy S25 Ultra (Android 15.0) for QTI tests.
• Updated expected accuracy for QTI backend.
• CI improvements: BrowserStack log downloads, quick mode testing, larger datasets for integration tests.
• Fixed BrowserStack failures on Qualcomm devices.
• Uploaded native debug symbols for improved debugging.
🐞 Fixes
• Fixed broken app drawer.
• Fixed iOS overscroll shaking issue on resource screen.
• Fixed task title overflow and improved scroll handling.
Full Changelog: v5.0...v5.0.1
Supported OSs
The MLPerf Mobile app supports Android 11 (API level 30) and above, as well as iOS 13.1 or later.
Supported SOCs
Mediatek
- Dimensity 9000 series (9000/9000+/9200/9200+/9300/9300+/9400)
- Dimensity 8000 series (8000/8020/8050/8100/8200/8300)
Qualcomm
- Snapdragon 8 Elite
- Snapdragon 8 Gen 3
- Snapdragon 8s Gen 3
- Snapdragon 8 Gen 2
- Snapdragon 7 Gen 3
- Snapdragon 7s Gen 3
- Snapdragon 4 Gen 2
- Default fallback for all other Snapdragon mobile platforms
Samsung
- Exynos 2400
- Exynos 2300
- Exynos 2200
- Exynos 2100
Google Pixel
- Pixel 10/9/8/7/6 and Pixel 10/9/8/7/6 Pro (Tensor G5/G4/G3/G2/G1 SoC)
The MLPerf Mobile App will also run on a host of other devices via our default path, which uses TensorFlow Lite on Android devices.
Installation instructions
- Allow installation of unknown apps in Settings > Apps > Special Access.
- Download the MLPerf Mobile APK.
- Find the APK in 'Downloads' or file browser.
- Tap the APK file. Approve installation when prompted.
- Confirm 'Install'.
- Once installed, tap 'Open' to launch MLPerf Mobile.
```
# SHA256
377dcbb8b0411a6497060d92700652a964a7b636caa44786a5dd5eb5e0a8173f 2025-09-09_mlperfbench-qsmgt-571.apk
```
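To verify the download before sideloading, compare the APK's SHA-256 digest against the value above. A minimal Python sketch (assuming the APK was saved under its published filename):

```python
# Verify the downloaded APK against the published SHA-256 (Python 3.11+).
import hashlib

EXPECTED = "377dcbb8b0411a6497060d92700652a964a7b636caa44786a5dd5eb5e0a8173f"
PATH = "2025-09-09_mlperfbench-qsmgt-571.apk"  # filename as published above

with open(PATH, "rb") as f:
    actual = hashlib.file_digest(f, "sha256").hexdigest()

print("checksum OK" if actual == EXPECTED else "MISMATCH - do not install")
```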
MLPerf Mobile App v5.0
Description
MLPerf Mobile Inference Benchmark is an open-source benchmark suite for measuring how fast mobile devices (e.g. phones, laptops) can run AI tasks. The benchmark is supported by the MLPerf Mobile App, which currently supports Android and iOS. Please see the MLPerf Mobile Inference benchmark paper for a detailed description of the benchmarks along with the motivation and guiding principles behind the benchmark suite.
Release Notes
New Features
- Download resources per task – Optimized resource management to improve performance and efficiency.
- Auto-remove resources that failed checksum validation to ensure data integrity.
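The auto-removal behavior can be pictured as the minimal sketch below. It is illustrative only: the app implements this logic in Dart, and the expected digests come from its resource configuration, which is not shown here.

```python
# Illustrative sketch of checksum validation with auto-removal.
# The real app is written in Dart; the expected digest is supplied
# by a resource manifest that this sketch does not model.
import hashlib
import os

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def validate_or_remove(path: str, expected_sha256: str) -> bool:
    """Keep the resource only if its checksum matches; delete it otherwise."""
    if sha256_of(path) == expected_sha256.lower():
        return True
    os.remove(path)  # corrupt/partial download: force a clean re-download
    return False
```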
Bug fixes and other changes
- Various UI layout fixes to enhance the user interface and usability.
- Resolved security hotspots reported by Sonar Cloud to enhance app security.
- Integrated CodeQL analysis for improved vulnerability detection.
- Improved build validation by checking .pbtxt files during the build step.
- Implemented a fallback mechanism to test the single-backend APK when the unified-backend APK fails.
Full Changelog: v4.1...v5.0
Supported OSs
The MLPerf Mobile app supports Android 11 (API level 30) and above, as well as iOS 13.1 or later.
Supported SOCs
Mediatek
- Dimensity 9000 series (9000/9000+/9200/9200+/9300/9300+)
- Dimensity 8000 series (8000/8020/8050/8100/8200)
Qualcomm
- Snapdragon 8 Elite
- Snapdragon 8 Gen 3
- Snapdragon 8s Gen 3
- Snapdragon 8 Gen 2
- Snapdragon 7 Gen 3
- Snapdragon 7s Gen 3
- Snapdragon 4 Gen 2
- Default fallback for all other Snapdragon mobile platforms
Samsung
- Exynos 2400
- Exynos 2300
- Exynos 2200
- Exynos 2100
- Exynos 990
Google Pixel
- Pixel 8 and Pixel 8 Pro (Tensor G3 SoC)
- Pixel 7 and Pixel 7 Pro (Tensor G2 SoC)
- Pixel 6 and Pixel 6 Pro (Tensor G1 SoC)
The MLPerf Mobile App will also run on a host of other devices via our default path, which uses TensorFlow Lite on Android devices.
Installation instructions
- Allow installation of unknown apps in Settings > Apps > Special Access.
- Download the MLPerf Mobile APK.
- Find the APK in 'Downloads' or file browser.
- Tap the APK file. Approve installation when prompted.
- Confirm 'Install'.
- Once installed, tap 'Open' to launch MLPerf Mobile.
```
# SHA256
2f7a0754f7405cac0cec856d838ce561f4007ebb4a6fb05cdb00cb4e3b5e7c0a mlperfbench-v5.0-qsmgt.apk
```
MLPerf Mobile App v4.1
Description
MLPerf Mobile Inference Benchmark is an open-source benchmark suite for measuring how fast mobile devices (e.g. phones, laptops) can run AI tasks. The benchmark is supported by the MLPerf Mobile App, which currently supports Android and iOS. Please see the MLPerf Mobile Inference benchmark paper for a detailed description of the benchmarks along with the motivation and guiding principles behind the benchmark suite.
Release Notes
New features
- Introduced a new benchmark task: Stable Diffusion
- Added checksum verification for datasets
- Provided support for utilizing multiple model files
- Offered the option to download all resources (models, datasets) on-demand
- Added support for multiple ML pipelines
Bug fixes and other changes
- Migrated the Flutter SDK from version 3.7 to version 3.19
- Implemented Firebase App Check
- Enhanced the performance of the MobileNetV4 Core ML model
- Resolved a crash issue caused by permission denied on the Android Play Store version
- Switched from GitHub to Cloudflare for hosting models and datasets
- Updated and improved the CI/CD system
Full Changelog: v4.0...v4.1
Supported SOCs
Mediatek
- Dimensity 9000 series (9000/9000+/9200/9200+/9300/9300+)
- Dimensity 8000 series (8000/8020/8050/8100/8200)
Qualcomm
- Snapdragon 8 Gen 3
- Snapdragon 8s Gen 3
- Snapdragon 7 Gen 3
- Snapdragon 8 Gen 2
- Snapdragon 7+ Gen 2
- Snapdragon 8+ Gen 1
- Snapdragon 8 Gen 1
- Snapdragon 7 Gen 1
- Snapdragon 888
- Snapdragon 778
Samsung
- Exynos 2400
- Exynos 2300
- Exynos 2200
- Exynos 2100
- Exynos 990
Google Pixel
- Pixel 8 and Pixel 8 Pro (Tensor G3 SoC)
- Pixel 7 and Pixel 7 Pro (Tensor G2 SoC)
- Pixel 6 and Pixel 6 Pro (Tensor G1 SoC)
The MLPerf Mobile App will also run on a host of other devices via our default path, which uses TensorFlow Lite on Android devices.
Installation instructions
- Allow installation of unknown apps in Settings > Apps > Special Access.
- Download the MLPerf Mobile APK.
- Find the APK in 'Downloads' or file browser.
- Tap the APK file. Approve installation when prompted.
- Confirm 'Install'.
- Once installed, tap 'Open' to launch MLPerf Mobile.
SHA256 of mlperfbench-v4.1-qsmgt.apk: 39d3d54c1c8b014aca6a47d49ada51a7c6342b581856e6af61457af98a7e11f9
MLPerf Mobile App v4.0
Description
MLPerf Mobile Inference Benchmark is an open-source benchmark suite for measuring how fast mobile devices (e.g. phones, laptops) can run AI tasks. The benchmark is supported by the MLPerf Mobile App, which currently supports Android and iOS. Please see the MLPerf Mobile Inference benchmark paper for a detailed description of the benchmarks along with the motivation and guiding principles behind the benchmark suite.
Release Notes
New features
- Added a new benchmark task: Image Classification v2
- Implemented a new design for various screens.
- Added the ability to manage your uploaded results directly in the app.
- Added a web interface to view uploaded results (https://mlperf-mobile.mlcommons.org)
Bug fixes and other changes
- Integrated with Firebase Crashlytics to identify and fix app crashes.
- Updated and improved the CI/CD system.
Full Changelog: v3.1...v4.0
Supported SOCs
Mediatek
- Dimensity 9000 series (9000/9000+/9200/9200+/9300/9300+)
- Dimensity 8000 series (8000/8020/8050/8100/8200)
Qualcomm
- Snapdragon 8 Gen 3
- Snapdragon 8s Gen 3
- Snapdragon 7 Gen 3
- Snapdragon 8 Gen 2
- Snapdragon 7+ Gen 2
- Snapdragon 8+ Gen 1
- Snapdragon 8 Gen 1
- Snapdragon 7 Gen 1
- Snapdragon 888
- Snapdragon 778
Samsung
- Exynos 2400
- Exynos 2300
- Exynos 2200
- Exynos 2100
- Exynos 990
Google Pixel
- Pixel 8 and Pixel 8 Pro (Tensor G3 SoC)
- Pixel 7 and Pixel 7 Pro (Tensor G2 SoC)
- Pixel 6 and Pixel 6 Pro (Tensor G1 SoC)
MLPerf Mobile 4.0 will also run on a host of other devices via our default path, which uses TensorFlow Lite on Android devices.
Installation instructions
- Allow installation of unknown apps in Settings > Apps > Special Access.
- Download the MLPerf Mobile APK.
- Find the APK in 'Downloads' or file browser.
- Tap the APK file. Approve installation when prompted.
- Confirm 'Install'.
- Once installed, tap 'Open' to launch MLPerf Mobile.
SHA256 of mlperfbench-v4.0-qsmgt.apk: 816ceb4358b5dc5c90f3e67f72cbadd94531f265799403124a1b1dfe0b56a9a9
MLPerf Mobile App v3.1
Description
MLPerf Mobile Inference Benchmark is an open-source benchmark suite for measuring how fast mobile devices (e.g. phones, laptops) can run AI tasks. The benchmark is supported by the MLPerf Mobile App, which currently supports Android and iOS. Please see the MLPerf Mobile Inference benchmark paper for a detailed description of the benchmarks along with the motivation and guiding principles behind the benchmark suite.
Release Notes
New features
- Added result upload and download.
- Added result filtering and sorting in the history screen.
- Added delegate choices for multiple backends (see the sketch after this list).
- Qualcomm now supports the default backend.
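For reference, delegate selection in the TensorFlow Lite Python API looks roughly like the sketch below. The delegate library name is a placeholder for whichever vendor delegate a backend ships; the app itself wires delegates through its native backends, not through Python.

```python
# Rough sketch of choosing a TFLite delegate (library names are placeholders;
# the app selects delegates via its native backends, not this API).
from typing import Optional

import tensorflow as tf

def make_interpreter(model_path: str, delegate_lib: Optional[str] = None):
    delegates = []
    if delegate_lib:  # e.g. a vendor-provided GPU/NPU delegate .so
        delegates.append(tf.lite.experimental.load_delegate(delegate_lib))
    return tf.lite.Interpreter(model_path=model_path,
                               experimental_delegates=delegates)

# Default path: no delegate, CPU execution.
cpu = make_interpreter("model.tflite")
# Accelerated path: load a hypothetical vendor delegate library.
# npu = make_interpreter("model.tflite", "libvendor_delegate.so")
```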
Bug fixes and other changes
- Resolved an issue where the cooldown pause was not cancelled correctly.
- Added PNG support and PNG-based SNUSR dataset.
- Added support for pbtxt instead of header file for backend settings.
- Updated menu navigation in the main screen for a cleaner UI.
- Added a max_duration flag (see the loadgen sketch after this list).
- Updated the loadgen to the latest v3.1 version.
- Migrated from Flutter v3.3.5 to v3.7.6.
- Updated and improved the CI/CD system.
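To illustrate what the new flag controls: LoadGen's TestSettings can cap a run's wall-clock time via max_duration_ms. Below is a minimal sketch using the mlperf_loadgen Python bindings; the SUT and QSL callbacks are trivial stand-ins, not the app's real backend.

```python
# Minimal sketch of loadgen's max duration setting (mlperf_loadgen bindings).
# The SUT/QSL callbacks below are trivial stand-ins, not the app's backend.
import mlperf_loadgen as lg

def issue_queries(samples):
    # Answer every query immediately with an empty response.
    lg.QuerySamplesComplete(
        [lg.QuerySampleResponse(s.id, 0, 0) for s in samples])

def flush_queries():
    pass

settings = lg.TestSettings()
settings.scenario = lg.TestScenario.SingleStream
settings.mode = lg.TestMode.PerformanceOnly
settings.max_duration_ms = 10 * 60 * 1000  # cap the run at 10 minutes

sut = lg.ConstructSUT(issue_queries, flush_queries)
qsl = lg.ConstructQSL(1024, 128, lambda s: None, lambda s: None)
lg.StartTest(sut, qsl, settings)
lg.DestroyQSL(qsl)
lg.DestroySUT(sut)
```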
Full Changelog: v3.0...v3.1
Supported SOCs
Samsung
- Exynos 990
- Exynos 2100
- Exynos 2200
- Exynos 2300
- Exynos 2400
Qualcomm
- Snapdragon 8 Gen 2
- Snapdragon 7+ Gen 2
- Snapdragon 8+ Gen 1
- Snapdragon 8 Gen 1
- Snapdragon 7 Gen 1
- Snapdragon 888
- Snapdragon 778
- Default backend
Mediatek
- Dimensity 9000 series (9000/9000+/9200/9200+)
- Dimensity 8000 series (8000/8020/8050/8100/8200)
Pixel
- Pixel 6 and Pixel 6 Pro (Tensor G1 SoC)
- Pixel 7 and Pixel 7 Pro (Tensor G2 SoC)
Installation instructions
- Allow installation of unknown apps in Settings > Apps > Special Access.
- Download the MLPerf Mobile APK.
- Find the APK in 'Downloads' or file browser.
- Tap the APK file. Approve installation when prompted.
- Confirm 'Install'.
- Once installed, tap 'Open' to launch MLPerf Mobile.
SHA256 of mlperfbench-v3.1-qsmgt.apk: 92f6cfdad9fa7c3ec5c1728c70107fd38d07e03e59c372089026c0c75789b3e5
MLPerf Mobile App v3.0
Description
MLPerf Mobile Inference Benchmark is an open-source benchmark suite for measuring how fast mobile devices (e.g. phones, laptops) can run AI tasks. The benchmark is supported by the MLPerf Mobile App, which currently supports Android and iOS. Please see the MLPerf Mobile Inference benchmark paper for a detailed description of the benchmarks along with the motivation and guiding principles behind the benchmark suite.
Release Notes
New features
- Added a new Super Resolution task with the SNUSR dataset.
- Added a new Core ML backend for the iOS platform.
- Included additional device information in the result log (SoC, model, platform, etc.)
- Added an Accuracy Only test mode.
Bug fixes and other changes
- Fixed a bug where the accuracy value was not valid.
- Fixed a bug where the integration test would crash.
- Fixed a bug where the dataset info was not saved correctly.
- Fixed a bug where the backend loading error would be ignored.
- Added missing permission to access media on Android 13.
- Updated the loadgen to the latest v2.1 version.
- Migrated from Flutter v2.10.5 to v3.3.5.
Supported SOCs
Samsung
- Exynos 990
- Exynos 2100
- Exynos 2200
Qualcomm
- Snapdragon 8 Gen 2
- Snapdragon 7+ Gen 2
- Snapdragon 8+ Gen 1
- Snapdragon 8 Gen 1
- Snapdragon 7 Gen 1
- Snapdragon 888
- Snapdragon 778
Mediatek
- Dimensity 9000 series (9000/9000+/9200/9200+)
- Dimensity 8000 series (8000/8020/8050/8100/8200)
Pixel
- Pixel 6 and Pixel 6 Pro (Tensor G1 SoC)
- Pixel 7 and Pixel 7 Pro (Tensor G2 SoC)
Installation instructions
- Allow installation of unknown apps in Settings > Apps > Special Access.
- Download the MLPerf Mobile APK.
- Find the APK in 'Downloads' or file browser.
- Tap the APK file. Approve installation when prompted.
- Confirm 'Install'.
- Once installed, tap 'Open' to launch MLPerf Mobile.
SHA256 of mlperfbench-v3.0-qsmgt.apk: 195a110ab318f153631eb904abcffdda8c291c4e1ad9413ac68a5d447a1d0a1f
MLPerf Windows Command Line App v3.0
Installation instructions:
https://github.com/mlcommons/mobile_app_open/blob/submission-v3.0/mobile_back_qti/README.md
Supported SOC:
Snapdragon 8CX Gen 3 (Windows on Arm)
SHA256 checksum:
2023-03-13_mlperfbench_windows_qualcomm.zip : 7ab12364cfa9d9aea90af51d48242afdfd3e222fc729cc155ecc8d3151ed7ea7
v2.1 Flutter Android
A Flutter-based Android APK with the following vendor backends:
- Q - QTI (SNPE SDK 1.65.0.3676 was used in this build)
- S - Samsung SLSI
- M - MediaTek
- G - Google Pixel
- T - TFLite
and tasks:
- Image Classification
- Object Detection
- v2.0 Image Segmentation
- Language Understanding
- Image Classification (Offline)