Commit 6036d14

Redirecting (beta) Efficient mobile interpreter in Android and iOS to ExecuTorch.
1 parent 09ce1af commit 6036d14

File tree

1 file changed: +4 -195 lines

recipes_source/mobile_interpreter.rst

Lines changed: 4 additions & 195 deletions
@@ -1,201 +1,10 @@
New file content (the 4 added lines, plus the unchanged title):

(beta) Efficient mobile interpreter in Android and iOS
==================================================================

PyTorch Mobile is no longer actively supported. Please check out ExecuTorch.

Redirecting in 3 seconds...

.. raw:: html

   <meta http-equiv="Refresh" content="3; url='https://pytorch.org/executorch/stable/index.html'" />

Removed content (the original tutorial):

**Author**: `Chen Lai <https://github.com/cccclai>`_, `Martin Yuan <https://github.com/iseeyuan>`_

.. warning::
   PyTorch Mobile is no longer actively supported. Please check out `ExecuTorch <https://pytorch.org/executorch-overview>`_, PyTorch’s all-new on-device inference library. You can also review our new documentation to learn more about how to build `iOS <https://pytorch.org/executorch/stable/demo-apps-ios.html>`_ and `Android <https://pytorch.org/executorch/stable/demo-apps-android.html>`_ apps with ExecuTorch.

Introduction
------------
This tutorial introduces the steps to use PyTorch's efficient interpreter on iOS and Android. We will be using an Image Segmentation demo application as an example.

This application will take advantage of the pre-built interpreter libraries available for Android and iOS, which can be used directly with Maven (Android) and CocoaPods (iOS). It is important to note that the pre-built libraries are available for simplicity, but further size optimization can be achieved by utilizing PyTorch's custom build capabilities.

.. note:: If you see the error message `PytorchStreamReader failed locating file bytecode.pkl: file not found ()`, you are likely using a TorchScript model that requires the PyTorch JIT interpreter (a version of our PyTorch interpreter that is not as size-efficient). In order to leverage our efficient interpreter, please regenerate the model by running: `module._save_for_lite_interpreter(${model_path})`.

   - If `bytecode.pkl` is missing, the model was likely generated with the API `module.save(${model_path})`.
   - The API `_load_for_lite_interpreter(${model_path})` can be helpful to validate the model with the efficient mobile interpreter (see the sketch below).
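As a concrete illustration of the note above, here is a minimal sketch (not part of the original recipe; it reuses the ``deeplabv3_resnet50`` model prepared later in this tutorial) that regenerates a model for the lite interpreter and validates that it loads:

.. code-block:: python

    # Illustrative sketch only: regenerate and validate a lite-interpreter model.
    import torch
    from torch.jit.mobile import _load_for_lite_interpreter

    # Script the model and save it in the lite-interpreter format,
    # which writes bytecode.pkl into the saved archive.
    model = torch.hub.load('pytorch/vision:v0.7.0', 'deeplabv3_resnet50', pretrained=True)
    model.eval()
    scripted_module = torch.jit.script(model)
    scripted_module._save_for_lite_interpreter("deeplabv3_scripted.ptl")

    # Loading it back with the mobile interpreter confirms the file is usable.
    lite_module = _load_for_lite_interpreter("deeplabv3_scripted.ptl")
    print(type(lite_module))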
Android
-------------------

Get the Image Segmentation demo app in Android: https://github.com/pytorch/android-demo-app/tree/master/ImageSegmentation

1. **Prepare model**: Prepare the mobile interpreter version of the model by running the script below to generate the scripted models `deeplabv3_scripted.pt` and `deeplabv3_scripted.ptl`:
.. code:: python

    import torch
    from torch.utils.mobile_optimizer import optimize_for_mobile

    model = torch.hub.load('pytorch/vision:v0.7.0', 'deeplabv3_resnet50', pretrained=True)
    model.eval()

    scripted_module = torch.jit.script(model)
    # Export the full JIT version of the model (not compatible with the mobile interpreter); kept here for comparison
    scripted_module.save("deeplabv3_scripted.pt")
    # Export the mobile interpreter version of the model (compatible with the mobile interpreter)
    optimized_scripted_module = optimize_for_mobile(scripted_module)
    optimized_scripted_module._save_for_lite_interpreter("deeplabv3_scripted.ptl")
2. **Use the PyTorch Android library in the ImageSegmentation app**: Update the `dependencies` part of ``ImageSegmentation/app/build.gradle`` to:

.. code:: gradle

    repositories {
        maven {
            url "https://oss.sonatype.org/content/repositories/snapshots"
        }
    }

    dependencies {
        implementation 'androidx.appcompat:appcompat:1.2.0'
        implementation 'androidx.constraintlayout:constraintlayout:2.0.2'
        testImplementation 'junit:junit:4.12'
        androidTestImplementation 'androidx.test.ext:junit:1.1.2'
        androidTestImplementation 'androidx.test.espresso:espresso-core:3.3.0'
        implementation 'org.pytorch:pytorch_android_lite:1.9.0'
        implementation 'org.pytorch:pytorch_android_torchvision:1.9.0'

        implementation 'com.facebook.fbjni:fbjni-java-only:0.0.3'
    }
3. **Update model loader API**: Update ``ImageSegmentation/app/src/main/java/org/pytorch/imagesegmentation/MainActivity.java`` by:

3.1 Adding the new import: `import org.pytorch.LiteModuleLoader`

3.2 Replacing the way the PyTorch lite model is loaded:

.. code:: java

    // mModule = Module.load(MainActivity.assetFilePath(getApplicationContext(), "deeplabv3_scripted.pt"));
    mModule = LiteModuleLoader.load(MainActivity.assetFilePath(getApplicationContext(), "deeplabv3_scripted.ptl"));

4. **Test app**: Build and run the `ImageSegmentation` app in Android Studio.
iOS
-------------------

Get the ImageSegmentation demo app in iOS: https://github.com/pytorch/ios-demo-app/tree/master/ImageSegmentation

1. **Prepare model**: Same as Android.

2. **Build the project with CocoaPods and the prebuilt interpreter**: Update the `Podfile` and run `pod install`:
.. code-block:: podfile

    target 'ImageSegmentation' do
      # Comment the next line if you don't want to use dynamic frameworks
      use_frameworks!

      # Pods for ImageSegmentation
      pod 'LibTorch_Lite', '~>1.9.0'
    end
3. **Update library and API**

3.1 Update ``TorchModule.mm``: To use the prebuilt interpreter library, import `<Libtorch-Lite.h>` in ``TorchModule.mm``:

.. code-block:: objective-c

    #import <Libtorch-Lite.h>
    // If it's built from source with Xcode, comment out the line above
    // and use the following headers instead
    // #include <torch/csrc/jit/mobile/import.h>
    // #include <torch/csrc/jit/mobile/module.h>
    // #include <torch/script.h>
.. code-block:: objective-c

    @implementation TorchModule {
     @protected
      // torch::jit::script::Module _impl;
      torch::jit::mobile::Module _impl;
    }

    - (nullable instancetype)initWithFileAtPath:(NSString*)filePath {
      self = [super init];
      if (self) {
        try {
          _impl = torch::jit::_load_for_mobile(filePath.UTF8String);
          // _impl = torch::jit::load(filePath.UTF8String);
          // _impl.eval();
        } catch (const std::exception& exception) {
          NSLog(@"%s", exception.what());
          return nil;
        }
      }
      return self;
    }
3.2 Update ``ViewController.swift``:

.. code-block:: swift

    // if let filePath = Bundle.main.path(forResource:
    //     "deeplabv3_scripted", ofType: "pt"),
    //     let module = TorchModule(fileAtPath: filePath) {
    //     return module
    // } else {
    //     fatalError("Can't find the model file!")
    // }
    if let filePath = Bundle.main.path(forResource:
        "deeplabv3_scripted", ofType: "ptl"),
        let module = TorchModule(fileAtPath: filePath) {
        return module
    } else {
        fatalError("Can't find the model file!")
    }

4. Build and test the app in Xcode.
How to use mobile interpreter + custom build
---------------------------------------------

A custom PyTorch interpreter library can be created to reduce binary size by including only the operators needed by the model. To do that, follow these steps:

1. To dump the operators in your model, say `deeplabv3_scripted`, run the following lines of Python code:
.. code-block:: python

    # Dump list of operators used by deeplabv3_scripted:
    import torch, yaml
    model = torch.jit.load('deeplabv3_scripted.ptl')
    ops = torch.jit.export_opnames(model)
    with open('deeplabv3_scripted.yaml', 'w') as output:
        yaml.dump(ops, output)
In the snippet above, you first need to load the ScriptModule. Then, use `export_opnames` to return a list of operator names of the ScriptModule and its submodules. Lastly, save the result in a yaml file. The yaml file can be generated for any PyTorch version 1.4.0 or above; you can check your installed version via the value of `torch.__version__`.
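A minimal version check (not part of the original recipe):

.. code-block:: python

    import torch
    # The operator list dump requires PyTorch 1.4.0 or later.
    print(torch.__version__)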
2. To run the build script locally with the prepared yaml list of operators, pass the yaml file generated in the last step into the environment variable SELECTED_OP_LIST. Also specify BUILD_PYTORCH_MOBILE=1 in the arguments, as well as the platform/architecture type.
**iOS**: Take the simulator build for example, the command should be:

.. code-block:: bash

    SELECTED_OP_LIST=deeplabv3_scripted.yaml BUILD_PYTORCH_MOBILE=1 IOS_PLATFORM=SIMULATOR ./scripts/build_ios.sh

**Android**: Take the x86 build for example, the command should be:

.. code-block:: bash

    SELECTED_OP_LIST=deeplabv3_scripted.yaml ./scripts/build_pytorch_android.sh x86
Conclusion
----------

In this tutorial, we demonstrated how to use PyTorch's efficient mobile interpreter in an Android and iOS app.

We walked through an Image Segmentation example to show how to dump the model, build a custom torch library from source, and use the new API to run the model.

Our efficient mobile interpreter is still under development, and we will continue improving its size in the future. Note, however, that the APIs are subject to change in future versions.

Thanks for reading! As always, we welcome any feedback, so please create an issue `here <https://github.com/pytorch/pytorch/issues>`_ if you have any.
Learn More
----------

- To learn more about PyTorch Mobile, please refer to the `PyTorch Mobile Home Page <https://pytorch.org/mobile/home/>`_
- To learn more about Image Segmentation, please refer to the `Image Segmentation DeepLabV3 on Android Recipe <https://pytorch.org/tutorials/beginner/deeplabv3_on_android.html>`_