`content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/2-app-scaffolding.md` (1 addition, 1 deletion)
@@ -12,7 +12,7 @@ This learning path will teach you to architect an app following [modern Android
Download and install the latest version of [Android Studio](https://developer.android.com/studio/) on your host machine.
- This learning path's instructions and screenshots are taken on macOS with Apple Silicon, but you may choose any of the supported hardware systems as described [here](https://developer.android.com/studio/install).
+ The instructions for this learning path were tested on an Apple Silicon host machine running macOS, but you may choose any of the supported hardware systems as described [here](https://developer.android.com/studio/install).
Upon first installation, open Android Studio and proceed with the default or recommended settings. Accept license agreements and let Android Studio download all the required assets.
`content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/3-camera-permission.md`
`content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/4-introduce-mediapipe.md` (20 additions, 20 deletions)
@@ -8,9 +8,9 @@ layout: learningpathall
[MediaPipe Solutions](https://ai.google.dev/edge/mediapipe/solutions/guide) provides a suite of libraries and tools for you to quickly apply artificial intelligence (AI) and machine learning (ML) techniques in your applications.
- MediaPipe Tasks provides the core programming interface of the MediaPipe Solutions suite, including a set of libraries for deploying innovative ML solutions onto devices with a minimum of code. It supports multiple platforms, including Android, Web / JavaScript, Python, etc.
+ MediaPipe Tasks provides the core programming interface of the MediaPipe Solutions suite, including a set of libraries for deploying innovative ML solutions onto devices with minimal code. It supports multiple platforms, including Android, Web (JavaScript), and Python.
- ## Introduce MediaPipe dependencies
+ ## Add MediaPipe dependencies
1. Navigate to `libs.versions.toml` and append the following line to the end of `[versions]` section. This defines the version of MediaPipe library we will be using.
@@ -19,57 +19,57 @@ mediapipe-vision = "0.10.15"
```
{{% notice Note %}}
- Please stick with this version and do not use newer versions due to bugs and unexpected behaviors.
+ Use this version and do not use newer versions, as they introduce bugs and unexpected behavior.
{{% /notice %}}
- 2. Append the following lines to the end of `[libraries]` section. This declares MediaPipe's vision dependency.
+ 2. Append the following lines to the end of the `[libraries]` section. This declares MediaPipe's vision dependency:
```toml
mediapipe-vision = { group = "com.google.mediapipe", name = "tasks-vision", version.ref = "mediapipe-vision" }
```
- 3. Navigate to `build.gradle.kts` in your project's `app` directory, then insert the following line into `dependencies` block, ideally between `implementation` and `testImplementation`.
+ 3. Navigate to `build.gradle.kts` in your project's `app` directory, then insert the following line into the `dependencies` block, between `implementation` and `testImplementation`:
```kotlin
implementation(libs.mediapipe.vision)
```
## Prepare model asset bundles
- In this app, we will be using MediaPipe's [Face Landmark Detection](https://ai.google.dev/edge/mediapipe/solutions/vision/face_landmarker) and [Gesture Recognizer](https://ai.google.dev/edge/mediapipe/solutions/vision/gesture_recognizer) solutions, which requires their model asset bundle files to initialize.
+ In this app, you will use MediaPipe's [Face Landmark Detection](https://ai.google.dev/edge/mediapipe/solutions/vision/face_landmarker) and [Gesture Recognizer](https://ai.google.dev/edge/mediapipe/solutions/vision/gesture_recognizer) solutions, which require their model asset bundle files to initialize.
Choose one of the two options below that aligns best with your learning needs.
- ### Basic approach: manual downloading
+ ### Basic approach: manual download
- Simply download the following two files, then move them into the default asset directory: `app/src/main/assets`.
+ Download the following two files, then move them into the default asset directory: `app/src/main/assets`.
- Gradle doesn't come with a convenient [Task](https://docs.gradle.org/current/userguide/tutorial_using_tasks.html) type to manage downloads, therefore we will introduce [gradle-download-task](https://github.com/michel-kraemer/gradle-download-task) dependency.
+ Gradle doesn't come with a convenient [Task](https://docs.gradle.org/current/userguide/tutorial_using_tasks.html) type to manage downloads, so you will use the [gradle-download-task](https://github.com/michel-kraemer/gradle-download-task) dependency.
- 1. Again, navigate to `libs.versions.toml`. Append `download = "5.6.0"` to `[versions]` section, and `de-undercouch-download = { id = "de.undercouch.download", version.ref = "download" }` to `[plugins]` section.
+ 1. Navigate to `libs.versions.toml`. Append `download = "5.6.0"` to the `[versions]` section, and `de-undercouch-download = { id = "de.undercouch.download", version.ref = "download" }` to the `[plugins]` section.
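Put together, the appended entries would sit in `libs.versions.toml` roughly like this sketch (existing entries elided; only the two lines named in the step are taken from the text):

```toml
[versions]
# ... existing version entries ...
download = "5.6.0"

[plugins]
# ... existing plugin entries ...
de-undercouch-download = { id = "de.undercouch.download", version.ref = "download" }
```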
- 2. Again, navigate to `build.gradle.kts` in your project's `app` directory and append `alias(libs.plugins.de.undercouch.download)` to the `plugins` block. This enables the aforementioned _Download_ task plugin in this `app` subproject.
+ 2. Navigate to `build.gradle.kts` in your project's `app` directory and append `alias(libs.plugins.de.undercouch.download)` to the `plugins` block. This enables the _Download_ task plugin in this `app` subproject.
- 4. Insert the following lines between `plugins` block and `android` block to define the constant values, including: asset directory path and the URLs for both models.
+ 3. Insert the following lines between the `plugins` block and the `android` block to define constants: the asset directory path and the URLs for both models.
```kotlin
val assetDir = "$projectDir/src/main/assets"
val gestureTaskUrl = "https://storage.googleapis.com/mediapipe-models/gesture_recognizer/gesture_recognizer/float16/1/gesture_recognizer.task"
val faceTaskUrl = "https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task"
```
- 5. Insert `import de.undercouch.gradle.tasks.download.Download` into **the top of this file**, then append the following code to **the end of this file**, which hooks two _Download_ tasks to be executed before `preBuild`:
+ 4. Insert `import de.undercouch.gradle.tasks.download.Download` at the top of this file, then append the following code to the end of this file, which hooks two _Download_ tasks to run before `preBuild`:
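The hooked tasks themselves are collapsed in this diff view. Under the gradle-download-task DSL and the constants defined above, they might look roughly like the following sketch; the task names are illustrative assumptions, not necessarily the ones used in the learning path:

```kotlin
// Sketch only: two Download tasks (names are illustrative) wired to run
// before preBuild, using the gradle-download-task DSL and the constants above.
val downloadGestureTask by tasks.registering(Download::class) {
    src(gestureTaskUrl)
    dest("$assetDir/gesture_recognizer.task")
    overwrite(false) // skip the download if the file already exists
}

val downloadFaceTask by tasks.registering(Download::class) {
    src(faceTaskUrl)
    dest("$assetDir/face_landmarker.task")
    overwrite(false)
}

tasks.named("preBuild") {
    dependsOn(downloadGestureTask, downloadFaceTask)
}
```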
- 3. Now you are ready to import MediaPipe's Face Landmark Detection and Gesture Recognizer into the project. Actually, we have already implemented the code below for you based on [MediaPipe's sample code](https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples). Simply create a new file `HolisticRecognizerHelper.kt` placed in the source directory along with `MainActivity.kt`, then copy paste the code below into it.
+ 3. You are ready to import MediaPipe's Face Landmark Detection and Gesture Recognizer into the project. The code below is already implemented for you, based on [MediaPipe's sample code](https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples). Create a new file `HolisticRecognizerHelper.kt` in the same source directory as `MainActivity.kt`, then copy and paste the code below into it.
```kotlin
package com.example.holisticselfiedemo
@@ -426,9 +426,9 @@ data class GestureResultBundle(
```
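The bulk of `HolisticRecognizerHelper.kt` is collapsed in this diff view. As a rough, hedged sketch of the kind of setup such a helper performs with the MediaPipe Tasks vision API (the function name and option values here are illustrative assumptions, not the file's actual contents):

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.core.BaseOptions
import com.google.mediapipe.tasks.vision.core.RunningMode
import com.google.mediapipe.tasks.vision.facelandmarker.FaceLandmarker
import com.google.mediapipe.tasks.vision.gesturerecognizer.GestureRecognizer

// Illustrative sketch only: creates the two recognizers from the model asset
// bundles downloaded earlier, configured for a live camera stream.
fun createRecognizers(context: Context): Pair<FaceLandmarker, GestureRecognizer> {
    // Face Landmarker: one face, live-stream mode requires a result listener.
    val faceLandmarker = FaceLandmarker.createFromOptions(
        context,
        FaceLandmarker.FaceLandmarkerOptions.builder()
            .setBaseOptions(
                BaseOptions.builder().setModelAssetPath("face_landmarker.task").build()
            )
            .setRunningMode(RunningMode.LIVE_STREAM)
            .setNumFaces(1)
            .setResultListener { result, _ -> /* forward to the UI layer */ }
            .build()
    )
    // Gesture Recognizer: up to two hands, same running mode.
    val gestureRecognizer = GestureRecognizer.createFromOptions(
        context,
        GestureRecognizer.GestureRecognizerOptions.builder()
            .setBaseOptions(
                BaseOptions.builder().setModelAssetPath("gesture_recognizer.task").build()
            )
            .setRunningMode(RunningMode.LIVE_STREAM)
            .setNumHands(2)
            .setResultListener { result, _ -> /* forward to the UI layer */ }
            .build()
    )
    return faceLandmarker to gestureRecognizer
}
```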
{{% notice Info %}}
- In this learning path we are only configuring the MediaPipe vision solutions to recognize one person with at most two hands in the camera.
+ In this learning path you are only configuring the MediaPipe vision solutions to recognize one person with at most two hands in the camera.
- If you'd like to experiment with more people, simply change the `FACES_COUNT` constant to be your desired value.
+ If you'd like to experiment with more people, change the `FACES_COUNT` constant to your desired value.
{{% /notice %}}
- In the next chapter, we will connect the dots from this helper class to the UI layer via a ViewModel.
+ In the next section, you will connect the dots from this helper class to the UI layer via a ViewModel.
`content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md`