Commit 8147bfb (parent b2a5fdf)

Selfie Android LP review

4 files changed: +24 -142 lines

content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/2-app-scaffolding.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -12,7 +12,7 @@ This learning path will teach you to architect an app following [modern Android
 
 Download and install the latest version of [Android Studio](https://developer.android.com/studio/) on your host machine.
 
-This learning path's instructions and screenshots are taken on macOS with Apple Silicon, but you may choose any of the supported hardware systems as described [here](https://developer.android.com/studio/install).
+The instructions for this learning path were tested on an Apple Silicon host machine running macOS, but you may choose any of the supported hardware systems as described [here](https://developer.android.com/studio/install).
 
 Upon first installation, open Android Studio and proceed with the default or recommended settings. Accept license agreements and let Android Studio download all the required assets.
 
```

content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/3-camera-permission.md

Lines changed: 0 additions & 118 deletions
This file was deleted.

content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/4-introduce-mediapipe.md

Lines changed: 20 additions & 20 deletions
```diff
@@ -8,9 +8,9 @@ layout: learningpathall
 
 [MediaPipe Solutions](https://ai.google.dev/edge/mediapipe/solutions/guide) provides a suite of libraries and tools for you to quickly apply artificial intelligence (AI) and machine learning (ML) techniques in your applications.
 
-MediaPipe Tasks provides the core programming interface of the MediaPipe Solutions suite, including a set of libraries for deploying innovative ML solutions onto devices with a minimum of code. It supports multiple platforms, including Android, Web / JavaScript, Python, etc.
+MediaPipe Tasks provides the core programming interface of the MediaPipe Solutions suite, including a set of libraries for deploying innovative ML solutions onto devices with a minimum of code. It supports multiple platforms, including Android, Web, JavaScript, Python, etc.
 
-## Introduce MediaPipe dependencies
+## Add MediaPipe dependencies
 
 1. Navigate to `libs.versions.toml` and append the following line to the end of `[versions]` section. This defines the version of MediaPipe library we will be using.
 
```
````diff
@@ -19,57 +19,57 @@ mediapipe-vision = "0.10.15"
 ```
 
 {{% notice Note %}}
-Please stick with this version and do not use newer versions due to bugs and unexpected behaviors.
+Please use this version and do not use newer versions, as newer versions introduce bugs and unexpected behavior.
 {{% /notice %}}
 
-2. Append the following lines to the end of `[libraries]` section. This declares MediaPipe's vision dependency.
+2. Append the following lines to the end of the `[libraries]` section. This declares MediaPipe's vision dependency:
 
 ```toml
 mediapipe-vision = { group = "com.google.mediapipe", name = "tasks-vision", version.ref = "mediapipe-vision" }
 ```
 
-3. Navigate to `build.gradle.kts` in your project's `app` directory, then insert the following line into `dependencies` block, ideally between `implementation` and `testImplementation`.
+3. Navigate to `build.gradle.kts` in your project's `app` directory, then insert the following line into the `dependencies` block, between `implementation` and `testImplementation`.
 
 ```kotlin
 implementation(libs.mediapipe.vision)
 ```
 
 ## Prepare model asset bundles
 
-In this app, we will be using MediaPipe's [Face Landmark Detection](https://ai.google.dev/edge/mediapipe/solutions/vision/face_landmarker) and [Gesture Recognizer](https://ai.google.dev/edge/mediapipe/solutions/vision/gesture_recognizer) solutions, which requires their model asset bundle files to initialize.
+In this app, you will use MediaPipe's [Face Landmark Detection](https://ai.google.dev/edge/mediapipe/solutions/vision/face_landmarker) and [Gesture Recognizer](https://ai.google.dev/edge/mediapipe/solutions/vision/gesture_recognizer) solutions, which require their model asset bundle files to initialize.
 
 Choose one of the two options below that aligns best with your learning needs.
 
-### Basic approach: manual downloading
+### Basic approach: manual download
 
-Simply download the following two files, then move them into the default asset directory: `app/src/main/assets`.
+Download the following two files, then move them into the default asset directory: `app/src/main/assets`.
 
-```
+```console
 https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task
 
 https://storage.googleapis.com/mediapipe-models/gesture_recognizer/gesture_recognizer/float16/1/gesture_recognizer.task
 ```
 
 {{% notice Tip %}}
-You might need to create the `assets` directory if not exist.
+You might need to create the `assets` directory if it does not exist.
 {{% /notice %}}
````
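The manual-download step in the hunk above can also be done from a terminal. A minimal sketch run from the project root (the directory path and model URLs come from the learning path text; the use of `curl` here is an illustration, not part of the commit):

```shell
# Create the default asset directory if missing, then fetch both
# MediaPipe model asset bundles into it.
ASSET_DIR="app/src/main/assets"
mkdir -p "$ASSET_DIR"
curl -sSL -o "$ASSET_DIR/face_landmarker.task" \
  "https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task"
curl -sSL -o "$ASSET_DIR/gesture_recognizer.task" \
  "https://storage.googleapis.com/mediapipe-models/gesture_recognizer/gesture_recognizer/float16/1/gesture_recognizer.task"
# List the directory to confirm the bundles are in place.
ls "$ASSET_DIR"
```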

````diff
@@ -57,19 +57,19 @@
 ### Advanced approach: configure prebuild download tasks
 
-Gradle doesn't come with a convenient [Task](https://docs.gradle.org/current/userguide/tutorial_using_tasks.html) type to manage downloads, therefore we will introduce [gradle-download-task](https://github.com/michel-kraemer/gradle-download-task) dependency.
+Gradle doesn't come with a convenient [Task](https://docs.gradle.org/current/userguide/tutorial_using_tasks.html) type to manage downloads, so you will use the [gradle-download-task](https://github.com/michel-kraemer/gradle-download-task) dependency.
 
-1. Again, navigate to `libs.versions.toml`. Append `download = "5.6.0"` to `[versions]` section, and `de-undercouch-download = { id = "de.undercouch.download", version.ref = "download" }` to `[plugins]` section.
+1. Navigate to `libs.versions.toml`. Append `download = "5.6.0"` to the `[versions]` section, and `de-undercouch-download = { id = "de.undercouch.download", version.ref = "download" }` to the `[plugins]` section.
 
-2. Again, navigate to `build.gradle.kts` in your project's `app` directory and append `alias(libs.plugins.de.undercouch.download)` to the `plugins` block. This enables the aforementioned _Download_ task plugin in this `app` subproject.
+2. Navigate to `build.gradle.kts` in your project's `app` directory and append `alias(libs.plugins.de.undercouch.download)` to the `plugins` block. This enables the _Download_ task plugin in this `app` subproject.
 
-4. Insert the following lines between `plugins` block and `android` block to define the constant values, including: asset directory path and the URLs for both models.
+3. Insert the following lines between the `plugins` block and the `android` block to define the constant values, including the asset directory path and the URLs for both models.
 ```kotlin
 val assetDir = "$projectDir/src/main/assets"
 val gestureTaskUrl = "https://storage.googleapis.com/mediapipe-models/gesture_recognizer/gesture_recognizer/float16/1/gesture_recognizer.task"
 val faceTaskUrl = "https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task"
 ```
 
-5. Insert `import de.undercouch.gradle.tasks.download.Download` into **the top of this file**, then append the following code to **the end of this file**, which hooks two _Download_ tasks to be executed before `preBuild`:
+4. Insert `import de.undercouch.gradle.tasks.download.Download` at the top of this file, then append the following code to the end of this file, which hooks two _Download_ tasks to run before `preBuild`:
 
 ```kotlin
 tasks.register<Download>("downloadGestureTaskAsset") {
````
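The diff truncates right after `tasks.register<Download>(...)`, so the task bodies are not shown. For reference, a minimal Gradle Kotlin DSL sketch of what the two _Download_ tasks and the `preBuild` hook can look like with gradle-download-task (the `src`/`dest`/`overwrite` wiring below is an assumption based on that plugin's API, not the commit's code):

```kotlin
// Sketch only: the commit's actual task bodies are truncated in this diff.
// src/dest/overwrite are properties of gradle-download-task's Download type;
// assetDir, gestureTaskUrl, and faceTaskUrl are the constants defined above.
tasks.register<Download>("downloadGestureTaskAsset") {
    src(gestureTaskUrl)
    dest("$assetDir/gesture_recognizer.task")
    overwrite(false) // skip the download if the file is already present
}

tasks.register<Download>("downloadFaceTaskAsset") {
    src(faceTaskUrl)
    dest("$assetDir/face_landmarker.task")
    overwrite(false)
}

// Run both downloads before any build work starts.
tasks.named("preBuild") {
    dependsOn("downloadGestureTaskAsset", "downloadFaceTaskAsset")
}
```

With `overwrite(false)`, repeated builds stay fast because the models are fetched only once.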
````diff
@@ -97,11 +97,11 @@ tasks.named("preBuild") {
 Refer to [this section](2-app-scaffolding.md#enable-view-binding) if you need help.
 {{% /notice %}}
 
-2. Now you should be seeing both model asset bundles in your `assets` directory, as shown below:
+2. Now you should see both model asset bundles in your `assets` directory, as shown below:
 
 ![model asset bundles](images/4/model%20asset%20bundles.png)
 
-3. Now you are ready to import MediaPipe's Face Landmark Detection and Gesture Recognizer into the project. Actually, we have already implemented the code below for you based on [MediaPipe's sample code](https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples). Simply create a new file `HolisticRecognizerHelper.kt` placed in the source directory along with `MainActivity.kt`, then copy paste the code below into it.
+3. You are now ready to import MediaPipe's Face Landmark Detection and Gesture Recognizer into the project. The code below is based on [MediaPipe's sample code](https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples). Create a new file named `HolisticRecognizerHelper.kt` in the source directory alongside `MainActivity.kt`, then copy the code below into it.
 
 ```kotlin
 package com.example.holisticselfiedemo
````
````diff
@@ -426,9 +426,9 @@ data class GestureResultBundle(
 ```
 
 {{% notice Info %}}
-In this learning path we are only configuring the MediaPipe vision solutions to recognize one person with at most two hands in the camera.
+In this learning path, you configure the MediaPipe vision solutions to recognize only one person, with at most two hands, in the camera.
 
-If you'd like to experiment with more people, simply change the `FACES_COUNT` constant to be your desired value.
+If you'd like to experiment with more people, change the `FACES_COUNT` constant to your desired value.
 {{% /notice %}}
 
-In the next chapter, we will connect the dots from this helper class to the UI layer via a ViewModel.
+In the next section, you will connect this helper class to the UI layer via a ViewModel.
````
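The `FACES_COUNT` constant mentioned in the notice lives in `HolisticRecognizerHelper.kt`, whose body this diff does not show. As a purely illustrative sketch of how such constants typically reach MediaPipe's builders (`setNumFaces`, `setNumHands`, and `BaseOptions` are standard MediaPipe Tasks vision APIs; the constant names and wiring here are assumptions, not the commit's code):

```kotlin
import com.google.mediapipe.tasks.core.BaseOptions
import com.google.mediapipe.tasks.vision.facelandmarker.FaceLandmarker
import com.google.mediapipe.tasks.vision.gesturerecognizer.GestureRecognizer

// Assumed constants: one person, at most two hands, per the notice above.
const val FACES_COUNT = 1
const val HANDS_COUNT = 2

// Sketch: the count limits feed straight into the options builders, so
// raising FACES_COUNT is the only change needed to track more people.
val faceOptions = FaceLandmarker.FaceLandmarkerOptions.builder()
    .setBaseOptions(BaseOptions.builder().setModelAssetPath("face_landmarker.task").build())
    .setNumFaces(FACES_COUNT)
    .build()

val gestureOptions = GestureRecognizer.GestureRecognizerOptions.builder()
    .setBaseOptions(BaseOptions.builder().setModelAssetPath("gesture_recognizer.task").build())
    .setNumHands(HANDS_COUNT)
    .build()
```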

content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -28,9 +28,9 @@ author_primary: Han Yin
 skilllevels: Beginner
 subjects: ML
 armips:
-- ARM Cortex-A
-- ARM Cortex-X
-- ARM Mali GPU
+- Cortex-A
+- Cortex-X
+- Mali GPU
 tools_software_languages:
 - mobile
 - Android Studio
```
