
Commit c7657f5

Merge pull request #1478 from pareenaverma/content_review
Android Selfie LP review
2 parents b2a5fdf + 72039b8 commit c7657f5

File tree

8 files changed

+78
-100
lines changed


content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/2-app-scaffolding.md

Lines changed: 6 additions & 6 deletions
Original file line number | Diff line number | Diff line change
@@ -1,5 +1,5 @@
11
---
2-
title: Scaffold a new Android project
2+
title: Create a new Android project
33
weight: 2
44

55
### FIXED, DO NOT MODIFY
@@ -12,7 +12,7 @@ This learning path will teach you to architect an app following [modern Android
1212

1313
Download and install the latest version of [Android Studio](https://developer.android.com/studio/) on your host machine.
1414

15-
This learning path's instructions and screenshots are taken on macOS with Apple Silicon, but you may choose any of the supported hardware systems as described [here](https://developer.android.com/studio/install).
15+
The instructions for this learning path were tested on an Apple Silicon host machine running macOS, but you may choose any of the supported hardware systems as described [here](https://developer.android.com/studio/install).
1616

1717
Upon first installation, open Android Studio and proceed with the default or recommended settings. Accept license agreements and let Android Studio download all the required assets.
1818

@@ -26,12 +26,12 @@ Before you proceed to coding, here are some tips that might come handy:
2626

2727
## Create a new Android project
2828

29-
1. Navigate to **File > New > New Project...**.
29+
1. Navigate to File > New > New Project...
3030

31-
2. Select **Empty Views Activity** in **Phone and Tablet** galary as shown below, then click **Next**.
31+
2. Select Empty Views Activity in the Phone and Tablet gallery as shown below, then click Next.
3232
![Empty Views Activity](images/2/empty%20project.png)
3333

34-
3. Proceed with a cool project name and default configurations as shown below. Make sure that **Language** is set to **Kotlin**, and that **Build configuration language** is set to **Kotlin DSL**.
34+
3. Enter a project name and use the default configurations as shown below. Make sure that Language is set to Kotlin, and that Build configuration language is set to Kotlin DSL.
3535
![Project configuration](images/2/project%20config.png)
3636

3737
### Introduce CameraX dependencies
@@ -194,4 +194,4 @@ private fun bindCameraUseCases() {
194194
}
195195
```
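For context, the rest of `bindCameraUseCases()` (elided by the diff hunk above) follows the standard CameraX recipe. The sketch below is illustrative only and is not this repository's exact code: it assumes a view-binding property `viewBinding.viewFinder` of type `PreviewView` and that the method lives inside an `AppCompatActivity` (a `LifecycleOwner`).

```kotlin
// Hedged sketch of a minimal CameraX preview binding for a selfie app.
// Assumes viewBinding.viewFinder is a PreviewView; identifiers are illustrative.
private fun bindCameraUseCases() {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
    cameraProviderFuture.addListener({
        val cameraProvider = cameraProviderFuture.get()

        // Build a Preview use case and attach it to the PreviewView's surface.
        val preview = Preview.Builder().build().also {
            it.setSurfaceProvider(viewBinding.viewFinder.surfaceProvider)
        }

        // A selfie app wants the front-facing camera.
        val selector = CameraSelector.DEFAULT_FRONT_CAMERA

        // Release any previously bound use cases, then bind to the lifecycle.
        cameraProvider.unbindAll()
        cameraProvider.bindToLifecycle(this, selector, preview)
    }, ContextCompat.getMainExecutor(this))
}
```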
196196

197-
In the next chapter, we will build and run the app to make sure the camera works well.
197+
In the next section, you will build and run the app to make sure the camera works well.

content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/3-camera-permission.md

Lines changed: 11 additions & 11 deletions
Original file line number | Diff line number | Diff line change
@@ -1,5 +1,5 @@
11
---
2-
title: Handle camera permission
2+
title: Handle camera permissions
33
weight: 3
44

55
### FIXED, DO NOT MODIFY
@@ -8,18 +8,18 @@ layout: learningpathall
88

99
## Run the app on your device
1010

11-
1. Connect your Android device to your computer via a USB **data** cable. If this is your first time running and debugging Android apps, follow [this guide](https://developer.android.com/studio/run/device#setting-up) and double check this checklist:
11+
1. Connect your Android device to your computer via a USB data cable. If this is your first time running and debugging Android apps, follow [this guide](https://developer.android.com/studio/run/device#setting-up) and double check this checklist:
1212

13-
1. You have enabled **USB debugging** on your Android device following [this doc](https://developer.android.com/studio/debug/dev-options#Enable-debugging).
13+
1. You have enabled USB debugging on your Android device following [this doc](https://developer.android.com/studio/debug/dev-options#Enable-debugging).
1414

15-
2. You have confirmed by tapping "OK" on your Android device when an **"Allow USB debugging"** dialog pops up, and checked "Always allow from this computer".
15+
2. You have confirmed by tapping "OK" on your Android device when an "Allow USB debugging" dialog pops up, and checked "Always allow from this computer".
1616

1717
![Allow USB debugging dialog](https://ftc-docs.firstinspires.org/en/latest/_images/AllowUSBDebugging.jpg)
1818

1919

20-
2. Make sure your device model name and SDK version correctly show up on the top right toolbar. Click the **"Run"** button to build and run, as described [here](https://developer.android.com/studio/run).
20+
2. Make sure your device model name and SDK version correctly show up on the top right toolbar. Click the "Run" button to build and run the app.
2121

22-
3. After waiting for a while, you should be seeing success notification in Android Studio and the app showing up on your Android device.
22+
3. After a while, you should see a success notification in Android Studio, and the app will appear on your Android device.
2323

2424
4. However, the app shows only a black screen, while printing error messages in your [Logcat](https://developer.android.com/tools/logcat) that look like this:
2525

@@ -30,11 +30,11 @@ layout: learningpathall
3030
2024-11-20 11:43:03.408 2709-3807 PackageManager pid-2709 E Permission android.permission.CAMERA isn't requested by package com.example.holisticselfiedemo
3131
```
3232

33-
5. Worry not. This is expected behavior because we haven't correctly configured this app's [permissions](https://developer.android.com/guide/topics/permissions/overview) yet, therefore Android OS restricts this app's access to camera features due to privacy reasons.
33+
5. Do not worry: this is expected behavior, because you haven't yet configured this app's [permissions](https://developer.android.com/guide/topics/permissions/overview). Android OS restricts this app's access to camera features for privacy reasons.
3434

3535
## Request camera permission at runtime
3636

37-
1. Navigate to `manifest.xml` in your `app` subproject's `src/main` path. Declare camera hardware and permission by inserting the following lines into the `<manifest>` element. Make sure it's **outside** and **above** `<application>` element.
37+
1. Navigate to `AndroidManifest.xml` in your `app` subproject's `src/main` path. Declare the camera hardware and permission by inserting the following lines into the `<manifest>` element. Make sure they are declared outside of, and above, the `<application>` element.
3838

3939
```xml
4040
<uses-feature
@@ -107,12 +107,12 @@ layout: learningpathall
107107

108108
## Verify camera permission
109109

110-
1. Rebuild and run the app. Now you should be seeing a dialog pops up requesting camera permissions!
110+
1. Rebuild and run the app. Now you should see a dialog pop up requesting camera permissions!
111111

112-
2. Tap `Allow` or `While using the app` (depending on your Android OS versions), then you should be seeing your own face in the camera preview. Good job!
112+
2. Tap `Allow` or `While using the app` (depending on your Android OS version). You should then see your own face in the camera preview. Good job!
113113

114114
{{% notice Tip %}}
115115
Sometimes you might need to restart the app to observe the permission change take effect.
116116
{{% /notice %}}
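For reference, the runtime-request wiring that this section verifies (elided by the diff hunks above) is typically built on the Activity Result API. The sketch below is illustrative, not this repository's exact code; member names such as `ensureCameraPermission` are hypothetical, and it assumes a `setupCamera()` method exists in `MainActivity`.

```kotlin
// Hedged sketch: requesting the camera permission at runtime inside
// MainActivity using the Activity Result API. Names are hypothetical.
private val requestCameraPermission =
    registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
        if (granted) {
            setupCamera() // safe to start the camera now
        } else {
            Toast.makeText(this, "Camera permission is required", Toast.LENGTH_LONG).show()
        }
    }

private fun ensureCameraPermission() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
        == PackageManager.PERMISSION_GRANTED
    ) {
        setupCamera() // already granted on a previous launch
    } else {
        requestCameraPermission.launch(Manifest.permission.CAMERA)
    }
}
```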
117117

118-
In the next chapter, we will introduce MediaPipe vision solutions.
118+
In the next section, you will learn how to integrate MediaPipe vision solutions.

content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/4-introduce-mediapipe.md

Lines changed: 20 additions & 20 deletions
Original file line number | Diff line number | Diff line change
@@ -8,9 +8,9 @@ layout: learningpathall
88

99
[MediaPipe Solutions](https://ai.google.dev/edge/mediapipe/solutions/guide) provides a suite of libraries and tools for you to quickly apply artificial intelligence (AI) and machine learning (ML) techniques in your applications.
1010

11-
MediaPipe Tasks provides the core programming interface of the MediaPipe Solutions suite, including a set of libraries for deploying innovative ML solutions onto devices with a minimum of code. It supports multiple platforms, including Android, Web / JavaScript, Python, etc.
11+
MediaPipe Tasks provides the core programming interface of the MediaPipe Solutions suite, including a set of libraries for deploying innovative ML solutions onto devices with a minimum of code. It supports multiple platforms, including Android, Web (JavaScript), and Python.
1212

13-
## Introduce MediaPipe dependencies
13+
## Add MediaPipe dependencies
1414

1515
1. Navigate to `libs.versions.toml` and append the following line to the end of the `[versions]` section. This defines the version of the MediaPipe library you will be using.
1616

@@ -19,57 +19,57 @@ mediapipe-vision = "0.10.15"
1919
```
2020

2121
{{% notice Note %}}
22-
Please stick with this version and do not use newer versions due to bugs and unexpected behaviors.
22+
Please use this exact version; newer versions introduce bugs and unexpected behavior.
2323
{{% /notice %}}
2424

25-
2. Append the following lines to the end of `[libraries]` section. This declares MediaPipe's vision dependency.
25+
2. Append the following lines to the end of the `[libraries]` section. This declares MediaPipe's vision dependency:
2626

2727
```toml
2828
mediapipe-vision = { group = "com.google.mediapipe", name = "tasks-vision", version.ref = "mediapipe-vision" }
2929
```
3030

31-
3. Navigate to `build.gradle.kts` in your project's `app` directory, then insert the following line into `dependencies` block, ideally between `implementation` and `testImplementation`.
31+
3. Navigate to `build.gradle.kts` in your project's `app` directory, then insert the following line into the `dependencies` block, between `implementation` and `testImplementation`.
3232

3333
```kotlin
3434
implementation(libs.mediapipe.vision)
3535
```
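After this step, the `dependencies` block should contain the new entry alongside the template's defaults. Roughly — the surrounding entries are illustrative and depend on the template version you started from:

```kotlin
dependencies {
    implementation(libs.androidx.core.ktx)  // from the project template (illustrative)
    implementation(libs.mediapipe.vision)   // the newly added MediaPipe vision tasks library
    testImplementation(libs.junit)          // from the project template (illustrative)
}
```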
3636

3737
## Prepare model asset bundles
3838

39-
In this app, we will be using MediaPipe's [Face Landmark Detection](https://ai.google.dev/edge/mediapipe/solutions/vision/face_landmarker) and [Gesture Recognizer](https://ai.google.dev/edge/mediapipe/solutions/vision/gesture_recognizer) solutions, which requires their model asset bundle files to initialize.
39+
In this app, you will use MediaPipe's [Face Landmark Detection](https://ai.google.dev/edge/mediapipe/solutions/vision/face_landmarker) and [Gesture Recognizer](https://ai.google.dev/edge/mediapipe/solutions/vision/gesture_recognizer) solutions, which require their model asset bundle files to initialize.
4040

4141
Choose one of the two options below that aligns best with your learning needs.
4242

43-
### Basic approach: manual downloading
43+
### Basic approach: manual download
4444

45-
Simply download the following two files, then move them into the default asset directory: `app/src/main/assets`.
45+
Download the following two files, then move them into the default asset directory: `app/src/main/assets`.
4646

47-
```
47+
```console
4848
https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task
4949

5050
https://storage.googleapis.com/mediapipe-models/gesture_recognizer/gesture_recognizer/float16/1/gesture_recognizer.task
5151
```
5252

5353
{{% notice Tip %}}
54-
You might need to create the `assets` directory if not exist.
54+
You might need to create the `assets` directory if it does not exist.
5555
{{% /notice %}}
5656

5757
### Advanced approach: configure prebuild download tasks
5858

59-
Gradle doesn't come with a convenient [Task](https://docs.gradle.org/current/userguide/tutorial_using_tasks.html) type to manage downloads, therefore we will introduce [gradle-download-task](https://github.com/michel-kraemer/gradle-download-task) dependency.
59+
Gradle doesn't come with a convenient [Task](https://docs.gradle.org/current/userguide/tutorial_using_tasks.html) type to manage downloads, so you will use the [gradle-download-task](https://github.com/michel-kraemer/gradle-download-task) dependency.
6060

61-
1. Again, navigate to `libs.versions.toml`. Append `download = "5.6.0"` to `[versions]` section, and `de-undercouch-download = { id = "de.undercouch.download", version.ref = "download" }` to `[plugins]` section.
61+
1. Navigate to `libs.versions.toml`. Append `download = "5.6.0"` to the `[versions]` section, and `de-undercouch-download = { id = "de.undercouch.download", version.ref = "download" }` to the `[plugins]` section.
6262

63-
2. Again, navigate to `build.gradle.kts` in your project's `app` directory and append `alias(libs.plugins.de.undercouch.download)` to the `plugins` block. This enables the aforementioned _Download_ task plugin in this `app` subproject.
63+
2. Navigate to `build.gradle.kts` in your project's `app` directory and append `alias(libs.plugins.de.undercouch.download)` to the `plugins` block. This enables the _Download_ task plugin in this `app` subproject.
6464

65-
4. Insert the following lines between `plugins` block and `android` block to define the constant values, including: asset directory path and the URLs for both models.
65+
3. Insert the following lines between the `plugins` block and the `android` block to define the constant values: the asset directory path and the URLs for both models.
6666
```kotlin
6767
val assetDir = "$projectDir/src/main/assets"
6868
val gestureTaskUrl = "https://storage.googleapis.com/mediapipe-models/gesture_recognizer/gesture_recognizer/float16/1/gesture_recognizer.task"
6969
val faceTaskUrl = "https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task"
7070
```
7171

72-
5. Insert `import de.undercouch.gradle.tasks.download.Download` into **the top of this file**, then append the following code to **the end of this file**, which hooks two _Download_ tasks to be executed before `preBuild`:
72+
4. Insert `import de.undercouch.gradle.tasks.download.Download` at the top of this file, then append the following code to the end of this file, which hooks two _Download_ tasks to be executed before `preBuild`:
7373

7474
```kotlin
7575
tasks.register<Download>("downloadGestureTaskAsset") {
@@ -97,11 +97,11 @@ tasks.named("preBuild") {
9797
Refer to [this section](2-app-scaffolding.md#enable-view-binding) if you need help.
9898
{{% /notice %}}
9999

100-
2. Now you should be seeing both model asset bundles in your `assets` directory, as shown below:
100+
2. Now you should see both model asset bundles in your `assets` directory, as shown below:
101101

102102
![model asset bundles](images/4/model%20asset%20bundles.png)
103103

104-
3. Now you are ready to import MediaPipe's Face Landmark Detection and Gesture Recognizer into the project. Actually, we have already implemented the code below for you based on [MediaPipe's sample code](https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples). Simply create a new file `HolisticRecognizerHelper.kt` placed in the source directory along with `MainActivity.kt`, then copy paste the code below into it.
104+
3. You are now ready to import MediaPipe's Face Landmark Detection and Gesture Recognizer into the project. The code below is adapted from [MediaPipe's sample code](https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples). Create a new file named `HolisticRecognizerHelper.kt` in the same source directory as `MainActivity.kt`, then copy and paste the code below into it.
105105

106106
```kotlin
107107
package com.example.holisticselfiedemo
@@ -426,9 +426,9 @@ data class GestureResultBundle(
426426
```
427427

428428
{{% notice Info %}}
429-
In this learning path we are only configuring the MediaPipe vision solutions to recognize one person with at most two hands in the camera.
429+
In this learning path you are only configuring the MediaPipe vision solutions to recognize one person with at most two hands in the camera.
430430

431-
If you'd like to experiment with more people, simply change the `FACES_COUNT` constant to be your desired value.
431+
If you'd like to experiment with more people, change the `FACES_COUNT` constant to be your desired value.
432432
{{% /notice %}}
433433

434-
In the next chapter, we will connect the dots from this helper class to the UI layer via a ViewModel.
434+
In the next section, you will connect the dots from this helper class to the UI layer via a ViewModel.

content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/6-flow-data-to-view-1.md

Lines changed: 7 additions & 7 deletions
Original file line number | Diff line number | Diff line change
@@ -8,7 +8,7 @@ layout: learningpathall
88

99
[SharedFlow](https://developer.android.com/kotlin/flow/stateflow-and-sharedflow#sharedflow) and [StateFlow](https://developer.android.com/kotlin/flow/stateflow-and-sharedflow#stateflow) are [Kotlin Flow](https://developer.android.com/kotlin/flow) APIs that enable Flows to optimally emit state updates and emit values to multiple consumers.
1010

11-
In this learning path, you will have the opportunity to experiment with both `SharedFlow` and `StateFlow`. This chapter will focus on SharedFlow while the next chapter will focus on StateFlow.
11+
In this learning path, you will experiment with both `SharedFlow` and `StateFlow`. This section will focus on SharedFlow, while the next section will focus on StateFlow.
1212

1313
`SharedFlow` is a general-purpose, hot flow that can emit values to multiple subscribers. It is highly configurable, allowing you to set the replay cache size, buffer capacity, etc.
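As a standalone illustration of those configuration knobs (not code from this app), a `MutableSharedFlow` from `kotlinx.coroutines` can be set up like this; the class and event type here are hypothetical:

```kotlin
// Standalone illustration of SharedFlow configuration; names are hypothetical.
import kotlinx.coroutines.channels.BufferOverflow
import kotlinx.coroutines.flow.MutableSharedFlow
import kotlinx.coroutines.flow.asSharedFlow

class UiEventBus {
    // replay = 1: late subscribers immediately receive the latest event.
    // extraBufferCapacity + DROP_OLDEST: emitters never suspend; stale
    // events are discarded when consumers fall behind.
    private val _uiEvents = MutableSharedFlow<String>(
        replay = 1,
        extraBufferCapacity = 8,
        onBufferOverflow = BufferOverflow.DROP_OLDEST
    )
    val uiEvents = _uiEvents.asSharedFlow() // read-only view for subscribers

    fun post(event: String) {
        _uiEvents.tryEmit(event) // non-suspending emit
    }
}
```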
1414

@@ -54,9 +54,9 @@ This `SharedFlow` is initialized with a replay size of `1`. This retains the mos
5454

5555
## Visualize face and gesture results
5656

57-
To visualize the results of Face Landmark Detection and Gesture Recognition tasks, we have prepared the following code for you based on [MediaPipe's samples](https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples).
57+
To visualize the results of the Face Landmark Detection and Gesture Recognition tasks, follow the instructions in this section. The code is based on [MediaPipe's samples](https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples).
5858

59-
1. Create a new file named `FaceLandmarkerOverlayView.kt` and fill in the content below:
59+
1. Create a new file named `FaceLandmarkerOverlayView.kt` and copy the content below:
6060

6161
```kotlin
6262
/*
@@ -180,7 +180,7 @@ class FaceLandmarkerOverlayView(context: Context?, attrs: AttributeSet?) :
180180
```
181181

182182

183-
2. Create a new file named `GestureOverlayView.kt` and fill in the content below:
183+
2. Create a new file named `GestureOverlayView.kt` and copy the content below:
184184

185185
```kotlin
186186
/*
@@ -302,7 +302,7 @@ class GestureOverlayView(context: Context?, attrs: AttributeSet?) :
302302

303303
## Update UI in the view controller
304304

305-
1. Add the above two overlay views to `activity_main.xml` layout file:
305+
1. Add the two overlay views to the `activity_main.xml` layout file:
306306

307307
```xml
308308
<com.example.holisticselfiedemo.FaceLandmarkerOverlayView
@@ -316,7 +316,7 @@ class GestureOverlayView(context: Context?, attrs: AttributeSet?) :
316316
android:layout_height="match_parent" />
317317
```
318318

319-
2. Collect the new SharedFlow `uiEvents` in `MainActivity` by appending the code below to the end of `onCreate` method, **below** `setupCamera()` method call.
319+
2. Collect the new SharedFlow `uiEvents` in `MainActivity` by appending the code below to the end of the `onCreate` method, below the `setupCamera()` method call.
320320

321321
```kotlin
322322
lifecycleScope.launch {
@@ -363,7 +363,7 @@ class GestureOverlayView(context: Context?, attrs: AttributeSet?) :
363363
}
364364
```
365365

366-
4. Build and run the app again. Now you should be seeing face and gesture overlays on top of the camera preview as shown below. Good job!
366+
4. Build and run the app again. Now you should see face and gesture overlays on top of the camera preview as shown below. Good job!
367367

368368
![overlay views](images/6/overlay%20views.png)
369369
