`content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/6-flow-data-to-view-1.md`
[SharedFlow](https://developer.android.com/kotlin/flow/stateflow-and-sharedflow#sharedflow) and [StateFlow](https://developer.android.com/kotlin/flow/stateflow-and-sharedflow#stateflow) are [Kotlin Flow](https://developer.android.com/kotlin/flow) APIs that enable Flows to optimally emit state updates and emit values to multiple consumers.
In this learning path, you will experiment with both `SharedFlow` and `StateFlow`. This section focuses on `SharedFlow`; the next focuses on `StateFlow`.
`SharedFlow` is a general-purpose, hot flow that can emit values to multiple subscribers. It is highly configurable, allowing you to set the replay cache size, buffer capacity, and more.
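For illustration, a holder exposing such a configurable `SharedFlow` might look like the following minimal sketch; the class name, the `String` event type, and the replay cache of 1 are assumptions for the example, not the sample app's exact code:

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

// Sketch (names and event type are assumptions): a hot flow configured with a
// replay cache of 1, so a new collector immediately receives the latest event.
class EventHolder {
    private val _uiEvents = MutableSharedFlow<String>(replay = 1)
    val uiEvents: SharedFlow<String> = _uiEvents.asSharedFlow()

    suspend fun send(event: String) = _uiEvents.emit(event)
}
```

Because the replay cache holds only one value, emitting twice before anyone collects leaves just the most recent event for late collectors.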
## Visualize face and gesture results
To visualize the results of the Face Landmark Detection and Gesture Recognition tasks, follow the instructions in this section, which are based on [MediaPipe's samples](https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples).
1. Create a new file named `FaceLandmarkerOverlayView.kt` and copy the content below:
```kotlin
/* ... */
```
2. Create a new file named `GestureOverlayView.kt` and copy the content below:
```kotlin
/* ... */
```
## Update UI in the view controller
1. Add the two overlay views to the `activity_main.xml` layout file:
```xml
    ...
    android:layout_height="match_parent" />
```
2. Collect the new `SharedFlow` `uiEvents` in `MainActivity` by appending the code below to the end of the `onCreate` method, after the `setupCamera()` call.
```kotlin
lifecycleScope.launch {
    // ...
}
```
4. Build and run the app again. Now you should see face and gesture overlays on top of the camera preview as shown below. Good job!
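Outside of Android, the collection performed in step 2 boils down to the following pattern; the names here are assumptions, but the behavior shown — a late collector immediately receiving the cached latest event — is exactly what the replay cache provides:

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

// A coroutine subscribes to a replay-1 SharedFlow after events were already
// emitted, and still receives the most recent one right away.
fun collectLatestEvent(): List<String> = runBlocking {
    val uiEvents = MutableSharedFlow<String>(replay = 1)
    uiEvents.emit("stale result")
    uiEvents.emit("latest result") // replaces the cached value (replay = 1)

    val received = mutableListOf<String>()
    val job = launch {
        uiEvents.collect { received.add(it) } // replayed value arrives first
    }
    delay(50) // give the collector time to receive the replayed value
    job.cancel()
    received
}
```

Only `"latest result"` is delivered, since the replay cache holds a single value and `"stale result"` was already evicted.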
`content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/7-flow-data-to-view-2.md`
```kotlin
val gestureOk: StateFlow<Boolean> = _gestureOk
```
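The `val gestureOk: StateFlow<Boolean> = _gestureOk` declaration pairs a private mutable flow with a public read-only view. A self-contained sketch of the pattern (the class name, initial value, and updater method are assumptions for illustration):

```kotlin
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.flow.asStateFlow

class GestureState {
    // Private mutable flow: only this class can update it.
    private val _gestureOk = MutableStateFlow(false)
    // Public immutable view: the UI can read and collect, but not mutate.
    val gestureOk: StateFlow<Boolean> = _gestureOk.asStateFlow()

    fun onGestureResult(thumbUp: Boolean) {
        _gestureOk.value = thumbUp
    }
}
```

This keeps all state mutation inside the view model while the view layer only observes.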
2. Append the following constant values to `MainViewModel`'s companion object. In this demo app, you will focus on smiling faces and thumb-up gestures.
2. In the same directory, create a new resource file named `dimens.xml` if it does not exist. This file defines layout-related dimension values:
```xml
<?xml version="1.0" encoding="utf-8"?>
<resources>
    <!-- ... -->
</resources>
```
3. Navigate to the `activity_main.xml` layout file and add the following code to the root `ConstraintLayout`, after the two overlay views you added in the previous section.
```xml
<androidx.appcompat.widget.SwitchCompat
    ... />
```
4. Finally, navigate to `MainActivity.kt` and append the following code inside the `repeatOnLifecycle(Lifecycle.State.RESUMED)` block, after the `launch` block you added in the previous section. This ensures each of the three parallel `launch` blocks runs in its own coroutine without blocking the others.
```kotlin
launch {
    // ...
}
```
5. Build and run the app again. Now you should see two switches on the bottom of the screen as shown below, which turn on and off while you smile and show thumb-up gestures. Good job!

## Recap on SharedFlow vs StateFlow
This app uses `SharedFlow` for dispatching overlay views' UI events without mandating a specific stateful model, which avoids redundant computation. Meanwhile, it uses `StateFlow` for dispatching condition switches' UI states, which prevents duplicated emission and consequent UI updates.
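The difference can be demonstrated with a small standalone sketch (names assumed): pushing the same Boolean three times yields a single new emission from a `StateFlow`, but three deliveries from a `SharedFlow`:

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

// Collect a StateFlow and a SharedFlow side by side while pushing the same
// value three times; the StateFlow conflates duplicates, the SharedFlow does not.
fun compareFlows(): Pair<List<Boolean>, List<Boolean>> = runBlocking {
    val state = MutableStateFlow(false)
    val shared = MutableSharedFlow<Boolean>()

    val stateSeen = mutableListOf<Boolean>()
    val sharedSeen = mutableListOf<Boolean>()
    val j1 = launch { state.collect { stateSeen.add(it) } }
    val j2 = launch { shared.collect { sharedSeen.add(it) } }
    yield() // let both collectors subscribe

    repeat(3) {
        state.value = true // same value: re-emitted only once
        shared.emit(true)  // delivered every time
        yield()
    }
    delay(50)
    j1.cancel(); j2.cancel()
    stateSeen to sharedSeen
}
```

Here `stateSeen` ends up with two entries (the initial value plus one change), while `sharedSeen` collects all three emissions.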
Here's an overview of the differences between `SharedFlow` and `StateFlow`:
`content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/8-mediate-flows.md`
---
weight: 8
layout: learningpathall
---
Now you have two independent Flows indicating the conditions of face landmark detection and gesture recognition. The simplest multimodality strategy is to combine multiple source Flows into a single output Flow, which emits consolidated values as the single source of truth for its observers (collectors) to carry out corresponding actions.
## Combine two Flows into a single Flow
{{% notice Note %}}
Kotlin Flow's [`combine`](https://kotlinlang.org/api/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines.flow/combine.html) transformation is equivalent to ReactiveX's [`combineLatest`](https://reactivex.io/documentation/operators/combinelatest.html). It combines emissions from multiple observables, so that each time any observable emits, the combinator function is called with the latest values from all sources.
You might need to add `@OptIn(FlowPreview::class)` annotation since `sample` is still in preview.
{{% /notice %}}
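As an illustration of the `combine` + `sample` pipeline described in this section, the following minimal non-Android sketch merges two Boolean condition flows into a single `Unit` "pulse" flow; the function and parameter names and the 500 ms sample period are assumptions, not the sample app's exact code:

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

// Sketch: fire a Unit pulse only while both conditions hold.
@OptIn(FlowPreview::class)
fun capturePulse(faceOk: Flow<Boolean>, gestureOk: Flow<Boolean>): Flow<Unit> =
    combine(faceOk, gestureOk) { face, gesture -> face && gesture }
        .sample(500)    // inspect the latest combined value every 500 ms
        .filter { it }  // pass only `true`
        .map { }        // discard the Boolean; collectors just need a pulse
```

Because `sample` only inspects the latest value at each tick, both conditions must hold `true` for roughly one sample period before a pulse is emitted.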
You may also opt to use `SharedFlow<Boolean>` and remove the `map { }` operation. Just note that when you collect this Flow, it doesn't matter whether the emitted `Boolean` values are true or false. In fact, they are always `true` due to the `filter` operation.
## Configure ImageCapture use case
```kotlin
    // ...
    .build()
```
3. Append this use case to `bindToLifecycle`:
```kotlin
camera = cameraProvider.bindToLifecycle(
    // ...
)
```
## Execute photo capture with ImageCapture
1. Append the following constant values to `MainActivity`'s companion object. They define the file name format and the media type:
```kotlin
// Image capture
// ...
```
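For illustration, such companion-object constants might take the following shape; the exact format string, MIME value, and helper function here are assumptions, not the sample's definitions:

```kotlin
import java.text.SimpleDateFormat
import java.util.Date
import java.util.Locale

// Assumed values for illustration only: a timestamp-based file name
// pattern and a JPEG media type.
const val FILENAME_FORMAT = "yyyy-MM-dd-HH-mm-ss-SSS"
const val MIME_TYPE = "image/jpeg"

// Hypothetical helper: derive a unique photo file name from the current time.
fun newPhotoFileName(): String =
    SimpleDateFormat(FILENAME_FORMAT, Locale.US).format(Date()) + ".jpg"
```

Timestamp-based names keep captures unique without any extra bookkeeping.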
## Add a flash effect upon capturing photo
1. Navigate to `activity_main.xml` layout file and insert the following `View` element between the two overlay views and two `SwitchCompat` views. This is essentially just a white blank view covering the whole surface.
```xml
<View
    ... />
```
3. Invoke the `showFlashEffect()` method in `executeCapturePhoto()`, before invoking `imageCapture.takePicture()`.
4. Build and run the app. Try holding a smiling face while presenting a thumb-up gesture. When both switches turn on and stay stable for approximately half a second, the screen should flash white, and a photo should be captured and appear in your album. This may take a few seconds depending on your Android device's hardware. Good job!
`content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/9-avoid-redundant-requests.md`
---
title: Avoid duplicate photo capture requests
weight: 9

### FIXED, DO NOT MODIFY
layout: learningpathall
---
So far, you have implemented the core logic for mediating MediaPipe's face and gesture task results and executing photo captures. However, the view controller does not communicate its execution results back to the view model. This introduces risks such as photo capture failures and frequent or duplicate requests.
## Introduce camera readiness state
It is best practice to complete the data flow cycle by providing callbacks for the view controller's states. This ensures that the view model does not emit values in undesired states, such as when the camera is busy or unavailable.
1. Navigate to `MainViewModel` and add a `MutableStateFlow` named `_isCameraReady` as a private member variable. This keeps track of whether the camera is busy or unavailable.
Because the duration of image capture can vary across Android devices due to hardware differences, implementing a simple cooldown mechanism after each photo capture can enhance the user experience while conserving computing resources.
1. Add the following constant value to `MainViewModel`'s companion object. This defines a 3-second cooldown before marking the camera available again.
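As an illustration of how such a cooldown might gate the camera-readiness state, here is a minimal sketch; the names and structure are assumptions, and the cooldown is a constructor parameter here so it can be shortened for testing:

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

// Sketch (assumed names): after a capture request, hold the camera
// "not ready" for a cooldown period before marking it available again.
class CaptureCooldown(
    private val scope: CoroutineScope,
    private val cooldownMillis: Long = 3_000L,
) {
    private val _isCameraReady = MutableStateFlow(true)
    val isCameraReady: StateFlow<Boolean> = _isCameraReady.asStateFlow()

    fun onCaptureRequested() {
        _isCameraReady.value = false
        scope.launch {
            delay(cooldownMillis)
            _isCameraReady.value = true
        }
    }
}
```

While `isCameraReady` is `false`, the view model can simply skip emitting new capture requests.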
## Completed sample code on GitHub
If you run into any difficulties completing this learning path, you can check out the [complete sample code](https://github.com/hanyin-arm/sample-android-selfie-app-using-mediapipe-multimodality) and import it into Android Studio.
If you discover a bug, encounter an issue, or have suggestions for improvement, please feel free to [open an issue](https://github.com/hanyin-arm/sample-android-selfie-app-using-mediapipe-multimodality/issues/new) with detailed information.