content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/6-flow-data-to-view-1.md (2 additions, 2 deletions)
@@ -9,7 +9,7 @@ layout: learningpathall
[SharedFlow](https://developer.android.com/kotlin/flow/stateflow-and-sharedflow#sharedflow) and [StateFlow](https://developer.android.com/kotlin/flow/stateflow-and-sharedflow#stateflow) are [Kotlin Flow](https://developer.android.com/kotlin/flow) APIs that enable Flows to optimally emit state updates and emit values to multiple consumers.
- In this Learning Path, you will experiment with both `SharedFlow` and `StateFlow`. This section focuses on SharedFlow, and the section focuses on StateFlow.
+ In this Learning Path, you will experiment with both `SharedFlow` and `StateFlow`. This section focuses on SharedFlow, and the next section focuses on StateFlow.
`SharedFlow` is a general-purpose, hot flow that can emit values to multiple subscribers. It is highly configurable, allowing you to adjust settings such as the replay cache size and buffer capacity.
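As an illustration of that configurability, here is a minimal sketch of a configurable `MutableSharedFlow` declaration; the replay and buffer values are illustrative and not taken from this Learning Path:

```kotlin
import kotlinx.coroutines.channels.BufferOverflow
import kotlinx.coroutines.flow.MutableSharedFlow
import kotlinx.coroutines.flow.SharedFlow
import kotlinx.coroutines.flow.asSharedFlow

// Backing hot flow: no replay for new subscribers, a small extra buffer,
// and drop the oldest value if collectors fall behind.
private val _events = MutableSharedFlow<String>(
    replay = 0,
    extraBufferCapacity = 16,
    onBufferOverflow = BufferOverflow.DROP_OLDEST
)

// Read-only view exposed to consumers.
val events: SharedFlow<String> = _events.asSharedFlow()
```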
@@ -366,5 +366,5 @@ class GestureOverlayView(context: Context?, attrs: AttributeSet?) :
4. Build and run the app again. Now you should see face and gesture overlays on top of the camera preview as shown below. Good job!
- ![Overlay views alt-text#center](images/3/overlay.png "Figure 7: Overlay views")
+ ![Overlay views alt-text#center](images/3/overlay.png "Figure 7: Overlay views")
content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/7-flow-data-to-view-2.md (16 additions, 13 deletions)
@@ -6,7 +6,7 @@ weight: 7
layout: learningpathall
---
- `StateFlow` is a subclass of SharedFlow and internally use a SharedFlow to manage its emissions. However, it provides a stricter API, ensuring that:
+ `StateFlow` is a subclass of `SharedFlow` and internally uses `SharedFlow` to manage its emissions. However, it provides a stricter API, ensuring that:
1. It always has an initial value.
2. It emits only the latest state.
3. Its replay cache cannot be configured (it is always `1`).
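A minimal sketch of these three guarantees in code; the property names here are illustrative only:

```kotlin
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.flow.asStateFlow

// 1. A StateFlow must be created with an initial value.
private val _counter = MutableStateFlow(0)
val counter: StateFlow<Int> = _counter.asStateFlow()

fun setCount(value: Int) {
    // 2. Only the latest value is retained; assigning an equal value is
    //    conflated, so collectors are not notified again.
    _counter.value = value
}

// 3. The replay cache is fixed at 1: a new collector immediately receives
//    the current value and nothing older.
```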
@@ -15,7 +15,7 @@ Therefore, `StateFlow` is a specialized type of `SharedFlow` that represents a s
## Expose UI states in StateFlow
- 1. Expose two `StateFlow`s named `faceOk` and `gestureOk` in `MainViewModel`, indicating whether the subject's face and gesture are ready for a selfie.
+ 1. Expose two `StateFlow`s named `faceOk` and `gestureOk` in `MainViewModel`, indicating whether the subject's face and gestures are ready for a selfie.
```kotlin
private val _faceOk = MutableStateFlow(false)
@@ -25,14 +25,14 @@ Therefore, `StateFlow` is a specialized type of `SharedFlow` that represents a s
val gestureOk: StateFlow<Boolean> = _gestureOk
```
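The diff shows only the first and last lines of this snippet. For readability, here is a sketch of the complete exposure pattern; the two middle lines are assumed, following the usual backing-property convention:

```kotlin
// Private mutable backing flows, each starting as false.
private val _faceOk = MutableStateFlow(false)
private val _gestureOk = MutableStateFlow(false)

// Read-only StateFlows exposed to the UI layer.
val faceOk: StateFlow<Boolean> = _faceOk
val gestureOk: StateFlow<Boolean> = _gestureOk
```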
- 2.Append the following constant values to `MainViewModel`'s companion object. In this demo app, you will focus on smiling faces and thumb-up gestures.
+ 2. In this demo app, you will focus on smiling faces and thumb-up gestures. Append the following constant values to `MainViewModel`'s companion object:
- 2. In the same directory, create a new resource file named `dimens.xml` if it does not exist. This file is used to define layout related dimension values:
+ 2. In the same directory, create a new resource file named `dimens.xml` if it does not exist already. This file is used to define layout related dimension values:
```xml
<?xml version="1.0" encoding="utf-8"?>
@@ -85,7 +85,7 @@ Therefore, `StateFlow` is a specialized type of `SharedFlow` that represents a s
</resources>
```
- 3. Navigate to `activity_main.xml` layout file and add the following code to the root `ConstraintLayout`. Add this code after the two overlay views which you just added in the previous section.
+ 3. Navigate to the `activity_main.xml` layout file and add the following code to the root `ConstraintLayout`. Add this code after the two overlay views which you have just added in the previous section:
```xml
<androidx.appcompat.widget.SwitchCompat
@@ -111,7 +111,9 @@ Therefore, `StateFlow` is a specialized type of `SharedFlow` that represents a s
- 4. Finally, navigate to `MainActivity.kt` and append the following code inside `repeatOnLifecycle(Lifecycle.State.RESUMED)` block, after the `launch` block you just added in the previous section. This makes sure each of the three parallel `launch` run in its own co-routine concurrently without blocking each other.
+ 4. Finally, navigate to `MainActivity.kt` and append the following code inside the `repeatOnLifecycle(Lifecycle.State.RESUMED)` block, after the `launch` block.
+
+ This makes sure each of the three parallel `launch` code sections runs in its own coroutine concurrently, without blocking the others.
```kotlin
launch {
@@ -127,20 +129,21 @@ Therefore, `StateFlow` is a specialized type of `SharedFlow` that represents a s
}
```
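For orientation, here is a sketch of the overall shape this gives `MainActivity`; the collector bodies, flow names, and view binding names are illustrative placeholders rather than the Learning Path's exact code:

```kotlin
// Inside MainActivity.onCreate(...), assuming lifecycleScope, repeatOnLifecycle,
// and view binding are already set up as in the previous sections.
lifecycleScope.launch {
    repeatOnLifecycle(Lifecycle.State.RESUMED) {
        // Each launch starts its own child coroutine, so the three
        // collectors run concurrently without blocking each other.
        launch {
            viewModel.faceLandmarkerResults.collect { /* draw face overlay */ }
        }
        launch {
            viewModel.faceOk.collect { binding.faceOkSwitch.isChecked = it }
        }
        launch {
            viewModel.gestureOk.collect { binding.gestureOkSwitch.isChecked = it }
        }
    }
}
```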
- 5. Build and run the app again. Now you should see two switches on the bottom of the screen as shown below, which turn on and off while you smile and show thumb-up gestures. Good job!
+ 5. Build and run the app again.
+ Now you should see two switches on the bottom of the screen as shown below, which turn on and off while you smile and show thumb-up gestures. Good job!

## Recap on SharedFlow vs StateFlow
- This app uses `SharedFlow` for dispatching overlay views' UI events without mandating a specific stateful model, which avoids redundant computation. Meanwhile, it uses `StateFlow` for dispatching condition switches' UI states, which prevents duplicated emission and consequent UI updates.
+ This app uses `SharedFlow` for dispatching overlay views' UI events without mandating a specific stateful model, which avoids redundant computation. Meanwhile, it uses `StateFlow` for dispatching condition switches' UI states, which prevents duplicate emission and consequent UI updates.
- Here's a overview of the differences between `SharedFlow` and `StateFlow`:
+ Here is an overview of the differences between `SharedFlow` and `StateFlow`:
| | SharedFlow | StateFlow |
| --- | --- | --- |
- | Type of Data | Transient events or actions | State or continuouslychanging data |
+ | Type of Data | Transient events or actions | State or continuously-changing data |
| Initial Value | Not required | Required |
- | Replays to New Subscribers | Configurable with replay (e.g., 0, 1, or more) | Always emits the latest value |
+ | Replays to New Subscribers | Configurable with replay (for example, 0, 1, or more) | Always emits the latest value |
| Default Behavior | Emits only future values unless replay is set | Retains and emits only the current state |
- | Use Case Examples | Short-lived, one-off events that shouldn't persist as part of the state | Long-lived state that represents the current view's state|
+ | Use Case Examples | Short-lived, one-off events that should not persist as part of the state | Long-lived state that represents the state of the current view |
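To make the table concrete, a small illustrative contrast follows; the class and property names are hypothetical and not part of the app:

```kotlin
import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.flow.MutableSharedFlow
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.SharedFlow
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.flow.asSharedFlow
import kotlinx.coroutines.flow.asStateFlow
import kotlinx.coroutines.launch

class ExampleViewModel : ViewModel() {
    // SharedFlow: transient, one-off events; nothing is replayed to late subscribers.
    private val _messages = MutableSharedFlow<String>()
    val messages: SharedFlow<String> = _messages.asSharedFlow()

    // StateFlow: long-lived state; requires an initial value and always hands
    // the latest value to new collectors.
    private val _isLoading = MutableStateFlow(false)
    val isLoading: StateFlow<Boolean> = _isLoading.asStateFlow()

    fun onSaved() {
        viewModelScope.launch { _messages.emit("Saved") } // event
        _isLoading.value = false                          // state
    }
}
```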
content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/8-mediate-flows.md (9 additions, 9 deletions)
@@ -6,11 +6,13 @@ weight: 8
layout: learningpathall
---
- Now you have two independent Flows indicating the conditions of face landmark detection and gesture recognition. The simplest multimodality strategy is to combine multiple source Flows into a single output Flow, which emits consolidated values as the single source of truth for its observers (collectors) to carry out corresponding actions.
+ Now you have two independent Flows indicating the conditions of face landmark detection and gesture recognition.
+
+ The simplest multimodality strategy is to combine multiple source Flows into a single output Flow, which emits consolidated values as the single source of truth for its observers, the collectors, to carry out corresponding actions.
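As a simple illustration of this idea (not the app's exact code), Kotlin's `combine` operator merges the latest values of several flows into one consolidated flow:

```kotlin
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.combine

private val faceOk = MutableStateFlow(false)
private val gestureOk = MutableStateFlow(false)

// Emits a consolidated Boolean whenever either source flow changes.
private val bothOk: Flow<Boolean> = combine(faceOk, gestureOk) { face, gesture ->
    face && gesture
}
```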
## Combine two Flows into a single Flow
- 1. Navigate to `MainViewModel` and append the following constant values to its companion object.
+ 1. Navigate to `MainViewModel` and append the following constant values to its companion object:
* The first constant defines how frequently you sample the conditions from each Flow.
@@ -39,7 +41,7 @@ You might need to add the `@OptIn(FlowPreview::class)` annotation as `sample` is
{{% /notice %}}
- 3. Expose a `SharedFlow` variable which emits a `Unit` whenever the face and gesture conditions are met and stay stable for a while, which means `500`ms as defined above. Again, add `@OptIn(FlowPreview::class)` if needed.
+ 3. Expose a `SharedFlow` variable which emits a `Unit` whenever the face and gesture conditions are met and stay stable for a while, which means `500`ms as defined above. Again, add `@OptIn(FlowPreview::class)` if required.
```kotlin
val captureEvents: SharedFlow<Unit> = _bothOk
@@ -138,13 +140,11 @@ You can also opt to use `SharedFlow<Boolean>` and remove the `map { }` operation
}
}
```
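Most of the `captureEvents` chain is elided in this view. Below is a hedged sketch of one way such a chain could be written; the class name, constant names, operator order, and sharing policy are assumptions, and the Learning Path's actual code may differ:

```kotlin
import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.FlowPreview
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.SharedFlow
import kotlinx.coroutines.flow.SharingStarted
import kotlinx.coroutines.flow.debounce
import kotlinx.coroutines.flow.filter
import kotlinx.coroutines.flow.map
import kotlinx.coroutines.flow.sample
import kotlinx.coroutines.flow.shareIn

class CaptureViewModelSketch : ViewModel() {
    private val _bothOk = MutableStateFlow(false)

    @OptIn(FlowPreview::class)
    val captureEvents: SharedFlow<Unit> = _bothOk
        .sample(SAMPLE_PERIOD_MS)     // check the combined condition periodically
        .debounce(STABLE_DURATION_MS) // require it to stay unchanged for 500 ms
        .filter { it }                // proceed only when both conditions are met
        .map { }                      // collapse the Boolean into a Unit event
        .shareIn(viewModelScope, SharingStarted.WhileSubscribed())

    companion object {
        private const val SAMPLE_PERIOD_MS = 100L    // illustrative value
        private const val STABLE_DURATION_MS = 500L  // from the text above
    }
}
```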
- 4. Even though the photo capture has already been implemented, it is still quite inconvenient to check out the logs afterwards to find out whether the photo capture has been successfully executed.
-
- You can now add a flash effect UI to explicitly show the users that a photo has been captured.
+ 4. Even though the photo capture has already been implemented, it is still inconvenient to check out the logs afterwards to find out whether the photo capture has been successfully executed, so you can now add a flash effect UI to explicitly show the users that a photo has been captured.
## Add a flash effect upon capturing photo
- 1. Navigate to the `activity_main.xml` layout file and insert the following `View` element between the two overlay views and the two `SwitchCompat` views. This is essentially just a white blank view covering the whole surface.
+ 1. Navigate to the `activity_main.xml` layout file and insert the following `View` element between the two overlay views and the two `SwitchCompat` views. This is essentially just a white blank view covering the whole surface:
```
<View
@@ -187,6 +187,6 @@ You can now add a flash effect UI to explicitly show the users that a photo has
4. Build and run the app:
- * Try keeping up a smiling face while presenting thumb-up gestures.
+ * Try to maintain a smiling face whilst also presenting thumb-up gestures.
* When you see both switches turn on and stay stable for approximately half a second.
- * The screen should flash white and then a photo should be captured and show up in your album, which might take a few seconds depending on your Android device's hardware. Good job!
+ * The screen should flash white and then a photo should be captured. This will show up in your album, which might take a few seconds depending on your Android device's hardware. Good job!
content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/9-avoid-redundant-requests.md (6 additions, 4 deletions)
@@ -6,7 +6,9 @@ weight: 9
layout: learningpathall
---
- So far you have implemented the core logic for MediaPipe's face and gesture task results and photo capture execution. However, the view controller is not communicating its execution results back to the view model, which raises the risk for photo capture failures, frequent or duplicate requests, and other potential issues.
+ So far you have implemented the core logic for MediaPipe's face and gesture task results and photo capture execution.
+
+ However, the view controller is not communicating its execution results back to the view model, which raises the risk for photo capture failures, frequent or duplicate requests, and other potential issues.
### Introduce camera readiness state
@@ -54,11 +56,11 @@ Copy in the following:
```
- ## Introduce camera cooldown
+ ## Introduce a camera cooldown mechanism
The differences in hardware mean that the duration of an image capture varies across Android devices. Additionally, consecutive image captures place a heavy load on the CPU, GPU, camera, and flash memory buffer.
- To address this, you can implement a simple cooldown mechanism after each photo capture that can enhance the user experience while conserving computing resources.
+ To address this, you can implement a simple cooldown mechanism after each photo capture that can both enhance the user experience and conserve computing resources.
1. Add the following constant value to `MainViewModel`'s companion object. This defines a three-second cooldown before making the camera available again.
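The constant itself is elided in this view. Here is a hedged sketch of how a cooldown of this kind might be wired up; the class, function, and readiness flag names are illustrative assumptions rather than the Learning Path's code:

```kotlin
import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.launch

class CooldownSketchViewModel : ViewModel() {
    private val _isCameraReady = MutableStateFlow(true)
    val isCameraReady: StateFlow<Boolean> = _isCameraReady

    // Called once a capture completes: block further captures for the
    // cooldown period, then make the camera available again.
    fun onCaptureCompleted() {
        viewModelScope.launch {
            _isCameraReady.value = false
            delay(CAMERA_COOLDOWN_MS)
            _isCameraReady.value = true
        }
    }

    companion object {
        // Three-second cooldown, as described above.
        private const val CAMERA_COOLDOWN_MS = 3_000L
    }
}
```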
@@ -91,7 +93,7 @@ However, silently failing without notifying the user is not a good practice for
{{% /notice %}}
- ## Entire sample code on GitHub
+ ## Further resource for support: entire sample code on GitHub
If you run into any difficulties completing this Learning Path, you can check out the [complete sample code](https://github.com/hanyin-arm/sample-android-selfie-app-using-mediapipe-multimodality) and import it into Android Studio.