
Commit bc5e8ab (parent: 7db8c9a)

fix images and add index.md

File tree: 114 files changed, +453 / -84 lines
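The bulk of this commit is a mechanical rewrite of image references: paths that were relative to each doc's own folder (e.g. `images/inner_objects.png` in `FAQ.md`) now point into a shared top-level `images` tree. A minimal sketch of that rewrite rule — the helper below is hypothetical, written only to illustrate the pattern visible in the hunks, and is not part of the repository:

```python
def rewrite_img_src(src: str, doc_dir: str) -> str:
    """Prefix a doc-relative image path with the shared images tree.

    `doc_dir` is the documentation subfolder the file lives in,
    e.g. "FAQ" or "HPTutorial". Hypothetical helper for illustration.
    """
    if src.startswith("../images/"):
        return src  # already migrated
    return f"../images/{doc_dir}/{src}"

# Examples matching hunks in this commit:
print(rewrite_img_src("images/inner_objects.png", "FAQ"))
# -> ../images/FAQ/images/inner_objects.png
print(rewrite_img_src("Images/scenario_empty.png", "HPTutorial"))
# -> ../images/HPTutorial/Images/scenario_empty.png
```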


com.unity.perception/Documentation~/FAQ/FAQ.md

Lines changed: 10 additions & 10 deletions
@@ -42,7 +42,7 @@ Keep in mind that any new label added with this method should already be present
 Labeling works on the GameObject level, so to achieve the scenarios described here, you will need to break down your main object into multiple GameObjects parented to the same root object, and add `Labeling` components to each of the inner objects, as shown below.
 
 <p align="center">
-<img src="images/inner_objects.png" width="800"/>
+<img src="../images/FAQ/images/inner_objects.png" width="800"/>
 </p>
 
 Alternatively, in cases where parts of the surface of the object need to be labeled (e.g. decals on objects), you can add labeled invisible surfaces on top of these sections. These invisible surfaces need to have a fully transparent material. To create an invisible material:
@@ -54,7 +54,7 @@ Keep in mind that any new label added with this method should already be present
 An example labeled output for an object with separate labels on inner objects and decals is shown below:
 
 <p align="center">
-<img src="images/inner_labels.gif" width="600"/>
+<img src="../images/FAQ/images/inner_labels.gif" width="600"/>
 </p>
 
 ---
@@ -118,7 +118,7 @@ Most human character models use Skinned Mesh Renderers. Unfortunately, at this t
 The ***Inspector*** view of a prefab cluster asset looks like below:
 
 <p align="center">
-<img src="images/prefab_cluster.png" width="400"/>
+<img src="../images/FAQ/images/prefab_cluster.png" width="400"/>
 </p>
 
 Now all that is left is to use our prefab clusters inside a Randomizer. Here is some sample code:
@@ -146,7 +146,7 @@ public class ClusterRandomizer : UnityEngine.Perception.Randomization.Randomizer
 This Randomizer takes a list of `PrefabCluster` assets, then, on each Iteration, it goes through all the provided clusters and samples one prefab from each. The ***Inspector*** view for this Randomizer looks like this:
 
 <p align="center">
-<img src="images/cluster_randomizer.png" width="400"/>
+<img src="../images/FAQ/images/cluster_randomizer.png" width="400"/>
 </p>
 
 ---
@@ -426,7 +426,7 @@ Suppose we need to drop a few objects into the Scene, let them interact physical
 
 
 <p align="center">
-<img src="images/object_drop.gif" width="700"/>
+<img src="../images/FAQ/images/object_drop.gif" width="700"/>
 </p>
 
 ---
@@ -494,7 +494,7 @@ HDRP projects have motion blur and a number of other post processing effects ena
 
 
 <p align="center">
-<img src="images/volume.png" width="500"/>
+<img src="../images/FAQ/images/volume.png" width="500"/>
 </p>
 
 ---
@@ -573,25 +573,25 @@ A visual comparison of the different lighting configurations in HDRP is shown be
 Default HDRP:
 
 <p align="center">
-<img src="images/hdrp.png" width="700"/>
+<img src="../images/FAQ/images/hdrp.png" width="700"/>
 </p>
 
 HDRP with Global Illumination (notice how much brighter the scene is with ray traced light bouncing):
 
 <p align="center">
-<img src="images/hdrp_rt_gi.png" width="700"/>
+<img src="../images/FAQ/images/hdrp_rt_gi.png" width="700"/>
 </p>
 
 HDRP with Path Tracing (128 samples) (notice the red light bleeding from the cube onto the floor and the increased shadow quality):
 
 <p align="center">
-<img src="images/hdrp_pt_128_samples.png" width="700"/>
+<img src="../images/FAQ/images/hdrp_pt_128_samples.png" width="700"/>
 </p>
 
 HDRP with Path Tracing (4096 samples) (more samples leads to less ray tracing noise but also a longer time to render):
 
 <p align="center">
-<img src="images/hdrp_pt_4096_samples.png" width="700"/>
+<img src="../images/FAQ/images/hdrp_pt_4096_samples.png" width="700"/>
 </p>
 
 ---
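The FAQ context in these hunks describes a `ClusterRandomizer` that, on each Iteration, samples one prefab from every provided `PrefabCluster`. As a language-neutral illustration of that sampling pattern — a Python stand-in, not the package's actual C# sample code:

```python
import random

def sample_one_per_cluster(clusters, rng=None):
    """Pick one element from each non-empty cluster, mirroring the
    FAQ's ClusterRandomizer: one prefab per PrefabCluster per Iteration."""
    rng = rng or random.Random()
    return [rng.choice(cluster) for cluster in clusters if cluster]

# Illustrative cluster contents (names are made up); empty clusters
# contribute nothing to the Iteration.
clusters = [["tree_a", "tree_b"], ["rock_a"], []]
picks = sample_one_per_cluster(clusters, random.Random(42))
```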

com.unity.perception/Documentation~/HPTutorial/TUTORIAL.md

Lines changed: 23 additions & 22 deletions
@@ -8,12 +8,13 @@ In this tutorial, **":green_circle: Action:"** mark all of the actions needed to
 
 Steps included in this tutorial:
 
-* [Step 1: Import `.fbx` Models and Animations](#step-1)
-* [Step 2: Set Up a Humanoid Character in a Scene](#step-2)
-* [Step 3: Set Up the Perception Camera for Keypoint Annotation](#step-3)
-* [Step 4: Configure Animation Pose Labeling](#step-4)
-* [Step 5: Add Joints to the Character and Customize Keypoint Templates](#step-5)
-* [Step 6: Randomize the Humanoid Character's Animations](#step-6)
+- [Human Pose Labeling and Randomization Tutorial](#human-pose-labeling-and-randomization-tutorial)
+  - [<a name="step-1">Step 1: Import `.fbx` Models and Animations</a>](#step-1-import-fbx-models-and-animations)
+  - [<a name="step-2">Step 2: Set Up a Humanoid Character in a Scene</a>](#step-2-set-up-a-humanoid-character-in-a-scene)
+  - [<a name="step-3">Step 3: Set Up the Perception Camera for Keypoint Annotation</a>](#step-3-set-up-the-perception-camera-for-keypoint-annotation)
+  - [<a name="step-4">Step 4: Configure Animation Pose Labeling</a>](#step-4-configure-animation-pose-labeling)
+  - [<a name="step-5">Step 5: Add Joints to the Character and Customize Keypoint Templates</a>](#step-5-add-joints-to-the-character-and-customize-keypoint-templates)
+  - [<a name="step-6">Step 6: Randomize the Humanoid Character's Animations</a>](#step-6-randomize-the-humanoid-characters-animations)
 
 > :information_source: If you face any problems while following this tutorial, please create a post on the **[Unity Computer Vision forum](https://forum.unity.com/forums/computer-vision.626/)** or the **[GitHub issues](https://github.com/Unity-Technologies/com.unity.perception/issues)** page and include as much detail as possible.
@@ -32,7 +33,7 @@ We will use this duplicated Scene in this tutorial so that we do not lose our gr
 Your Scenario should now look like this:
 
 <p align="center">
-<img src="Images/scenario_empty.png" width="400"/>
+<img src="../images/HPTutorial/Images/scenario_empty.png" width="400"/>
 </p>
 
 * **:green_circle: Action**: Select `Main Camera` and in the _**Inspector**_ view of the `Perception Camera` component, **disable** all previously added labelers using the check-mark in front of each. We will be using a new labeler in this tutorial.
@@ -45,7 +46,7 @@ We now need to import the sample files required for this tutorial.
 Once the sample files are imported, they will be placed inside the `Assets/Samples/Perception` folder in your Unity project, as seen in the image below:
 
 <p align="center">
-<img src="Images/project_folders_samples.png" width="600"/>
+<img src="../images/HPTutorial/Images/project_folders_samples.png" width="600"/>
 </p>
 
 * **:green_circle: Action**: Select all of the assets inside the `Assets/Samples/Perception/<perception-package-version>/Human Pose Labeling and Randomization/Models and Animations`.
@@ -61,7 +62,7 @@ Note how `Animation Type` is set to `Humanoid` for all selected assets. This is
 * **:green_circle: Action**: Select the new `Player` object in the Scene and in the _**Inspector**_ tab set its transform's position and rotation according to the image below to make the character face the camera.
 
 <p align="center">
-<img src="Images/character_transform.png" width="800"/>
+<img src="../images/HPTutorial/Images/character_transform.png" width="800"/>
 </p>
 
 The `Player` object already has an `Animator` component attached. This is because the `Animation Type` property of all the sample `.fbx` files is set to `Humanoid`.
@@ -71,7 +72,7 @@ To animate our character, we will now attach an `Animation Controller` to the `A
 * **:green_circle: Action**: Double click the new controller to open it. Then right-click in the empty area and select _**Create State**_ -> _**Empty**_.
 
 <p align="center">
-<img src="Images/anim_controller_1.png" width="600"/>
+<img src="../images/HPTutorial/Images/anim_controller_1.png" width="600"/>
 </p>
 
 This will create a new state and attach it to the Entry state with a new transition edge. This means the controller will always move to this new state as soon as the `Animator` component is awoken. In this example, this will happen when the **** button is pressed and the simulation starts.
@@ -83,19 +84,19 @@ In the selector window that pops up, you will see several clips named `Take 001`
 * **:green_circle: Action**: Select the animation clip originating from the `TakeObjects.fbx` file, as seen below:
 
 <p align="center">
-<img src="Images/select_clip.png" width="600"/>
+<img src="../images/HPTutorial/Images/select_clip.png" width="600"/>
 </p>
 
 * **:green_circle: Action**: Assign `TestAnimationController` to the `Controller` property of the `Player` object's `Animator` component.
 
 <p align="center">
-<img src="Images/assign_controller.png" width="400"/>
+<img src="../images/HPTutorial/Images/assign_controller.png" width="400"/>
 </p>
 
 If you run the simulation now you will see the character performing an animation for picking up a hypothetical object as seen in the GIF below.
 
 <p align="center">
-<img src="Images/take_objects.gif" width="600"/>
+<img src="../images/HPTutorial/Images/take_objects.gif" width="600"/>
 </p>
 
 
@@ -116,7 +117,7 @@ Similar to the labelers we used in the Perception Tutorial, we will need a label
 * **:green_circle: Action**: In the _**Inspector**_ UI for this new `Labeling` component, expand `HPE_IdLabelConfig` and click _**Add to Labels**_ on `MyCharacter`.
 
 <p align="center">
-<img src="Images/add_label_from_config.png" width="400"/>
+<img src="../HPTutorial/Images/add_label_from_config.png" width="400"/>
 </p>
 
 * **:green_circle: Action**: Return to `Perception Camera` and assign `HPE_IdLabelConfig` to the `KeyPointLabeler`'s label configuration property.
@@ -125,14 +126,14 @@ Similar to the labelers we used in the Perception Tutorial, we will need a label
 The labeler should now look like the image below:
 
 <p align="center">
-<img src="Images/keypoint_labeler.png" width="500"/>
+<img src="../HPTutorial/Images/keypoint_labeler.png" width="500"/>
 </p>
 
 
 The `Active Template` tells the labeler how to map default Unity rig joints to human joint labels in the popular COCO dataset so that the output of the labeler can be easily converted to COCO format. Later in this tutorial, we will learn how to add more joints to our character and how to customize joint mapping templates.
 
 <p align="center">
-<img src="Images/take_objects_keypoints.gif" width="600"/>
+<img src="../images/HPTutorial/Images/take_objects_keypoints.gif" width="600"/>
 </p>
 
 You can now check out the output dataset to see what the annotations look like. To do this, click the _**Show Folder**_ button in the `Perception Camera` UI, then navigate inside to the dataset folder to find the `captures_000.json` file. Here is an example annotation for the first frame of our test-case here:
@@ -276,15 +277,15 @@ You can now use the `Timestamps` list to define poses. Let's define four poses h
 Modify `MyAnimationPoseConfig` according to the image below:
 
 <p align="center">
-<img src="Images/anim_pos_conf.png" width="800"/>
+<img src="../images/HPTutorial/Images/anim_pos_conf.png" width="800"/>
 </p>
 
 The pose configuration we created needs to be assigned to our `KeyPointLabeler`. So:
 
 * **:green_circle: Action**: In the _**Inspector**_ UI for `Perception Camera`, set the `Size` of `Animation Pose Configs` for the `KeyPointLabeler` to 1. Then, assign the `MyAnimationPoseConfig` to the sole slot in the list, as shown below:
 
 <p align="center">
-<img src="Images/keypoint_labeler_2.png" width="500"/>
+<img src="../images/HPTutorial/Images/keypoint_labeler_2.png" width="500"/>
 </p>
 
 If you run the simulation again to generate a new dataset, you will see the new poses we defined written in it. All frames that belong to a certain pose will have the pose label attached.
@@ -305,7 +306,7 @@ In the _**Inspector**_ view of `CocoKeypointTemplate`, you will see the list of
 
 
 <p align="center">
-<img src="Images/coco_template.png" width="500"/>
+<img src="../images/HPTutorial/Images/coco_template.png" width="500"/>
 </p>
 
 If you review the list you will see that the `left_ear` and `right_ear` joints are also not associated with the rig.
@@ -317,7 +318,7 @@ We will create our three new joints under the `Head` object.
 * **:green_circle: Action**: Create three new empty GameObjects under `Head` and place them in the proper positions for the character's nose and ears, as seen in the GIF below (make sure the positions are correct in 3D space):
 
 <p align="center">
-<img src="Images/new_joints.gif" width="600"/>
+<img src="../images/HPTutorial/Images/new_joints.gif" width="600"/>
 </p>
 
 The final step in this process would be to label these new joints such that they match the labels of their corresponding keypoints in `CocoKeypointTemplate`. For this purpose, we use the `Joint Label` component.
@@ -327,7 +328,7 @@ The final step in this process would be to label these new joints such that they
 If you run the simulation now, you can see the new joints being visualized:
 
 <p align="center">
-<img src="Images/new_joints_play.gif" width="600"/>
+<img src="../images/HPTutorial/Images/new_joints_play.gif" width="600"/>
 </p>
 
 You could now look at the latest generated dataset to confirm the new joints are being detected and written.
@@ -347,7 +348,7 @@ The `Animation Randomizer Tag` accepts a list of animation clips. At runtime, th
 If you run the simulation now, your character will randomly perform one of the above four animations, each for 150 frames. This cycle will recur 20 times, which is the total number of Iterations in your Scenario.
 
 <p align="center">
-<img src="Images/randomized_results.gif" width="600"/>
+<img src="../images/HPTutorial/Images/randomized_results.gif" width="600"/>
 </p>
 
 > :information_source: The reason the character stops animating at certain points in the above GIF is that the animation clips are not set to loop. Therefore, if the randomly selected timestamp is sufficiently close to the end of the clip, the character will complete the animation and stop animating for the rest of the Iteration.
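The table-of-contents change in this file swaps short anchors like `#step-1` for GitHub's auto-generated heading slugs (`#step-1-import-fbx-models-and-animations`). Those slugs follow a simple rule, approximated below; this is a sketch of the observed behavior, not an official implementation:

```python
import re

def github_heading_slug(heading: str) -> str:
    """Approximate GitHub's heading-to-anchor rule: lowercase,
    drop punctuation, convert runs of whitespace to hyphens."""
    s = heading.strip().lower()
    s = re.sub(r"[^\w\s-]", "", s)  # drops ':', '`', '.', apostrophes, ...
    return re.sub(r"\s+", "-", s)   # whitespace -> single hyphens
```

Applied to "Step 1: Import `.fbx` Models and Animations", this yields exactly the anchor used in the updated list.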

com.unity.perception/Documentation~/Index.md

Whitespace-only changes.

com.unity.perception/Documentation~/Randomization/Index.md

Lines changed: 1 addition & 1 deletion
@@ -33,7 +33,7 @@ Continue reading for more details concerning the primary components driving rand
 
 <br>
 <p align="center">
-<img src="Images/randomization_uml.png" width="900"/>
+<img src="../images/Randomization/Images/randomization_uml.png" width="900"/>
 <br><i>Class diagram for the randomization framework included in the Perception package</i>
 </p>

com.unity.perception/Documentation~/Randomization/Scenarios.md

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@ By default, the Perception package includes one ready-made Scenario, the `FixedL
 Scenarios have a number of lifecycle hooks that are called during execution. Below is a diagram visualizing the sequence of operations run by a typical scenario:
 
 <p align="center">
-<img src="Images/scenario-lifecycle-diagram.png" width="600"/>
+<img src="../images/Randomization/Images/scenario-lifecycle-diagram.png" width="600"/>
 </p>
 
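The lifecycle diagram referenced here shows a scenario running setup, then a loop of Iterations, then completion. A schematic Python sketch of that shape — hook names are illustrative, not the Perception package's actual C# API:

```python
class FixedLengthScenarioSketch:
    """Schematic stand-in for a fixed-length scenario's lifecycle:
    a start hook, a fixed number of iterations with a per-iteration
    hook, and a completion hook. Names are illustrative only."""

    def __init__(self, total_iterations: int):
        self.total_iterations = total_iterations
        self.events = []

    def on_start(self):
        self.events.append("scenario:start")

    def on_iteration(self, i: int):
        self.events.append(f"iteration:{i}")

    def on_complete(self):
        self.events.append("scenario:complete")

    def run(self):
        self.on_start()
        for i in range(self.total_iterations):
            self.on_iteration(i)
        self.on_complete()
        return self.events
```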

com.unity.perception/Documentation~/Schema/Synthetic_Dataset_Schema.md

Lines changed: 4 additions & 4 deletions
@@ -61,7 +61,7 @@ The main difference between this schema and nuScenes is that we use **document b
 This means that instead of requiring multiple id-based "joins" to explore the data, data is nested and sometimes duplicated for ease of consumption.
 
 ## Components
-![image alt text](image_0.png)
+![image alt text](../images/Schema/image_0.png)
 
 ### captures
 
@@ -96,7 +96,7 @@ We cannot use timestamps to synchronize between two different events because tim
 Instead, we use a "step" counter which make it easy to associate metrics and captures that occur at the same time.
 Below is an illustration of how captures, metrics, timestamps and steps are synchronized.
 
-![image alt text](captures_steps_timestamps.png)
+![image alt text](../images/Schema/captures_steps_timestamps.png)
 
 Since each sensor might trigger captures at different frequencies, at the same timestamp we might contain 0 to N captures, where N is the total number of sensors included in this scene.
 If two sensors are captured at the same timestamp, they should share the same sequence, step and timestamp value.
@@ -169,7 +169,7 @@ annotation {
 
 A grayscale PNG file that stores integer values (label pixel_value in [annotation spec](#annotation_definitionsjson) reference table, semantic segmentation) of the labeled object at each pixel.
 
-![image alt text](image_2.png)
+![image alt text](../images/Schema/image_2.png)
 
 #### capture.annotation.values
 
@@ -285,7 +285,7 @@ How to support instance segmentation (maybe we need to use polygon instead of pi
 
 A grayscale PNG file that stores integer values of labeled instances at each pixel.
 
-![image alt text](image_4.png)
+![image alt text](../images/Schema/image_4.png)
 -->
 
 ### metrics
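The synchronization rule in this file's context (captures that happen together share `sequence`, `step`, and `timestamp`) suggests grouping records by the integer keys rather than by floating-point timestamps. A small sketch with made-up capture records:

```python
from collections import defaultdict

# Made-up records in the spirit of the schema: two sensors captured at
# step 0 of sequence 0, and only the camera at step 1.
captures = [
    {"sensor": "camera", "sequence": 0, "step": 0, "timestamp": 0.0},
    {"sensor": "lidar", "sequence": 0, "step": 0, "timestamp": 0.0},
    {"sensor": "camera", "sequence": 0, "step": 1, "timestamp": 0.033},
]

def group_by_step(records):
    """Associate captures (and, identically, metrics) that occurred
    together by keying on (sequence, step) instead of timestamps."""
    groups = defaultdict(list)
    for r in records:
        groups[(r["sequence"], r["step"])].append(r["sensor"])
    return dict(groups)
```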

com.unity.perception/Documentation~/Tutorial/DatasetInsights.md

Lines changed: 7 additions & 7 deletions
@@ -18,15 +18,15 @@ This will download a Docker image from Unity. If you get an error regarding the
 * **:green_circle: Action**: The image is now running on your computer. Open a web browser and navigate to `http://localhost:8888` to open the Jupyter notebook:
 
 <p align="center">
-<img src="Images/jupyter1.png" width="800"/>
+<img src="../images/Tutorial/Images/jupyter1.png" width="800"/>
 </p>
 
 * **:green_circle: Action**: To make sure your data is properly mounted, navigate to the `data` folder. If you see the dataset's folders there, we are good to go.
 * **:green_circle: Action**: Navigate to the `datasetinsights/notebooks` folder and open `Perception_Statistics.ipynb`.
 * **:green_circle: Action**: Once in the notebook, remove the `/<GUID>` part of the `data_root = /data/<GUID>` path. Since the dataset root is already mapped to `/data`, you can use this path directly.
 
 <p align="center">
-<img src="Images/jupyter2.png" width="800"/>
+<img src="../images/Tutorial/Images/jupyter2.png" width="800"/>
 </p>
 
 This notebook contains a variety of functions for generating plots, tables, and bounding box images that help you analyze your generated dataset. Certain parts of this notebook are currently not of use to us, such as the code meant for downloading data generated through Unity Simulation (coming later in this tutorial).
@@ -37,7 +37,7 @@ Below, you can see a sample plot generated by the Dataset Insights notebook, dep
 
 
 <p align="center">
-<img src="Images/object_count_plot.png" width="600"/>
+<img src="../images/Tutorial/Images/object_count_plot.png" width="600"/>
 </p>
 
 
@@ -61,7 +61,7 @@ Once the Docker image is running, the rest of the workflow is quite similar to w
 * **:green_circle: Action**: In the `data_root = /data/<GUID>` line, the `<GUID>` part will be the location inside your `<download path>` where the data will be downloaded. Therefore, you can just remove it so as to have data downloaded directly to the path you previously specified:
 
 <p align="center">
-<img src="Images/di_usim_1.png" width="900"/>
+<img src="../images/Tutorial/Images/di_usim_1.png" width="900"/>
 </p>
 
 * **:green_circle: Action**: In the block of code titled "Unity Simulation [Optional]", uncomment the lines that assign values to variables, and insert the correct values, based on information from your Unity Simulation run.
@@ -93,7 +93,7 @@ The `access_token` you need for your Dataset Insights notebook is the access tok
 Once you have entered all the information, the block of code should look like the screenshot below (the actual values you input will be different):
 
 <p align="center">
-<img src="Images/di_usim_2.png" width="800"/>
+<img src="../images/Tutorial/Images/di_usim_2.png" width="800"/>
 </p>
 
 
@@ -102,7 +102,7 @@ Once you have entered all the information, the block of code should look like th
 You will see a progress bar while the data downloads:
 
 <p align="center">
-<img src="Images/di_usim_3.png" width="800"/>
+<img src="../images/Tutorial/Images/di_usim_3.png" width="800"/>
 </p>
 
 
@@ -111,7 +111,7 @@ The next couple of code blocks (under "Load dataset metadata") analyze the downl
 * **:green_circle: Action**: Once you reach the code block titled "Built-in Statistics", make sure the value assigned to the field `rendered_object_info_definition_id` matches the id displayed for this metric in the table output by the code block immediately before it. The screenshot below demonstrates this (note that your ids might differ from the ones here):
 
 <p align="center">
-<img src="Images/di_usim_4.png" width="800"/>
+<img src="../images/Tutorial/Images/di_usim_4.png" width="800"/>
 </p>
 
 Follow the rest of the steps inside the notebook to generate a variety of plots and stats.
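The notebook tweak repeated in this tutorial (dropping the `/<GUID>` suffix from `data_root` because the dataset root is already mounted at `/data`) amounts to trimming the last path segment. A tiny illustrative helper, not part of the Dataset Insights notebook; the GUID value shown is made up:

```python
def strip_last_segment(path: str) -> str:
    """Drop the trailing /<GUID> segment so data_root points straight
    at the mounted /data folder, as the tutorial instructs."""
    head, sep, _ = path.rpartition("/")
    return head if sep and head else path

data_root = strip_last_segment("/data/0ab3-example-guid")
```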
