com.unity.perception/Documentation~/FAQ/FAQ.md (10 additions, 10 deletions)
@@ -42,7 +42,7 @@ Keep in mind that any new label added with this method should already be present
Labeling works on the GameObject level, so to achieve the scenarios described here, you will need to break down your main object into multiple GameObjects parented to the same root object, and add `Labeling` components to each of the inner objects, as shown below.
Alternatively, in cases where parts of the surface of the object need to be labeled (e.g. decals on objects), you can add labeled invisible surfaces on top of these sections. These invisible surfaces need to have a fully transparent material. To create an invisible material:
@@ -54,7 +54,7 @@ Keep in mind that any new label added with this method should already be present
An example labeled output for an object with separate labels on inner objects and decals is shown below:
Now all that is left is to use our prefab clusters inside a Randomizer. Here is some sample code:
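The full snippet is collapsed in this diff view, so the sketch below only illustrates the general shape such a Randomizer could take. It assumes a `PrefabCluster` ScriptableObject exposing a `prefabs` list (the field name is illustrative) and uses `UnityEngine.Random` for brevity where the package's samplers would normally be preferred for reproducible datasets:

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

[Serializable]
public class ClusterRandomizer : UnityEngine.Perception.Randomization.Randomizer
{
    // Each PrefabCluster groups prefabs that belong together (e.g. variants of one object type).
    public List<PrefabCluster> clusters = new List<PrefabCluster>();

    protected override void OnIterationStart()
    {
        // On each Iteration, sample exactly one prefab from every provided cluster.
        foreach (var cluster in clusters)
        {
            var prefab = cluster.prefabs[UnityEngine.Random.Range(0, cluster.prefabs.Count)];
            // Instantiate and place the sampled prefab here, e.g. UnityEngine.Object.Instantiate(prefab).
        }
    }
}
```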
@@ -146,7 +146,7 @@ public class ClusterRandomizer : UnityEngine.Perception.Randomization.Randomizer
This Randomizer takes a list of `PrefabCluster` assets; on each Iteration, it goes through all the provided clusters and samples one prefab from each. The _**Inspector**_ view for this Randomizer looks like this:
-[<a name="step-5">Step 5: Add Joints to the Character and Customize Keypoint Templates</a>](#step-5-add-joints-to-the-character-and-customize-keypoint-templates)
+
-[<a name="step-6">Step 6: Randomize the Humanoid Character's Animations</a>](#step-6-randomize-the-humanoid-characters-animations)
> :information_source: If you face any problems while following this tutorial, please create a post on the **[Unity Computer Vision forum](https://forum.unity.com/forums/computer-vision.626/)** or the **[GitHub issues](https://github.com/Unity-Technologies/com.unity.perception/issues)** page and include as much detail as possible.
@@ -32,7 +33,7 @@ We will use this duplicated Scene in this tutorial so that we do not lose our gr
* **:green_circle: Action**: Select `Main Camera` and in the _**Inspector**_ view of the `Perception Camera` component, **disable** all previously added labelers using the check-mark in front of each. We will be using a new labeler in this tutorial.
@@ -45,7 +46,7 @@ We now need to import the sample files required for this tutorial.
Once the sample files are imported, they will be placed inside the `Assets/Samples/Perception` folder in your Unity project, as seen in the image below:
* **:green_circle: Action**: Select all of the assets inside the `Assets/Samples/Perception/<perception-package-version>/Human Pose Labeling and Randomization/Models and Animations` folder.
@@ -61,7 +62,7 @@ Note how `Animation Type` is set to `Humanoid` for all selected assets. This is
* **:green_circle: Action**: Select the new `Player` object in the Scene and in the _**Inspector**_ tab set its transform's position and rotation according to the image below to make the character face the camera.
The `Player` object already has an `Animator` component attached. This is because the `Animation Type` property of all the sample `.fbx` files is set to `Humanoid`.
@@ -71,7 +72,7 @@ To animate our character, we will now attach an `Animation Controller` to the `A
* **:green_circle: Action**: Double-click the new controller to open it. Then right-click in the empty area and select _**Create State**_ -> _**Empty**_.
This will create a new state and attach it to the Entry state with a new transition edge. This means the controller will always move to this new state as soon as the `Animator` component is awoken. In this example, this will happen when the **▷** button is pressed and the simulation starts.
@@ -83,19 +84,19 @@ In the selector window that pops up, you will see several clips named `Take 001`
* **:green_circle: Action**: Select the animation clip originating from the `TakeObjects.fbx` file, as seen below:
@@ -116,7 +117,7 @@ Similar to the labelers we used in the Perception Tutorial, we will need a label
* **:green_circle: Action**: In the _**Inspector**_ UI for this new `Labeling` component, expand `HPE_IdLabelConfig` and click _**Add to Labels**_ on `MyCharacter`.
The `Active Template` tells the labeler how to map default Unity rig joints to human joint labels in the popular COCO dataset so that the output of the labeler can be easily converted to COCO format. Later in this tutorial, we will learn how to add more joints to our character and how to customize joint mapping templates.
You can now check out the output dataset to see what the annotations look like. To do this, click the _**Show Folder**_ button in the `Perception Camera` UI, then navigate inside to the dataset folder to find the `captures_000.json` file. Here is an example annotation for the first frame of our test case:
@@ -276,15 +277,15 @@ You can now use the `Timestamps` list to define poses. Let's define four poses h
Modify `MyAnimationPoseConfig` according to the image below:
The pose configuration we created needs to be assigned to our `KeyPointLabeler`. So:
* **:green_circle: Action**: In the _**Inspector**_ UI for `Perception Camera`, set the `Size` of `Animation Pose Configs` for the `KeyPointLabeler` to 1. Then, assign the `MyAnimationPoseConfig` to the sole slot in the list, as shown below:
If you run the simulation again to generate a new dataset, you will see the new poses we defined written in it. All frames that belong to a certain pose will have the pose label attached.
@@ -305,7 +306,7 @@ In the _**Inspector**_ view of `CocoKeypointTemplate`, you will see the list of
If you review the list you will see that the `left_ear` and `right_ear` joints are also not associated with the rig.
@@ -317,7 +318,7 @@ We will create our three new joints under the `Head` object.
* **:green_circle: Action**: Create three new empty GameObjects under `Head` and place them in the proper positions for the character's nose and ears, as seen in the GIF below (make sure the positions are correct in 3D space):
The final step in this process would be to label these new joints such that they match the labels of their corresponding keypoints in `CocoKeypointTemplate`. For this purpose, we use the `Joint Label` component.
@@ -327,7 +328,7 @@ The final step in this process would be to label these new joints such that they
If you run the simulation now, you can see the new joints being visualized:
You could now look at the latest generated dataset to confirm the new joints are being detected and written.
@@ -347,7 +348,7 @@ The `Animation Randomizer Tag` accepts a list of animation clips. At runtime, th
If you run the simulation now, your character will randomly perform one of the above four animations, each for 150 frames. This cycle will recur 20 times, which is the total number of Iterations in your Scenario.
> :information_source: The reason the character stops animating at certain points in the above GIF is that the animation clips are not set to loop. Therefore, if the randomly selected timestamp is sufficiently close to the end of the clip, the character will complete the animation and stop animating for the rest of the Iteration.
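For reference, a tag/randomizer pair along the lines described above could be sketched as follows. This is not the package's actual `AnimationRandomizer` source; the class and field names are hypothetical, and only the `RandomizerTag` base class, `tagManager.Query<T>()`, and the `OnIterationStart` hook are assumed from the Perception randomization framework:

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Perception.Randomization.Randomizers;

// Hypothetical tag: attach to a character to list the clips it is allowed to play.
[RequireComponent(typeof(Animator))]
public class MyAnimationRandomizerTag : RandomizerTag
{
    public List<AnimationClip> animationClips = new List<AnimationClip>();
}

[Serializable]
public class MyAnimationRandomizer : Randomizer
{
    protected override void OnIterationStart()
    {
        // On each Iteration, pick one clip for every tagged character in the Scene.
        foreach (var tag in tagManager.Query<MyAnimationRandomizerTag>())
        {
            var clip = tag.animationClips[UnityEngine.Random.Range(0, tag.animationClips.Count)];
            var animator = tag.GetComponent<Animator>();
            // Applying an arbitrary clip normally goes through a playable or an override
            // controller; animator.Play(clip.name) only works if the clip is a state in
            // the attached Animator Controller.
            animator.Play(clip.name);
        }
    }
}
```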
com.unity.perception/Documentation~/Randomization/Scenarios.md (1 addition, 1 deletion)
@@ -13,7 +13,7 @@ By default, the Perception package includes one ready-made Scenario, the `FixedL
Scenarios have a number of lifecycle hooks that are called during execution. Below is a diagram visualizing the sequence of operations run by a typical scenario:
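As a rough companion to that diagram (the image itself is not reproduced in this diff view), the sketch below lists the hooks a Randomizer can override; the scenario invokes them at the corresponding points of its lifecycle. The hook names are the ones documented for the Perception `Randomizer` base class, but treat the set shown here as illustrative rather than exhaustive:

```csharp
using System;
using UnityEngine;
using UnityEngine.Perception.Randomization.Randomizers;

[Serializable]
public class LifecycleLoggingRandomizer : Randomizer
{
    protected override void OnIterationStart()   => Debug.Log("Iteration started: randomize parameters here");
    protected override void OnUpdate()           => Debug.Log("Frame update within the current Iteration");
    protected override void OnIterationEnd()     => Debug.Log("Iteration ended: clean up spawned objects here");
    protected override void OnScenarioComplete() => Debug.Log("Scenario complete: all Iterations have run");
}
```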
com.unity.perception/Documentation~/Schema/Synthetic_Dataset_Schema.md (4 additions, 4 deletions)
@@ -61,7 +61,7 @@ The main difference between this schema and nuScenes is that we use **document b
This means that instead of requiring multiple id-based "joins" to explore the data, data is nested and sometimes duplicated for ease of consumption.
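To make the contrast concrete, a single capture record in this document-based layout carries its sensor metadata and annotations inline, roughly as in the illustrative type below (a sketch only; the field names mirror the capture fields used in this schema, but this is not a definitive type definition):

```csharp
using System.Collections.Generic;

// Illustrative only: one nested, document-style capture record. A normalized, table-based
// design would instead store annotations in separate tables and require id-based joins
// to reassemble this view.
public class CaptureRecord
{
    public string id;                          // unique id of this capture
    public string sequence_id;                 // the sequence this capture belongs to
    public int step;                           // step counter used to align captures and metrics
    public double timestamp;                   // timestamp of the capture within the sequence
    public SensorInfo sensor;                  // sensor metadata embedded (and possibly duplicated across captures)
    public string filename;                    // path to the captured image file
    public List<AnnotationRecord> annotations; // annotations nested directly inside the capture
}

public class SensorInfo
{
    public string sensor_id;
    public string modality;
}

public class AnnotationRecord
{
    public string id;
    public string annotation_definition;
    public string filename; // e.g. a segmentation PNG associated with this annotation
}
```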
## Components
### captures
@@ -96,7 +96,7 @@ We cannot use timestamps to synchronize between two different events because tim
Instead, we use a "step" counter which makes it easy to associate metrics and captures that occur at the same time.
Below is an illustration of how captures, metrics, timestamps and steps are synchronized.
Since each sensor might trigger captures at different frequencies, a given timestamp may contain anywhere from 0 to N captures, where N is the total number of sensors included in this scene.
If two sensors are captured at the same timestamp, they should share the same sequence, step and timestamp value.
@@ -169,7 +169,7 @@ annotation {
A grayscale PNG file that stores, at each pixel, the integer value of the labeled object (the label's `pixel_value` from the semantic segmentation entry of the [annotation spec](#annotation_definitionsjson) reference table).
#### capture.annotation.values
@@ -285,7 +285,7 @@ How to support instance segmentation (maybe we need to use polygon instead of pi
A grayscale PNG file that stores integer values of labeled instances at each pixel.
com.unity.perception/Documentation~/Tutorial/DatasetInsights.md (7 additions, 7 deletions)
@@ -18,15 +18,15 @@ This will download a Docker image from Unity. If you get an error regarding the
* **:green_circle: Action**: The image is now running on your computer. Open a web browser and navigate to `http://localhost:8888` to open the Jupyter notebook:
* **:green_circle: Action**: To make sure your data is properly mounted, navigate to the `data` folder. If you see the dataset's folders there, we are good to go.
* **:green_circle: Action**: Navigate to the `datasetinsights/notebooks` folder and open `Perception_Statistics.ipynb`.
* **:green_circle: Action**: Once in the notebook, remove the `/<GUID>` part of the `data_root = /data/<GUID>` path. Since the dataset root is already mapped to `/data`, you can use this path directly.
This notebook contains a variety of functions for generating plots, tables, and bounding box images that help you analyze your generated dataset. Certain parts of this notebook are currently not of use to us, such as the code meant for downloading data generated through Unity Simulation (coming later in this tutorial).
@@ -37,7 +37,7 @@ Below, you can see a sample plot generated by the Dataset Insights notebook, dep
@@ -61,7 +61,7 @@ Once the Docker image is running, the rest of the workflow is quite similar to w
* **:green_circle: Action**: In the `data_root = /data/<GUID>` line, the `<GUID>` part will be the location inside your `<download path>` where the data will be downloaded. Therefore, you can simply remove it so that the data is downloaded directly to the path you previously specified:
* **:green_circle: Action**: In the block of code titled "Unity Simulation [Optional]", uncomment the lines that assign values to variables, and insert the correct values, based on information from your Unity Simulation run.
@@ -93,7 +93,7 @@ The `access_token` you need for your Dataset Insights notebook is the access tok
Once you have entered all the information, the block of code should look like the screenshot below (the actual values you input will be different):
@@ -111,7 +111,7 @@ The next couple of code blocks (under "Load dataset metadata") analyze the downl
* **:green_circle: Action**: Once you reach the code block titled "Built-in Statistics", make sure the value assigned to the field `rendered_object_info_definition_id` matches the id displayed for this metric in the table output by the code block immediately before it. The screenshot below demonstrates this (note that your ids might differ from the ones here):