**README.md** (14 additions, 8 deletions)

> com.unity.perception is in active development. Its features and API are subject to significant change as development progresses.

# Perception Package (Unity Computer Vision)

The Perception package provides a toolkit for generating large-scale datasets for computer vision training and validation. It is focused on a handful of camera-based use cases for now and will ultimately expand to other forms of sensors and machine learning tasks.

Detailed instructions covering all the important steps from installing Unity Editor, to creating your first computer vision data generation project, building a randomized Scene, and generating large-scale synthetic datasets by leveraging the power of Unity Simulation. No prior Unity experience required.

## Documentation

In-depth documentation on individual components of the package.

|Feature|Description|
|---|---|
|[Labeling](com.unity.perception/Documentation~/GroundTruthLabeling.md)|A component that marks a GameObject and its descendants with a set of labels|
|[Label Config](com.unity.perception/Documentation~/GroundTruthLabeling.md#label-config)|An asset that defines a taxonomy of labels for ground truth generation|
|[Perception Camera](com.unity.perception/Documentation~/PerceptionCamera.md)|Captures RGB images and ground truth from a [Camera](https://docs.unity3d.com/Manual/class-Camera.html).|
|[Dataset Capture](com.unity.perception/Documentation~/DatasetCapture.md)|Ensures sensors are triggered at proper rates and accepts data for the JSON dataset.|
|[Randomization (Experimental)](com.unity.perception/Documentation~/Randomization/Index.md)|The Randomization tool set lets you integrate domain randomization principles into your simulation.|

## Example Projects

The [Unity Simulation Smart Camera Example](https://github.com/Unity-Technologies/Unity-Simulation-Smart-Camera-Outdoor) illustrates how the Perception package could be used in a smart city or autonomous vehicle simulation. You can generate datasets locally or at scale in [Unity Simulation](https://unity.com/products/unity-simulation).

## Local development
The repository includes two projects for local development in the `TestProjects` folder, one set up for HDRP and the other for URP.

## License

* [License](com.unity.perception/LICENSE.md)

## Support

For general questions or concerns please contact the Computer Vision team at [email protected].

For feedback, bugs, or other issues please file a GitHub issue and the Computer Vision team will investigate the issue as soon as possible.

## Citation

If you find this package useful, consider citing it using:

**com.unity.perception/Documentation~/DatasetCapture.md** (5 additions, 3 deletions)

## Sensor scheduling
While sensors are registered, `DatasetCapture` ensures that frame timing is deterministic and that frames run at the appropriate simulation times, so that each sensor can render and capture at its own rate.

Using [Time.captureDeltaTime](https://docs.unity3d.com/ScriptReference/Time-captureDeltaTime.html), it also decouples wall clock time from simulation time, allowing the simulation to run as fast as possible.

## Custom sensors

You can register custom sensors using `DatasetCapture.RegisterSensor()`. The `simulationDeltaTime` you pass in at registration time is used as `Time.captureDeltaTime` and determines how often (in simulation time) frames should be simulated for the sensor to run. This and the `framesBetweenCaptures` value determine at which exact times the sensor should capture the simulated frames. The decoupling of simulation delta time and capture frequency based on frames simulated allows you to render frames in-between captures. If no in-between frames are desired, you can set `framesBetweenCaptures` to 0. When it is time to capture, the `ShouldCaptureThisFrame` check of the `SensorHandle` returns true. `SensorHandle.ReportCapture` should then be called in each of these frames to report the state of the sensor to populate the dataset.

`Time.captureDeltaTime` is set at every frame in order to precisely fall on the next sensor that requires simulation, and this includes multi-sensor simulations. For instance, if one sensor has a `simulationDeltaTime` of 2 and another 3, the first five values for `Time.captureDeltaTime` will be 2, 1, 1, 2, and 2, meaning simulation will happen on the timestamps 0, 2, 3, 4, 6, and 8.
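
As a rough sketch of this flow (not a drop-in implementation): the snippet below registers a hypothetical sensor and reports a capture on each frame the handle marks as due. The exact `RegisterSensor` and `ReportCapture` parameter lists differ between Perception package versions, so the argument names and overloads used here are assumptions to verify against your installed version.

```csharp
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

// Sketch of a custom sensor. The RegisterSensor overload shown here (name, modality,
// description, firstCaptureFrame, trigger mode, simulationDeltaTime, framesBetweenCaptures)
// is an assumption based on the scheduling parameters described above; check the
// DatasetCapture API in your package version for the exact signature.
public class MyCustomSensor : MonoBehaviour
{
    SensorHandle m_SensorHandle;

    void Start()
    {
        m_SensorHandle = DatasetCapture.RegisterSensor(
            "MyCustomSensor",              // sensor name recorded in the dataset
            "custom",                      // modality
            "An example custom sensor",    // description
            0,                             // firstCaptureFrame
            CaptureTriggerMode.Scheduled,  // capture on a fixed schedule
            0.1f,                          // simulationDeltaTime: simulate every 0.1 s of simulation time
            0);                            // framesBetweenCaptures: capture every simulated frame
    }

    void Update()
    {
        // Only report data on frames that DatasetCapture has scheduled for this sensor.
        if (!m_SensorHandle.ShouldCaptureThisFrame)
            return;

        // Report this capture. The file name and spatial data below are placeholders;
        // see SensorHandle.ReportCapture for the parameters your version expects.
        m_SensorHandle.ReportCapture(
            "captures/my_custom_sensor.dat",
            new SensorSpatialData(Pose.identity, Pose.identity, null, null));
    }
}
```
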
## Custom annotations and metrics
In addition to the common annotations and metrics produced by [PerceptionCamera](PerceptionCamera.md), scripts can produce their own via `DatasetCapture`. You must first register annotation and metric definitions using `DatasetCapture.RegisterAnnotationDefinition()` or `DatasetCapture.RegisterMetricDefinition()`. These return `AnnotationDefinition` and `MetricDefinition` instances which you can then use to report values during runtime.
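
A minimal sketch of that sequence, assuming a `PerceptionCamera` on the same GameObject provides the `SensorHandle`; the `RegisterAnnotationDefinition`, `RegisterMetricDefinition`, `ReportAnnotationValues`, and `ReportMetric` argument lists vary between package versions, so treat them as assumptions rather than exact signatures.

```csharp
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

// Sketch: register a custom annotation and metric once, then report values on each
// frame the camera captures. Overload details are assumptions; verify against the
// DatasetCapture API in your installed package version.
public class LightReporter : MonoBehaviour
{
    public Light targetLight;

    AnnotationDefinition m_LightAnnotationDefinition;
    MetricDefinition m_LightCountMetricDefinition;
    SensorHandle m_CameraSensorHandle;

    void Start()
    {
        m_LightAnnotationDefinition = DatasetCapture.RegisterAnnotationDefinition(
            "Target light position", "World-space position of the target light");
        m_LightCountMetricDefinition = DatasetCapture.RegisterMetricDefinition(
            "Light count", "Number of active lights in the Scene");

        // Assumes a PerceptionCamera component on the same GameObject.
        m_CameraSensorHandle = GetComponent<PerceptionCamera>().SensorHandle;
    }

    void Update()
    {
        if (!m_CameraSensorHandle.ShouldCaptureThisFrame)
            return;

        // Annotations are reported against a specific sensor's capture...
        m_CameraSensorHandle.ReportAnnotationValues(
            m_LightAnnotationDefinition,
            new[] { targetLight.transform.position });

        // ...while metrics can be reported for the frame as a whole.
        DatasetCapture.ReportMetric(
            m_LightCountMetricDefinition,
            new[] { FindObjectsOfType<Light>().Length });
    }
}
```
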
**com.unity.perception/Documentation~/PerceptionCamera.md**

The Perception Camera component ensures that the [Camera](https://docs.unity3d.com/Manual/class-Camera.html) runs at deterministic rates. It also ensures that the Camera uses [DatasetCapture](DatasetCapture.md) to capture RGB and other Camera-related ground truth in the [JSON dataset](Schema/Synthetic_Dataset_Schema.md). You can use the Perception Camera component on the High Definition Render Pipeline (HDRP) or the Universal Render Pipeline (URP).


<br><i>The Inspector view of the Perception Camera component</i>

## Properties
| Property: | Function: |
|--|--|
| Description | A description of the Camera to be registered in the JSON dataset. |
| Show Visualizations | Display realtime visualizations for labelers that are currently active on this camera. |
| Capture RGB Images | When you enable this property, Unity captures RGB images as PNG files in the dataset each frame. |
| Capture Trigger Mode | The method of triggering captures for this camera. In `Scheduled` mode, captures happen automatically based on a start frame and frame delta time. In `Manual` mode, captures should be triggered manually through calling the `RequestCapture` method of `PerceptionCamera`. |
| Camera Labelers | A list of labelers that generate data derived from this Camera. |

### Properties for Scheduled Capture Mode

| Property: | Function: |
|--|--|
| Simulation Delta Time | The simulation frame time (seconds) for this camera. E.g. 0.0166 translates to 60 frames per second. This will be used as Unity's `Time.captureDeltaTime`, causing a fixed number of frames to be generated for each second of elapsed simulation time regardless of the capabilities of the underlying hardware. For more information on sensor scheduling, see [DatasetCapture](DatasetCapture.md). |
| First Capture Frame | Frame number at which this camera starts capturing. |
| Frames Between Captures | The number of frames to simulate and render between the camera's scheduled captures. Setting this to 0 makes the camera capture every frame. |

### Properties for Manual Capture Mode

| Property: | Function: |
|--|--|
| Affect Simulation Timing | Have this camera affect simulation timings (similar to a scheduled camera) by requesting a specific frame delta time. Enabling this option will let you set the `Simulation Delta Time` property described above.|
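
For example, here is a small sketch of triggering captures in `Manual` mode from a script, assuming the `PerceptionCamera` component is on the same GameObject and that `RequestCapture()` takes no arguments in your package version.

```csharp
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

// Sketch for Manual capture trigger mode: request a capture only when a
// game-specific condition is met (here, a key press).
public class ManualCaptureTrigger : MonoBehaviour
{
    PerceptionCamera m_PerceptionCamera;

    void Start()
    {
        // Assumes this script sits next to the PerceptionCamera component.
        m_PerceptionCamera = GetComponent<PerceptionCamera>();
    }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
            m_PerceptionCamera.RequestCapture();
    }
}
```
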
## Camera labelers
Camera labelers capture data related to the Camera in the JSON dataset. You can use this data to train models and for dataset statistics. The Perception package provides several Camera labelers, and you can derive from the CameraLabeler class to define more labelers.
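
As a sketch of what deriving a labeler looks like, the snippet below overrides a few `CameraLabeler` members. The member names (`description`, `supportsVisualization`, `Setup`, `OnUpdate`) reflect one version of the base class and are assumptions; check the `CameraLabeler` source in your installed package version for the exact overridable set.

```csharp
using UnityEngine;
using UnityEngine.Perception.GroundTruth;

// Sketch of a minimal custom labeler; member names are assumptions (see note above).
[System.Serializable]
public class FrameLogLabeler : CameraLabeler
{
    public override string description => "Logs each captured frame";
    protected override bool supportsVisualization => false;

    int m_CapturedFrames;

    protected override void Setup()
    {
        m_CapturedFrames = 0;
    }

    protected override void OnUpdate()
    {
        // A real labeler would report annotations or metrics via DatasetCapture here.
        m_CapturedFrames++;
        Debug.Log($"PerceptionCamera captured frame #{m_CapturedFrames}");
    }
}
```
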

### Semantic Segmentation Labeler


<br/>_Example semantic segmentation image from a modified SynthDet project_

The SemanticSegmentationLabeler generates a 2D RGB image with the attached Camera. Unity draws objects in the color you associate with the label in the SemanticSegmentationLabelingConfiguration. If Unity can't find a label for an object, it draws it in black.

### Instance Segmentation Labeler

The instance segmentation labeler generates a 2D RGB image with the attached camera. Unity draws each instance of a labeled object with a unique color.

### Bounding Box 2D Labeler


<br/>_Example bounding box visualization from SynthDet generated by the `SynthDet_Statistics` Jupyter notebook_

The BoundingBox2DLabeler produces 2D bounding boxes for each visible object with a label you define in the IdLabelConfig. Unity calculates bounding boxes using the rendered image, so it only excludes occluded or out-of-frame portions of the objects.

### Bounding Box 3D Ground Truth Labeler

The Bounding Box 3D Ground Truth Labeler produces 3D ground truth bounding boxes for each labeled game object in the scene. Unlike the 2D bounding boxes, 3D bounding boxes are calculated from the labeled meshes in the scene and all objects (independent of their occlusion state) are recorded.

### Object Count Labeler

```
{
    ...
}
```
_Example object count for a single label_

The ObjectCountLabeler records object counts for each label you define in the IdLabelConfig. Unity only records objects that have at least one visible pixel in the Camera frame.

### Rendered Object Info Labeler

```
{
    "label_id": 24,
    ...
}
```
_Example rendered object info for a single object_

The RenderedObjectInfoLabeler records a list of all objects visible in the Camera image, including their instance IDs, resolved label IDs, and visible pixel counts. If Unity cannot resolve objects to a label in the IdLabelConfig, it does not record these objects.

### KeypointLabeler

The keypoint labeler captures keypoints of a labeled GameObject. The typical use of this labeler is capturing human pose estimation data. The labeler uses a [keypoint template](#KeypointTemplate) which defines the keypoints to capture for the model and the skeletal connections between those keypoints. The positions of the keypoints are recorded in pixel coordinates and saved to the captures JSON file.

```
keypoints {
  label_id:      <int> -- Integer identifier of the label
  instance_id:   <str> -- UUID of the instance.
  template_guid: <str> -- UUID of the keypoint template
  pose:          <str> -- Pose ground truth information
  keypoints [    -- Array of keypoint data, one entry for each keypoint defined in associated template file.
    {
      index: <int>   -- Index of keypoint in template
      x:     <float> -- X pixel coordinate of keypoint
      y:     <float> -- Y pixel coordinate of keypoint
      state: <int>   -- 0: keypoint does not exist, 1: keypoint exists
    }, ...
  ]
}
```

#### Keypoint Template

Keypoint templates are used to define the keypoints and skeletal connections captured by the KeypointLabeler. The keypoint template takes advantage of Unity's humanoid animation rig, and allows the user to automatically associate template keypoints to animation rig joints. Additionally, the user can choose to ignore the rigged points, or add points not defined in the rig. A COCO keypoint template is included in the Perception package.

##### Editor

The keypoint template editor allows the user to create/modify a keypoint template. The editor consists of the header information, the keypoint array, and the skeleton array.


<br/>_Header section of the keypoint template_

In the header section, a user can change the name of the template and supply textures that they would like to use for the keypoint visualization.


<br/>_Keypoint section of the keypoint template_

The keypoint section allows the user to create/edit keypoints and associate them with Unity animation rig points. Each keypoint record has 4 fields: label (the name of the keypoint), Associate to Rig (a boolean value which, if true, automatically maps the keypoint to the GameObject defined by the rig), Rig Label (only needed if Associate To Rig is true, defines which rig component to associate with the keypoint), and Color (RGB color value of the keypoint in the visualization).


<br/>_Skeleton section of the keypoint template_

The skeleton section allows the user to create connections between joints, defining the skeleton of a labeled object.

##### Format
```
annotation_definition.spec {
  template_id:   <str> -- The UUID of the template
  template_name: <str> -- Human readable name of the template
  key_points [   -- Array of joints defined in this template
    {
      label: <str> -- The label of the joint
      index: <int> -- The index of the joint
    }, ...
  ]
  skeleton [     -- Array of skeletal connections (which joints have connections between one another) defined in this template
    {
      joint1: <int> -- The first joint of the connection
      joint2: <int> -- The second joint of the connection
    }, ...
  ]
}
```

154
+
155
+
#### Animation Pose Label
156
+
157
+
This file is used to define timestamps in an animation to a pose label.
158
+
65
159
## Limitations
Ground truth is not compatible with all rendering features, especially those that modify the visibility or shape of objects in the frame.