Commit 687df12

Author: Unity Technologies (committed)
Commit message: com.unity.polyspatial.visionos@0.0.4 ## [0.0.4] - 2023-07-18 ## [0.0.3] - 2023-07-18
1 parent ce98547

File tree

66 files changed: +1003 additions, -480 deletions


.editorconfig

Lines changed: 0 additions & 5 deletions
This file was deleted.

CHANGELOG.md

Lines changed: 3 additions & 5 deletions
```diff
@@ -4,15 +4,13 @@ All notable changes to this package will be documented in this file.
 The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/)
 and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).
 
-## Unreleased
+## [0.0.4] - 2023-07-18
+
+## [0.0.3] - 2023-07-18
 
 ## [0.0.2] - 2023-07-17
 
 ## [0.0.1] - 2023-07-14
 
 ### Added
 - Initial PolySpatial visionOS package.
-
-### Fixed
-### Changed
-### Removed
```

Documentation~/Assets.md

Lines changed: 25 additions & 0 deletions
# PolySpatial Asset Support

## Meshes
RealityKit offers a limited set of predefined vertex formats. Meshes can supply a position, a normal, a tangent, a color, blend weights, and blend indices. Unity will supply up to 8 texture coordinates to RealityKit, but note that only the first two UV channels are usable within its MaterialX implementation, limiting the utility of the extra geometric data.

Because Unity and RealityKit use different coordinate systems, some vertex attributes are modified when passing between systems. Handedness swapping is performed for position, normal, and tangent attributes, and UVs are flipped for all UV channels.
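To illustrate the conversion described above, the following sketch shows a typical handedness swap plus UV flip. This is illustrative only, not PolySpatial's actual conversion code:

```csharp
using UnityEngine;

// Illustrative sketch only -- not PolySpatial's actual conversion code.
// Unity uses a left-handed coordinate system; a common way to convert to a
// right-handed system is to negate Z, and UV flipping inverts the V coordinate.
public static class CoordinateConversion
{
    public static Vector3 SwapHandedness(Vector3 unityPosition)
    {
        // Negate Z to swap handedness; the same transform applies to
        // normals and the XYZ portion of tangents.
        return new Vector3(unityPosition.x, unityPosition.y, -unityPosition.z);
    }

    public static Vector2 FlipUV(Vector2 unityUV)
    {
        // Unity's UV origin is bottom-left; flipping V matches a
        // top-left-origin convention.
        return new Vector2(unityUV.x, 1.0f - unityUV.y);
    }
}
```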
## Materials
Please refer to [PolySpatial Material Support](Materials.md) for detailed information about material and shader support on visionOS.

### Unity ShaderGraphs
Please refer to [Shader Graph Support](ShaderGraph.md) for detailed information about how custom shaders defined via Unity ShaderGraph are converted to MaterialX to interoperate with RealityKit.

## Textures
Unity provides support for 2D textures on visionOS and takes advantage of native texture compression options.

RealityKit for visionOS does not support 3D textures or cubemaps, so users must reimplement these texture assets in terms of 2D textures instead.

### Render Textures
Unity will replicate render targets to RealityKit in real time, but currently only a limited number of submissions can be made per update. Introducing additional render targets may contend with Unity's own graphics buffer submission, hindering overall performance.

Also note that you must manually mark RenderTextures as dirty after modifying them; currently, no such dirtying occurs automatically, and if the texture isn't dirtied it won't be replicated over to RealityKit.
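As a sketch of this workflow, the update-then-dirty pattern might look like the following. The `PolySpatialObjectUtils.MarkDirty` call and its namespace are assumptions here; verify the exact dirtying API against the scripting reference for your package version:

```csharp
using UnityEngine;
using Unity.PolySpatial; // Namespace is an assumption; check the scripting reference.

public class RenderTextureUpdater : MonoBehaviour
{
    public RenderTexture target;
    public Material blitMaterial;

    void Update()
    {
        // Re-render into the render texture...
        Graphics.Blit(null, target, blitMaterial);

        // ...then explicitly mark it dirty so PolySpatial replicates the new
        // contents to RealityKit (this does not happen automatically).
        // MarkDirty is an assumed API name for illustration.
        PolySpatialObjectUtils.MarkDirty(target);
    }
}
```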
## Fonts
Both rasterized and SDF fonts are supported on visionOS, but we highly recommend using SDF fonts to ensure sharpness at all viewing distances.
Documentation~/DevelopmentAndIteration.md

Lines changed: 35 additions & 0 deletions
# Development & Iteration

## Prerequisites
Please refer to [visionOS PolySpatial Requirements & Limitations](Requirements.md) for information about supported hardware, software, and Unity features.

<!-- TODO: ## Package Setup Instructions -->

## Enable PolySpatial runtime
visionOS support for Mixed Reality is provided by Unity PolySpatial, which can be toggled via **Edit > Project Settings > PolySpatial > Enable PolySpatial Runtime**.

## Iteration and Preview
Unity provides several options for iterating on and previewing content that targets visionOS. These options are on par with Unity's support for other non-desktop platforms.

### Play Mode
The fastest way to preview content is to enter Play Mode within the Unity Editor. This provides the fastest iteration cycle, but uses Unity's rendering system rather than RealityKit. This mode is optimized for rapid iteration, such as iterating on gameplay or UX, but may not always provide a faithful representation of the visuals or performance characteristics of the target platform. Visuals, optimization, and similar tasks typically benefit from the other preview options Unity provides. In addition, Play Mode doesn't currently preview volumes or the new input modalities provided by visionOS.

To better approximate the visionOS runtime, Play Mode for PolySpatial apps creates a parallel hierarchy of **backing** GameObjects that are linked to your app's **simulation** GameObjects but perform all the rendering. This means you will observe some differences based on the state of the `Enable PolySpatial Runtime` project setting. These differences are intentional, as they allow developers to better preview how their content will look on device.
### visionOS Player Builds
Choose visionOS from the Build Settings window to target your build for visionOS. Most options in Build Settings are analogous to those provided for iOS. visionOS player builds generate an Xcode project that must be compiled on a Mac (currently, this must be a Mac with Apple silicon), but may target either the visionOS simulator or an Apple Vision Pro headset connected to your Mac.

Note: unlike iOS, there is no need to switch to a different SDK in Project Settings to run your content in the simulator. Simply select the RealityDevice simulator target in Xcode.

When building to a development kit, make sure you have set up a valid provisioning profile and signing certificate for the Apple Development platform (including visionOS). You will also need to make sure the device is correctly registered to your development account.

### Recording and Playback
PolySpatial for visionOS supports a unique recording and playback workflow that allows you to record a session (including input commands) and then play it back within the Unity Editor. For more information, see the documentation on [PolySpatial tooling](Tooling.md).
## Debugging Support
The standard debugging workflow works normally when using PolySpatial. Enable Script Debugging in the Build Settings (and, optionally, Wait for Managed Debugger), then attach a managed debugger/IDE to your running application and debug your script code.

## Building Blocks in PolySpatial XR
The building blocks system is an overlay window in the Scene view that gives you quick access to commonly used items in your project. To open the building blocks overlay, click the hamburger menu in the Scene view and choose the Overlay menu, or move the mouse over the Scene view and press the tilde (~) key. Then enable the Building Blocks overlay.

You can find more info about the building blocks system in the [XR Core Utils package](https://docs.unity3d.com/Packages/com.unity.xr.core-utils@latest).

Documentation~/FAQ.md

Lines changed: 42 additions & 0 deletions
---
uid: polyspatialxr-faq
---

# Frequently Asked Questions (FAQ)

## Q: I see different results running in the visionOS simulator than on hardware
* When running in the simulator, some hardware-specific features are not available, most notably AR data. As a result, outcomes in the visionOS simulator may differ from those on the Vision Pro headset. Check out Apple's guide on running your app in the simulator to learn more.
* Unity is still building towards feature parity with the Metal API in Xcode, so you might observe warnings from Metal's API validation layer. To work around this, you can turn off the Metal API Validation Layer via Xcode's scheme menu.

## Q: How can I bring an existing mobile project to the PolySpatial XR platform?
Please check the project conversion guide on the [getting started page](GettingStarted.md#unity-project-conversion-guide-for-unity-polyspatial-xr) for information on enabling and using PolySpatial.

## Q: How can I bring an existing XR project to the PolySpatial XR platform?
See the project conversion guide on the [getting started page](GettingStarted.md#unity-project-conversion-guide-for-unity-polyspatial-xr).

## Q: I enter Play Mode and see no visual or execution difference in my project!
This may indicate you haven't yet turned on support for the PolySpatial runtime. To do so, go to **Project Settings > PolySpatial** and make sure that **Enable PolySpatial Runtime** is toggled on.

## Q: The runtime is enabled, but nothing shows up!
* Ensure you have a Volume Camera in your scene. An Unbounded Volume Camera with its origin positioned in the middle of your scene is a good starting point. If one is not present, a default one will be created that includes the bounds of every object in the scene, but this may cause objects within the bounds of the volume camera to be too small to see.
* Verify that the in-editor preview runtime is functioning. Open the "DontDestroyOnLoad" scene in the hierarchy while playing, and check whether a "PolySpatial Root" object is present. If it is not, ensure that the PolySpatial runtime is enabled. If it is enabled and nothing shows up, please contact the Unity team.
* When using an Unbounded camera, the platform is responsible for choosing the (0,0,0) origin and may choose an unexpected position for it. Look around (literally) to see if your content is somewhere other than where you think it should be. Rebooting the device can also help to reset its session space. It can be helpful to ensure that the device is in a consistent location (for example, sitting on the desk, facing forward) every time you boot it up.
## Q: Skinned Meshes are not animating!
* On the **Animator** component, ensure **Culling Mode** is set to **Always Animate**.
* If the model is imported, navigate to the **Import Settings** for the model. Under the **Rig** tab, ensure **Optimize Game Object** is unticked. Some models may not have this setting; in that case, no change is needed.
* Certain models may contain a skeleton (a set of bones in a hierarchy) that is incompatible with RealityKit. To be compatible, a skeleton must have the following attributes:
  1. A group of bones must have a common ancestor GameObject in the transform hierarchy.
  2. Each bone in the skeleton must be able to traverse up the transform hierarchy without passing any non-bone GameObjects.
* In general, skeletons that have a non-bone GameObject somewhere in the skeleton (often used for scaling or offsets on bones) are not supported.
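The first check above can also be enforced from script. A minimal sketch:

```csharp
using UnityEngine;

// Ensure an Animator keeps animating even when its renderers are culled,
// matching the "Always Animate" recommendation above.
public class AlwaysAnimate : MonoBehaviour
{
    void Awake()
    {
        var animator = GetComponent<Animator>();
        if (animator != null)
        {
            animator.cullingMode = AnimatorCullingMode.AlwaysAnimate;
        }
    }
}
```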
## Q: I see an error on build about ScriptableSingleton
* This comes from the AR Foundation package and is benign. You can ignore this error.

## Q: I see a NullReferenceException or other issues in the log related to a tracker (Mesh Tracker, Sprite Tracker, etc.)
* Locate the Runtime Flags option in the PolySpatial settings and select the tracker that is causing issues. This will disable changes from those types of objects in PolySpatial. Please flag the issue with the team so we can understand and fix the tracker type.

## Q: My TextMeshPro text shows up as pink glyph blocks, or my TextMeshPro text is blurry
* Locate the shader graphs included in the visionOS package (visionOS/Resources/Shaders), then right-click and choose Reimport.

Documentation~/GettingStarted.md

Lines changed: 20 additions & 0 deletions
# Getting Started
This section covers several important topics to get you up to speed when developing for the visionOS platform. Here you will find links to a step-by-step guide for building your first Unity PolySpatial XR app, as well as some best practices for developing with PolySpatial XR.

## Development & Iteration
Please refer to [Development & Iteration](DevelopmentAndIteration.md) for information about prerequisites, development, iteration, deployment, and debugging.

## Creating New Projects
These guides have step-by-step instructions for getting started with visionOS.
* In [Starting a new visionOS project from scratch](TutorialCreateFromScratch.md), you will find a step-by-step tutorial that guides you through installing, setting up, and deploying a simple Unity app from scratch to target visionOS and the Apple Vision Pro.
* In [Starting a new visionOS project from the Immersive App Template](TutorialCreateFromTemplate.md), you will find a step-by-step tutorial for setting up a new project using the Immersive App Template.
* In [Sample Content: Learn how to use visionOS with Application Examples](Samples.md), you will find a wide range of vertical-slice demo projects explaining how to develop for visionOS using PolySpatial technology.

## Porting Existing Projects

When porting existing Unity projects to visionOS, several considerations need to be taken into account. The biggest limitation is that some core Unity features aren't supported, and others provide a reduced feature set. In addition, input is different, and processing power and supported components will vary. Sometimes you will have to develop your own systems to support your unique project features and work around these limitations.

You can find information about [porting VR experiences to visionOS](VRApps.md#porting-vr-experiences-to-visionos), find out which [Unity features and components](SupportedFeatures.md) are currently supported for immersive apps, and learn how to use [Project Validation](PolySpatialXRProjectValidation.md) for helpful in-editor assistance when porting a project. For more information on input and other development topics, review the [Reference documentation](TableOfContents.md#reference-documentation).

<!-- ## Development best practices -->

Documentation~/Glossary.md

Lines changed: 29 additions & 0 deletions
# Glossary

- **PolySpatial Core** (com.unity.polyspatial): The foundational PolySpatial package, where initialization and all setup begins. It performs change tracking and processing and serialization/deserialization, and includes the ShaderGraph-to-MaterialX converter.

- **PolySpatial XR** (com.unity.polyspatial.xr): Includes scene validation, capability profiles, building blocks, and coaching UI. Adds package dependencies on XRI, AR Foundation, and XR Hands.

- **Unity PolySpatial -- Apple visionOS support** (com.unity.polyspatial.visionos): Adds a new build target (visionOS) and platform support for visionOS and Apple Vision Pro.

- **PolySpatial App** (aka **Client App** or **Unity App**): A Unity app (Unity player) that uses PolySpatial. PolySpatial apps are split into two logical parts: the **Unity Sim** and the **Backend**.

- **Unity Sim**: The non-rendering portion of a Unity app: its application-specific logic, as well as built-in simulation features including physics, animation, AI, and asset management.
  - **(Unity) Sim Space**: The world space of a Unity Sim. While a typical Unity app simulates and renders objects in the same space, these may differ in a PolySpatial app.
  - **(Unity) Sim Physics**: The physics and colliders of the Unity Sim.

- **Vanilla Unity**: In the context of PolySpatial, Vanilla Unity refers to a non-PolySpatial Unity app.

- **PolySpatial Host** (or Backend): The system that's responsible for actually rendering the objects controlled by the **Unity Sim**.
  - **Host (or Backend) Space**: The world space of the backend or Host in which a PolySpatial app is running. This may differ from **Unity Sim Space** because the host environment may allow apps to be moved around independently (for example, relocated to another position and volume in the real world).
  - **Host (or Backend) Physics**: The physics and colliders implemented within the backend to model the full shared environment for purposes such as input, selection, and cross-app interactions.

- **PolySpatial Layer**: A Unity layer created to house the backing objects of the Unity SceneGraph. If no such layer already exists *and* there are no free layers in which to create one, the PolySpatial runtime will not initialize when you enter Play Mode, and you will instead get vanilla Unity rendering.

- **Volume Camera**: A new component which defines which content within a Sim scene should be displayed on the Host. A volume camera consists of a mode, an oriented bounding box (OBB), and a culling mask. There are currently two modes:
  - Bounded Mode: In this mode, all content within the volume camera's OBB *and* whose layer matches the culling mask will be replicated to the host. Content that falls on the border (partially inside and partially outside the OBB) will be clipped.
  - Unbounded Mode: In this mode, the OBB is ignored, and all content in the scene whose layer matches the culling mask will be replicated to the host. No content is explicitly clipped. See [VolumeCamera](VolumeCamera.md) for more details.

- **Exclusive Mode**: Refers to the runtime behavior where an app is the only active and visible application.

- **Shared Mode**: Refers to the runtime behavior where other apps may be active and/or visible.

Documentation~/HoverEffect.md

Lines changed: 6 additions & 0 deletions
# PolySpatial Hover Effect
The PolySpatial Hover Effect component provides a hint to the platform to apply a system hover effect when the user is hovering over an object. This is typically used as a visual cue that the object is interactive. The effect is triggered by gaze or a hand poke, and is applied to the object being hovered over, not the object doing the hovering.

For privacy reasons, visionOS does not permit apps direct access to user gaze. However, it is often helpful to visually highlight objects at which the user is gazing in order to hint which object will receive input if the user performs a pinch gesture. To this end, Unity PolySpatial provides a `PolySpatialHoverEffect` component for visionOS, which can be added to GameObjects that might receive input via gaze. The presence of this component instructs the host platform (RealityKit) to apply a coloration effect to the `GameObject`'s `MeshRenderer` any time the user's gaze ray intersects its corresponding collider(s).

All three components must be present to achieve an effect: the `PolySpatialHoverEffect` indicates that a `GameObject` should display hover, a `Collider` component defines the collision shape against which the gaze ray is cast, and the `MeshRenderer` provides the mesh and geometry on which the coloration effect will be applied.
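A minimal setup sketch for the three-component requirement above. The `PolySpatialHoverEffect` type name comes from this package, but its namespace is an assumption; verify against the scripting reference:

```csharp
using UnityEngine;
using Unity.PolySpatial; // Namespace is an assumption; check the scripting reference.

public static class HoverableSetup
{
    // Create a cube that can display the system hover effect: a primitive
    // already carries a MeshRenderer and a BoxCollider, so adding the
    // PolySpatialHoverEffect completes the three required components.
    public static GameObject CreateHoverableCube()
    {
        var cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
        cube.AddComponent<PolySpatialHoverEffect>(); // hover hint for RealityKit
        return cube;
    }
}
```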

Documentation~/Input.md

Lines changed: 23 additions & 0 deletions
## Input
<a name="input"></a>

There are two ways to capture user intent on visionOS: 3D touch and skeletal hand tracking. In Exclusive Mode, developers can also access head tracking data.

### 3D Touch and TouchSpace

In both bounded and unbounded volumes, a 3D touch input is provided when the user looks at an object with an input collider and performs the "pinch" gesture (touching thumb and index finger together to "**tap**" or "**drag**"). The **PolySpatialTouchSpace input device** provides that information to the developer. If the user holds the pinch gesture, a drag is initiated and the application is provided "move" updates relative to the original start point. Users may also perform the pinch gesture directly on an object if it is within arm's reach (without specific gaze).

3D touch events are exposed via the **PolySpatialTouchSpace input device**, which is built on top of the `com.unity.inputsystem` package, otherwise known as the New Input System. Existing actions bound to a touchscreen device should work for 2D input. For 3D input, users can bind actions to the specific **PolySpatialTouchSpace** device for a 3D position vector.
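A sketch of binding an action to a 3D touch position via the New Input System. The binding path below is a hypothetical placeholder for illustration; inspect the actual control paths exposed by the PolySpatialTouchSpace device in the Input Debugger:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class TouchSpaceReader : MonoBehaviour
{
    InputAction m_TouchPosition;

    void OnEnable()
    {
        // Hypothetical control path; use Window > Analysis > Input Debugger
        // to find the real path exposed by the device.
        m_TouchPosition = new InputAction(binding: "<PolySpatialTouchSpace>/touchPosition");
        m_TouchPosition.Enable();
    }

    void Update()
    {
        Vector3 position = m_TouchPosition.ReadValue<Vector3>();
        Debug.Log($"3D touch position: {position}");
    }

    void OnDisable() => m_TouchPosition.Disable();
}
```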
A collider with its collision mask set to the PolySpatial Input layer is required on any object that can receive 3D touch events; only touches against those colliders are reported. At this time, the platform does not make the gaze ray at the start of a tap gesture available.
### Skeletal Hand Tracking

Skeletal hand tracking is provided by the **Hand Subsystem** in the **XR Hands package**. Using a **Hand Visualizer** component in the scene, users can show a skinned mesh or per-joint geometry for the player's hands, as well as physics objects for hand-based physics interactions. Users can write C# scripts against the **Hand Subsystem** directly to reason about the distance between bones and joint angles. The code for the **Hand Visualizer** component is available in the **XR Hands package** and serves as a good jumping-off point for code utilizing the **Hand Subsystem**.
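As a sketch of writing directly against the Hand Subsystem (using the XR Hands package API; double-check names against your installed version), the following measures the distance between thumb tip and index tip:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.Hands;

// Query joint poses from the XR Hands subsystem to measure the distance
// between the thumb tip and index tip (e.g. to detect a pinch).
public class PinchDistance : MonoBehaviour
{
    XRHandSubsystem m_Subsystem;

    void Update()
    {
        if (m_Subsystem == null)
        {
            var subsystems = new List<XRHandSubsystem>();
            SubsystemManager.GetSubsystems(subsystems);
            if (subsystems.Count == 0)
                return;
            m_Subsystem = subsystems[0];
        }

        var hand = m_Subsystem.rightHand;
        if (!hand.isTracked)
            return;

        var thumb = hand.GetJoint(XRHandJointID.ThumbTip);
        var index = hand.GetJoint(XRHandJointID.IndexTip);
        if (thumb.TryGetPose(out Pose thumbPose) && index.TryGetPose(out Pose indexPose))
        {
            float distance = Vector3.Distance(thumbPose.position, indexPose.position);
            Debug.Log($"Thumb-to-index distance: {distance:F3} m");
        }
    }
}
```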
### Head Tracking

Head tracking is provided by ARKit through the **visionOS package**. This can be set up in a scene using the create menu for mobile AR: **Create > XR > XR Origin (Mobile AR)**. The pose data comes through the New Input System from **devicePosition \[HandheldARInputDevice\]** and **deviceRotation \[HandheldARInputDevice\]**.
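A sketch of reading the head pose from those controls manually. In practice the XR Origin (Mobile AR) setup drives the camera via a Tracked Pose Driver, so bindings like these are only needed for custom logic:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Read head pose via the New Input System using the devicePosition and
// deviceRotation controls mentioned above.
public class HeadPoseReader : MonoBehaviour
{
    InputAction m_Position;
    InputAction m_Rotation;

    void OnEnable()
    {
        m_Position = new InputAction(binding: "<HandheldARInputDevice>/devicePosition");
        m_Rotation = new InputAction(binding: "<HandheldARInputDevice>/deviceRotation");
        m_Position.Enable();
        m_Rotation.Enable();
    }

    void Update()
    {
        Vector3 position = m_Position.ReadValue<Vector3>();
        Quaternion rotation = m_Rotation.ReadValue<Quaternion>();
        transform.SetPositionAndRotation(position, rotation);
    }

    void OnDisable()
    {
        m_Position.Disable();
        m_Rotation.Disable();
    }
}
```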
