# BionicVisionXR

BionicVisionXR is a virtual and augmented reality toolbox for simulated prosthetic vision (SPV).

The software provides a framework for integrating different computational models with simulations of current visual prostheses to produce simulations of prosthetic vision that respect:

- clinical reports about phosphene appearance (i.e., shape = f(stimulus params), persistence/fading)
- eye movements (i.e., gaze-congruent vs. gaze-incongruent viewing)
- practical limitations (i.e., rastering, as not all electrodes can be turned on at the same time)

The package is most powerful in combination with [pulse2percept](https://github.com/pulse2percept/pulse2percept), the lab's Python-based bionic vision simulator.
The ideal workflow is as follows:

![Figure 1](/bionicvisionxr.png)

To simplify the implementation of research tasks in immersive virtual environments, the package relies on [SimpleXR](https://github.com/simpleOmnia/sXR), which allows for simple handling of object interactions, collision detection, keyboard controls, switching between desktop mode and HMD mode, and much more.

If you use this simulator in a publication, please cite:

> J Kasowski & M Beyeler (2022). Immersive Virtual Reality Simulations of Bionic Vision. *AHs '22: Proceedings of the Augmented Humans International Conference 2022*, 82–93. doi:[10.1145/3519391.3522752](https://doi.org/10.1145/3519391.3522752).

## Getting Started

If you are a bionic vision researcher who is interested in using this project but would benefit from support, please contact [email protected].

For the more experienced user:

- The project requires [SteamVR](https://valvesoftware.github.io/steamvr_unity_plugin/articles/Quickstart.html).
- When developing a Unity project, download the contents of this repo into the Assets folder.
- To use the bionic vision simulation, replace the normal camera with the `SPV_prefab.prefab` located in the "BionicVisionVR" folder.
  The settings can be changed in Unity's Inspector under the different scripts attached to the prefab's nested objects.
- The default phosphene model is the axon map model ([Beyeler et al., 2019](https://doi.org/10.1038/s41598-019-45416-4)); a minimal usage sketch is shown after this list.
  The axon map model requires [pulse2percept](https://github.com/pulse2percept/pulse2percept) to be [installed](https://pulse2percept.readthedocs.io/en/stable/install.html) and working from the command line.
- We recommend [SimpleXR](https://github.com/simpleOmnia/sXR) for eye tracking. Once imported into your project, update the `GetGazeScreenPos()` function in `BackendShaderHandler.cs` (uncomment the code).
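
For the axon map model, a minimal pulse2percept session might look like the sketch below. This is illustrative only: the parameter values and the choice of the Argus II implant are examples, not defaults used by this package; see the pulse2percept documentation for the full API.

```python
# Illustrative sketch: build the axon map model and predict a percept for one electrode.
from pulse2percept.implants import ArgusII
from pulse2percept.models import AxonMapModel

model = AxonMapModel(rho=150, axlambda=500)  # spatial decay constants (microns), example values
model.build()

implant = ArgusII(stim={'A1': 20})        # constant-amplitude stimulus on electrode A1 (example)
percept = model.predict_percept(implant)  # returns a Percept; visualize with percept.plot()
```
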
## Troubleshooting

This project should be easy to set up. If it is not, please contact [email protected].

- **Python/pulse2percept errors**:
  It is recommended to install Python and all pip libraries through the command line (Windows) or terminal (Linux/Mac).
  On Windows, typing `python` in the command line will automatically open the Microsoft Store with the option to install Python 3.
  Once Python is installed, use `python -m pip install XXXXXXX` to install all the required libraries.
  Installing this way ensures that pulse2percept can be called from within Unity; a quick sanity check is shown after this list.
- **Import Error**:
  Sometimes Unity fails to import the scripts correctly and will display an error in the console stating "Game Object XXXXX attached script not found".
  Double-clicking the error message will pull the object up in the Inspector and show the name of the missing script.
  Search the folders for the missing scripts; most of the files needed are in "BionicVisionVR -> Backend -> Resources".
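
If pulse2percept errors persist, check that the library is importable from the same Python installation that Unity will call (i.e., the `python` on your PATH). A minimal check:

```python
# Sanity check: run this with the same `python` executable that is on your PATH
import pulse2percept
print(pulse2percept.__version__)
```
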
## How it works

All software was developed using the Unity development platform and consists of a combination of C# code processed by the CPU and fragment/compute shaders processed by the GPU.

The general workflow is as follows (an illustrative sketch of steps 3 and 5 appears after this list):

1. **Image acquisition:** Unity's virtual camera captures a 60-degree field of view at 90 frames per second.
2. **Image processing:** The image is typically downscaled, converted to greyscale, preprocessed, and blurred with a Gaussian kernel.
   Preprocessing includes steps such as depth or edge detection, contrast enhancement, etc.
3. **Electrode activation:** Electrode activation is derived directly from the pixel closest to each electrode's location in the visual field.
   The earlier blurring avoids misrepresenting crisp edges, where moving by one pixel could result in an entirely different activation value.
   Activation values are only collected for electrodes that are currently active.
4. **Spatial effects:** The electrode activation values are used with a psychophysically validated phosphene model to determine the brightness value for each pixel in the current frame.
5. **Temporal effects:** Previous work has demonstrated phosphene fading and persistence across prosthetic technologies.
   Additionally, previous simulations have alluded to the importance of temporal properties in electrode stimulation strategies.
   To simulate these effects, we implemented a charge accumulation and decay model with parameters matching previously reported temporal properties in real devices.
   Information from previous frames is used to adjust the brightness of subsequent frames.
6. **Gaze-contingent rendering:** Gaze contingency (when what you are shown is congruent with your gaze) has been shown to improve performance on various tasks using real devices.
   The package has the option to access an HMD's eye tracker and present the stimulus as either gaze-congruent or gaze-incongruent.
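
To make steps 3 and 5 concrete, the following Python sketch illustrates the per-frame logic. The actual package implements this in C# and compute shaders; the function name, electrode layout, and the `decay`/`gain` values below are made-up placeholders, not the parameters used by BionicVisionXR.

```python
import numpy as np

def frame_update(image, electrode_px, charge, decay=0.9, gain=0.05):
    """One illustrative frame of the pipeline (not the shader implementation).

    image        : 2D greyscale frame, already downscaled and Gaussian-blurred
    electrode_px : (row, col) pixel coordinates of each active electrode
    charge       : accumulated charge per electrode, carried over between frames
    decay, gain  : placeholder temporal parameters (charge retention / fading strength)
    """
    # Step 3: electrode activation = value of the pixel closest to each electrode
    activation = np.array([image[r, c] for r, c in electrode_px])

    # Step 5: charge accumulation and decay -> phosphene persistence and fading
    charge = decay * charge + activation
    brightness = np.clip(activation - gain * charge, 0.0, None)

    # Step 4 (not shown): 'brightness' would be passed to the phosphene model
    # (e.g., the axon map model) to render the percept for this frame.
    return brightness, charge

# Example: a constantly stimulated electrode fades over successive frames
image = np.ones((60, 60))
electrode_px = [(30, 30)]
charge = np.zeros(len(electrode_px))
for frame in range(5):
    brightness, charge = frame_update(image, electrode_px, charge)
    print(frame, float(brightness[0]))
```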

The majority of the simulation is handled by shaders, which are called from the `BackendHandler.cs` file.
This file is attached to the `SPV_Camera`, and the shaders are attached to the script as materials of the `.shader` files.

## Acknowledgments

We thank our research assistants who have contributed to this code base, in chronological order:

* Nathan Wu (helped implement the initial shader logic and provided initial support with real-time computer vision algorithms)
* Ethan Gao (converted the axon map and electrode Gaussian equations to compute shaders)
* Versha Rohatgi (worked on determining the VR FOV and allowing any pulse2percept device to be simulated)
* Rucha Kolhatkar (worked on the initial demo's GUI)
* Robert Gee (created docstrings and continued developing real-time computer vision algorithms)
* Anand Giduthuri (converted 3D models and continued developing real-time computer vision algorithms)
