design suggestions for sonification library #2
Replies: 4 comments 3 replies
Couple of thoughts:
@jamesadevine Thank you for the feedback!
Sorry about the poor formatting in the above comment. I will fix that as soon as I can get the screen reader to allow me to select the edit option.
You probably want to use a name other than AudioSource, since it is one of the base classes in Web Audio and it will lead to some confusion. The plan looks like a great start. If you define your JSON spec as TypeScript types, this would provide a very clear structure for the declarative data needed to sonify a data source. It's hard to get these kinds of libraries right on the first shot, so we'll probably have to iterate on the concepts as we try to sonify various data sources. What are the key scenarios you want to be able to sonify?
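To make that suggestion concrete, here is a minimal sketch of the JSON spec expressed as TypeScript types. All names here (SonificationSpec and its fields) are hypothetical placeholders, not part of any existing API:

```typescript
// Hypothetical sketch: the JSON spec defined as TypeScript types.
// String-literal unions keep the types JSON-friendly (they serialize
// as plain strings) while still giving consumers autocomplete and
// compile-time checking.
type SourceType = "TTS" | "AUDIO";
type AnnouncementSeverity = "LOW" | "HIGH";

interface SonificationSpec {
  sourceType: SourceType;
  announcementSeverity?: AnnouncementSeverity;
  text?: string; // format string for spoken values
}

// Plain JSON documents can then be authored and checked against the type:
const spec: SonificationSpec = { sourceType: "AUDIO" };
```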
@make4all/sensor-accessibility I am looking for input on a few design decisions for the library before we start coding.
Use case
First, I would like to reiterate the use case for this library. The primary use case is to be consumed by interfaces that run in a browser, with eventual use cases being sonification inside IDEs (e.g. VSCode) and other prototypes that do not necessarily run in the browser.
API overview
After looking at several examples such as Apple's Audio Graph API (video introduction), their spatial audio API, and the Vega-Lite specification, and reading up on different API designs, I feel the best approach for us would be to expose a set of functions (for consumers that have Web Audio support) and a JSON format (for consumers that do not have a Web Audio context). Consumers that do not have a Web Audio context will need to run Chromium in a separate process to play audio. We will need to investigate whether this can be a headless instance.
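To illustrate the two-surface idea, here is a rough sketch. The sonify() function, the SonificationRequest shape, and the frequency-mapping behavior are all assumptions for illustration, not part of the plan above:

```typescript
// Sketch of the dual API surface: an imperative function for consumers
// with Web Audio support, and a serialized JSON document for the rest.
interface SonificationRequest {
  data: number[];        // the stream of values to sonify
  minFrequency: number;  // Hz mapped to the smallest data value
  maxFrequency: number;  // Hz mapped to the largest data value
}

// Path 1: consumers with a Web Audio context call the library directly.
function sonify(request: SonificationRequest, ctx: AudioContext): void {
  const min = Math.min(...request.data);
  const max = Math.max(...request.data);
  request.data.forEach((value, i) => {
    const osc = ctx.createOscillator();
    // Normalize the value into [0, 1], then map it onto the frequency range.
    const t = max === min ? 0.5 : (value - min) / (max - min);
    osc.frequency.value =
      request.minFrequency + t * (request.maxFrequency - request.minFrequency);
    osc.connect(ctx.destination);
    osc.start(ctx.currentTime + i * 0.25); // one tone per value, 250 ms apart
    osc.stop(ctx.currentTime + (i + 1) * 0.25);
  });
}

// Path 2: consumers without Web Audio serialize the same request to JSON
// and hand it to the separate Chromium process that hosts the audio context.
const request: SonificationRequest = {
  data: [1, 4, 2, 8],
  minFrequency: 220,
  maxFrequency: 880,
};
const wireFormat = JSON.stringify(request);
```

One nice property of this split is that both paths share a single declarative shape, so the headless-Chromium question becomes purely a transport question.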
Data structure
The primary class for sonification is the AudioSource. The AudioSource will contain the following variables (a rough TypeScript sketch follows the list).
- SourceType: Enum {TTS, AUDIO}. TTS is for ARIA live region or screen reader-based alerts, and AUDIO is for sonification.
- AnnouncementSeverity: Enum {LOW, HIGH}. This maps to aria-live="polite" and aria-live="assertive" on the web, and an equivalent speech-canceling mechanism should be developed if screen reader announcements are being made.
- Announcer: Enum {WEB, SR_SPEECH, SELF_VOICE}, if we plan to incorporate screen reader announcements into our sonification library. WEB will need us to inject an aria-live region into the DOM, SR_SPEECH will require messaging to be implemented in a screen reader, and SELF_VOICE support will be implemented through Web Speech.
- text: String. A text (format string) representation of how to speak a particular value in the data stream.
- Feature: [[regionEnterTrigger: Filter, regionEnterCallback: Function], [inRegionTrigger: Filter, inRegionCallback: Function], [endRegionTrigger: Filter, endRegionCallback: Function], [...]]. A set of filter-callback combinations to denote regions of interest.
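Here is a minimal sketch transcribing the field list above into a TypeScript interface. The shapes of Filter and the region callbacks are my assumptions, since the plan does not yet pin them down:

```typescript
// Sketch of AudioSource as a TypeScript interface. The enum members come
// from the field list above; Filter and RegionCallback shapes are assumed.
enum SourceType { TTS, AUDIO }
enum AnnouncementSeverity { LOW, HIGH }
enum Announcer { WEB, SR_SPEECH, SELF_VOICE }

// Assumption: a Filter inspects one value from the data stream and decides
// whether it falls inside the region of interest.
type Filter = (value: number) => boolean;
type RegionCallback = (value: number) => void;

// One feature bundles the enter / in-region / exit trigger-callback pairs.
interface RegionFeature {
  regionEnter: [Filter, RegionCallback];
  inRegion: [Filter, RegionCallback];
  endRegion: [Filter, RegionCallback];
}

interface AudioSource {
  sourceType: SourceType;
  announcementSeverity: AnnouncementSeverity;
  announcer: Announcer;
  text: string;              // format string for speaking a value
  features: RegionFeature[]; // regions of interest in the data stream
}
```

If the class is renamed to avoid the Web Audio clash pointed out in the reply above, only the interface name here would change.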
Request for advice on
- next steps
- design
- build