Clean up legacy 3D sphere voice markers and update documentation
- Remove unused instance buffer fields (positions, colors, scales) from FrameContext
- Remove build_instances_reuse function that was building data for invisible markers
- Remove unused constants: BASE_SCALE, SCALE_PULSE_MULTIPLIER, RING_COUNT, ANALYSER_DOTS_MAX, MUTE_DARKEN, HOVER_BRIGHTEN
- Simplify render function to use hardcoded voice positions instead of dynamic arrays
- Update documentation to reflect current wave-based aesthetic (no visible spheres)
- All tests pass, build succeeds, CI runs successfully
- Maintains full voice interaction through invisible interaction zones
README.md (1 addition, 1 deletion)
@@ -20,7 +20,7 @@
 - Keyboard: A..F (root), 1..7 (mode), R (new sequence), T (random key+mode), Space (pause/resume), ArrowLeft/Right (tempo), ArrowUp/Down (volume), Enter (fullscreen)
 - Starts at a lower default volume; use ArrowUp to raise or ArrowDown to lower
 - Dynamic hint shows current BPM, paused, and muted state
-- Rich visuals: instanced voice markers with emissive pulses, ambient waves background, post bloom/tonemap/vignette; optional analyser-driven spectrum dots
 - Planned microtonality: global detune in cents and additional microtonal scale families (19-TET, 24-TET, 31-TET); keyboard shortcuts for detune and scale selection

## Goals and Use Cases
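The planned global detune maps cents to a frequency ratio of 2^(cents/1200). A minimal sketch of that relationship (a hypothetical helper, not the project's actual code):

```rust
// Sketch of the planned global detune: a detune in cents scales a base
// frequency by 2^(cents / 1200). Hypothetical helper, not the project's code.
fn apply_detune(base_hz: f64, cents: f64) -> f64 {
    base_hz * 2f64.powf(cents / 1200.0)
}

fn main() {
    // +1200 cents is exactly one octave up: 220 Hz -> 440 Hz.
    println!("{:.1}", apply_detune(220.0, 1200.0)); // prints 440.0
}
```

The same formula generalizes to the listed microtonal families; e.g. one 24-TET step is 50 cents.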
@@ -258,7 +258,7 @@ graph TD
 **Scene and Visual Elements:**
 What the user sees:
--**Objects Representing Voices:** Three instanced round markers (circle-masked quads) represent voices. Positions correspond to voice `PannerNode` positions; markers pulse and emit on note events.
+-**Voice Influence on Waves:** Voice positions influence the wave patterns through displacement and proximity effects, creating golden highlights and wave distortions around each voice location.
 -**Ambient Waves Background:** A fullscreen pass (see `waves.wgsl`) renders layered ribbons with pointer-driven swirl displacement, per-voice influence, and click/tap ripple propagation.
 -**Post-processing:** A post stack (see `post.wgsl`) performs bright pass, separable blur, ACES tonemap, vignette, subtle hue warp, and film grain.
 -**Camera:** Fixed view; the `AudioListener` tracks the camera to maintain spatial consistency.
@@ -301,21 +301,21 @@ The UI is minimalist and embedded in the 3D world. The goal is that the user see
 -**Play/Pause:** Space key toggles pause/resume. No in-scene play/pause icon yet.
--**Position Adjustment:** Click+drag a voice object to move it on the horizontal plane; movement is clamped to a radius. Positions update the corresponding `PannerNode` in real time.
+-**Position Adjustment:** Click+drag on a voice's invisible interaction zone to move it on the horizontal plane; movement is clamped to a radius. Positions update the corresponding `PannerNode` in real time.
 -**Tempo:** ArrowRight/ArrowLeft adjust BPM.
 -**Overlay:** Start overlay for audio unlock; `H` toggles visibility. It does not show live BPM/Paused/Muted state.
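The radius clamp described for position adjustment can be sketched as a small pure function on the horizontal x-z plane (illustrative only; names and the radius value are assumptions, not the project's actual code):

```rust
// Sketch: clamp a dragged voice position to a maximum radius on the x-z
// plane before writing it back to the corresponding PannerNode.
fn clamp_to_radius(x: f32, z: f32, max_radius: f32) -> (f32, f32) {
    let dist = (x * x + z * z).sqrt();
    if dist <= max_radius || dist == 0.0 {
        (x, z) // already inside the allowed disc
    } else {
        // Pull the point back onto the circle of radius `max_radius`.
        let s = max_radius / dist;
        (x * s, z * s)
    }
}

fn main() {
    // A drag far outside the radius is projected back onto the boundary.
    let clamped = clamp_to_radius(6.0, 8.0, 5.0);
    println!("{:?}", clamped); // prints (3.0, 4.0)
}
```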
**Possible UI Elements/Controls (future):**
 We identify additional interactions that could be mapped to in-scene controls:
 -**Play/Pause:** If the system allows stopping the music, a control to pause or resume generation. Perhaps the music runs by default and maybe we don’t need an explicit play (it starts immediately), but pause could be useful. Implement as an icon (e.g., a play/pause symbol) floating in a corner of the scene or as part of an object (maybe a central orb that stops/starts everything when clicked).
--**Regenerate (Randomize):** A control to generate a new musical sequence (either for all voices at once, or maybe separate control per voice). For all-at-once, an icon like 🔄 could be placed somewhere in view. For per-voice regeneration, perhaps clicking an individual voice object could trigger it to come up with a new pattern.
+-**Regenerate (Randomize):** A control to generate a new musical sequence (either for all voices at once, or maybe separate control per voice). For all-at-once, an icon like 🔄 could be placed somewhere in view. For per-voice regeneration, perhaps clicking on a voice's invisible interaction zone could trigger it to come up with a new pattern.
 -**Voice Mute/Unmute or Volume:** Perhaps clicking a voice object toggles it on/off (if user wants to focus on certain layers). If no labels, the object’s appearance can indicate mute state (e.g., dim or turn grey when muted). Volume could be controlled by distance: maybe the user drags the object closer or further from camera/listener to effectively change volume (since closer = louder in spatial audio). This would be a very natural metaphor for volume control!
 -**Position Adjustment:** The user can **grab and move a voice’s object** in the 3D space. This changes the spatial position of that sound (panning/volume in headphones). It’s an interactive way for the user to do a sort of “mixing” – e.g., spread sounds out or bring one closer. We’ll implement drag controls:
- On desktop, mouse click+drag on an object could move it. We need to implement a picking mechanism to select objects with the mouse. Possibly ray-cast from camera through cursor to find which object is clicked.
- Simplify movement to perhaps a plane or spherical surface: e.g., restrict dragging to horizontal plane (x-z) so user won’t lose it in depth too much, or allow full 3D if we have a way to move in all axes (maybe using right-click or modifier for up/down).
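Restricting the drag to the horizontal x-z plane amounts to intersecting the picking ray with the plane y = 0. A self-contained sketch under that assumption (hypothetical names, not the project's code):

```rust
// Illustrative sketch: intersect a picking ray with the horizontal plane
// y = 0 to find where a dragged voice should land.
fn ray_hit_ground(origin: [f32; 3], dir: [f32; 3]) -> Option<[f32; 3]> {
    if dir[1].abs() < 1e-6 {
        return None; // ray is (nearly) parallel to the plane
    }
    let t = -origin[1] / dir[1];
    if t < 0.0 {
        return None; // the plane lies behind the ray origin
    }
    Some([origin[0] + t * dir[0], 0.0, origin[2] + t * dir[2]])
}

fn main() {
    // A camera at y = 4 looking straight down hits the plane directly below.
    let hit = ray_hit_ground([1.0, 4.0, 2.0], [0.0, -1.0, 0.0]);
    println!("{:?}", hit); // prints Some([1.0, 0.0, 2.0])
}
```

The resulting hit point would then be fed through the radius clamp before updating the voice.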
-- As the object moves, update the corresponding PannerNode position in real-time so the sound appears from the new direction. This will likely impress the spatial effect on the user.
+- As the voice position moves, update the corresponding PannerNode position in real-time so the sound appears from the new direction. This will likely impress the spatial effect on the user.
-**Change Scale/Key or Mode:** We might include a control for musical scale or mood. Perhaps a small set of preset scales (Major, Minor, Pentatonic, etc.) can be cycled. Without labels, this is tricky – maybe an object that cycles color and each color corresponds to a scale (could be hinted in some text in documentation or a minimal legend). Alternatively, the user might not need to change scale if the generative is fine by itself. This might be an advanced control possibly omitted in first version to keep UI simple.
-**Tempo Control:** If needed, could allow user to speed up or slow down. Perhaps a dial control represented by a ring around some object – the user dragging that ring could adjust tempo. Or simpler, two buttons (faster, slower) as plus/minus icons. But unlabeled plus/minus might be okay if intuitively placed next to a tempo icon (metronome icon?).
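A faster/slower tempo control reduces to stepping and clamping a BPM value and deriving the scheduler's beat interval from it. A minimal sketch (the range and step values here are illustrative assumptions, not the project's actual limits):

```rust
// Sketch: tempo handling as a clamped BPM step plus the derived beat length.
// The 30..240 BPM range is an assumption for illustration.
fn step_bpm(bpm: f32, delta: f32) -> f32 {
    (bpm + delta).clamp(30.0, 240.0)
}

fn beat_seconds(bpm: f32) -> f32 {
    60.0 / bpm // seconds per beat, used to schedule note events
}

fn main() {
    let bpm = step_bpm(238.0, 5.0); // clamped at the upper bound
    println!("{} bpm, {} s/beat", bpm, beat_seconds(120.0));
}
```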
@@ -327,9 +327,9 @@ We identify additional interactions that could be mapped to in-scene controls:
 - In the browser, capture mouse events on the canvas.
 - Perform **ray-sphere** intersection for voice picking. Maintain hover highlight; on click/drag, update engine voice state and audio panner.
-- Once we know which object is selected on click, we handle according to that object’s role (e.g., if it’s a voice sphere: start dragging it; if it’s a regenerate button: trigger regeneration immediately; etc.).
-- On drag: update object position in real-time (for voice objects) and possibly give some visual feedback (like a highlight or trailing indicator).
-- On release: drop the object at new position.
+- Once we know which voice is selected on click, we handle according to that voice's role (e.g., if it's a voice: start dragging it; if it's a regenerate button: trigger regeneration immediately; etc.).
+- On drag: update voice position in real-time and possibly give some visual feedback through wave displacement effects.
+- On release: drop the voice at new position.
- Also handle hover highlighting: as mouse moves, if it hovers an object, maybe slightly scale it up or change color to indicate it’s interactable. This can be done by checking ray intersection each frame with cursor position.
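The ray-sphere test used for picking and per-frame hover checks can be sketched in a few lines of pure Rust (an illustrative standalone version, not the project's actual picking code):

```rust
// Illustrative ray-sphere intersection for voice picking. Returns the
// nearest hit distance along the ray, or None on a miss.
fn ray_sphere(origin: [f32; 3], dir: [f32; 3], center: [f32; 3], radius: f32) -> Option<f32> {
    // Vector from the sphere center to the ray origin.
    let oc = [origin[0] - center[0], origin[1] - center[1], origin[2] - center[2]];
    // Half-b form of the quadratic; assumes `dir` is normalized.
    let b = oc[0] * dir[0] + oc[1] * dir[1] + oc[2] * dir[2];
    let c = oc[0] * oc[0] + oc[1] * oc[1] + oc[2] * oc[2] - radius * radius;
    let disc = b * b - c;
    if disc < 0.0 {
        return None; // ray misses the sphere entirely
    }
    let t = -b - disc.sqrt(); // nearer of the two intersections
    if t >= 0.0 { Some(t) } else { None }
}

fn main() {
    // A ray down +z from the origin hits a unit sphere at z = 5 at t = 4.
    let t = ray_sphere([0.0; 3], [0.0, 0.0, 1.0], [0.0, 0.0, 5.0], 1.0);
    println!("{:?}", t); // prints Some(4.0)
}
```

Running this against each voice's (invisible) interaction-zone sphere every frame gives both hover detection and click picking.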
-**Integrated Look and Feel:**
@@ -356,7 +356,7 @@ We identify additional interactions that could be mapped to in-scene controls:
 To ensure a "fantastic result", the development should proceed in stages, verifying each piece:
 1.**Initial Setup:** Get a basic Rust+WASM project running with WebGPU rendering something simple (like a triangle or cube on screen) and Web Audio playing a test tone. This ensures the environment and build pipeline are correct (WebGPU initialization, etc.). Use this to verify browser compatibility (e.g., test in Chrome Canary or current stable with proper flags if needed).
-2.**Basic 3D Scene (implemented):** The scene is in place with an ambient waves fullscreen pass and three instanced voice markers representing voices. There are no placeholder objects. The camera is fixed (the `AudioListener` tracks it for spatial audio). Interaction testing is via pointer hover/drag and keyboard; orbit/mouselook is not used.
+2.**Basic 3D Scene (implemented):** The scene is in place with an ambient waves fullscreen pass that reacts to voice positions through displacement and proximity effects. There are no placeholder objects. The camera is fixed (the `AudioListener` tracks it for spatial audio). Interaction testing is via pointer hover/drag and keyboard; orbit/mouselook is not used.
 3.**Audio Generation:** Implement the audio engine’s core:
- Pick a scale (e.g., C major pentatonic) and generate a repeating random sequence for one voice. Use an OscillatorNode to play it. Ensure timing is consistent.
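Mapping scale degrees to oscillator frequencies can be sketched as below — equal-temperament conversion from a MIDI note number, with C major pentatonic as the degree table (a hypothetical helper; the project's actual scale handling may differ):

```rust
// Sketch: C major pentatonic degrees as semitone offsets from C, converted
// to Hz via the equal-temperament MIDI formula 440 * 2^((midi - 69) / 12).
const C_MAJOR_PENTATONIC: [i32; 5] = [0, 2, 4, 7, 9];

fn degree_to_hz(degree: usize, octave: i32) -> f64 {
    let midi = 60 + 12 * octave + C_MAJOR_PENTATONIC[degree % 5]; // 60 = middle C
    440.0 * 2f64.powf((midi as f64 - 69.0) / 12.0)
}

fn main() {
    // The 5th degree (index 4) in octave 0 is A4 = MIDI 69 = 440 Hz.
    println!("{:.1} Hz", degree_to_hz(4, 0)); // prints 440.0 Hz
}
```

A generated sequence is then just a list of `(degree, octave, duration)` tuples whose frequencies are fed to the oscillator on schedule.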
@@ -367,7 +367,7 @@ To ensure a "fantastic result", the development should proceed in stages, verify
 4.**Sync Audio-Visual:** Link the events. Have the visual objects respond to the audio – e.g., on each note event, flash or scale the corresponding object. Fine-tune to make it noticeable but not jarring.
 5.**Interactivity:** Add the user interaction one by one:
-- Ray picking and dragging of objects. Ensure that moving a voice object changes its PannerNode coordinates and the visual moves accordingly.
+- Ray picking and dragging of voice positions. Ensure that moving a voice position changes its PannerNode coordinates and the wave displacement effects move accordingly.
 - Add a regenerate button or gesture. Perhaps a key press “R” for now to regenerate all sequences (for easier testing) – later replace with a 3D button.
 - Add a play/pause toggle (again, maybe key press first, then integrate UI object).
 - Test that these interactions can happen while audio is playing without glitching.