You can instantiate wavesurfer with just peaks and duration, without passing any audio URL.
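A minimal configuration sketch of this, assuming wavesurfer.js v7 (where `peaks` and `duration` are options on `create`); the peak values and duration below are placeholders:

```javascript
import WaveSurfer from 'wavesurfer.js'

const wavesurfer = WaveSurfer.create({
  container: '#waveform',
  // Pre-generated peaks, one array per channel, values normalized to [-1, 1]
  peaks: [[0, 0.2, 0.8, 0.4, 0.1]],
  duration: 180, // seconds; needed so the time scale is correct without decoding audio
  // No `url` option: nothing is fetched, the waveform is drawn purely from `peaks`
})
```

Since no URL is given, playback is not available on this instance; it is a render-only waveform.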
Hi, I'm developing a feature for a web-based radio platform that lets users view and interact with audio waveforms in real time, showing what's currently being played. For this, I'm using wavesurfer.js to render multiple waveforms that include regions serving as cue points.
The waveforms are for visual representation only, with peaks already pre-generated, so there's no need to play or even load the audio files; doing so would only waste bandwidth.
An essential part of the user experience is keeping the waveform progress synchronized for all users. This is done over a WebSocket that updates the progress to match the live broadcast. However, this presents a technical hurdle: every WebSocket update triggers a 'seeking' event in wavesurfer.js, and that event does not distinguish a system update from a user's manual seek action.
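The receiving side of such a sync might look like the sketch below. The wire format (a JSON message with `positionSec` and `durationSec` fields) is a hypothetical assumption, not part of any real protocol; `seekTo` does expect a fraction in [0, 1]:

```javascript
// Convert a broadcast message into the 0..1 progress value seekTo expects.
// Field names here are illustrative; clamp to guard against rounding drift.
function parseProgress(data) {
  const { positionSec, durationSec } = JSON.parse(data)
  return Math.min(1, Math.max(0, positionSec / durationSec))
}

// Wiring it up in the browser, `wavesurfer` being an existing instance:
// socket.addEventListener('message', (e) => {
//   wavesurfer.seekTo(parseProgress(e.data))
// })
```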
It would therefore be helpful if the seekTo and setTime methods in wavesurfer.js accepted an additional parameter that suppresses the 'seeking' event when the progress update is system-initiated. This would maintain a clear distinction between programmatic updates and user-driven interactions, ensuring a seamless experience for both the users and the backend managing the live broadcast.
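Until such a parameter exists, one common workaround is to wrap programmatic seeks in a flag the 'seeking' handler can check. The sketch below is self-contained: `FakeWaveSurfer` is a stand-in emitter for a real instance (where the 'seeking' callback receives the current time rather than this mock's raw value), but the flag pattern is the same:

```javascript
// Minimal stand-in: only `seekTo` emitting 'seeking' matters for the pattern.
class FakeWaveSurfer {
  constructor() { this.handlers = {} }
  on(event, fn) { (this.handlers[event] ??= []).push(fn) }
  emit(event, ...args) { (this.handlers[event] ?? []).forEach(fn => fn(...args)) }
  seekTo(progress) { this.emit('seeking', progress) } // a real seekTo also moves the cursor
}

let programmaticSeek = false
const userSeeks = [] // records only genuine user seeks

const ws = new FakeWaveSurfer()
ws.on('seeking', (progress) => {
  if (programmaticSeek) return // system-initiated update: ignore
  userSeeks.push(progress)     // genuine user interaction
})

// WebSocket-driven update: raise the flag around the seek so the
// handler can tell it apart from a user action.
function syncProgress(progress) {
  programmaticSeek = true
  try { ws.seekTo(progress) } finally { programmaticSeek = false }
}

syncProgress(0.5) // ignored by the handler
ws.seekTo(0.25)   // simulated user seek: recorded
```

The `try`/`finally` ensures the flag is reset even if a 'seeking' handler throws, so a single error cannot permanently mute user seeks.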