# A Complete Web-Based Interactive Signal Equalization and Separation System
A signal equalizer is a fundamental tool in music, speech, and biomedical signal processing. In biomedical engineering, equalization assists in hearing-aid tuning, abnormality detection, and audio-based diagnostics.
This project implements a web application that loads an input signal, decomposes it into its frequency components, and lets users manipulate the magnitudes of selected components through several modes. The processed signal is then reconstructed, visualized, and optionally played back as audio.
## Table of Contents

- Features
- System Architecture
- Modes
- Signal Visualization
- Spectrograms
- Audiogram Scale Support
- AI Models
- Contributors
## Features

- Load and process 1-D time-domain signals (WAV, CSV, MAT, etc.).
- Full custom equalization through user-defined frequency windows.
- Real-time Fourier Transform (custom implementation — no external FFT libraries).
- Real-time signal reconstruction using inverse Fourier transform (custom implementation).
- Linked time-domain cine viewers for input & output signals.
- Dual spectrogram visualization (input vs output).
- Audio playback for compatible signals.
- Save/Load equalizer preset settings for all modes.
- Smooth mode switching (dropdown / combobox).
- Automatic generation of sliders and controls when loading a settings file.
- Toggle spectrogram visibility.
- Zoom, pan, speed control, and reset in cine viewers.
- Synchronous time navigation between input and output viewers.
- Linear and Audiogram frequency scale support.
## System Architecture

Signal Loader → Fourier Transform → Equalizer Engine → Inverse Transform → Visualization (Cine Viewers, Spectrograms) → Audio Output (optional) → Settings Manager (Presets)
## Modes

### Custom Mode

A fully customizable mode where the user builds their own equalizer by adding frequency subdivisions manually.
- Add/remove subdivisions dynamically.
- Control each subdivision’s:
- Start frequency
- End frequency
- Scale (0 → mute, 1 → unchanged, 2 → amplify)
- Save created scheme as a preset file (JSON).
- Load presets and regenerate the full UI automatically.
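As a sketch, presets could be stored as plain JSON. The schema below (a band list with `start`, `end`, and `scale` keys) is an assumption for illustration, not the project's actual file format:

```python
import json, os, tempfile

# Hypothetical preset schema -- the real file layout may differ.
preset = {
    "mode": "custom",
    "bands": [
        {"start": 0,    "end": 250,  "scale": 1.0},  # unchanged
        {"start": 250,  "end": 4000, "scale": 2.0},  # amplified
        {"start": 4000, "end": 8000, "scale": 0.0},  # muted
    ],
}

path = os.path.join(tempfile.gettempdir(), "equalizer_preset.json")
with open(path, "w") as f:
    json.dump(preset, f, indent=2)

# On load, the UI would regenerate one slider per band entry.
with open(path) as f:
    loaded = json.load(f)
```

Because the file is ordinary JSON, presets stay externally editable in any text editor.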
A synthetic test signal composed of multiple pure tones is used to verify that frequency manipulation behaves correctly.
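A minimal sketch of that verification, using a naive O(N²) DFT/IDFT pair in line with the project's no-FFT-library constraint (the function names and the 64-sample test setup are illustrative):

```python
import cmath, math

def dft(x):
    # Naive O(N^2) forward DFT -- no external FFT libraries.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    # Inverse DFT; returns the real part of the reconstruction.
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

fs = N = 64                         # 1 s of signal, so bin k sits at k Hz
two_tone = [math.sin(2 * math.pi * 5 * n / fs) + math.sin(2 * math.pi * 20 * n / fs)
            for n in range(N)]

# Mute the 15-25 Hz window; only the 20 Hz tone should disappear.
spectrum = dft(two_tone)
for k in range(N):
    f = min(k, N - k) * fs / N      # bin frequency (mirrored negative bins)
    if 15 <= f <= 25:
        spectrum[k] = 0
filtered = idft(spectrum)

expected = [math.sin(2 * math.pi * 5 * n / fs) for n in range(N)]
```

Zeroing both the positive and mirrored negative bins keeps the reconstruction real-valued, which is why the bin frequency uses `min(k, N - k)`.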
Each of the following modes has a fixed set of sliders; each slider represents one sound source and may map to multiple frequency windows.

### Musical Instruments Mode

Control the magnitude of different instruments in a mixed track:
- Piano
- Violin
- Bass
- Vocals
### Animal Sounds Mode

Control the magnitude of different animal sounds in a mix:
- Dog
- Cat
- Bird
- Horse
### Human Voices Mode

Control the magnitude of different speakers in a multi-speaker mixture. Voices may differ by:
- Gender
- Age
- Clarity
- Timbre
Notes common to these fixed-slider modes:

- Slider-to-frequency mapping is non-contiguous.
- Presets are externally editable.
- UI remains consistent across modes (labels & number of sliders change only).
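One hedged way to represent the non-contiguous slider-to-frequency mapping; the source names and window ranges below are made up for illustration:

```python
# Hypothetical mapping: one slider -> several disjoint frequency windows (Hz).
SLIDER_WINDOWS = {
    "Piano":  [(27, 300), (1000, 4200)],
    "Violin": [(196, 3500), (4000, 8000)],
}

def gain_for(freq, slider_values):
    # Combined scale applied to one frequency bin, given all slider positions.
    # Overlapping sources multiply, so muting either one silences shared bins.
    g = 1.0
    for name, value in slider_values.items():
        if any(lo <= freq < hi for lo, hi in SLIDER_WINDOWS[name]):
            g *= value
    return g
```

Since only the window table changes between modes, the UI can stay identical and just relabel (and recount) the sliders.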
## Signal Visualization

Two synchronized time-domain viewers:
- Input Signal Viewer
- Output Signal Viewer
Both include:
- Play / Pause / Stop
- Playback speed control
- Zoom & pan
- Boundary-aware scrolling
- Perfect synchronization
## Audiogram Scale Support

The frequency axis can switch between:
- Linear frequency scale
- Audiogram scale (hearing-perception based)
Switching scales does not reset any settings.
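Clinical audiograms space octaves evenly rather than raw frequencies, so a log₂ mapping is one plausible way to implement the scale switch. The 125–8000 Hz range matches standard audiometric octaves; the function itself is a sketch, not the project's implementation:

```python
import math

# Standard audiometric octave frequencies, 125 Hz to 8 kHz.
AUDIOGRAM_FREQS = [125, 250, 500, 1000, 2000, 4000, 8000]

def audiogram_position(f):
    # 0..1 axis position with equal spacing per octave,
    # as on a clinical audiogram chart.
    lo = math.log2(AUDIOGRAM_FREQS[0])
    hi = math.log2(AUDIOGRAM_FREQS[-1])
    return (math.log2(f) - lo) / (hi - lo)
```

Because this only remaps axis positions, toggling between linear and audiogram scales leaves all equalizer settings untouched.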
## Spectrograms

Two spectrograms:
- Input spectrogram
- Output spectrogram
- Fully custom implementation (no libraries).
- Real-time update on slider changes.
- Show/hide toggle.
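Each spectrogram column is the magnitude of a windowed short-time transform. A minimal pure-Python sketch, keeping the no-library constraint; the frame/hop sizes and Hann window are illustrative choices:

```python
import cmath, math

def stft_magnitudes(x, frame=32, hop=16):
    # Hann-windowed frames, naive DFT per frame, non-negative bins only.
    cols = []
    for start in range(0, len(x) - frame + 1, hop):
        seg = [x[start + n] * (0.5 - 0.5 * math.cos(2 * math.pi * n / (frame - 1)))
               for n in range(frame)]
        cols.append([abs(sum(seg[n] * cmath.exp(-2j * cmath.pi * k * n / frame)
                             for n in range(frame)))
                     for k in range(frame // 2 + 1)])
    return cols

tone = [math.sin(2 * math.pi * 4 * n / 32) for n in range(128)]
spect = stft_magnitudes(tone)   # every column should peak at bin 4
```

Recomputing only the output spectrogram's columns on slider changes is what keeps the real-time update cheap.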
## AI Models

Two pretrained models are provided for comparison:

- One model is compared against Human Voices Mode.
- One model is compared against Musical Instruments Mode.

The comparison covers:
- Separation accuracy
- Signal quality
- Interference reduction
- Runtime
- Manual vs AI control
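Separation accuracy and signal quality are commonly scored with a signal-to-distortion ratio; the sketch below uses that metric as an assumption, not necessarily what the project's comparison actually uses:

```python
import math

def sdr_db(reference, estimate):
    # Signal-to-distortion ratio in dB: higher means the estimate
    # is closer to the clean reference source.
    signal_power = sum(r * r for r in reference)
    error_power = sum((r - e) ** 2 for r, e in zip(reference, estimate))
    if error_power == 0:
        return float("inf")
    return 10 * math.log10(signal_power / error_power)
```

Running the same metric on a manually equalized output and on a model's output gives a like-for-like number for the "manual vs AI" comparison.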
## Contributors

- Raghad Abdelhameed
- Salma Ali
- Youssef Mohamed Wanis
- Rawan Mohamed