shahbazfareedchishti/FYP

Identification and Reduction of Noise by Mechanical Systems Onboard Ships

Project Overview

This project addresses a critical challenge in maritime engineering: reducing mechanical noise to enhance stealth capabilities, improve sonar clarity, and protect crew health.

Instead of relying on expensive, heavy hardware solutions, this repository implements a software-defined AI system capable of real-time diagnostics and signal purification. The system processes raw audio to detect, classify, and reduce mechanical noise with high precision.

Key Results

  • Noise Reduction: Achieved a 60% reduction in background mechanical noise.
  • Classification Accuracy: 98% accuracy in identifying specific noise sources (UUV, Speedboat, Kaiyuan).
  • Deployment: Fully offline-capable web application for naval vessels.

Technical Architecture: The 3-Stage Pipeline

The core of this project is a custom deep learning pipeline that processes 3-second audio clips through three distinct stages:
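The hand-off between the three stages can be sketched as follows. This is a minimal illustration of the control flow with stand-in callables; the function names and the 16 kHz sample rate are assumptions for the sketch, not the repository's actual code:

```python
import numpy as np

def run_pipeline(clip, detect, classify, denoise, threshold=0.5):
    """Route a 3-second clip through the 3-stage pipeline.

    detect / classify / denoise stand in for the YAMNet, CRNN,
    and TasNet models; only clips that pass detection trigger
    the heavier downstream stages.
    """
    score = detect(clip)                # stage 1: gatekeeper score in [0, 1]
    if score < threshold:
        return {"detected": False}      # skip heavy models, save compute
    label = classify(clip)              # stage 2: Speedboat / UUV / Kaiyuan
    clean = denoise(clip, label)        # stage 3: category-specific denoiser
    return {"detected": True, "label": label, "clean_audio": clean}

# Toy stand-ins to show the control flow only:
clip = np.random.randn(3 * 16000).astype(np.float32)  # 3 s at an assumed 16 kHz
result = run_pipeline(
    clip,
    detect=lambda x: 0.9,
    classify=lambda x: "UUV",
    denoise=lambda x, label: x * 0.5,
)
```

Gating before classification is what keeps the system cheap enough for continuous scanning: the expensive CRNN and TasNet models run only on clips the detector flags.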

1. Detection (The Gatekeeper)

  • Model: YAMNet
  • Function: Acts as a highly efficient filter to continuously scan audio streams.
  • Purpose: Determines if a target mechanical noise exists before triggering heavier downstream models, saving computational power.
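YAMNet emits a per-frame score matrix over the 521 AudioSet classes. A minimal gate over that output might look like the sketch below; the watched class names and indices here are purely illustrative, and the thresholding logic is an assumed design, not the repository's exact code:

```python
import numpy as np

# Illustrative class indices only -- look up real indices in the
# AudioSet class map shipped with YAMNet before using this.
WATCHED = {"Boat, Water vehicle": 288, "Motorboat, speedboat": 289}

def gate(scores: np.ndarray, threshold: float = 0.3) -> dict:
    """Given a YAMNet-style (num_frames, 521) score matrix, return the
    watched classes whose mean score over the clip crosses the threshold.
    An empty dict means: no target noise, skip the heavy models."""
    mean_scores = scores.mean(axis=0)   # average each class over time frames
    return {name: float(mean_scores[i])
            for name, i in WATCHED.items()
            if mean_scores[i] >= threshold}

# Toy example: a clip where only the "speedboat" class scores highly.
scores = np.zeros((6, 521))
scores[:, 289] = 0.8
hits = gate(scores)
```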

2. Identification (The Classifier)

  • Model: CRNN (Convolutional Recurrent Neural Network)
  • Function: Classifies the detected noise into specific categories.
  • Classes: Speedboat, UUV (Unmanned Underwater Vehicle), Kaiyuan.
  • Performance: 98% Accuracy.
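A CRNN of this kind pairs a convolutional front-end (local spectro-temporal patterns in the log-mel input) with a recurrent layer (how those patterns evolve across the 3-second clip). The Keras sketch below shows the general shape under assumed input dimensions (64 mel bands, 94 frames); it is not the repository's exact architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_crnn(n_frames=94, n_mels=64, n_classes=3):
    # Input: log-mel spectrogram of one 3-second clip (dims assumed).
    inp = layers.Input(shape=(n_frames, n_mels, 1))
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D((1, 2))(x)       # pool mel axis, keep time axis
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D((1, 2))(x)
    # Collapse mel/channel dims so each time step is one feature vector.
    x = layers.Reshape((n_frames, -1))(x)
    x = layers.GRU(64)(x)                    # recurrent summary of the clip
    out = layers.Dense(n_classes, activation="softmax")(x)  # 3 noise classes
    return models.Model(inp, out)

model = build_crnn()
```

The softmax head maps each clip to probabilities over the three classes (Speedboat, UUV, Kaiyuan).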

3. Reduction (The Denoiser)

  • Model: TasNet (Time-domain Audio Separation Network)
  • Function: A lightweight Convolutional Encoder-Decoder network designed to "scrub" background interference.
  • Training Details: 3 separate models trained, one per noise category.
  • Training Load: 50 epochs per model, requiring ~14 hours of training time each.
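A time-domain encoder-decoder in the spirit of TasNet can be sketched as below: encode the waveform into a learned basis, estimate a mask that suppresses interference, and decode back to audio. This is a simplified stand-in under an assumed 16 kHz rate, not the repository's exact network; three instances would be trained, one per noise category:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_denoiser(clip_len=48000):  # 3 s at an assumed 16 kHz
    inp = layers.Input(shape=(clip_len, 1))
    # Encoder: learned 1-D basis over short waveform windows.
    enc = layers.Conv1D(128, 16, strides=8, padding="same",
                        activation="relu")(inp)
    # Mask: per-coefficient gain in [0, 1] that "scrubs" interference.
    mask = layers.Conv1D(128, 3, padding="same", activation="sigmoid")(enc)
    masked = layers.Multiply()([enc, mask])
    # Decoder: transposed convolution back to the time domain.
    out = layers.Conv1DTranspose(1, 16, strides=8, padding="same")(masked)
    return models.Model(inp, out)

denoiser = build_denoiser()
```

Training such a model would pair noisy clips with their "Target" (pure signal) counterparts and minimize a waveform reconstruction loss.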

Dataset & Preprocessing

This project utilizes the QiandaoEar22 underwater acoustic dataset.

  • Raw Data: .wav files, each 3 seconds long.
  • Preprocessing Challenges:
      • Normalizing sample rates across the dataset.
      • Splitting data into "Target" (pure signal) vs. "Other" (interference) sets for effective supervised learning.
      • Converting raw audio into Log Mel Spectrograms for feature extraction.
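Sample-rate normalization is the first of these steps: every clip must land on one common rate before spectrogram extraction. A minimal NumPy sketch using linear interpolation is shown below; the 16 kHz target is an assumption, and a production pipeline would typically use a proper resampler (e.g. polyphase filtering) instead:

```python
import numpy as np

TARGET_SR = 16000  # assumed common rate; the project's actual rate may differ

def resample(wave: np.ndarray, sr: int, target_sr: int = TARGET_SR) -> np.ndarray:
    """Resample a mono waveform to target_sr via linear interpolation."""
    if sr == target_sr:
        return wave.astype(np.float32)
    duration = len(wave) / sr
    n_out = int(round(duration * target_sr))
    t_in = np.linspace(0.0, duration, num=len(wave), endpoint=False)
    t_out = np.linspace(0.0, duration, num=n_out, endpoint=False)
    return np.interp(t_out, t_in, wave).astype(np.float32)

# A 3-second clip recorded at 44.1 kHz becomes 48,000 samples at 16 kHz.
clip = resample(np.random.randn(3 * 44100), sr=44100)
```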

Tech Stack

  • Deep Learning: Python, TensorFlow/Keras, PyTorch (YAMNet, CRNN, TasNet)
  • Backend: Flask (Python) - Handles API requests and model inference.
  • Database: SQLite - Used for local, offline-capable logging of detection timestamps and confidence scores.
  • Frontend: HTML/JavaScript - Provides real-time visualization of signal analysis.
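The SQLite logging layer needs nothing beyond the Python standard library, which is what makes fully offline deployment straightforward. A minimal sketch follows; the table and column names are assumptions, not the repository's actual schema:

```python
import sqlite3
from datetime import datetime, timezone

# In-memory DB for illustration; the app would use a file, e.g. detections.db
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE IF NOT EXISTS detections (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    timestamp TEXT NOT NULL,
    label TEXT NOT NULL,
    confidence REAL NOT NULL)""")

def log_detection(label: str, confidence: float) -> None:
    """Record one classified detection with a UTC timestamp."""
    conn.execute(
        "INSERT INTO detections (timestamp, label, confidence) VALUES (?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), label, confidence),
    )
    conn.commit()

log_detection("UUV", 0.98)
rows = conn.execute("SELECT label, confidence FROM detections").fetchall()
print(rows)  # → [('UUV', 0.98)]
```

The Flask backend would call `log_detection` after each inference, and the dashboard would query this table to plot detection history.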

Usage

  1. Clone the repository:
     git clone https://github.com/shahbazfareedchishti/FYP.git
  2. Install dependencies:
     pip install -r requirements.txt
  3. Run the Flask app:
     python app.py
  4. Access the dashboard at http://localhost:5000 to start real-time detection.

About

A software-defined AI system for maritime noise reduction using a 3-stage deep learning pipeline (YAMNet, CRNN, TasNet). Achieved 98% classification accuracy and 60% noise reduction on the QiandaoEar22 dataset. Deployed as an offline-capable web app using Flask and SQLite.
