Hoplite is a system for storing large volumes of embeddings from machine perception models. We focus on combining vector search with active learning workflows, also known as agile modeling.
In brief, agile modeling is a process for rapidly developing classifiers using embeddings from a pre-trained 'foundation' model. In our bioacoustics work, we find that classifiers for new signals can often be developed in under an hour.
How does it work?
We first use a bioacoustics model to convert your unlabeled audio data into embeddings - these act as semantic 'fingerprints' of 5-second audio clips. You can then search the embeddings of your data by providing an example of what you're looking for, and give feedback on the results - marking which examples are and are not what you're looking for. From this feedback, we can quickly train a classifier. You can then improve the classifier with active learning: examine the classifier outputs, provide more feedback, and re-train.
A key feature of this workflow is that the embeddings are pre-computed. Embedding may take a while if you have a large amount of data, but the subsequent search and classifier training are very efficient.
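The workflow above can be sketched in plain NumPy. This is purely illustrative: the embedding dimension, the cosine-similarity search, and the hand-rolled logistic-regression classifier are stand-ins for the concepts, not the perch-hoplite API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for pre-computed embeddings: 1000 clips, 128-dim, unit-norm.
# (Hypothetical shapes; real dimensions depend on the embedding model.)
embeddings = rng.normal(size=(1000, 128))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

# Step 1: search. Given a query embedding, rank clips by cosine similarity.
query = embeddings[42] + 0.05 * rng.normal(size=128)  # noisy copy of clip 42
query /= np.linalg.norm(query)
scores = embeddings @ query
top_k = np.argsort(-scores)[:10]  # indices of the 10 most similar clips

# Step 2: feedback. The user marks each retrieved clip as a positive (1.0)
# or negative (0.0) example; here we fake the labels for illustration.
labeled_idx = top_k
labels = (scores[labeled_idx] > 0.5).astype(float)

# Step 3: train a linear classifier on the labeled embeddings with a few
# steps of logistic-regression gradient descent.
w = np.zeros(128)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-embeddings[labeled_idx] @ w))
    w -= 0.5 * embeddings[labeled_idx].T @ (p - labels) / len(labels)

# Step 4: active learning. Score every clip with the classifier; the user
# reviews high-scoring (or uncertain) clips, adds labels, and re-trains.
all_scores = embeddings @ w
```

Because the embeddings are computed once up front, steps 1-4 only touch small matrices and can be repeated interactively.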
To get started, load up the following Colab/Jupyter notebooks:
- `agile/01_embed_audio.ipynb` – Computes embeddings of your audio data.
- `agile/02_agile_modeling.ipynb` – Performs search, classification, and active learning.
This repository consists of four sub-libraries:
- `db` – The core database functionality for storing embeddings and related metadata. The database also handles labels applied to embeddings and vector search, both exact and approximate.
- `agile` – Tooling (and example notebooks) for agile modeling on top of the Hoplite db layer, combining search and active learning approaches. This library includes organizing labeled data and training linear classifiers over embeddings, as well as tooling for embedding large datasets.
- `zoo` – A bioacoustics model zoo. A basic wrapper class is provided, and any model which can transform windows of audio samples into embeddings can then be used in the agile modeling workflow.
- `taxonomy` – A database of taxonomic information, especially for handling conversions between the various bird taxonomies.
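To illustrate the contract the model zoo relies on - anything that maps a window of audio samples to a fixed-size embedding vector - here is a toy example. The class name, attributes, and `embed` method are hypothetical placeholders, not the actual zoo wrapper API.

```python
import numpy as np

class ToyEmbeddingModel:
    """Illustrative model that embeds 5-second windows of 32 kHz audio.

    A real zoo model (e.g. Perch or BirdNET) would run a neural network;
    here we fake an "embedding" from coarse spectral band energies.
    """

    sample_rate = 32_000   # samples per second (assumed for illustration)
    window_size_s = 5.0    # window length in seconds
    embedding_dim = 8      # real models produce much larger embeddings

    def embed(self, audio: np.ndarray) -> np.ndarray:
        # Placeholder embedding: mean magnitude in coarse frequency bands.
        spectrum = np.abs(np.fft.rfft(audio))
        bands = np.array_split(spectrum, self.embedding_dim)
        return np.array([b.mean() for b in bands])

model = ToyEmbeddingModel()
window = np.random.default_rng(1).normal(
    size=int(model.sample_rate * model.window_size_s))
embedding = model.embed(window)  # a fixed-size vector per audio window
```

Any model satisfying this window-in, vector-out shape can slot into the agile modeling workflow.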
Each sub-library has its own documentation.
We recommend using uv or pip for installation. uv is a fast, Rust-based, pip-compatible package installer and resolver.
First, install system dependencies for audio processing:
```shell
sudo apt-get update
sudo apt-get install libsndfile1 ffmpeg
```

If you don't have uv, you can install it via `pipx install uv` or `pip install uv`.
If you are developing locally, clone the repository and install in editable
mode:
```shell
git clone https://github.com/google-research/perch-hoplite.git
cd perch-hoplite
uv pip install -e .
```

You can install the latest stable release from PyPI:

```shell
pip install perch-hoplite
```

Or install the latest version from GitHub:

```shell
pip install git+https://github.com/google-research/perch-hoplite.git
```

After installation, you can run the tests to check that everything is working:
```shell
python -m unittest discover -s perch_hoplite/db/tests -p "*test.py"
python -m unittest discover -s perch_hoplite/taxonomy -p "*test.py"
python -m unittest discover -s perch_hoplite/zoo -p "*test.py"
python -m unittest discover -s perch_hoplite/agile/tests -p "*test.py"
```

TensorFlow is required for agile modeling (classifier training) and for using the Perch or BirdNET models, but is not installed by default. We recommend installing one of the TensorFlow options:
To install with TensorFlow (CPU version):

```shell
pip install 'perch-hoplite[tf]'
```

To install with TensorFlow with CUDA support (for GPU usage):

```shell
pip install 'perch-hoplite[tf-cuda]'
```

The zoo library contains wrappers for various bioacoustic models. Some of these require JAX. To install with JAX dependencies:

```shell
uv pip install -e '.[jax]'
```

or with pip:

```shell
pip install 'perch-hoplite[jax]'
```

If installing with uv in editable mode, you can combine extras: `uv pip install -e '.[tf,jax]'`.
This is not an officially supported Google product. This project is not eligible for the Google Open Source Software Vulnerability Rewards Program.