Evolving Supercollider synths using Large Language Models
-
Since the launch of LLMs, I've been testing them by trying to generate SuperCollider (SC) code. Fast forward... Claude 3.5 (June 2024) was the first model to generate successful code. Time to play!
-
Began to test the depth of the models' understanding when generating SC code
-
Surprisingly successful across different prompt strategies for different outcomes... and sometimes terrible.
-
Synth construction
- Describing a synth design with parameters
- Modelling a hardware synth. Examples: Jupiter8, MoogOne, Buchla, ArpOdyssey
- Descriptive (texture, object, environment). Examples: Metal, WoodBamboo, Melbourne Pedestrian Crossing, Raindrop, Whip Bird, Forest, Cicada, Interactive Wind
-
Music construction examples: Steve Reich, Brian Eno, Stockhausen, Xenakis, Deadmau5
-
-
Synth construction WINS!
-
Is this vibe coding? (Andrej Karpathy). I continue to use this approach in all my synth designs, from building scaffolding code to abstract descriptions.
-
2025: The need to automate: Claude's API. $ign me up!
-
Dependencies: Anthropic (Claude) API / SuperCollider / Python
-
Steps:
- Create a virtual environment
- Install Python requirements
- Set the API key: export ANTHROPIC_API_KEY=
- Set a prototype synth: template_0.sc
- Set the prompt (used for every iteration): prompt.sc
- Run oscCommands.sc in SC
- Run next_sound.py in Python
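The Python side of the steps above can be sketched as a single call to the Anthropic Messages API. This is a minimal sketch, assuming the `anthropic` Python package; the function names (`build_request`, `next_synth`) and the model string are my own illustrative choices, not taken from `next_sound.py`. The file names `template_0.sc` and `prompt.sc` come from the steps.

```python
import os

# Model name is an assumption for illustration, not from the original script
MODEL = "claude-3-5-sonnet-20240620"

def build_request(prompt: str, current_synth: str) -> dict:
    """Assemble the payload sent each iteration: the fixed prompt
    plus the SynthDef code Claude is asked to evolve."""
    return {
        "model": MODEL,
        "max_tokens": 2048,
        "messages": [{
            "role": "user",
            "content": f"{prompt}\n\nCurrent SynthDef:\n{current_synth}",
        }],
    }

def next_synth(prompt: str, current_synth: str) -> str:
    """One evolution step: send the request and return Claude's reply.
    Requires `pip install anthropic` and ANTHROPIC_API_KEY in the env."""
    import anthropic  # imported here so the sketch loads without the package
    client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
    message = client.messages.create(**build_request(prompt, current_synth))
    return message.content[0].text
```

In practice you would read `prompt.sc` and `template_0.sc` from disk and pass their contents to `next_synth`.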
-
How it works:
- Hacked together as a very basic workflow
- Send Claude template_0.sc and prompt.sc
- Claude returns some code, and the script tells SC to load, interpret and play the new SynthDef
- Each iteration is saved and then loaded into SC, so every iteration is archived
-
Examples (video): links to the script running while SuperCollider interprets the code and generates real-time audio.
-
Outcome so far :
- Very successful in creating a family of sounds that evolve over each iteration
- Errors propagate but sometimes disappear after several iterations
- Feels like growing synths
- Have not improved or developed the idea further, as I am too busy 'prompting' new ideas and listening. Is this a good thing?
- RAG (Retrieval-Augmented Generation), MCP (Model Context Protocol) and general tooling.
- Build a more robust 'playground' for testing LLMs, workflows and prompting strategies.
- Archive outputs: code, audio, errors, analysis.
- Automate, automate, automate: batch process all of it.
- Local model version.
- Looking for collaborators: ideas, techniques, code
- Big dream: a local model running on an RPi that generates a continuous soundscape, with prompting (text for now) that guides the composition.