
Design Notes

Alex Rosengarten edited this page Feb 19, 2019 · 8 revisions

2019-02-18 Talk about direction of Project

Attendees:

  • Alex, @alxrsgrtn
  • Jason, @jbdoar

General Discussion

Jason:

  • Central part: Take some time series data, e.g. chaotic dynamical systems, noise sources, etc. Then, using functions from numpy and scipy that emulate the modules of an analog synth, the user can generate interesting melodies.
  • Take the cool melodies and harmonies produced and generate them in a live way.
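A minimal sketch of that central idea, using scipy's `solve_ivp` to integrate the Lorenz system and numpy to rescale the x-coordinate onto a pitch range. The function names, note range, and time window here are illustrative assumptions, not part of the library:

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system."""
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def lorenz_melody(n_notes=16, rho=28.0, low=48, high=72):
    """Sample the x-coordinate of a Lorenz trajectory and rescale
    it onto an integer pitch range (MIDI-style note numbers)."""
    sol = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0],
                    args=(10.0, rho, 8.0 / 3.0),
                    t_eval=np.linspace(0, 40, n_notes))
    x = sol.y[0]
    scaled = (x - x.min()) / (x.max() - x.min())  # normalize to [0, 1]
    return (low + scaled * (high - low)).round().astype(int)

notes = lorenz_melody()
print(notes)
```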

Alex:

  • Goal: Get users for the project. Have them appreciate the beauty of experimental computer generated music. Make the hard parts easy for them.

Jason:

  • Wouldn't it be nice if users had easy levers they could pull to mess with the sound? For example, we could orient the project toward an embedded device -- a Raspberry Pi. A potentiometer's voltage change can translate into a parameter change that adjusts the music in real time. Instead of a potentiometer, why couldn't it be keystrokes on a keyboard?
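One way the keystroke idea could look in Python -- each key is bound to a parameter delta, so a keypress plays the role of a knob turn. The key bindings and parameter names here are hypothetical:

```python
# Hypothetical key bindings: each keystroke nudges a synthesis parameter.
params = {"rho": 28.0, "tempo": 120.0}

KEY_ACTIONS = {
    "j": ("rho", -0.5),    # lower the bifurcation parameter
    "k": ("rho", +0.5),    # raise it
    "[": ("tempo", -5.0),
    "]": ("tempo", +5.0),
}

def handle_key(key, params):
    """Apply the parameter delta bound to a keystroke, if any."""
    if key in KEY_ACTIONS:
        name, delta = KEY_ACTIONS[key]
        params[name] += delta
    return params

# Simulate three keypresses: rho nudged up twice, tempo once.
for key in "kk]":
    handle_key(key, params)
print(params)
```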

Alex: I like that goal. It's valuable to use a design pattern such that we can have multiple easy-to-use clients that let people intuit good music.

Jason:

  • The constant in the Lorenz system is a bifurcation parameter -- Rho or r in our existing code. If you adjust that parameter you can get other interesting behavior, like damped oscillations, or periodic solutions. It will do a loop-de-loop around the basins of the attractors.
  • By varying the initial conditions -- but also r -- you can get a lot of expressiveness out of just this one system.
  • Also: there are other parameters the user has to decide on, for artistic reasons, to compose the song. What scale will this be mapped to? Do we quantize (sample-and-hold) twice a second, or could we discretize other ways?
  • We could have a scheme where, when it's in chaos, it jumps from basin to basin in the attractor. What if we trigger the next note when there's a basin switch? Can this be represented in a composition?
  • Defining the voice of the ... each voice is a composition of functions. We leave it up to the user how that composition is set up. The way they can learn to do that -- how it's set up in our particular system -- is by looking at the example notebooks. Even after we've set up a melody or set of melodies generated from some generator (e.g. an ODE system), there are other parameters you might want to control (e.g. the bifurcation parameter). How do we make these parameters controllable in real time?
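The sample-and-hold and scale-mapping questions above could be sketched like this: normalize a continuous signal onto a pitch range, then snap each sample to the nearest pitch in a chosen scale. The scale choice, pitch range, and helper name are assumptions for illustration:

```python
import numpy as np

C_MAJOR = np.array([0, 2, 4, 5, 7, 9, 11])  # pitch classes of C major

def quantize_to_scale(values, scale=C_MAJOR, low=48, high=72):
    """Sample-and-hold: rescale a continuous signal onto [low, high],
    then snap each sample to the nearest pitch belonging to the scale."""
    v = np.asarray(values, dtype=float)
    span = v.max() - v.min()
    norm = (v - v.min()) / (span if span else 1.0)
    midi = low + norm * (high - low)
    # every pitch in range whose pitch class is in the scale
    pitches = np.array([p for p in range(low, high + 1) if p % 12 in scale])
    idx = np.abs(midi[:, None] - pitches[None, :]).argmin(axis=1)
    return pitches[idx]

melody = quantize_to_scale([-7.3, 0.1, 4.2, 11.8])
print(melody)
```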

Alex:

  • To accomplish this level of control in a near-real-time way, the functions determined by the user (their composition) need to act on an immutable state (i.e. the parameters for the functions). At a small interval window (2 ms? X Hz, whatever), inference needs to be performed on the functions given that state.
  • When we turn a knob, it merely modifies that state, giving the user the experience of changing elements of the musical composition when really they're changing parameters.
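A rough sketch of that immutable-state model, using a frozen dataclass so a "knob turn" swaps in a new snapshot rather than mutating the old one, while the per-tick evaluation only reads whatever snapshot is current. All names and the toy pitch function are placeholders:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SynthState:
    """Immutable snapshot of the parameters a voice reads each tick."""
    rho: float = 28.0
    scale_root: int = 60

def turn_knob(state, **changes):
    """A 'knob turn' never mutates in place; it returns a new snapshot."""
    return replace(state, **changes)

def tick(state, x):
    """Called every small interval: evaluate the voice's functions
    against the current snapshot (here, a toy pitch function)."""
    return state.scale_root + int(x * state.rho) % 12

state = SynthState()
state = turn_knob(state, rho=30.0)  # the old snapshot is untouched
result = tick(state, 0.5)
print(result)
```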

Jason: That model sounds a lot like how the Arduino works.

...

Alex: Inspiration for how this library could be structured: https://github.com/kkroening/ffmpeg-python

Alex + Jason: Let's create interfaces or typeclasses or some sort of strict standard to organize our functions in the library.
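One way to state such a standard in Python is with `typing.Protocol`. This is only a sketch of what those interfaces might look like -- the names `Generator`, `Mapper`, and the example implementations are hypothetical:

```python
import math
from typing import Protocol, runtime_checkable

@runtime_checkable
class Generator(Protocol):
    """Anything that turns (time, params) into a raw signal value."""
    def __call__(self, t: float, params: dict) -> float: ...

@runtime_checkable
class Mapper(Protocol):
    """Anything that turns a raw signal value into a note number."""
    def map(self, value: float) -> int: ...

class LinearMapper:
    """Example Mapper: linear rescale of a [0, 1] value onto a pitch range."""
    def __init__(self, low=48, high=72):
        self.low, self.high = low, high

    def map(self, value: float) -> int:
        return round(self.low + value * (self.high - self.low))

def sine_generator(t: float, params: dict) -> float:
    """Example Generator: a sine wave normalized to [0, 1]."""
    return 0.5 * (1 + math.sin(params.get("freq", 1.0) * t))

mapper = LinearMapper()
print(mapper.map(sine_generator(0.0, {})))
```

Structural typing keeps the standard strict without forcing users to inherit from library base classes: any function or object with the right shape plugs in.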
