# 🎵 fbcontrol~

Two Max-MSP objects for manipulating acoustic feedback (the Larsen effect).

- [An early example in practice](https://vimeo.com/394801431)

# 🖱️ Use

- **The feedback system should have (minimum specs):**
  1. A freely movable, unidirectional microphone (input signal)
  2. A fixed speaker (output signal)
  3. Some kind of signal attenuation (compression, limiting, etc.) right before the signal output
  4. Enough gain
  5. A signal path similar to the one found in the examples: INPUT -> FFT -> Manipulation object -> Resynthesis -> OUTPUT
- Inside `obj_source` you will find the `C` source code for the Max-MSP objects. You will need the [Max-MSP SDK](https://github.com/Cycling74/max-sdk) if you want to compile from source.
- Inside `examples` you can find two Max patches that showcase both objects (`fbcontrolresist~` and `fbcontrolreact~`).
- These objects depend on [sigmund~](https://github.com/v7b1/sigmund_64bit-version) by Miller Puckette.
  - `sigmund~` only provides the FFT analysis, so any other object will work, as long as its output follows the same message pattern:
    - `[osc, freq, amp, flag]` -> For detailed info on this, see [sigmund~](https://github.com/v7b1/sigmund_64bit-version)
- You can find pre-compiled versions as releases.

Simply add the three objects (`sigmund~`, `fbcontrolreact~`, `fbcontrolresist~`) to the `Packages` folder of Max. It is highly recommended that you run a compressor (and a limiter) at the end of your signal chain. **You are dealing with feedback, which can easily overpower your system and cause real damage to your hearing and equipment.**

These objects are made to work on live audio. You will need at least one input source and one output source in your feedback system.

Detailed info on each component/parameter can be found in the examples.

# ☮️ Keep in mind

- Code comments and some other text are in Portuguese (I didn't know about best practices back then).
- This is my first C project (and probably my first programming project ever). The code is not pretty, but it works.
- Please contact me at alumiamusic@gmail.com if you want to use this, or if you are interested in the idea but cannot understand the code/examples/etc. I am more than happy to help.

# 🕵️ Detailed description

- **This is part of the `experimental_system` repositories present on this GitHub account.**

When generating acoustic feedback in a room, if we move the microphone around, we realize that different regions of the room generate different feedback tones.

We also hear that tones fight for stability, transitioning between one another as we move.

`fbcontrolresist~` and `fbcontrolreact~` manipulate the way tones appear and disappear by dealing with the concept of **resistance**.

## Resistance

Let's suppose that a tone at a given frequency (in a given region of a room) really wants to appear: whatever tone we had before vanishes very rapidly as soon as we move to that region of the room. **What would happen if we imposed some resistance to this change?**

We would actually change the acoustic characteristics of the produced feedback. We would be navigating a completely different room than before, because acoustic feedback is self-inducing, and relies on itself to modulate its own behavior.

- If a tone grows more slowly, it also feeds less energy into the system (in the same time period), changing its own characteristics.

**So, how do we manipulate the acoustic signal?**

## Time domain

We first need to be aware of how each tone is behaving. For this, we need real-time frequency information.

We can achieve that with some kind of time-domain to frequency-domain transformation. It has to be fast enough that we can still work live with the sound.

A constant-Q transform would be ideal, but for simplicity we used an already functioning FFT-based implementation by Miller Puckette ([sigmund~](https://github.com/v7b1/sigmund_64bit-version)).

## Manipulation

We now have a deconstructed signal that, hundreds of times per second, informs us about the current amplitude of each frequency.

If we let some time pass, we get a sense of the rates at which frequencies are changing their amplitudes. We want to manipulate these rates in ways that keep the self-inducing and self-feeding characteristics of the original acoustic feedback.

We don't want to fix the growth/decay amplitude rates to a given value. We simply want a way to impose or remove some resistance to these self-inducing and self-feeding properties.

This is what `fbcontrolresist~` and `fbcontrolreact~` try to achieve.

## Resynthesis

Our objects output the new state of each frequency (sinusoid) at a given interval. If this interval is small enough, we can use this freq/amp information to perform additive synthesis with a degree of fidelity that is more than enough for acoustic feedback.

This is what the final part of the example patches does: a reconstruction of the manipulated audio with additive synthesis.

## Conclusion

We effectively read, deconstructed, manipulated, and reconstructed the signal in close to real time.

The manipulation is based on the acoustic feedback tones' growth/decay rates, which are not forced to any specific value.

We reduce or increase these rates by a ratio proportional to the rate itself, creating a system that is still completely self-inducing and free to behave as it wants (within these new, more resistive or reactive environments).
