Tuesday, February 06, 2018

Still Mostly Noise - Improvements in my music generation toolset

I continue to try to wrap my head around composing little chunks of interesting sounding music. While I still haven't cracked the puzzle, I've made some important progress.

First, I brushed up on music theory with this TED talk: Transforming Noise Into Music | Jackson Jhin | TEDxUND. It's exactly the topic I'm exploring as I strive to create audibly interesting snippets. If you've ever wondered what makes music, music, you'll enjoy this 10-minute talk.

Next, I refactored my code to allow for standard note generation with ease. So rather than saying:

  new Sound().frequency(261.626);

I can say:

  new Sound().f(Note.C);

Working with standard notes helps increase my odds of avoiding ear splitting frequencies.
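For reference, these standard frequencies come from twelve-tone equal temperament, where each semitone multiplies the frequency by the twelfth root of 2, anchored at A4 = 440 Hz. Here's a sketch of how a Note table could be generated instead of typed out by hand (midiToFrequency and this particular Note object are illustrative, not my actual API):

```javascript
// Twelve-tone equal temperament: each semitone up multiplies the
// frequency by 2^(1/12), anchored at A4 = 440 Hz (MIDI note 69).
function midiToFrequency(midiNote) {
  return 440 * Math.pow(2, (midiNote - 69) / 12);
}

// A small table of named notes, built from their MIDI numbers.
const Note = {
  C: midiToFrequency(60),  // middle C, ~261.626 Hz
  E: midiToFrequency(64),  // ~329.628 Hz
  G: midiToFrequency(67),  // ~391.995 Hz
  A: midiToFrequency(69)   // exactly 440 Hz
};
```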

The biggest improvement to my system came when I did a Google search to explain why I was hearing a clicking noise after most notes. The answer is explained here: Web Audio, the ugly click and the human ear. The clicking comes from halting the oscillator mid sound wave. A much cleaner way to stop an oscillator is to gracefully lower the volume (gain) to 0. In my system, I'm already using a gainNode to control the volume of the note. So rather than calling stop() on the oscillator directly, I tried this:

gainNode.gain.exponentialRampToValueAtTime(0.0001, t + duration + 0.03); // fade to near 0 over 30ms
oscillatorNode.stop(t + duration + 3); // stop well after the fade has finished

At t + duration + 'a fudge factor' I take the volume of the oscillator down to near 0. And the click went away! Whoo! You can read the full details of this here.
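For the curious, the spec defines the exponential ramp between the previous value v0 at time t0 and the target v1 at time t1 as v(t) = v0 * (v1 / v0)^((t - t0) / (t1 - t0)). That ratio is also why I ramp down to 0.0001 rather than 0: an exponential curve can never actually reach 0, and the API rejects a target of 0 outright. Here's the curve written out as a plain function (a sketch for illustration, not part of my toolset):

```javascript
// Value of an exponentialRampToValueAtTime ramp at time t, per the
// Web Audio spec: v(t) = v0 * (v1 / v0) ^ ((t - t0) / (t1 - t0)).
// v0 and v1 must be nonzero and share the same sign, which is why
// the ramp targets 0.0001 instead of 0.
function exponentialRampValue(v0, t0, v1, t1, t) {
  return v0 * Math.pow(v1 / v0, (t - t0) / (t1 - t0));
}
```

At t0 this returns v0, at t1 it returns v1, and halfway through it sits at the geometric mean of the two, so the volume falls off smoothly rather than linearly.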

Doing more research on this, I came across Chris Lowis' Synthesis with the Web Audio API - Envelopes blog post. This article explains that synthesizers frequently use a strategy like the one above to control not just how a note finishes (decays), but also how it starts (the so-called attack). And this makes sense: when you strum a guitar string, it doesn't start off at full volume; it builds up to it, and then fades away. I updated my code to implement this behavior like so:

gainNode.gain.exponentialRampToValueAtTime(gainNode.gain.value, t + Math.min(0.001, duration / 32)); // ramp up: the attack
gainNode.gain.exponentialRampToValueAtTime(0.0001, t + duration + 0.03); // ramp down: the decay
oscillatorNode.stop(t + duration + 3); // stop well after the fade has finished

The result is that I'm ramping both up to, and down from, the max gain value. It's remarkable how large an impact this one change has on my musical snippets. Instead of blasting ear splitting tones, I'm now generating sounds that have an almost organic, mallet-y feel to them. I'm not sure this is ideal, but it's far easier on the ears than a pure tone.
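Putting the two ramps together, the gain over a note's lifetime traces a simple piecewise envelope: an exponential rise from near silence up to the peak (the attack), a hold at the peak, and an exponential fall back to near silence (the decay). Here's that shape as a standalone function, handy for reasoning about the envelope outside the Web Audio scheduler (the names here are mine for illustration, not part of my Sound API):

```javascript
const SILENT = 0.0001; // near-zero floor; exponential ramps can't reach 0

// Gain at time t (seconds from note start) for a note with the given
// peak volume, attack length, sustained duration, and decay length.
// The exponential segments match the curve the Web Audio ramps produce.
function envelopeGain(t, peak, attack, duration, decay) {
  const ramp = (v0, v1, frac) => v0 * Math.pow(v1 / v0, frac);
  if (t <= 0) return SILENT;
  if (t < attack) return ramp(SILENT, peak, t / attack);   // attack: rise
  if (t < duration) return peak;                           // hold at peak
  if (t < duration + decay)                                // decay: fall
    return ramp(peak, SILENT, (t - duration) / decay);
  return SILENT;
}
```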

Currently, I've hard-coded both the ramp-up and ramp-down values. However, I'm thinking this behavior may be so essential that it makes sense to re-work my Sound API to give control over it. In fact, I'm wondering if my whole Sound / Score / Stack model is holding me back, rather than helping me. My current thinking is that my API should expose oscillators directly, rather than try to abstract them away.

Here are some examples of my work, and as always, you can find the source code on GitHub.
