Modeling an Analog Delay in the Web Audio API Part 2

Joshua Geisler
Feb 9, 2021
The MXR Carbon Copy Analog Delay — a classic circuit design

In part one of this series, we looked at how to create a model of a classic Analog Delay effect in the Web Audio API. This took the form of a Delay class. In the constructor, we created the audio nodes necessary for the effect, set default values, and connected the signal path through them.

In this article, we will put it to use in a fully functional demo with inputs that modify the various effect parameters so we can listen to the results. By the end of this tutorial, you will be familiar with some basic ways of manipulating audio in real time!

The code for this project is open source and available on GitHub. It will be helpful to see everything in context as we go through some code examples.

Everything in our JavaScript file starts with the code from Part 1. We initialize the audio context and create the Delay class and its constructor. To instantiate our effect, we use the new keyword:

const delay = new Delay(context, 0.375);
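If you don't have Part 1 open, here is a condensed sketch of roughly what that class looks like. The node names (this.delay, this.feedback, this.filter, plus input and output gain nodes) match the ones used throughout this article, but the default values and exact wiring shown here are illustrative; see Part 1 for the full version.

// condensed sketch of the Delay class (see Part 1 for the full version)
class Delay {
  constructor(context, delayTime) {
    this.context = context;
    // input and output gain nodes let us treat the effect as a single unit
    this.input = context.createGain();
    this.output = context.createGain();
    this.delay = context.createDelay();
    this.feedback = context.createGain();
    this.filter = context.createBiquadFilter();

    // illustrative defaults
    this.delay.delayTime.value = delayTime;
    this.feedback.gain.value = 0.3;
    this.filter.frequency.value = 1000;

    // dry signal passes straight through
    this.input.connect(this.output);
    // wet signal runs through the delay, with a filtered feedback loop
    this.input.connect(this.delay);
    this.delay.connect(this.filter);
    this.filter.connect(this.feedback);
    this.feedback.connect(this.delay);
    this.delay.connect(this.output);
  }
}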

We will use an HTML <audio> element as our sound source. To connect it to the Web Audio API, we create a MediaElementAudioSourceNode and wire everything up as follows:

// get a reference to the audio element
const audioPlayer = document.getElementById("audio");
// connect to audio context
const sourceNode = context.createMediaElementSource(audioPlayer);
// wire everything together
sourceNode.connect(delay.input);
delay.output.connect(context.destination);

The browser requires a user gesture before it will allow the audio context to start. This is to prevent websites from forcing a bad user experience by auto-playing audio when the page loads. We account for this by listening for the play event and resuming the audio context if it isn't already running.

audioPlayer.addEventListener("play", () => {
  if (context.state !== "running") {
    context.resume();
  }
});

To manipulate the effect in real time, we will add some methods to our class definition. But first, we need to understand how parameters work in the Web Audio API.

For example, our delay node has a parameter called delayTime. Audio parameters have a .value property, as in:

console.log(this.delay.delayTime.value); // logs the current delay time in seconds

We can set value directly as in:

this.delay.delayTime.value = 0.5 // time in seconds

This will instantly change the delayTime. It works, but it's not necessarily the most musical way to do things: an abrupt jump can sound jarring, and at worst it can cause clicks and pops in the sound. The API gives us methods to deal with this. We will make use of linearRampToValueAtTime as below:

this.delay.delayTime.linearRampToValueAtTime(
  0.5,
  this.context.currentTime + 0.01
);

This method will smoothly change the delayTime value to 0.5 seconds, 0.01 seconds from the audio context’s current time at the moment of execution. For our use case, 0.01 seconds is enough to sound natural and responsive without any side effects. Thus our updateDelayTime method looks like this:

updateDelayTime(time) {
  this.delay.delayTime.linearRampToValueAtTime(
    time,
    this.context.currentTime + 0.01
  );
}
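Hooking this method up to a control follows the same listener pattern we will use for every parameter. As a quick preview, assuming the demo's HTML has a range input with an id of "delay-time" (the actual id in the repo may differ):

// hypothetical id; check the repo's markup for the actual one
const delayTimeInput = document.getElementById("delay-time");
delayTimeInput.addEventListener("input", (e) => {
  // range inputs report their value as a string
  delay.updateDelayTime(parseFloat(e.target.value));
});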

In our demo, we control each effect parameter with an HTML range input. This takes the form of:

<label for="feedback">Feedback</label>
<input
  type="range"
  name="feedback"
  id="feedback"
  min="0"
  max="1"
  step="0.01"
  value="0.3"
/>

The key here is to set meaningful min and max values, a step value that allows for the right amount of granular control, and a default value that matches the one set in the constructor (though this can also be done with JavaScript, depending on your use case).

Our updateFeedback method follows the pattern above:

// Delay class method
updateFeedback(level) {
  this.feedback.gain.linearRampToValueAtTime(
    level,
    this.context.currentTime + 0.01
  );
}

To access the value from the HTML input element:

// get reference to HTML element
const feedback = document.getElementById("feedback");
// listen for changes and call method with value
feedback.addEventListener("input", (e) => {
  delay.updateFeedback(e.target.value);
});

Because we chose meaningful min and max values, we can pass the range value directly into our method. It's not always that simple, though. Sometimes we need a bit of math to make the value more useful.

Frequencies are not perceived linearly by the human ear. Every doubling of frequency is heard as a rise in pitch of one octave: 440 Hz (the A above middle C) to 880 Hz is an octave, and so is 880 Hz to 1,760 Hz. Frequency thus increases exponentially even as we perceive a linear rise in pitch. This is why frequency is usually plotted on a logarithmic axis, where gridlines mark steps of 10 Hz between 10 and 100, 100 Hz between 100 and 1,000, and 1,000 Hz between 1,000 and 10,000.

The above paragraph boils down to this: if we were to set the min value of our input to 100 and the max to 10,000, the beginning of the range would have a large effect and the last half or so would do very little. This is not the optimal user experience. Therefore, we set the min to 0 and the max to 1, and use a bit of math to scale the output properly in our class method.

/*
  rangeValue of 0 => Math.pow(10, 2) // 100
  rangeValue of 1 => Math.pow(10, 4) // 10000
  to scale [0,1] to [2,4] the formula is:
  (val * (max - min)) + min
*/
updateFilterFreq(rangeValue) {
  const freq = Math.pow(10, rangeValue * 2 + 2);
  this.filter.frequency.linearRampToValueAtTime(
    freq,
    this.context.currentTime + 0.01
  );
}
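Wiring this up looks just like the feedback control. A sketch, assuming the filter's range input uses min 0 and max 1 and has an id of "filter" (again, the actual id in the repo may differ):

// hypothetical id; the demo's markup may name this control differently
const filterInput = document.getElementById("filter");
filterInput.addEventListener("input", (e) => {
  // e.target.value is in [0, 1]; updateFilterFreq scales it to 100-10000 Hz
  delay.updateFilterFreq(parseFloat(e.target.value));
});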

Everything else in our demo is a variation on these themes. Head on over to the site and play around to get a feel for what everything does. You can find the source code on GitHub.

Did you like this article? Want more content like this? Follow me on Twitter and let me know!
