Audio Reactive Plugin
Now that we know how to write GLSL shaders in Shadertoy and convert them to FFGL plugins, we can take a look at the next step in creating advanced plugins.
Audio triggers make visuals interactive and dynamic, and on this page we will look at the process of reading and processing audio information from the host application.
As in our last topic, let’s start in our Shadertoy sandbox, create an audio-reactive shader, and then adapt it to FFGL.
Shadertoy provides both raw audio (waveform) and FFT (audio spectrum) data in the form of a GLSL texture. This tutorial demonstrates how to access the audio data in this texture and use it in a shader. The audio input can be a link to a SoundCloud track (under the Misc tab) or a sample song (under the Music tab).
To access audio triggers, we will define three variables to store the levels of the bass, mid and treble frequencies. These are broad frequency ranges, and choosing them is really a matter of personal preference and the type of music being sampled. For simplicity we will use these values:
- Bass: 46 Hz
- Mid: 345 Hz
- Treble: 11500 Hz
Keep in mind that since each texel in the audio texture covers roughly 23 Hz on Shadertoy, we will divide the frequencies above by 23 to find which texel to sample:
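A minimal sketch of this lookup, assuming the audio input is bound to `iChannel0` (the variable names are our own):

```glsl
// Shadertoy's audio texture is 512x2: row 0 holds the FFT spectrum,
// row 1 the raw waveform. With ~23 Hz per texel, our target frequencies
// map to texels 46/23 = 2, 345/23 = 15 and 11500/23 = 500.
float bass   = texelFetch( iChannel0, ivec2(   2, 0 ), 0 ).x;
float mid    = texelFetch( iChannel0, ivec2(  15, 0 ), 0 ).x;
float treble = texelFetch( iChannel0, ivec2( 500, 0 ), 0 ).x;
```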
These audio triggers can now be used to modify the trigonometric function variables of our shader.
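For example, the levels could drive a simple sine pattern. This is only a sketch: `uv` and the three level variables are assumed to be defined earlier in the shader, and the specific mappings are arbitrary.

```glsl
// Hypothetical mappings: bass drives the amplitude, mid shifts the
// frequency and treble tints the colour. Tune to taste.
float wave = bass * sin( uv.x * ( 10.0 + mid * 20.0 ) + iTime );
vec3  col  = vec3( wave, wave * ( 0.5 + treble ), wave * 0.5 );
fragColor  = vec4( col, 1.0 );
```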
There are two ways to ask a host application to deliver audio information to an FFGL plugin:
- Buffer Parameters
- Audio Inputs
The most basic way is to create a Buffer Parameter. This is the standard method of reading audio information and is guaranteed to work on every host application supporting FFGL 2.
This parameter works exactly like every other standard parameter we’ve worked with before, except that instead of a single float value it deals with an array of float values.
You can use it to read audio levels and FFT values from the host application. This works perfectly for a small number of values but becomes challenging for larger ones. For example, if your host provides a 32-band FFT array, this parameter works well, since any modern CPU can deliver 32 values to your plugin very quickly.
But if you’re reading a 1024-band FFT, you will have to upload those values to your shader one at a time, which can slow down rendering.
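On the shader side, a buffer parameter commonly ends up as a uniform float array that the plugin fills each frame (for instance with OpenGL's `glUniform1fv`). A sketch for a 32-band FFT; the uniform name and band indices here are our own assumptions, not part of the FFGL specification:

```glsl
// Hypothetical 32-band FFT received through a buffer parameter and
// uploaded by the plugin as a uniform array.
uniform float fft[ 32 ];

float bass = fft[ 0 ];  // lowest band
float mid  = fft[ 8 ];  // a mid-range band
```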
The alternative to a buffer parameter is to follow the way Shadertoy passes in audio information. In FFGL, this means delivering audio inputs as OpenGL textures, the same way an input frame is passed into the plugin. The advantage of this method is that the host application uploads the audio information to the GPU for you, so your plugin does not need to spend CPU time handling audio data.
This is much faster than using a buffer parameter, but it is not a standard method of audio delivery. Every host application is written differently, and most of them aren’t designed to deliver audio to a plugin this way.
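If the host does support it, the shader side looks just like any other texture input. A sketch, assuming the host binds the FFT data as a second texture; the uniform names are our own, not mandated by FFGL:

```glsl
uniform sampler2D InputTexture; // the incoming video frame
uniform sampler2D AudioTexture; // FFT data uploaded by the host

// Sample the bass bin the same way we sampled Shadertoy's audio texture,
// assuming a 512-texel-wide FFT row.
float bass = texture( AudioTexture, vec2( 2.0 / 512.0, 0.0 ) ).x;
```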