Audiograph

Audiograph is a small project I built in my free time over a two-week period in April. It’s a music visualizer for the 2016 album TRANS by Pilotpriest, rendered in real time with WebGL and WebAudio.

In this post I’ll explain the process and some discoveries along the way.

Inspiration

After learning about Dolby’s Web Audio challenge at FITC Toronto, I was motivated to build a captivating audio-visual experience with WebGL. With only two weeks left in the challenge, I knew it had to be a small and focused project.

I’ve always been a huge fan of Beeple and his Everydays series. The vivid colours, bloom, film grain, and other lens effects give his work a lot of texture, and I’ve often wondered how to emulate the aesthetic in WebGL.

Select works by Beeple

Implementation

At its core, Audiograph is really just some simple geometry moving toward the camera. As with one of my Codevember demos from last year, I started with Mikko Haapoja’s geo-arc and geo-piecering modules to create dozens of circular shapes turning and moving over time.
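To give a sense of the motion, here is a minimal sketch of the recycling idea in plain JavaScript. The names and distances are hypothetical, not taken from the Audiograph source: each shape advances along Z every frame and wraps back to the far plane once it passes the camera.

```javascript
var FAR_Z = -100;  // where a shape respawns
var CAMERA_Z = 5;  // past this point a shape is recycled

// Advance a shape toward the camera, wrapping to the back of the tunnel.
function stepRing (z, speed, dt) {
  z += speed * dt;
  if (z > CAMERA_Z) z = FAR_Z;
  return z;
}
```

Calling something like this once per frame for every shape produces an endless tunnel of geometry flying past the viewer.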

The geometry is more obvious when viewed from a different angle:

The scene is made up of basic and unlit materials – some using a custom vertex shader with glsl-noise to create a “dancing” motion.

The rich colours of Audiograph were sourced from the top 200 palettes on ColourLovers.com. I’ve since used this API for other experiments, including Generative Art with Node.js and MTCHMV.
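As a rough sketch of how a palette might be picked per track — the hex values below are placeholders, not the actual ColourLovers data that Audiograph fetches:

```javascript
// Placeholder palettes standing in for the ColourLovers top-200 data.
var palettes = [
  ['#69d2e7', '#a7dbd8', '#e0e4cc', '#f38630', '#fa6900'],
  ['#fe4365', '#fc9d9a', '#f9cdad', '#c8c8a9', '#83af9b']
];

// Pick a palette; the random function is injectable for testing.
function randomPalette (random) {
  random = random || Math.random;
  var index = Math.floor(random() * palettes.length);
  return palettes[index];
}
```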

With only a few materials, some basic geometries, and a slew of colours, the project was already starting to take shape.

Post-Processing

I spent a lot of time tweaking the post-processing in Audiograph, trying to make the experience feel more organic and photographic.

Depth Texture

The depth buffer is used in a lot of modern post-processing effects, such as volumetric fog and ambient occlusion. Historically in ThreeJS, you would render your scene with MeshDepthMaterial to a WebGLRenderTarget, and then unpack to a linear depth value when sampling from the depth target. This is fairly expensive and often unnecessary, since many environments support the WEBGL_depth_texture extension.

I had just finished implementing this extension in PR#8577 (now merged into ThreeJS core) and was keen to apply it to a project. The code to set up a depth-enabled render target looks like this:

var target = new THREE.WebGLRenderTarget(width, height);
target.texture.format = THREE.RGBFormat;
target.texture.minFilter = THREE.NearestFilter; // no filtering when sampling
target.texture.magFilter = THREE.NearestFilter;
target.texture.generateMipmaps = false; // mipmaps aren't needed here
target.stencilBuffer = false;
target.depthTexture = new THREE.DepthTexture(); // enables the depth attachment

Now, in your EffectComposer, you can send the depthTexture along to any passes that should read from the depth buffer. The GLSL function to sample and linearize the depth looks a bit like this:

uniform float cameraNear;       // camera.near
uniform float cameraFar;        // camera.far
uniform highp sampler2D tDepth; // target.depthTexture

float readDepth (in vec2 coord) {
  float cameraFarPlusNear = cameraFar + cameraNear;
  float cameraFarMinusNear = cameraFar - cameraNear;
  float cameraCoef = 2.0 * cameraNear;
  return cameraCoef / ( cameraFarPlusNear - texture2D( tDepth, coord ).x * cameraFarMinusNear );
}

Note the highp for the depth sampler, which is necessary on some mobile devices.
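To make the formula concrete, here is the same linearization in plain JavaScript: a raw depth sample of 1.0 (the far plane) maps to 1.0, while samples near 0.0 approach 2·near / (far + near).

```javascript
// Linearize a raw [0, 1] depth sample, mirroring the GLSL above.
function readDepth (sample, cameraNear, cameraFar) {
  var cameraCoef = 2.0 * cameraNear;
  return cameraCoef / (cameraFar + cameraNear - sample * (cameraFar - cameraNear));
}
```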

Visualizing the depth buffer

Ambient Occlusion

Screen Space Ambient Occlusion (SSAO) is used to give the scene a bit more depth and variety. The shader is slightly modified from ThreeJS examples to improve performance and support the new DepthTexture extension. The result shows darker edges where the meshes overlap, almost like drop shadows.

Grain & Bloom

Next, I used a custom bloom effect to add some more texture to the final image. The scene was rendered to a downsampled WebGLRenderTarget, and then blurred with glsl-hash-blur. This leads to a snowy and grainy image, as seen below:
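For reference, the downsampled target might be sized like this; the divisor is illustrative, and the exact value used in Audiograph may differ.

```javascript
// Compute the size of a downsampled bloom buffer from the canvas size.
function bloomSize (width, height, divisor) {
  divisor = divisor || 4;
  return {
    width: Math.max(1, Math.floor(width / divisor)),
    height: Math.max(1, Math.floor(height / divisor))
  };
}
```

Rendering bloom at a fraction of the resolution keeps the blur cheap, and the upsampling softness is hidden by the grain.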

Compositing this with additive blending produces our final output:

This is a fairly fill-rate intensive shader, so for consistent performance across devices I decided to reduce the pixel density of the canvas. This was acceptable for the grainy/textured aesthetic I was aiming for, but it might not be suitable for other projects.
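A sketch of the density cap, assuming a small helper like this (the cap value here is illustrative, not the one Audiograph ships with):

```javascript
// Clamp the device pixel ratio so high-DPI screens don't explode fill rate.
function getPixelRatio (deviceRatio, maxRatio) {
  return Math.min(deviceRatio || 1, maxRatio);
}

// e.g. renderer.setPixelRatio(getPixelRatio(window.devicePixelRatio, 1.5));
```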

Audio & Interactions

Once the visuals were set, the rest of the experience quickly fell into place. Rather than making a generic visualizer, the experience was tailored around a single album by Canadian artist Pilotpriest. The user can cycle through tracks with their keyboard on desktop and touch on mobile. Small modules like web-audio-player and beats were used to glue together the real-time WebAudio interactions.

I used Ableton Live (and a lot of trial and error) to help pick the best frequencies for each track. Ableton’s Spectrum effect provides a good reference point when building a visualizer, showing the frequencies of the track as it plays:
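Once a frequency is picked in Ableton, it has to be mapped to a bin of the WebAudio analyser's FFT data. This helper is a sketch (not from the Audiograph source): each bin spans sampleRate / fftSize Hz.

```javascript
// Map a frequency in Hz to the nearest AnalyserNode FFT bin index.
function frequencyToBin (frequency, sampleRate, fftSize) {
  var binWidth = sampleRate / fftSize; // Hz covered by each bin
  return Math.round(frequency / binWidth);
}
```

The value from `analyser.getByteFrequencyData` at that index can then drive the visuals for the chosen frequency.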

On some desktop browsers, I also added WebAudio effects with soundbank-reverb. If you’re on Chrome, you’ll hear the effect when you hold the Space key; the tail of the reverb should even carry over into the next track.

Release & Reception

Although Audiograph had a small scope and a tight deadline, it became one of my most ambitious personal projects. Since its release, it has won Awwwards Site of the Day, FWA Site of the Day, and 2nd place in Dolby’s 2016 Web Audio Challenge.


You can view the full source for the project here.

 
