What Terrifies Mastering Engineers: LUFS Loudness Normalization

LUFS-normalized comparisons are the deep, career-ending fear of today's mastering engineers. After a career of making things “loud,” loudness normalization reveals what they have really done.

We are in an awkward part of history: the 1990s-2010s fad of removing peaks for “loudness,” which arrived with a new technology, the digital peak limiter, has been invalidated by the invention of a yet newer digital technology, LUFS loudness normalization. But the old guard are still in place, conducting business as usual.

For those not familiar: LUFS (Loudness Units relative to Full Scale) is a measurement of average perceived loudness. The algorithm weights the signal by frequency (taking into account that harsh upper frequencies sound louder to us than low ones), accounts for volume swells across the song, plus other variables, and produces a single loudness value, so we can play songs back at the same level to judge whether a production change made things better or worse, or just for general listening. It’s available in just about every audio editing app, including all DAWs, and it’s what streaming companies like Spotify, YouTube, etc. use for playback normalization.
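If you want to see the number for yourself, here is a minimal sketch (my own illustration, not part of any workflow described above) that measures a file’s integrated loudness using the Python packages soundfile and pyloudnorm; the filename is a placeholder.

```python
# Minimal sketch: measure the integrated loudness (LUFS) of an audio file.
# Assumes the third-party packages `soundfile` and `pyloudnorm` are installed;
# "master.wav" is a placeholder filename.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("master.wav")          # float samples + sample rate
meter = pyln.Meter(rate)                    # loudness meter (frequency weighting + gating)
loudness = meter.integrated_loudness(data)  # integrated loudness in LUFS
print(f"Integrated loudness: {loudness:.1f} LUFS")
```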

“Back in Black”: the original vs the remaster, normalized to -14 LUFS equal loudness.

If you make two masters, one at -8 LUFS and one at -14 LUFS, both done well (* see footnote), then normalize both to -14 LUFS so you’re listening at the same average loudness, the one with its peaks removed sounds flat while the other explodes out of the speakers… in nearly every case. It’s not that complicated; it’s basic math: after normalization, the former has peaks weirdly squared off at exactly the same computerized level of -6 dBFS, a full 6 dB quieter than the latter, whose peaks extend naturally, at varying levels, all the way to 0 dBFS (see illustration). The best evidence is how nearly every rock remaster sounds worse when LUFS-normalized, despite advances in technology (e.g. Back in Black, Nevermind, Violator, etc.). “Back in Black,” pretty much universally recognized as a phenomenal recording, is the perfect and exceedingly fair example. How could taking a band and a mix with phenomenal dynamics and clamping their peaks to robotic levels actually make it better? We would expect it to make it worse, since there is zero artistic need for any of that. But it’s not until you actually do the exercise of loudness normalization that this becomes clear as day. What has changed is that normalized playback is now readily available, and automatically in play in many cases. The loudness game has flipped: it’s now the -14 masters which sound louder (and better) than the -8 masters.
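To make that arithmetic concrete, here is a tiny sketch of the gain math with hypothetical numbers (they mirror the example above, not measurements of any specific release):

```python
# Sketch of the normalization math: bring two hypothetical masters to -14 LUFS
# and see where their peaks end up. All values are illustrative.
TARGET_LUFS = -14.0

masters = {
    "loud master (-8 LUFS)":     {"lufs": -8.0,  "peak_dbfs": 0.0},  # heavily limited
    "dynamic master (-14 LUFS)": {"lufs": -14.0, "peak_dbfs": 0.0},  # peaks left intact
}

for name, m in masters.items():
    gain_db = TARGET_LUFS - m["lufs"]      # gain the player applies
    peak_after = m["peak_dbfs"] + gain_db  # where the peaks land after normalization
    print(f"{name}: {gain_db:+.1f} dB gain -> peaks at {peak_after:.1f} dBFS")

# loud master (-8 LUFS):     -6.0 dB gain -> peaks at -6.0 dBFS
# dynamic master (-14 LUFS): +0.0 dB gain -> peaks at 0.0 dBFS
```

The limited master gets turned down by 6 dB, so its squared-off peaks sit at -6 dBFS, while the dynamic master plays back untouched, with peaks reaching 0 dBFS.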

And it’s not even close. That’s why mastering engineers don’t give clients LUFS-normalized comparisons as a standard, responsible industry practice, and why, in my personal experience, 100% of US-based mastering engineers outright refuse to deliver -14 masters. It’s only then, when we have a loudness-matched, non-destroyed comparison, that the sophistry of “well, it still sounds pretty good relative to trash” becomes apparent, and the destruction being done to recorded music becomes evident. That’s why industry guys get nervous and circle the wagons, like on this quite telling thread at Gearspace, anytime the topic of LUFS comes up. There’s always a long list of pseudo-technical reasons, which don’t really hold up, for why we shouldn’t do it (i.e. make the best-sounding recordings, not just less-bad ones given a non-existent and/or self-imposed requirement for degradation). And one other thing is for sure, illustrated by that thread: the story that “we don’t want to make it this loud, the labels are forcing us” is a lie. Mastering engineers are fully pushing this, insisting on it, even when indie clients with no label pressure want them to stop. With the destruction of the music industry around 2000 due to piracy, and labels shutting down, mastering engineers moved into a new market: selling their services to everyday self-producing musicians and hobbyists. So there’s no fantasy evil label; now it’s just them.

To conclude, this era’s extremist normalization of peaks has had many consequences for recorded music:

1) It sounds relatively flat, with muted, pillow-like transients.
2) Removing peaks removes definition, which then leads to over-equalization, such as sucking out the lower midrange and boosting a shrill top end to get definition back.
3) Normalizing peaks by definition removes variation, and variation is music.
4) It decreases artist legitimacy, making things sound more synthetic (i.e. “disposable pop”) with every peak hitting the same computer-processed level.
5) The stereo image shrinks, since varying peaks at the edges contribute to the perception of width.

And on and on the list could go. It’s really been a dark time for recorded music, which has gotten worse sounding despite advances in technology. Now LUFS standards potentially end this era, and there’s nothing lost, no downside. One would think professionals who have devoted their lives to tiny sonic adjustments in order to make recordings their best would be on board with that, but they are not. It feels like we’re stuck in a transitional period, still firmly on the side of the past.

(*) Footnote: regarding my earlier statement “two masters, -8 and -14, both done well,” the thing is these engineers probably can’t do both well anymore. They’ve spent their careers, decades since the ’90s, on techniques and tools for obsessively removing peaks and limiting the dynamic range into a miniaturized version of the music. It’s an art, admittedly. Now they’re dug in, still invested in the old loudness regime that loudness normalization has invalidated, even though anyone who isn’t half-deaf can hear that streaming-optimized levels around -14 sound vastly better. And what’s crazy, for those obsessed with loudness, is that lower-LUFS masters generally sound louder than the “loud” version when normalized. But thinking of -14 as the new 0, instead of 0 as 0, is a completely different mode, which requires different techniques and gear to optimize for… and the old guard just aren’t set up for that.


Examples

For education purposes, here are 30-second clips from some classic rock recordings alongside their remasters. Each full song was normalized to the same loudness, -14 LUFS, and these are clips from within those normalized songs that I think illustrate the point clearly (yes, the selected original clips turned out to be ~0.5 dB louder than the remaster clips, but that’s just how this works). Start with some full songs, normalize them, and check it out for yourself: load the files into your DAW on separate tracks, then set it up so you can undo/redo a mute swap to switch seamlessly between them with one click while the music plays. It’s not hard. Anyone with a decent system and even amateur ears can hear that the “quiet” master is bigger, more spacious, wider, more defined, and just better in nearly every way, despite the old technology back then and the extremely talented people with the best gear now.
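If you would rather prepare the files in code than in a DAW, here is a minimal sketch that loudness-normalizes two songs to -14 LUFS using soundfile and pyloudnorm (the filenames are placeholders, not links to the clips below):

```python
# Sketch: normalize two full songs to the same integrated loudness (-14 LUFS)
# so they can be A/B compared fairly. Filenames are placeholders.
import soundfile as sf
import pyloudnorm as pyln

TARGET_LUFS = -14.0

for path in ("original.wav", "remaster.wav"):
    data, rate = sf.read(path)
    meter = pyln.Meter(rate)
    loudness = meter.integrated_loudness(data)
    # Apply one static gain so the whole file measures TARGET_LUFS.
    normalized = pyln.normalize.loudness(data, loudness, TARGET_LUFS)
    out_path = path.replace(".wav", "_-14LUFS.wav")
    sf.write(out_path, normalized, rate)
    print(f"{path}: {loudness:.1f} LUFS -> wrote {out_path}")
```

Load the two output files onto separate DAW tracks and swap mutes between them while playing.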

Since this is seen over and over again, across decades, the obvious conclusion is a rule (with perhaps an occasional exception): you can’t beat a master by aggressively shaving off its peaks. How could you? Why would that sound better, especially when these mixes already had killer, world-famous dynamics? Yet the industry automatons just keep doing it automatically, whether or not the music’s dynamics need taming…

AC/DC “Back in Black” original vs remaster (clips)

Nirvana “In Bloom” original vs remaster (clips)
