When Adam Lasus decided to partner up with Joe Rogers and Scott Porter at the new Room 17 in Brooklyn, it was something of a homecoming. Until high rents and new opportunities convinced him and his wife to move to LA in 2006, Lasus had run his Fireproof Recording Studio Ghostbusters-style, out of a converted 19th century firehouse in Red Hook. Between that space and an even earlier studio in Philly, Lasus had worked with a long line of indie rock artists like Helium, Yo La Tengo, Ben Harper, Dawn Landes, Matt Keating, and Clap Your Hands Say Yeah.
Although his own personal studio and Neotek Elan console still live on the West Coast, Lasus seems thrilled to be commuting back east, sometimes staying for a week or more in order to work on projects in this new room. When we met, he was in town to record a new solo album for a songwriter named Aaron Lee Tasjan. “In L.A. there are maybe 20, 30 really awesome indie bands doing great things,” Lasus says. “Here in Brooklyn there are that many on this block.”
Lasus is a youthful-seeming 44. He’s ginger-haired and gregarious, with a charming, almost boyish sense of enthusiasm for both his tools and for the people he records with. One of those people is Joe Rogers, a young label-owner, songwriter, and a former client who now runs day-to-day operations at Room 17 and engineers the bulk of the sessions. Rogers started putting out records over 10 years ago, working out of a makeshift studio in the Bronx, and has recorded with artists like The Shivers and Kelli Scarr.
The two of them sit together for an extended interview in a cavernous yet surprisingly well-controlled mix room, and occasionally finish each other’s thoughts and sentences. They share some central ideas: that trust and camaraderie are the most important aspects of the client/engineer relationship; that digital is fine but tape is more fun; and that smashing mic signals through cheap old transistor stereos is a badass thing to do.
Unable to make this meeting is a third partner, the musician and investor Scott Porter. Like Rogers, he’s a close friend and former client of Lasus’, who has made the transition from performer to producer/engineer in his own right.
Room 17 sits on a revitalizing Bushwick block, part of a once-industrial strip close to the border of East Williamsburg. The studio is located just down the street from local “DIY” venue The House of Yes, and not far from 3rd Ward, Shea Stadium, The Sweatshop, and essentially, the whole burgeoning Bushwick art and music scene.
As I walk toward their building, I pass an old minibus, parked about a dozen yards from their door. It’s spray-painted in technicolor graffiti and stuffed full of the Brooklyn equivalent of hippies (presumably psych-folk fans) brandishing iPhones and acoustic guitars. They’re perhaps indicative of this new Bushwick, although by no means emblematic of it.
As austere and industrial as the area might seem to the outside eye, the three studio partners still had a hell of a time finding a 10-year lease here (perhaps one of the only arrangements that really makes sense for a fairly high-cost, low-profit business like an affordable music studio). New York landlords know the deal: once the artists start moving in, residential rents start going up, and soon after, commercial rents will follow. In real life, just as in the online world, art and culture are perhaps among the biggest drivers of perceived value and economic growth. (If only more artists knew how to capitalize on that.)
The inside of the studio mirrors the area itself. It’s a large warehouse space that blends thrifty professionalism with a sensible minimalist build. Rather than re-imagining the concrete raw space, the studio instead re-purposes it, keeping much of the site’s lofty, wide-open appeal intact.
Each of the rooms is huge, and somewhat spare, with stone floors and a few strategically placed carpets. But they are also unexpectedly well-balanced. There’s barely a parallel wall in the whole place, and the 14-foot-high ceilings are stuffed full of 6-12 inches of insulation, practically eliminating the need for additional trapping. Otherwise all that’s there is cement, glass and drywall, allowing the space to retain some subtle reflections that make the room sound airy and alive.
The main tracking space is enormous on its own, and it connects to two ample iso booths that are larger than some other studios’ live rooms. Even the control room by itself is bigger than many Brooklyn apartments. All these spaces are linked by immense glass doors, and downstairs there’s a makeshift echo chamber that sometimes doubles as an additional live room. Put together, it’s well over 3,000 square feet of recording space.
Gear at Room 17 is as distinctive as the space. The console is a rare Trident – an early 80 series refurbed with a newly upgraded master section. The main recorder is an equally unusual 2” Otari, once property of Manhattan’s legendary Unique Recording Studios, and it comes equipped with both the 24- and 16-track headstacks.
Naturally, there’s also a Pro Tools HD rig, and an island of rack gear is stuffed with some interesting and esoteric pieces from Valley People, Manley, ADR, TapCo, Focusrite, MXR, Allison Research and Symetrix. The mic locker is full of vibey old dynamics and some great-sounding, cost-effective mics from Peluso, Gefell, AKG, Oktava, Michael Joly Engineering and Mojave.
The idea here is to keep things affordable while offering a larger, less intimidating space than bands might otherwise find in a similar price bracket. To Lasus, one of the few challenges is helping the kinds of bands he loves working with understand that they can afford to work with him:
“A lot of bands will see something like Clap Your Hands Say Yeah on my discography and just assume we’re going to be too expensive,” he says when the subject of rates comes up. But what they tend to forget is that when Lasus recorded them, CYHSY were just like so many other Brooklyn bands: unknown and inexperienced weekend warriors, uncertain about just what to expect from some of their first real studio dates.
Lasus recalls giving their drummer Sean Greenhalgh a beer early on in their first session. Greenhalgh had been nervous about playing earlier in the day than usual, and that gesture seemed to set him at ease.
It was a way of communicating something Lasus tries to make clear in every session, one way or the other: Getting great recordings isn’t about judging the artists. It’s about understanding them. It’s about making them feel relaxed and capturing them in their most natural and un-reflexive state.
If there’s some deeper purpose to all Lasus’ high-spirited chatter and convivial energy, it’s probably that.
Justin Colletti is a Brooklyn-based audio engineer, college professor, and journalist. He records and mixes all over NYC, masters at JLM, teaches at CUNY, is a regular contributor to SonicScoop, and edits the music blog Trust Me, I’m A Scientist.
Music is a rare kind of art form that is made entirely out of vibration. It’s at once both ephemeral and yet inherently physical. We will never be able to reach out and grab it in our hands, but it certainly touches us in the most literal sense of that word. If you’re feeling poetic, you might even compare hearing itself to a specialized, hyper-sensitive form of touching; one that works across great distances.
Most of us already have some cursory understanding of how sound works. If you’ve gotten through most of high school, you probably know that sound travels as waves through air, liquids and solids. But it’s rare that we stop and think about exactly what that means, and what it implies. That’s unfortunate, because only by understanding this concept fully can we unlock the knowledge that’s key to clearing up some of the most pervasive questions and misunderstandings around sampling rates, room acoustics, equalizers – even about where music comes from and why it can be so mesmerizing.
If there’s a basic building block of all music and sound, it’s harmonic motion. Acoustician Dr. Dan Russell of Penn State has created dozens of free educational animations that help explain these concepts, and he makes them available to the public on his site. I use them regularly in teaching my college courses on audio. One of my favorites is among his simplest:
I could watch this thing all day.
To the left is a diaphragm – like a speaker, or the soundboard of a guitar.
As it vibrates, it pushes forward, compressing the air molecules. Then, it pulls back, rarefying the air. Repeat this back-and-forth movement enough times in a second and we have a frequency of motion that registers on the ear as sound.
At first glance, your eye may be tempted to follow the movement of the wave itself, from left to right: “Aha!” you might think. “So, these columns of compressed air molecules travel forward, emanating from the speaker, until they arrive at my ear, one after the other!”
But physics is often counter-intuitive, and that’s just not the way it works. The air molecules do not really travel from the speaker to your ear. That would be called “wind.” Instead, each one of those molecules just kind of hangs out around a general home-base and simply moves back and forth, back and forth. Almost like a pendulum.
Go on, scroll back up and take another look. But this time: don’t focus on the wave.
Instead, look at one single particle. Trace its movement with your finger.
Do you see him there, just kind of hanging out? Going back, and forth, back and forth, back and forth, like a little metronome?
That’s harmonic motion.
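Put in code, the idea is simple. Here’s a minimal Python sketch of one molecule in simple harmonic motion; the amplitude and frequency values are just illustrative:

```python
import math

# One air molecule in simple harmonic motion: it never travels toward
# your ear, it just oscillates around its resting point (x = 0).
amplitude = 1.0      # arbitrary displacement units
frequency = 440.0    # cycles per second: the A above middle C

def displacement(t):
    """Position of the molecule at time t, relative to its home base."""
    return amplitude * math.sin(2 * math.pi * frequency * t)

period = 1.0 / frequency

# After one full cycle the molecule is back where it started,
# and at the quarter-cycle mark it sits at its far extreme.
start = displacement(0.0)
one_cycle_later = displacement(period)
far_extreme = displacement(period / 4)
```

Trace it for any time `t` and you get the same picture as the animation: back and forth, back and forth, never any net travel.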
Instead of an original molecule from the left making it all the way to your ear, the force of the initial vibration moves from one molecule to the next, a bit like the executive’s clacking-ball toy, “Newton’s Cradle”.
(Except in the case of air molecules, they’re spread out, and they don’t physically collide. When they get too close, they actually repel each other. But that’s topic enough for a whole ‘nuther article.)
Much like a pendulum, the speed at which each molecule moves back and forth is not constant. When a pendulum, a speaker or a molecule nears the end of its “swing” in one direction, it becomes chock full of potential energy and, eager to fly in the opposite direction, accelerates towards center.
As it passes through the central resting point, it still has plenty of energy left and keeps on moving, gradually slowing down until it reaches the final extreme on the other side. There, with no energy left to keep going forward, and a bunch of new potential energy on board ready to steer it the other way, it begins hurtling back toward center again.
Graph out this kind of gradually changing speed on a piece of paper, and you’ll have a very familiar image. It looks a little something like this:
Ah, the sine wave. This natural speeding up and slowing down of harmonic motion is what gives the wave its undulating shape. If the speed were constant, we could draw it with straight lines, like a triangle. But that’s just not how vibration, and harmonic motion, work.
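You can check this varying speed numerically. The short Python sketch below (using an illustrative 1 Hz frequency) estimates the particle’s velocity from its sine-shaped position: fastest as it whips through center, momentarily zero at the extremes:

```python
import math

f = 1.0  # one cycle per second, for easy numbers

def position(t):
    return math.sin(2 * math.pi * f * t)

def velocity(t, dt=1e-6):
    # Central finite difference: how fast is the particle moving at time t?
    return (position(t + dt) - position(t - dt)) / (2 * dt)

# At the zero crossing the particle moves at its top speed (2*pi here);
# at the extreme of the swing (t = 0.25 s) it momentarily stops.
speed_at_center = velocity(0.0)
speed_at_extreme = velocity(0.25)
```

Straight-line (triangle) motion would give a constant speed everywhere; harmonic motion never does.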
Charts like these are not too tricky to understand. If we track the movement of any individual molecule, the “up” position on this graph would represent the molecule moving as far as it can to the right, and the fully “down” position would indicate movement as far as possible to the left.
From here on out, things get a little “meta”. We can zoom out and look at the wave itself. “Up” represents compressed air, “down” represents rarefied air. Or we can look at the movement of the speaker: “up” on this graph means the speaker pushes out, “down” means the speaker pulls back in. In an analog system, this same image could be used to indicate the fluctuation of voltage in a circuit, the change in magnetism on a piece of tape, or the ins and outs of the groove on a vinyl record.
Digital systems are a little different. To recreate this wave perfectly, all we need to know is where the molecule has been at more than two points in each cycle (that is, we must sample at a rate greater than twice the wave’s frequency). From there, knowing what we do about the laws of harmonic motion, we can extrapolate where that molecule was between each of those points. Despite a popular misconception, this sine wave would not be awkwardly mangled and re-drawn as a triangle or square wave. We know how molecules accelerate and decelerate. This is math that we can do. It is not an unknown.
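For the curious, that reconstruction can even be sketched in a few lines of Python. The example below is a simplified illustration of Whittaker–Shannon (sinc) interpolation over a short, finite record of samples, so the rebuilt value is only approximate near the record’s edges; the sample rate and frequency are arbitrary illustrative numbers:

```python
import math

sample_rate = 8.0   # samples per second
freq = 3.0          # a 3 Hz sine, safely under the Nyquist limit of 4 Hz

# A finite record of samples: far fewer than "one per wiggle of the wave".
n_samples = 64
samples = [math.sin(2 * math.pi * freq * n / sample_rate) for n in range(n_samples)]

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(t):
    """Whittaker-Shannon interpolation: rebuild the wave between the samples."""
    return sum(s * sinc(t * sample_rate - n) for n, s in enumerate(samples))

# Pick an instant halfway between two samples, deep in the record:
t = 31.5 / sample_rate
rebuilt = reconstruct(t)
actual = math.sin(2 * math.pi * freq * t)
# The rebuilt value tracks the true sine closely: no triangles, no squares.
```

Halfway between two samples, the interpolated value lands on the smooth sine curve rather than on the corner of an imagined triangle wave.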
Of course, the animation above is a bit simple. First of all, sound propagates in all three dimensions, not in just one direction as we have here. By its very nature, sound wants to be “omnidirectional.” It is only through concerted design that we can effectively channel it one way or another. The stuff doesn’t simply go left-right, or even center-out. It vibrates every damn way it can.
But even though sound waves are a bit more complex than this in practice, this same fundamental kind of movement is still at play. In the early 19th century, a scientist named Joseph Fourier helped lay the groundwork for understanding that all complex molecular motion is basically built out of layers upon layers of these simple harmonic motions. More complex sounds – from the relatively pure tone of a flute to the overdriven chords of an electric guitar – are made out of what might be described as thousands of discrete sine waves – some of them harmonically related, some not.
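Fourier’s insight is easy to demonstrate in code. This small Python sketch builds the classic textbook example, an approximate square wave, by stacking up odd-numbered sine partials at ever-smaller amplitudes:

```python
import math

def square_approx(t, n_partials):
    """Layer odd harmonics at 1/k amplitude: Fourier's recipe for a square wave."""
    return (4 / math.pi) * sum(
        math.sin(2 * math.pi * (2 * k - 1) * t) / (2 * k - 1)
        for k in range(1, n_partials + 1)
    )

# A true square wave sits at +1 for its whole first half-cycle.
# The more sine waves we stack, the closer the sum creeps to +1:
one_partial = square_approx(0.125, 1)      # a lone sine: about 0.90
many_partials = square_approx(0.125, 200)  # two hundred partials: very nearly 1.0
```

Run it in reverse and you have Fourier analysis: any complex vibration can be decomposed back into its simple sine-wave layers.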
A pure sine wave, where all molecules move back and forth together in perfect sync, is practically impossible to recreate in nature. Instead, what we get is a blend of molecules moving at different rates and at different times. We can see this effect in the motion of a guitar string, which does not vibrate at only one rate – but at several rates at once.
When we pluck the low E string on a bass guitar, we hear not only the “fundamental” pitch of about 80 Hz, but also mathematically-related harmonic overtones at 160 Hz, 240 Hz, 320 Hz, 400 Hz, 480 Hz, 560 Hz, 640 Hz and so on. (It’s actually more like 82.4 Hz and up from there, but I’ll spare you the awkward decimals.) In essence, the only thing that separates this low E from the same E on a piano is the durations and proportions of these additional, harmonically-related vibrations.
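Those multiples are trivial to compute. A quick Python sketch, using the more precise 82.4 Hz fundamental:

```python
fundamental = 82.4  # the low E on a bass guitar, in Hz

# The harmonic series: whole-number multiples of the fundamental frequency.
harmonics = [round(fundamental * n, 1) for n in range(1, 9)]
# -> [82.4, 164.8, 247.2, 329.6, 412.0, 494.4, 576.8, 659.2]
```

Every one of those frequencies sounds at once when the string is plucked; their relative strengths and decay times are what give the instrument its voice.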
What’s especially amazing about the relationship between fundamental pitches and their harmonic overtones is how, through natural law alone, they routinely fall in and out of phase with each other. This natural synchronization of vibration is what makes instruments sound so beautiful on the ear. And when you visualize it, the effect can be just as stunning:
Ok, now this I could really watch all day.
What you’re seeing here is the visual equivalent of a single, very pure note in action.
The longest of these pendulums is like the fundamental pitch, and the following pendulums are scaled proportionately shorter and faster, in essentially the same way that harmonic overtones are. This is music in motion.
On a real-world instrument, we’ll get some unrelated “inharmonic” overtones as well. These “impure” resonances are especially prevalent in instruments like distorted guitars, snare drums and wood blocks. In large doses they will obscure our sense of pitch. But these unrelated overtones are just part of what makes instruments sound so damn interesting.
When we EQ sounds or treat instruments, we’re playing with these overtones – their proportions and their durations. It’s almost like creating alternate “timbres” of the mesmerizing visual pattern above by launching different pendulums at different times or from different heights; by futzing with the mathematical purity of their length; by repressing the movement of some pendulums and not others; by introducing unrelated “inharmonics”; or by making some pendulums more visible than others through changes in lighting.
Our relationship with this natural “harmonic series” is so ingrained that you can leave out the bottom pitch, and our minds will automatically fill it in. This is precisely what happens with smaller pianos, which often have soundboards too small to reproduce the deepest fundamentals.
Our ears might not “hear” the low fundamental in a literal sense, but our brains sure do. You can even try it yourself with a sine wave generator: Play your brain 110 Hz, 165 Hz, 220 Hz, 275 Hz, 330 Hz, 385 Hz, 440 Hz all at once and it will instantly go “Oh! I get it. 55 Hz. Low A,” and will fill it in without you doing a thing about it. You can’t help but hear the phantom fundamental.
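That “filling in” has a tidy mathematical counterpart: the implied fundamental is simply the greatest common divisor of the overtone frequencies. A small Python sketch:

```python
from functools import reduce
from math import gcd

# The overtone frequencies of a low A, with the 55 Hz fundamental left out:
overtones = [110, 165, 220, 275, 330, 385, 440]

# The brain infers the missing pitch from the common spacing of the partials,
# which is just their greatest common divisor:
implied_fundamental = reduce(gcd, overtones)
# -> 55, the "phantom" low A
```

No 55 Hz energy is present in the signal, yet 55 Hz is the only fundamental that fits all seven partials at once.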
This harmonic series isn’t just where tone comes from. In it lies the very foundations of all music. If you were to play these overtones together at equal intensity, you would basically get a chord like this one:
(On piano, it would sound something like this.)
These first harmonics give birth to the western 8-tone scale. And if you zoom in on only the first, most prominent handful of these overtones, you have the raw ingredients to re-create the near-universal 5-note pentatonic scale. The truth is that the full vocabulary of music comes baked right into every note. It’s almost fractal in a way.
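One illustrative way to see this “baked in” vocabulary, sketched in Python: fold the first eight harmonics down into a single octave and you’re left with the root, major third, perfect fifth and flat seventh, the raw material of the chord described above.

```python
from fractions import Fraction

# Fold each of the first 8 harmonics down into one octave by halving
# its frequency ratio until it lands between 1/1 and 2/1.
def fold_into_octave(n):
    ratio = Fraction(n, 1)
    while ratio >= 2:
        ratio /= 2
    return ratio

ratios = sorted({fold_into_octave(n) for n in range(1, 9)})
names = [str(r) for r in ratios]
# -> ['1', '5/4', '3/2', '7/4']: root, major third, perfect fifth, flat seventh
```

Carry the folding further up the series and more and more familiar scale degrees fall out of the same single note.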
The appeal of these natural relationships – which stem from simple molecules vibrating in and out of phase – is inescapable for us. Bobby McFerrin demonstrates just how ingrained this series is, by using it to hack your brain:
It’s easy for us to lament the kids these days with their digital downloads and their streaming. Back in the days when tape and vinyl ruled, the stuff had substance, it had weight, it had tangibility. But to feel this way is to mistake the medium for the message.
There may always be a place for manufactured accessories to music, but for most of human history, we have experienced music not as a material product to be loaded on to tractor-trailers, but as pure vibration in the air. However we consume it, music will always be as physical as it is fleeting. It is as real as the matter around us, and as impossible to bottle as a stroke of lightning. The best we can do is to create devices that will measure it and approximate it in another form, whether that be the etchings on a wax cylinder or the ones and zeros of a solid-state hard drive.
Until we can tap audio straight into our neurons, we will always need to make molecules move in order to hear the stuff. In this way, music is, and likely always will be, an ever-evolving game that we play with the physical, natural world.
A couple of weeks ago, a good friend watched her laptop’s screen in horror, as a complete stranger began uploading her entire concert from the night before onto YouTube.
She hadn’t seen this unknown cameraman, filming from the middle of the audience with a shaky, low-res cell-phone camera and capturing every moment: The mistakes, the tuning breaks between songs, the fleeting moments of awkward banter, and even a new unreleased song that the band was workshopping for the first time in front of an intimate audience.
It’s easy for many people to understand how being broadcast and exposed to the entire world against your will could make you feel violated and helpless. Our ability to share and broadcast music cheaply and easily may be among the great advances of the 21st century, but without consent, sharing just doesn’t feel right. This goes double when it’s on a huge commercial website, monetized without your permission and available for the entire world to see. There are laws against this kind of thing for a reason.
Some of us are more comfortable than others with the idea of our music being shared freely and indiscriminately – the good and the bad shows alike. Fortunately, it’s our right to have our own creations shared indiscriminately should we choose that path. But it’s also our right to maintain some control over what people can, and more importantly, can’t do with our work.
Even the Grateful Dead, who have always encouraged fans to record and share their performances, draw the line somewhere. It might sound surprising at first, but to them, this new model of sharing, whether on YouTube or on a pirate website, is antithetical to everything they stand for.
Their official policy is generous and free-spirited, but also clear-cut: “No commercial gain may be sought by websites offering digital files of our music, whether through advertising, exploiting databases compiled from their traffic, or any other means.”
That would clearly preclude YouTube, as well as any pirate website that sells advertising (and most do these days). Technology may change, but ethics don’t: “The Grateful Dead and our managing organizations have long encouraged the purely non-commercial exchange of music taped at our concerts and those of our individual members. That a new medium of distribution has arisen – digital audio files being traded over the Internet – does not change our policy in this regard.”
For almost a decade, musicians and fans alike have looked overwhelmingly to the positive side of a “free” and open musical culture. But if anyone and any company can use our music however they choose, then what rights do we lose? Do we lose the right to choose whether our music can be used in TV commercials, movie soundtracks or political campaigns? Do we lose the right to choose when and whether or not we will work for free? Do we lose the right, like the Grateful Dead, to demand that our performances never be monetized, whether directly or indirectly, through the sale of ads?
These concerns are not theoretical ones: David Byrne made headlines not long ago when he successfully sued former senatorial candidate Charlie Crist for using his hit song “Road To Nowhere” in a political ad without his consent. Tom Waits likewise successfully sued both Frito-Lay and Audi for using a Tom Waits imitator after he had refused to license his music in their commercials – at any price.
Those are just two stories of artists taking control from among countless thousands of examples. And you too can start taking control of how your work is shared and monetized, even online. It doesn’t even require the hassle and grand gestures of a lawsuit. You can do it from right in your bathrobe:
A piece of weak-tea legislation called the Digital Millennium Copyright Act (or “DMCA”) is what allows sites like Google and YouTube to get away with their “Share First, Ask Questions Later” policy. But that same bill also allows musicians and other content creators to have their work removed from these websites when it is posted without their consent.
In fairness to Google, they’ve been very good about increasing the effectiveness of the tools that allow artists to flag, control, monetize or even remove unapproved content from both YouTube and Google Search. What they’ve been less good about is spreading the word. I’m amazed how many regularly exploited artists are unaware that they actually have the power to do something about it.
In the case of YouTube, you can lay claim to any videos or tracks that belong to you right now, without getting up from your chair. It only takes a few minutes.
And, if you’d like to maintain a fan-powered presence on YouTube, you don’t have to have your music taken down entirely. Using the available tools you can even decide to leave your tracks up and instead have YouTube give you analytic data about your viewers, give viewers links to places where your music can be purchased, or even monetize your tunes directly, via advertising.
Of course, if you’d like to limit the amount of your material that appears on YouTube so that you can give your fans a real incentive to buy your music if they like what they hear, then you also have the option of removing the offending tracks or videos altogether.
Not long ago, this process was a real drag. As soon as you took down an unauthorized video in one place, it would just crop up again later in another. But more recently, YouTube launched a tool called Content ID, which allows you to identify your music just once, and have it be recognized in perpetuity. From there on out, you can have YouTube automatically block, track or monetize that music, no matter who uploads it and when.
This service is not restricted to the big labels. If you’ve had issues with your music uploaded against your will in the past, you are eligible to sign up for free. And not to be outdone, Soundcloud has also launched a content identification system of its own.
A Note About Fair Use:
Some might be concerned that these tools could be abused to block fair use. But in my personal experience, I have found this not to be the case. When we posted our recent “Studio as an Instrument” panel to YouTube, several American major labels started selling ads on our video, which included brief song snippets. This was done automatically by the Content ID system.
Obviously, I’m all for labels and artists getting their share, but fair use is fair use, so on principle, I contested the claims. Within a day, all the American labels had retracted their claims, basically saying: “Yeah, that’s obviously fair use.” (The only one that didn’t pull their advertising and claim was a label from Germany, where the concept of fair use does not exist.)
I had a similar experience with my own online reel as well, when SoundCloud automatically removed one song for which I had obtained the artist’s permission to include. I replied to the claim using their online dispute center, and within a day, the label had approved the use and the song was quickly restored.
This kind of protection is not limited to YouTube videos. The DMCA also allows you to have nefarious results removed from Google Search completely. Bear in mind that this won’t shut down the website in question (so direct links will still work), but it at least ensures that users won’t be able to find the stolen work through search engines.
This is great for cutting off websites that sell ads on your music without your consent, or give away torrents of your entire discography. I’ve used it successfully to take down links to unrepentant plagiarists and unauthorized monetizers of my articles as well as my music productions. (There were far more of each than I had expected.)
Today, I’m amazed at how many complete discographies and full albums I’m still able to find on sites like Google Search and YouTube, especially when blocking that kind of behavior has now become so easy. The tools are there, but the word has just not gotten out.
Even if you don’t care about your sales and want your own music to be shared as widely and completely as possible, using these tools can still allow you to learn about and engage with your fans, or to stop unscrupulous companies from monetizing your work without your consent.
Remember that whenever unauthorized websites sell ads on the traffic generated by an artist’s entire discography, whether directly or indirectly, it adds to the bloated bottom line of technology companies while keeping artists, producers and engineers eating table scraps.
Free music can be great. I listen to plenty of it, as ethically as I can. I’ll also be the first to tell you that it’s a good idea for almost any artist to make some of their music available publicly and without charge. But “free” is, and should be, a choice. When that choice is taken away, it becomes a meaningless gesture.
Both the rights and the earning potential of so many artists have been sacrificed in the past ten years as extremely profitable technology companies have lobbied hard to turn ‘Copyright’ into a dirty word. But the truth is that Copyright is your creative bill of rights. It ensures that:
1) No one has the right to take your work and use it for his or her own financial gain without your say.
2) No one has the right to pressure you into working for free if you do not want to.
3) And no one has the right to take your art and use it to support his or her own political agenda without your agreement.
Stand up and respect yourself. If you haven’t set aside a moment to gain some basic semblance of control over your own music online, now is the time.
Bustling cities like New York, L.A., San Francisco and Nashville may boast more recording studios per square foot than just about anywhere else on earth. With such high concentrations of talented professionals, it’s not surprising that so many commercial records are made, at least in part, inside of one of these major markets.
Then there’s a second tier of studio towns – places like Chicago, Miami, Seattle, DC, Atlanta, Philadelphia, Portland, Boston and Austin – where recording culture is alive and kicking, although perhaps not quite as densely packed or competitive as it is in the big four.
But major cities aren’t the only place to make records. Artists from Led Zeppelin and U2 to Bon Iver and Beach House have long escaped into the countryside to complete their crowning works. With that in mind, this week we’ll look at three “recording retreats” – studios with onsite living accommodations that bring the luxuries of a metropolitan tracking room into quieter, more affordable, more scenic locales.
Echo Mountain Recording
Asheville, North Carolina
There’s a recording studio in the small city of Asheville, North Carolina, that sits nestled into the Blue Ridge Mountains, not far from where the French Broad River meets the Swannanoa.
This region, nicknamed “Land of the Sky,” has developed something of a hybrid culture over the years. The population is fairly small and spread-out, but the area is home to a startling number of transplants from the coastal cities.
These ex-pats from New York, California, Washington, and bustling cities all around the U.S. are often credited with giving the place its dynamism. But by and large, they come to the town to adapt, not to overturn, and so it’s become one of those rare places where cosmopolitan tastes meet homespun values. The whole city sits at the center of a culture that revolves in part around craftsmanship and art.
“I think the city of Asheville itself is such a big part of why people come here to record,” says Echo Mountain‘s chief engineer Julian Dreyer. “People will be playing a show in town, and come for a studio tour and they’ll say ‘Wow, there’s this great studio here, and the town is incredible. I just want to spend two weeks here and make a record.’”
“There’s a huge appreciation here for arts and crafts, and all these little communities of artists and musicians,” Dreyer says. “That attitude leads to stuff like great food and restaurants. The bar for that is set so high now that unless you’re on top of your game you just won’t survive. So it’s a city of not even 100,000 people, but we’ve got food here that would rival your most ‘hipster’ parts of Brooklyn.”
“And there’s probably more little breweries here per capita than almost anywhere,” he says. They’ve even been voted “Beer City USA” three years in a row, just narrowly beating out Portland, Oregon. “People here are so proud of Asheville that they get so pumped up to vote in that kind of thing.” And it shows: Dreyer has a slow-spoken manner and just the shadow of a Carolina drawl, but he livens up when he talks about Asheville even more than when he talks about microphones.
Of course there’s more to Echo Mountain Recording than just the town. It’s more than just a studio – almost a little musician’s complex in its own right, sporting four full-fledged production rooms, the largest of which – built into a deconsecrated old church – houses a drool-inducing Neve 8068 console, a Studer A800 reel-to-reel and a full-blown Pro Tools HD3 system.
This main space, as well as Echo Mountain’s newer API-based studio in the adjoining building, was designed by the legendary George Augspurger. Two smaller studios round out the space, offering even more affordable rooms for overdubs and the like. None of them are hurting for instruments or mics either, and vintage Telefunkens, AKGs and Neumanns float from room to room.
Records made at Echo Mountain earned three GRAMMY nominations and two wins this year, but don’t let names like Smashing Pumpkins, Steve Martin, T. Bone Burnett, War on Drugs, G Love, VHS or Beta, The Avett Brothers, Zac Brown or Band of Horses scare you away. The studio also spends a fair chunk of time recording new acts from out of town, as well as local and regional artists.
Saint Claire Recording Company
“Our motto for the longest time has been ‘Relax, Record,’” says John Parks, co-owner of Saint Claire. “We want to get you out of the city and – hopefully – to turn off your cellphone and close your laptop.”
“Often, relaxation is the last thing that people think about when they’re recording,” Parks says, “but it’s actually a pretty important thing, I think. Is that 15th hour as productive as that 3rd hour in the middle of the day?”
This concept factored into most of the decisions that the Parks made when they built Saint Claire Recording Company, a 7,800-square-foot facility just five minutes outside of downtown Lexington, Kentucky.
For anyone who hasn’t been, these parts of Kentucky can be astonishingly picturesque, especially around sunset, as dusk gathers around the rolling hills. It’s long been the style in Kentucky to cut back the trees and nurture the local bluegrass for grazing, so that when you do catch a large black oak standing on the horizon, it’s silhouetted against the sky like an old watchman looking over the homestead.
“We thought that instead of building just another studio in Nashville, we could try and tap into that slower pace of life, and maybe help put Kentucky, and Lexington in particular, on the studio map.”
If the Parks’ goal was to take the accoutrements of a world-class SSL 9000J studio and put them into the context of small-town living, they have succeeded. But as quaint as Lexington might seem to a New Yorker, it’s certainly not the boondocks. It may only be the 62nd largest city in the U.S., but it’s the 10th most educated, with nearly 40% of residents in the city proper having earned college degrees.
It has its own attractions too: the bourbon trail, historic museums, and horse racing – particularly the Kentucky Derby – which takes place not far away in Louisville, KY, a place Parks describes as “like a metropolis” compared to the small-but-growing city of Lexington.
Saint Claire has become something of a destination for some busy coastal engineers including the legendary Tony Visconti, Neil Dorfsman, and our own Zach McNees. The clients they bring with them come from places as far-flung as Japan, Spain, Ireland and Canada, and to accommodate them all, Saint Claire has five bedrooms right on premises.
“When the client is here, we want them to treat it like it’s their house,” Parks says, “and when you shut the door behind you at the end of the day, you wouldn’t even know there’s a studio footsteps away.”
Since it attracts so many traveling producer/engineers, the studio’s house engineer, Tim Price, often finds himself putting on his assistant hat. It’s a role he’s equally comfortable with, having risen up from the ranks of intern at Saint Claire.
And although the recording space is well-separated from the living quarters, the studio itself was designed with special attention placed on sight-lines:
“When we were designing it I wanted to squeeze in as many separate isolation booths as we could,” Parks says. “We ended up with four. And with the way the windows are placed it’s the closest you ever might come to the feeling of playing live in one room, while still being able to turn up the amps nice and loud.”
But as much as it’s equipped for a full-on rock session, Parks says they attract more singer/songwriters. They’re often the ones, he says, that best understand the value of getting unplugged and closing the door.
Black Dog Recording Studio
Stillwater, New York
Luckily, New Yorkers don’t have to go far to get away from it all. Black Dog Recording Studio sits just outside of Albany, tucked into the foothills of the Adirondack Mountains, in the small town of Stillwater, about three hours’ drive up the Hudson River from Manhattan.
Black Dog sports a 600-square-foot live room, a 400-square-foot control room, and three isolation booths. There’s a three-bedroom, two-bathroom house available on the property, and the gear list offers a tempting mélange of top-shelf condenser and ribbon microphones, a unique mid-70s Sphere console, and some early American tube preamps from Collins, Gates and RCA, in addition to the more standard fare from API and Quad Eight.
Black Dog may be the youngest studio on this list, and its amenities keep growing. This spring, studio manager Seamus McNulty says, they plan to add a 2” Studer machine and some rustic cabins for extra lodging.
McNulty describes the Rod Gervais-designed live room at Black Dog as “bright and tight” – and ideal for recording a whole band live together on the floor if they choose. For those who want even more control, the three iso booths are ample, with the smallest of them capable of fitting a harpist. The space is rounded out with a small library of guitars, amps and keyboards, including an original B3, and a complete line of vintage Ampegs.
Despite its size, gear and proximity to the big city, Black Dog is a shockingly affordable room (one of the perks of setting up shop in a small town). The space has attracted its share of notable upstate acts like Ra Ra Riot, Sean Rowe and Railbird, and now, a growing number of New York City producer/engineers like Joe Blaney, Jonathan Jetter, and Andrew Maury, who gives the space rave reviews.
In the days of vinyl and tape cassettes, providing your listeners with information about your music was simple: Everything from song titles to songwriters, lyrics to album art, engineering credits to UPC codes, could be included in the sleeve or album sticker, and that was that.
Today, growth in the industry is driven by music downloads rather than physical sales, which continue their slow decline. But even as consumers increasingly turn away from physical media, we haven’t lost liner notes entirely. They’ve just begun to move onto our hard drives and into the cloud.
Current tools for sharing essential info, basic credits and album artwork on digital files can still be improved industry-wide. But as things stand, if your release is missing all of these things, the fault does not lie with the technology. And as complex as all the metadata options may sound, breaking them down into a few main categories can help bring the whole field into focus.
Today, we’ll be discussing CD-Text, ID3 tags and online databases – the three main vehicles for distributing the information and extras that you can provide with a digital release.
CD-Text
The first way of sharing album information with fans digitally has been around since 1996, and can be burned right into CDs. If you ever pop a disc into a car stereo, or a home DVD/CD player, and see the title of each song listed on your display, you can thank CD-Text for that.
The CD-Text protocol allows us to bake a wide variety of information right onto the disc: The names of artists, composers and arrangers, as well as titles of albums and songs, and even the boring-but-essential stuff like UPC codes for albums and ISRC codes for each song (which help with tracking sales and radio play).
If you get your music professionally mastered, your mastering engineer can put this CD-Text information directly into a physical CD “Premaster” or a DDP file that you would send to a large-scale replication house.
But even if you’re just duplicating short-run copies at home or with a small-scale duplicator that can’t handle the DDP files used at big replication firms, that doesn’t mean you have to leave this information off of your release.
If you’re burning your own CDs from a set of raw WAV or AIFF files sent by your engineer, many simple consumer programs can include CD-Text these days. In the case of iTunes, all you have to do is check a box to enable CD-Text. For a bit more power and flexibility, there are affordable programs like Roxio Toast or the free “Burn” for Mac.
Even though the sales of physical CDs are continuing to shrink, some people still prefer them, and those listeners add up to nearly half the total music-buying market. They’re an even more significant force than that if we’re talking about albums rather than singles. Those who still prefer CDs often listen to music on conventional disc players, and if you leave out CD-Text, you’re leaving out an essential perk for many of your listeners.
Artwork, album credits and other liner notes can’t fit into CD-Text, but the answer here is obvious: Fans of CDs like the format in part because of its physicality. All of this can be included in a physical booklet – so include one!
It’s also worth noting that CD-Text does not embed any information in the music files themselves. Rather, it is part of what you might call the container file for the CD. This means that if you import your CD into a computer, information that is included only via CD-Text may not make the transition, and you’ll be leaving the majority of new music fans without even the most basic information, such as song titles or artist and album name.
Online Databases (CDDB, AMG and more)
Computer-based music players and portable listening devices may not recognize CD-Text, but they have another way of finding and displaying the information – and even the artwork – associated with your music: These programs rely on comprehensive online databases to pull this data from the cloud and store it along with your files on your drive.
To provide this feature, iTunes uses Gracenote’s CD Database or “CDDB.” Windows Media Player provides a similar free service using AllMusic’s “AMG” database. There are also slimmer databases that are free for small-scale software developers to integrate into their programs, such as freedb and MusicBrainz.
Getting your art and information into these databases isn’t hard. If you’re releasing an album through an already-established label or a digital distributor like CD Baby or Tunecore, they’ll help you add your info to the major databases when you submit your music.
If you’re on your own on this front, you can enter tags and submit them to the CDDB easily through iTunes. To get recognized by Windows Media Player, you’ll have to mail a retail-ready physical CD to AllMusic. They take care of new submissions in 4-6 weeks.
ID3 Tags
These online databases rely on the same protocol as computer-based music players and portable devices like iPods: the data is stored using “ID3 tags,” which embed information in MP3s, AACs, and even uncompressed WAV files.
Unlike CD-Text, ID3 tags are written right into each file. This format has many of the same fields as CD-Text, plus a few more, like “Album Artist,” that are handy for keeping things organized inside of a large library. With ID3 tags, you even have the option of including album artwork, which is impossible with CD-Text.
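To make the idea of “written right into each file” concrete: the modern ID3v2 format (the flavor that handles artwork and fields like “Album Artist”) is fairly involved, but its legacy cousin, ID3v1.1, is just a fixed 128-byte block appended to the end of an MP3. This quick Python sketch packs one – the field values are made up for illustration, and for real releases you’d want a proper tag editor or tagging library rather than hand-rolled code:

```python
def make_id3v1(title, artist, album, year, comment, track, genre=255):
    """Pack an ID3v1.1 tag: a fixed 128-byte block appended to an MP3 file."""
    def pad(text, size):
        # ID3v1 fields are Latin-1, null-padded, and silently truncated.
        return text.encode("latin-1")[:size].ljust(size, b"\x00")

    return (
        b"TAG"              # 3-byte header that marks the tag
        + pad(title, 30)
        + pad(artist, 30)
        + pad(album, 30)
        + pad(year, 4)
        + pad(comment, 28)  # v1.1 shortens the comment field...
        + b"\x00"           # ...a zero byte flags that a track number follows
        + bytes([track])
        + bytes([genre])    # 255 = "no genre set"
    )

# Hypothetical example values:
tag = make_id3v1("Ballad of the Golden Hour", "Widowspeak", "Almanac",
                 "2013", "Example liner-note comment", 6)
assert len(tag) == 128  # every ID3v1 tag is exactly 128 bytes
```

The fixed-width layout shows both the appeal and the limits of embedded tagging: the data travels inside the file itself, but a 30-character title field leaves no room for artwork or long credits – which is exactly the kind of shortcoming ID3v2 was created to fix.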
To add ID3 tags to your releases, you can either work with a label or digital distributor like CD Baby or Tunecore, or add them yourself using a simple tag editor – many of which are free or affordable. Popular programs include MP3Tag (PC); Tagr, Tag, or Fission (Mac); and ID3 Editor, Jaikoz and even iTunes (Mac/PC). You could also ask your mastering engineer to help with this. I do it as an added service for my clients all the time.
For some inexplicable reason, there are a few fields in CD-Text that are not included in ID3: chief among them are slots for UPC and ISRC codes, which can be used to help track sales and radio play.
Thankfully, ID3 incorporates an open-ended “comments” section that allows for inclusion of this data, as well as all sorts of extras like web addresses, album credits, thank-you lists and the like.
In theory, there should be no limit to what you can add in the comments section, making it a near-perfect place to include digital liners. But in practice, some programs truncate the comments section. iTunes, for instance, will not let you include more than 255 characters in this field. And if you use another program to add more text, it will be chopped down to 255 characters when brought into iTunes.
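If you’re assembling a comments field yourself, it can be worth trimming the text deliberately rather than letting iTunes chop it mid-sentence. A minimal Python sketch – the helper name is my own, and the 255-character figure is simply the iTunes limit described above:

```python
ITUNES_COMMENT_LIMIT = 255  # characters iTunes retains in the comments field

def fit_comment(liner_notes: str) -> str:
    """Trim liner notes to the iTunes limit, cutting at a word boundary."""
    if len(liner_notes) <= ITUNES_COMMENT_LIMIT:
        return liner_notes
    # Leave room for a one-character ellipsis marking the truncation.
    trimmed = liner_notes[:ITUNES_COMMENT_LIMIT - 1].rsplit(" ", 1)[0]
    return trimmed + "…"
```

Text that already fits passes through unchanged; anything longer is cut at the last whole word, so at least the truncation happens on your terms instead of the player’s.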
(This is one of the last major issues with iTunes, along with Apple’s refusal to make sure that a transparent, intelligent, volume normalization is enabled by default. Fixing these two shortcomings would immediately help to slam the book shut on two of engineers’ favorite complaints: In one fell swoop, we could bring an end to the lack of proper digital accreditation and help to bring the loudness war to a close once and for all. As a side note, some people will tell you that automatic volume normalization features like iTunes’ Sound Check degrade sound quality. Perhaps this was once true, but as things stand now, this is wrong. What these technologies actually do is simply and transparently turn down the volume on the loudest albums, providing a more seamless listening experience and a disincentive to make albums sound worse just so that they can sound louder.)
An Under-Explored Frontier: The Digital Booklet
So, if the biggest digital music retailer in the known universe has what is essentially a broken “comments” field, then what’s an artist to do about comprehensive digital liner notes?
Fortunately, there’s an alternative that has been available for several years now, but remains woefully underutilized. iTunes and Amazon now allow artists to include digital booklets of 4 pages or more along with their releases. It costs next to nothing to make these virtual liner notes available to your fans, and I recommend it to anyone who asks. (And even to people who don’t.) Unlike physical CD inserts, these digital booklets use a 4:3 ratio to take advantage of the full viewing area on-screen. Adapting the CD art you’re already using into this format is not difficult at all.
In an even more ambitious move, Apple announced a next-generation interactive booklet called the iTunes LP in 2009. Because it was initially restricted to major labels – and because those labels were less than enthusiastic about participating – the format has yet to take off.
However, this new high-res, interactive take on metadata still has plenty of promise. And it’s now open to independent artists. Hopefully, at some point, they might lead the way in doing a lot more with it than the majors have. Compared to building an interactive app for new albums, such as Björk and Philip Glass have done, creating a stunning iTunes LP takes relatively little skill.
Like many good ideas, the possibility of engaging, comprehensive digital liner notes may have become feasible before the market was ready for it. But it seems likely that immersive, full-featured digital album art will someday become the norm. It’s certainly one of the things I miss most about physical formats.
Even if digital media can never completely reproduce the tactile satisfaction of a format like vinyl, if we can begin to offer even a sliver of that experience – by moving metadata out of the realm of tech geekery and into the realm of art – we’ll have gone a long way toward improving the experience of recorded music for countless millions of fans.
Towards the end of Dave Grohl’s directorial debut, the rock documentary Sound City, drummer Mick Fleetwood warns us about “the downside” to all the technological advances that have so changed the face of music production: That they might lead a person into “thinking that ‘I can do this all on my own.’”
“Yes, you can do this all on your own,” Fleetwood quickly concedes. “But you’ll be a much happier human being to do it with other human beings. And I can guarantee you that.”
Sound City is at its best whenever it takes this tone – which it does most of the time. Those of us who feared (as I did) that the film might come across as an ode to diamond-encrusted buggy whips can breathe easy.
That’s not to say that Grohl and his interview subjects – the likes of Tom Petty, Paul McCartney, Rick Rubin – don’t pine for increasingly impractical analog technologies that have been largely supplanted over the years. Or that they don’t sometimes look down their noses on the digital tools that have come to dominate music production. They certainly do both, from time to time.
But when they do, it’s largely because they’re out to promote the values that these outmoded technologies tend to reinforce: Practice, preparation, dedication, collaborative spontaneity and that in-the-moment experience of making inspiring music with inspired peers.
Despite its steadfast and somewhat conservative perspective on how music should be made, the tone of Sound City remains one of aspiration, inspiration and affection – never derision or condemnation. Even Neil Young – who, now nearing his 70s, can be something of a curmudgeon when it comes to audio technology – is made to seem accepting of other ways of working, even as he makes a curiously unstudied remark about the birth of the CD.
His is not the only small technical lapse that may raise eyebrows among sound engineers in the know. Immediately after extolling the virtues of the amazing ambient character of Sound City’s live room and how good it is for drums, the film cuts to making a big deal out of the drum sounds on Fleetwood Mac’s 1975 release by way of example. Although it’s a damn cool sound, those are, in fact, some of the deadest, driest drum tracks imaginable, and could probably have been made just about anywhere, given enough baffling.
But these questionable moments don’t detract much from the movie at all. As much as Sound City pivots around changes in technology, it never obsesses over the geeky, techy details. For the most part, that’s actually a good thing. In addition to keeping the pace light and forward-moving, it allows the film a potential to reach beyond the market of a few tens of thousands of working musicians, engineers, and recording enthusiasts.
A brief cameo by that legendary designer of recording consoles, Rupert Neve, sets the tone in that department: Director Dave Grohl hams it up for the camera, nodding and smiling as if dumbstruck while Rupert Neve talks about his namesake console, which the film centers around. Grohl’s feigned ignorance is likely to comfort lay audiences as he pretends that basic audio terms like “microphone amplifier” and “crosstalk” are the very height of techno-babble.
This kind of self-effacing affability is part of what makes Grohl so likeable throughout Sound City. As much as he tries to make the studio and its vintage recording console the stars of the movie, it’s the personalities of the subjects that shine through. Perhaps his own, most of all.
Grohl can be both silly and sincere, sometimes at once. He has a cadence that borders on that of the ADD-surfer dude, and he seems unpretentious and un-self-serious, displaying the kind of understated confidence that comes along with knowing that you’re really damn good at playing the drums.
You don’t have to like the Foo Fighters to like Dave Grohl. And that’s a good thing, because as much as this is a story about a studio and a way of working, it’s also a personal story for Grohl. Nirvana’s Nevermind, the album that changed his life and the lives of so many others, was recorded there. And that story is tied up with the story of Sound City.
Although Grohl likes to wax poetic about how great the Sound City Neve console sounds; about how magical their room was, and about how their way was the best way to make real records, apparently the rest of the world didn’t think so for long stretches at a time. The truth is that the crusty old studio with the carpet on the walls was on the verge of going under more than once before it finally closed for business in 2011.
It had been on the verge of bankruptcy just before the Nevermind sessions came through. And it wasn’t until after that record shot past Michael Jackson and Michael Bolton on the way to #1 that the studio was hopping again.
Grohl romanticizes that console and that space, but in reality, it was the fact that great music was recorded within its walls that put the studio on the map to begin with. And after a long dark period, it was great music being recorded there once again that made it a hot spot. None of the gear had really changed.
The truth is that compared to the power of a great record, a good room and a great console have almost no power at all. Sound City’s many successes and failures are clear testament to that.
Although that point may have been lost on Dave Grohl at times, he does a surprisingly good job as both director and emcee. Paul Crowder’s editing and pacing are commendable as well.
The one place where the film gets just a touch self-indulgent is toward the very end when Grohl – rather than taking on the quixotic mission of trying to save Sound City Studios – simply buys their old console for himself and installs it in what’s essentially an oversized home studio. Here, Grohl collaborates with a string of A-list rock stars, to mixed results.
Some of the pairings are more awesome in theory than they could ever be in real life, such as when Sir Paul McCartney and Nirvana bassist Krist Novoselic swing by to join Grohl in writing a new rockish romp, reminiscent of Helter Skelter, right on the spot.
A jam session with Trent Reznor of NIN and Josh Homme of Queens of the Stone Age leaves the two seeming just a bit pompous compared to the down-to-earth Grohl, but the result is a downright memorable instrumental track, plus a few mixed words in defense of both digital tools and formal music training.
For me, the standout musical moment was an unexpected one: Lee Ving of Fear sings a bewildering punk rock tune at breakneck speed that sounds just a little bit like Nomeansno. Out of the entire movie, it’s probably the one song that Kurt Cobain would have really, really liked.
Even this whole section, the spottiest in the movie, is still a good watch. The only thing that really doesn’t work in the entire film is – ironically enough – the sound mix.
At times, the level fluctuations in Sound City are laughably ill-advised. I’ve never in my life found myself riding the volume control on my remote like I had to while watching this movie.
Perhaps those jarring jumps in loudness between music and dialogue were intended to be exhilarating. Maybe they even work inside of a movie theater. But seeing that the movie is playing in exactly two theaters worldwide, it’s safe to say that the majority of viewers have been watching at home, just like me. In this context, the rollercoaster levels are at times beyond awkward, even bordering on frustrating.
But these quibbles aside, Sound City is a surprisingly able debut. Regardless of whether you’re 100% sold on all of the film’s conclusions, it makes its case warmly and often, and it’s easy to recommend to any fan of rock music or recording technology.
At $7 to rent and up to $13 to download, the price is slightly higher than average, but it makes sense for a niche film like this one. Based on the overwhelmingly positive user reviews for the movie, it’s safe to say that most of the thousands of people who have ordered it so far have felt it was money well-spent.
In the end, it’s an uplifting movie, even if the moment of the studio’s shutting down strikes you as an honest tragedy. I found myself getting a little choked up as the original studio came to a close – and not just because it was so sad, but because the movie made it all seem so avoidable, as if it was merely principled stubbornness over technology and workflow that came between the studio and financial solvency.
As Sound City reached its heartbreaking nadir, my girlfriend turned to me and asked, “Why didn’t they just adapt? It seems like it would have been so much easier.” It’s a good question. And I didn’t have an answer. I still don’t.
In this episode of Input\Output, Geoff and Eli talk to David Lowery, the former frontman for Cracker and Camper Van Beethoven, who is now an economics professor at the University of Georgia.
Last summer, Lowery wrote an open letter to Emily White, an NPR intern who claimed to have nearly 12,000 songs in her personal library, but to have paid for just over a dozen albums. The letter generated a firestorm of attention, drawing upwards of a half million visits a day to Lowery’s artists’-rights blog The Trichordist.
As a label-owner, econ professor, a former “quant” for the financial sector, and a platinum-selling musician with indie cred and a cult following, Lowery brings a singular perspective to the business of music.
In this podcast — the first in a 3-part series where Geoff and Eli talk to experts about copyright and intellectual property in the 21st century — Lowery offers some compelling ideas about how we got where we are, and where the industry is headed next.
Listen to the podcast below, or right click here to download.
Sometimes people do their best work when they step outside their comfort zones. Widowspeak‘s latest release, Almanac, out January 22 on Captured Tracks, is evidence of just that. Its sound is the product of a few creative people veering just slightly off course. And the album is better for it.
The band’s 2011 self-titled debut was a fair album that got fair reviews. On it, Widowspeak are rickety, restrained and reverb-drenched, inviting countless comparisons to Mazzy Star, and drawing on many of the same lush and washed-out rock and shoegaze references that seem to be making the rounds in recent years.
Their latest album, Almanac, starts off in very much the same way, but it ends with a far more distinct personality. Thanks to a few key choices, they manage to avoid becoming just another could-have-been accessory to more inventive reverb-loving contemporaries like Beach House, Beach Fossils and Warpaint.
Something happens around the half-way point of Almanac, just as the song “Ballad of the Golden Hour” kicks in, sounding a little like a melancholic and driving version of The Cardigans, only with a lot more teeth and a little less ornamentation. It’s the sound of a band beginning to find its voice.
From there on out, the album seems to take on a new life. Everything they do leaves behind their past legacy of fair-but-middling navel-gazer dirges and pedestrian Chris Isaak covers. Suddenly, they seem to sound like themselves.
The most frequently repeated part of the Widowspeak story is that singer Molly Hamilton was initially frightened to death of performing, only pretending to sing during rehearsals and recording vocals only when no one else was around.
That perhaps, explains part of her decision to hide behind copious walls of reverb on the band’s first release, and on the first half of Almanac. But as the new album progresses, that veil is lifted and Hamilton steps forward. She reveals her personality, a newfound comfort with tunefulness and the subtleties of expression, and the band does the same.
Guitarist Robert Earl Thomas’ melodic lines still remain intriguingly wet and atmospheric, drawing in equal parts from surf rock, shoegaze and spaghetti westerns, but the remaining instruments begin to come into sharp focus along with Hamilton’s voice. From tight, dry, 70s drum kits to drivingly woody acoustic guitars, no longer does Widowspeak’s sound sit like a long-boiled soup. It’s become a substantive dish that shows some evidence of real song craft and mature performances.
Some of this comes from what might be considered “production” choices – clear ideas about aesthetic direction, and a new approach in which a reformed rhythm section works to deliver authoritative takes free of compromise.
Wringing Out The Influences
“You have to be careful not to sound exactly like your influences, or like other things that people may compare you to,” says Kevin S. McMahon, a producer and engineer who had previously worked with The Walkmen, Swans, Cult of Youth, Frightened Rabbit and Titus Andronicus.
His fear was that the constant Mazzy Star comparisons were holding the band back from finding their own voice and their own audience – and that some of their associations with an easily-identifiable contemporary style could be more of a liability than an asset.
“Once a ‘thing’ starts happening it’s already done,” he says. Better, he suggests, to find touchstones from a couple of generations prior. According to McMahon, the effort to slowly back away from the Mazzy Star comparisons and the overly washed-out sound expected of young Brooklyn bands over the past several years was “tenaciously intentional”.
Inspirations came from unexpected places. “On first seeing them, I got a very strong, strange Fleetwood Mac kind of vibe,” he says, referencing a band that has been so uncool for so long that they’re just about ready to become hip again.
Guitarist Robert Earl Thomas’ initial reaction was something along the lines of “I hate Fleetwood Mac,” but singer Molly Hamilton and producer Kevin McMahon had just started uncovering those old records as a compelling reference point, particularly 1979′s Tusk.
Even McMahon says, “It’s not something I would gravitate towards normally out of musical taste,” but there were elements in those recordings that served as guideposts, and particularly in the album’s decisively more striking second half: the hard-panned, unconventional and interwoven guitar lines, the tight, dry, machine-like drum kits, and the clean, chugging, strummed rhythms that support a hauntingly subdued voice.
For McMahon, Tusk and other records that helped bridge the 70s with the 80s served as the most “diametrically opposed” frame of reference possible to Mazzy Star. But it wasn’t only about taking the band out of their comfort zone. McMahon had to leave his as well: “It was some of the most deliberate studying of a sound I’ve done and it required changing up all the things I normally do.”
“This studio [Marata Recording, outside New Paltz] has become a big gravitating place for bands that want to do a live record. I often record things live in one room with no baffles,” he says. “I’ve gone to great lengths to make this live room one of the livest live rooms anyone is likely to encounter,” he adds, explaining how he normally nails drums into place on a wood platform to help maximize reflections throughout the room.
Chasing after a hint of the 1970s sonics and Americana aesthetics required a very different approach. McMahon instead deadened the room heavily. And, since the most compelling aspects of Widowspeak are the contrasts between Thomas’ emotive, lyrical guitar playing and Hamilton’s subdued, almost deadpan delivery, they crafted the tracks backwards – starting with a click and the bare core of the songs, and adding final drum takes in the end.
“It wasn’t about playing it all live, it was about constructing things,” McMahon says. “They basically lost their rhythm section prior to going into the record,” and when a new drummer came in, one accustomed to playing in speed metal bands, McMahon says the drummer was quick to admit, “‘I’m way out of my element. The rug is pulled out from underneath me, so guide me.’”
McMahon was happy to take advantage of this, and couldn’t have wished for a more ready partner to chase down the 70s inspired “drum-machine-like, super-dead, thuddy, amazing snare thing.” To achieve this, real-time feedback from recorded sound had to “dictate” the drummer’s performance.
“The sound he was getting in his headphones would clearly prevent him from playing louder. It was built from really winding up the mic preamps and from some of them being super-heavily compressed so if you were to play it normally it would go into distortion. If he were to hit cymbals really hard, the sound would have exploded.”
And so, a light touch, and a physically-deadened space and drum kit led to a foundation that allowed plenty of room for the more expansive elements of the music to stretch out and become immersive without being forced to hide behind a soft focus or a wall of wetness.
For McMahon, this relatively dry and segmented approach was “a grand departure from what has been a major part of my life for a long time.” Perhaps the band would say the same thing.
As a listener, my only wish is that, instead of hedging their bets, the band had gone full-on with this approach for the entire record, and not just the second half. It suits them well. When they stop adhering too strictly to what’s become something of an overdone indie aesthetic, Widowspeak cease to sound like a style and start sounding a lot like individuals. As Almanac goes along, it becomes clear that Thomas and Hamilton have come a long way in a short while.
Dave Derr, designer of the instant-classic analog Distressor, says his high-end audio company is ready to move “furiously” and “excitedly” into the digital domain.
Dave Derr of Empirical Labs got his start in audio as an analog man at a digital company, testing circuit components for Eventide Electronics’ breakthrough hit, the H3000 UltraHarmonizer.
When he invented the instant-classic Distressor compression unit, he remained an analog man in an increasingly digital world. It was the late 1990s, and in a time before widespread clones of the iconic LA-2A and 1176 compressors, Derr used analog FET circuits to emulate them, squeezing their charms into a brutal Swiss Army box dubbed the EL-8.
It went on to become, arguably, the most popular boutique analog compressor of all time. To walk into a well-appointed modern studio is to see a Distressor somewhere in the racks.
When I asked Derr about what was to come next for his company, I teased him slightly, playing devil’s advocate as I am obliged to do, in the hopes of prompting him to wax poetic about his die-hard love of analog magic:
“Dave – I love the Distressor,” I said, “But tell me: Why should I care about an analog compressor now? And why should I care about one in ten years? Why stay so committed to analog?”
“Actually, it’s funny you say that,” Derr responded without so much as a pause. “We’re pretty much in the process of going all digital right now.”
“I worked at a digital company – Eventide – for years, and I love digital. For one thing, there’s the consistency and the repeatability. And then, you can do things in digital that you could never do in analog. That’s very appealing.”
This is not to say that Empirical Labs has plans to pull the plug on the manufacture of their Distressors and Fatsos and Lil Freq EQs. They are all still shipping now, and selling at a steady clip. Derr, a self-professed “pain-in-the-ass,” spent as much as two years designing each of them to be hardware that would stay relevant in perpetuity.
“The goal for us is a few great products,” he says. “Not to throw out a whole bunch of products to see what sticks. So we always test the heck out of stuff, sometimes beta testing for over a year. The hardest product was probably our EQ. The goal was to make an extremely powerful EQ with a ton of features, that would last forever.”
“But I also designed 3 or 4 other products where, after up to a year of testing, we decided, ‘Nah, this is not up to the standard of what we do.’”
“People probably would have liked some of them,” he says, mentioning a DI and a handful of compressor designs that didn’t make the grade. “And we do have some test units out there that people won’t give back.” But ultimately, for Derr to release a design, it has to be among the best in its class, it has to come in at an inspiring price point, it has to be repeatable and reliable, and it has to be stuffed to capacity with both character and features.
That last bit is probably Derr’s defining genius, if he has one: every Empirical Labs unit is crammed with control, and does something, or some combination of things, that no other box really can.
The EL8 Distressor can blow up audio, compressing and distorting at the same time, or cleanly and authoritatively tame peaks, adding just a bit of character and girth. It can give the impression of an LA-2A or an 1176 or a vintage dbx160, or do things none of those boxes could ever hope to do.
The Fatso Jr saturates and “warms”, sending signal through transformers and multiple non-linear circuits, while the Lil Freq packs in more features than almost any EQ this side of a computer screen, every square inch of its faceplate crammed with control.
Then there’s the Mike-e preamp, which starts with an input stage that’s flat from 3 Hz to 200,000 Hz and ends with a “CompSat” section capable of adding a little vibe or tearing it all apart.
That last one drives home Derr’s design philosophy: As much as he loves the idea of saturation and pleasant degradation, he also wants his tools to be as hi-fi and as consistent as he can make them. He never officially released the opto-compressors he designed over the years, citing lack of consistency.
“I think the problem there is the opto-couplers themselves,” he says. “They’re like snowflakes. No two are alike.”
“There are companies that make renowned opto compressors that they’ve sold thousands of, and I can tell, they’re not within a dB or two of each other – And they have to spend hours testing parts to even get them that close.”
Engineers in the field reportedly loved some of Derr’s discarded prototype test units, but they did not pass one of his main criteria: undeviating audio fidelity. And to him, that’s one of the most exciting prospects of digital.
Adapting to Digital
“I’m friends with 10 different developers,” says Derr. “Right now we’re just trying to narrow things down.”
A few years ago, Empirical Labs put a big toe into the digital market with the release of the EL7 Fatso Jr./Sr. for Universal Audio’s UAD platform. Derr’s guess was that it would be one of the hardest pieces to emulate, because it is so non-linear. If they had some success there, he could be convinced.
“Everything in [the Fatso] is non-linear,” he says. “At first I asked Dave Berners [of UAD] if he’d even be interested in doing it, because trying to recreate that thing is like trying to model 8 Distressors.”
Their results with the plugin version of the Fatso proved two things for Empirical Labs: First, that it was possible for a plugin to live up to Derr’s exacting standards and to accurately emulate its analog counterpart.
“Right off the bat, [Berners] got the soft clipping sounding really good. I compared the soft clipping to the soft clipping of the Fatso under a microscope and it was just incredibly close. As soon as I saw that I said, ‘Yeah, he’s going to be able to do it.’”
The two went back-and-forth for about a year, perfecting the response of the plugin. Derr glows as he talks about Berners’ work, citing the man’s patience, persistence, and hunger for detailed feedback that he could put to work in the emulation.
In the end, Derr says that UAD was able to get the software to behave in a way that was stunningly faithful to the original, even as they worked together to add in bucketloads of new features. “You get the total Fatso vibe with that plugin. Even here at the studio, I’m more likely to just use the plugin unless I’m doing something really crazy. It captures not only the soft clipping, but the warmth, the saturation, the compression.”
The experience taught a second lesson as well: That a successful plugin doesn’t spell doom for hardware sales. If anything, they discovered first hand, it seems that the success of one may go hand-in-hand with success for the other.
The original analog Fatso is easily one of Derr’s most popular rack units, despite the $2,500 list price. But not long after the software version came out, plugin sales swiftly outpaced hardware sales, although Derr says both markets continued to grow.
“Anyone who has done this will tell you that software plugins will not adversely affect hardware sales,” he remarks. “And we have found that to be true. The Fatso plugin and hardware have not directly competed with each other. I doubted it at first, very seriously. But now, two and a half years later, I just don’t doubt it any more.”
In an industry where hardware manufacturers might be lucky to keep 10-20% of their list price as honest revenue, software, with its far lower per-unit costs, lets a company keep profits going even while charging less and serving customers it was never able to reach before.
It’s a good thing, because for Derr, new profits mean new designs.
Derr expects Empirical Labs to have a new plugin out sometime in 2013. But just as with his analog units, Derr designs with performance in mind, not deadlines, so a solid date is not forthcoming.
Still, “We are definitely moving that way,” he says, “and we will definitely be selling plugins on our site.” He even says that they’re “winding down” as far as analog development is concerned. There might “be a couple more” new analog units in the works, but after that, Empirical Labs has its eye squarely on where the market is headed.
“We’re moving furiously into digital,” says Derr. “I’m looking really excitedly at it.”
The company has a few tools that would be an obvious fit for emulation. But the next new plugin that EL releases will not be based on a pre-existing analog device.
“It may have some similarities. It may do some of the things other products do. But very few parts of the circuit will come from hardware.”
Derr cites several benefits when he talks about designing directly for DSP. There’s the flexibility of interface, clearly a playground for him, and the near-limitless power to shape every aspect of a non-linear curve.
But another reason he’s not aiming to release a direct emulation of Empirical Labs hardware immediately is for the sake of protection.
If it weren’t for cracks, Dave Derr says, “A Distressor plugin would have been out 10 years ago.”
Derr says he has received near-constant requests for a plugin version of his flagship design. And that’s precisely why he has not released one.
All that demand indicates that a Distressor plugin is especially likely to be targeted for cracking by disreputable coders with too much time and not enough scruples on their hands.
“The Distressor is a flagship product. If we’re going to do it, we’re going to put our heart and soul into it. But to go through all that, only to have it cracked within a year? I’m just not willing to do that.”
Those of us who work in the music industry are acutely aware of how a lack of control over intellectual property can sap creativity and focused effort from the world – not to mention economic activity and jobs. Let’s hope that the prospects of a Distressor plugin – sure to be a hit if it were developed and released – are not another casualty.
In the meantime, Derr and Empirical Labs are prepared to test the waters with a new plugin next year, in the hopes of discovering that copy protection has improved to the point where they can continue to invest in developing new tools for the frontier where so many engineers have moved, and continue to move.
I, for one, have got my fingers crossed. If a software version of the Distressor ever does come out, I’ll be among the first in line to buy it. In the meantime, Empirical Labs remains one of the most respected and accessible high-end hardware companies around. Whatever the future holds, that doesn’t seem likely to change anytime soon.
Today’s audio processors tend to fall into two broad categories: Those that try to recreate the past, and those that tire of it. iZotope’s new Trash 2 plugin falls squarely in that second camp.
iZotope released the original Trash plugin back in 2003, marketing it as a multi-band distortion unit.
The new version, nearly 10 years in the making, is less an update, and more a complete overhaul based on that same general theme of complete audio annihilation.
What It Does
Trash 2 is a multi-purpose sonic mangler, composed of more than a half-dozen individual audio processors.
Rather than mimic a single piece of equipment, this plugin is an entire toolbox that would require at least a 3-foot-high rack of outboard gear if you wanted to even begin to replicate it in the physical world.
It can easily overdrive, pulverize, or otherwise radically re-morph your sounds. But used judiciously, and with help from a master wet/dry control, Trash 2 can also act as a subtle enhancer.
How it Does It
Although it’s marketed as sound distortion software, it’s hard to say which effect truly lies at the heart of this plugin, since each processing stage is so flexible and fully-realized.
Trash 2 consists of six discrete modules: Distortion (named “Trash”), Impulse Response Filters (called “Convolve”), Delay, Dynamics, and two separate Filters.
They can be placed in any order you desire, and then individually solo’d, muted, or combined.
The “Trash” Module
Trash 2 features more than 60 custom distortion algorithms that mimic everything from tape, tubes and fuzzboxes to AM breakup and the satisfying bit-smashing of a Nintendo Gameboy.
As with each processor that makes up Trash 2, the distortion module is almost endlessly customizable.
It offers the option to click and drag in order to create your own personal non-linearities, or to even assign different types of overdrive to each frequency band.
If you were so inclined, you might give your low end a little bit of subtle tube grit, while your high-end gets some tape-like saturation and your midrange is pulverized into smithereens of granular white noise.
Alternately, you could choose to saturate only a single band, for instance the high-end, effectively turning the Trash section into an aural exciter.
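To illustrate the general idea, a minimal sketch of multi-band waveshaping in numpy: split a signal into bands and apply a different non-linearity to each before summing them back together. (This is a toy two-band model for illustration only, not iZotope's actual algorithms; the filter, drives, and curves are all assumptions.)

```python
import numpy as np

def lowpass(x, alpha):
    """One-pole low-pass filter; alpha in (0, 1] sets the cutoff."""
    y = np.empty_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc += alpha * (s - acc)
        y[i] = acc
    return y

def multiband_waveshape(x, alpha=0.1, low_drive=2.0, high_drive=4.0):
    """Split x into low/high bands and saturate each one differently."""
    low = lowpass(x, alpha)
    high = x - low                                      # complementary high band
    low_sat = np.tanh(low_drive * low) / low_drive      # gentle, tube-ish curve
    high_sat = np.clip(high_drive * high, -1, 1) / high_drive  # harder clipping
    return low_sat + high_sat

# Small signals pass through almost untouched; hot signals get squashed.
t = np.linspace(0, 1, 1000)
quiet = 0.05 * np.sin(2 * np.pi * 5 * t)
out = multiband_waveshape(quiet)
```

Swapping the `tanh` and `clip` curves for other shapes, or adding more bands, is how this simple scheme generalizes to per-band "tube grit here, pulverized noise there."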
The “Convolve” module
The other uncommon component in Trash 2 is the “Convolve” section. It is a convolution or “impulse response” filter – the very same type of processing employed by many of today’s best software reverbs.
In order to lighten the CPU load (and to focus on what Trash does best) this plugin is loaded up with very short impulse responses that radically reshape tone rather than add long reverb tails.
The library comes packed with more than 100 IRs, dominated by things like guitar speakers, snare drums, wooden cabinets, and everyday household objects. There’s even a whole section of impulse responses culled from human vowel sounds and animal noises.
Ever wonder what that bass guitar would sound like re-interpreted through the snorts of a pig? Neither have I, but now we can find out.
It’s also worth noting that you can load your own samples, or even increase the maximum sample time of this section (provided you’re not too worried about CPU load), turning Trash into a convolution reverb unit on steroids.
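The underlying operation is plain convolution: the output is the input smeared through the impulse response, sample by sample. A minimal numpy sketch, using a made-up five-tap IR rather than a real measured one:

```python
import numpy as np

# A toy impulse response: a few decaying taps, something like a tiny
# resonant box. (Illustrative values only; real IRs are measured from
# speakers, snare drums, household objects, and so on.)
ir = np.array([1.0, 0.6, -0.3, 0.15, -0.05])

def convolve_ir(signal, ir):
    """Filter a signal through an impulse response by direct convolution."""
    return np.convolve(signal, ir)

# Feed in a single-sample click: the output is the IR itself, which is
# exactly what "impulse response" means.
click = np.zeros(8)
click[0] = 1.0
out = convolve_ir(click, ir)
```

With an IR this short the result is a tone-shaping filter; stretch the IR out to several seconds of decaying room sound and the very same operation becomes a convolution reverb.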
The “Filter” Modules
Trash has two separate filter modules.
They are identical, and by default they appear both directly before and after the distortion module. Of course, you can move them anywhere in the chain that you like, or even arrange them in parallel rather than in series, if you prefer.
For me, this was among the most powerful and the most fun parts of Trash 2. I’d even say that you could just as easily call this “a filter plugin”.
Each of Trash’s filters offers six bands, and each band can be assigned a filter curve from a list of more than 20 varieties.
You could, for instance, combine the low-pass filter of a vintage synthesizer with the low-shelf curve of a Pultec, and then add a midrange boost with one of the cleanest-sounding peaking filters you’re likely to hear.
The filter types are organized under names like “vintage,” “screaming,” “clean,” and “saturated,” not one of which is misleading. There are even a couple of “vocal” filters that can give vowel-like tone and texture to your sounds.
Some of these filters are so powerful, colorful, and ready to be pushed that I often found myself using just one or two of them to dramatically reshape tones. But what really impressed me most about this section were the filter modulations. This is where you can create wahs, tremolos and talk-box-like effects using LFOs of a variety of shapes and speeds. The modulations can also sync to the track’s dynamics or its tempo, and can even be triggered by a sidechain input.
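As a rough sketch of what an LFO-swept filter is doing under the hood, the filter's cutoff is simply recomputed every sample from a slow oscillator. (This toy one-pole filter and its sweep range are assumptions for illustration, nothing like iZotope's actual implementation.)

```python
import numpy as np

def lfo_filter(x, sr=44100.0, rate_hz=2.0, min_c=0.02, max_c=0.4):
    """One-pole low-pass whose coefficient sweeps with a sine LFO,
    giving a crude wah / auto-filter effect."""
    n = np.arange(len(x))
    lfo = 0.5 * (1.0 + np.sin(2 * np.pi * rate_hz * n / sr))  # 0..1
    coeff = min_c + (max_c - min_c) * lfo                     # swept "cutoff"
    y = np.empty_like(x)
    acc = 0.0
    for i in range(len(x)):
        acc += coeff[i] * (x[i] - acc)   # filter coefficient changes per sample
        y[i] = acc
    return y

noise = np.random.default_rng(0).uniform(-1.0, 1.0, 44100)
swept = lfo_filter(noise)   # one second of noise, "wah"-ing at 2 Hz
```

Replace the sine with a tempo-synced ramp, or with an envelope follower driven by a sidechain input, and you have the tempo- and dynamics-triggered modulations described above.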
The Delay Module
The delay module was another of my favorites. It’s not quite as full-featured as the other sections, but that’s only to say that iZotope stopped short of putting the kitchen sink into this one.
It might be nice to have a delay modulation or panning function built-in, but otherwise, pretty much any feature you’d want out of a good delay is in there.
There are a handful of delay degradation profiles based on things like tape machines and early digital delays, and there’s even one that sounds a bit like a Cooper Time Cube – an early analog device that used what was essentially a garden hose in a box to delay signal.
They all sound surprisingly good (even when they are pushed to sound “bad”) and are endlessly fun to manipulate.
The degradation of each algorithm can be controlled with a separate “trash” fader, and there’s control for stereo width as well. The feedback circuit goes well past 100%, leading to instant dub-freakouts when desired.
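That runaway behavior is easy to see in a toy feedback delay line (a bare-bones sketch, not Trash's actual delay): with feedback below 1.0 each repeat decays, and at or above 1.0 each repeat comes back louder than the last.

```python
import numpy as np

def feedback_delay(x, delay=100, feedback=0.5, n_out=None):
    """Feed the delayed output back into the input. feedback >= 1.0 means
    every repeat is at least as loud as the previous one: the runaway
    'dub freakout' effect."""
    n_out = n_out or len(x)
    y = np.zeros(n_out)
    for i in range(n_out):
        dry = x[i] if i < len(x) else 0.0
        delayed = y[i - delay] if i >= delay else 0.0
        y[i] = dry + feedback * delayed
    return y

impulse = np.zeros(500)
impulse[0] = 1.0
decaying = feedback_delay(impulse, delay=100, feedback=0.5)  # fading repeats
runaway = feedback_delay(impulse, delay=100, feedback=1.2)   # growing repeats
```

In a real unit the feedback path would also run through the degradation stage, so each repeat comes back not just quieter (or louder) but progressively more trashed.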
The Dynamics Module
The multi-band dynamics module has just about everything you could ask for in a basic digital compressor, and it sounds and works just fine.
For me, it is probably the least inspiring of the six modules. Of course, it is hard to keep pace with the over-the-top allure of all the other sections, each of which can be bogglingly powerful.
What I did like most about the dynamics section of Trash 2 was its fairly novel visual feedback. It shows gain reduction over time, overlaid on top of the waveforms of the source signal.
There’s no substitute for mixing with one’s ears, but this could be a good learning tool for those training themselves to hear the subtleties of different attack and release settings and slopes.
Finally, the frequency-sensitive triggering of the dynamics section can come in handy when working with resonant filters, and a nice added touch is that a transparent limiter protects the final output stage. This allows you to crank up the saturation without worrying about unintended overloads or awkward gain-staging.
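The gain-reduction-over-time display corresponds to a quantity any feed-forward compressor computes internally. A minimal sketch of that calculation, using a generic textbook design (the detector, thresholds, and coefficients here are illustrative assumptions, not iZotope's code):

```python
import numpy as np

def gain_reduction_db(x, threshold_db=-20.0, ratio=4.0,
                      attack=0.01, release=0.001):
    """Trace the dB of gain reduction a basic feed-forward compressor
    would apply at each sample (a larger coefficient = faster response)."""
    env = 0.0
    gr = np.zeros(len(x))
    for i, s in enumerate(x):
        level = abs(s)
        coeff = attack if level > env else release
        env += coeff * (level - env)               # smoothed level detector
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over = max(0.0, level_db - threshold_db)   # dB above threshold
        gr[i] = -over * (1.0 - 1.0 / ratio)        # dB of gain reduction
    return gr

quiet_gr = gain_reduction_db(np.full(100, 0.01))   # -40 dBFS: under threshold
loud_gr = gain_reduction_db(np.ones(2000))         # 0 dBFS: heavy reduction
```

Plot `gr` over the source waveform and you have roughly the picture Trash 2 draws; watching how the trace ramps in and out is exactly where different attack and release settings become visible.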
At first, what I liked least about Trash was its spartan and dreary GUI.
But soon into demoing this plugin, I had a complete reversal of opinion, and came to find that the spare visual presentation was actually one of its greatest assets. It’s rare to find a plugin that fits in so many features and controls in such a logical and uncluttered way.
Due to the sheer depth and flexibility of the plugin, Trash 2 could easily run the risk of becoming overwhelming. But such care has been put into the layout that this potential risk never emerges as a genuine threat.
It’s surprisingly easy to find what you’re looking for in Trash and mouse-over text easily explains the few unfamiliar knobs. If you’re comfortable with computers and truly understand audio, you will “get” how to use each of the modules in this plugin very quickly.
What seemed like such an uninspiring monochrome at first glance ultimately turned out to be just the thing for such an experimental and tweakable plugin. As you find yourself lost in sound after many minutes of stimulating tone-bending, the GUI seems to disappear.
On my system, the controls were so seamless and responsive that my cognitive mind was lulled to sleep, and it almost felt like I was able to reach out and sculpt pure, liquid sound. It’s a rare feat, but the plugin’s layout invites deep listening, even while providing so much visual feedback.
Trash 2 is a very specialized plugin.
Those who would benefit from it the most are likely to fall into one of two categories: electronic musicians, for whom this might be a thrilling tool that would find near-constant use, and sound designers, who craft new sonic environments and special effects for film, television and video games.
For EDM producers, Trash can warp samples, drum machines and synthesizers into molten, pulsating nuggets of hot twisted audio. Alternately, it can usher them into dripping, tripping, dreamlike spaces.
For sound designers, Trash can help sculpt unexpected ambiences and create horrific or sci-fi sound effects. It can also convincingly imitate a variety of phones, speakers and radio effects.
The only plugin to rival (and perhaps surpass) Trash on the latter front is Audioease’s Speakerphone, which takes a decidedly more literal and visual approach to replicating real-world devices. But it also costs more than twice as much.
At less than $200 street, Trash 2 may be a no-brainer for many EDM producers and post-production engineers. But creative engineers who work on more conventional recordings may find it valuable as well.
The speaker models in the “convolve” section, when coupled with the distortion algorithms of the “trash” module can make this plugin into a pretty convincing amp simulator.
As much as I’ve embraced digital, I still love working with real amplifiers, and so this has not been my favorite use of the plugin. I have liked Trash, however, as a distorter and reinforcer of drum breaks, bass-lines, instrumental leads and even vocals.
Patching into a chain of quirky old hardware boxes can be more fun and inspiring, and it certainly makes for a better story. But working with Trash is about as seamless and satisfying as things can get in the digital world.
For producers and engineers who tend to work on more straight-ahead recordings, this is the kind of plugin you might use once on every album, or perhaps once on every song, depending on your taste for experimentation.
For some, that will be satisfying enough to warrant the price of admission. For others it might not. That call is yours. A generous 10-day demo period makes it pretty easy to decide for yourself.
Quibbles and Qualms
Trash allows you to apply automation to any of its hundreds of parameters. It’s welcome functionality, but there’s one drawback: In my version of PT 10, I was unable to use the normal keyboard shortcuts to simply click the desired control, instantly enabling it for automation.
Instead, the only way to apply automation to the desired control was to go hunting for it in an enormous list of text — easily the largest I’ve ever seen.
This parameter list offers no search function (at least not in Pro Tools) but its entries are intelligently named and well-organized. This wasn’t enough to make that enormous list fun to scroll through, but it did make it possible.
If iZotope were to fix anything in a future update, this should be one of the first places to look.
My only other critique is that the delay module — which I really did enjoy — could be fleshed out just a bit, in order to help it keep pace with the remarkable power of all the other sections. A modulation option would be nice here, as well as complete control over stereo panning and the number of delayed repeats.
Otherwise, the plug-in easily succeeds at what it sets out to do. Whether or not that’s appealing to you is a question that can only be answered by demoing the product.
I like Trash 2.
It is most certainly not a single-function plugin, and stands in stark contrast to all the faithful, one-off emulations of vintage compressors, fuzz boxes and EQs out there.
Of course, there is something to be said for the deliberate and considered approach of auditioning one vintage squeezebox or another and then moving on to the next sound. But there’s also something to be said for diving right into a project with an oversized Swiss Army knife and seeing what you might come out with. That’s what Trash 2 is for.
Auditioning sounds on a plugin like Trash has absolutely nothing in common with debating which vintage of the same Bordeaux wine has a superior “bouquet”.
Instead, Trash presents you with clear and dramatic choices: Would you like a bold Cabernet with dinner or a nice light Pinot Grigio? Maybe the answer is “neither”, and what would really hit the spot would be a grape soda, a tomato juice, a Belgian ale, a sparkling water, or a Pabst Blue Ribbon.
Or perhaps: a fluorescent lime Hi-C spiked with vodka and red pepper flakes, then lit on fire and hurled at a passing automobile.
Now that’s what Trash 2 is all about.