Reading through audio forums and magazines, it’s easy to become convinced that the next upgrade is the one that will really make a difference, taking your tracks from mundane to impressive.
Some upgrades, of course, are bound to be more significant than others. This week we asked a handful of busy engineers for some of their most essential “big wins” in recent years: Those simple changes that brought significant results.
Sometimes, big results come from big investments. For instance, Vaughan Merrick, an engineer and mixer who’s worked with Amy Winehouse, Mark Ronson, Ghostface Killah, and The Chemical Brothers, cites his recent upgrade to a Pro Tools HDX2 system as one of his most important and satisfying new acquisitions.
Merrick is a vocal proponent of high sample rates and 32-bit floating-point systems, and he’s also the kind of guy who considers expensive mic-pres, compressors and microphones to be an “invariably excellent investment.” Still, even he’s quick to admit that “choices in the high-end market are often more about aesthetics than quality.”
In addition to sexy and significant investments in new systems and boutique audio gear – which are often among the first things we think of when we hear the word “upgrade” – I also wanted to suss out those simple and essential changes that always seem to have a positive impact on every other link in the chain, and pay dividends for years to come.
#1: A Room That Works
When we asked a handful of our favorite engineers about their recent big wins, it wasn’t gear, but environment that was probably the most common theme.
“My biggest wins recently are the facilities I’ve been working in,” says freelance engineer Ted Young, whose discography includes work with Andrew W.K., Secret Machines, The Hold Steady, and Sonic Youth. “Namely Room 17 studios in Bushwick (my new favorite drum room) and Fluxivity in Williamsburg (the best mixing room in the New York area).”
From an empirical perspective, it’s easy to see how the rooms we record in have the potential to be among our weakest links or our greatest assets. Common issues like low frequency resonances and flutter echoes affect not just the sounds we capture, but the very choices we make.
“Your monitoring environment should be as scrutinized as your personal engineering technique,” says Andrew Maury, a young engineer whose list of clients already includes the likes of Ra Ra Riot, Tegan and Sara, Death Cab for Cutie, RAC and The Static Jacks.
There are many ways to treat a room, from DIY diffusors and broadband absorbers to complete room kits and architectural redesigns. Maury’s own favorite solution has been a mobile one: His ASC Attack Wall.
“It’s a set of cylindrical tube traps mounted vertically on adjustable poles/stands which you arrange to form a U-shaped wall as a mix environment,” he says. “You set up your monitors inline with the perimeter of the wall, essentially creating a virtual ‘room in a room.’ Transient detail is impeccable, low end both extends and tightens, imaging is panoramic, and the sweet spot seems to double in size. Not to mention, you can use them in tracking to create micro acoustic environments. It is a very pricey endeavor because ideally, you want to have 16-20 of them. But it’s easily the best money I’ve ever spent.”
As crucial as room tuning is, environment isn’t just about getting the sound right. Eli Crews (tUnE-yArDs, ?uestlove, Sean Lennon, Deerhoof), is a recent transplant to Brooklyn from Oakland, CA. He says that “This year, the biggest upgrade I made for myself was creating a room that only I work in, and nobody else touches.”
“Coming out of a ten-year run at shared studio space, a place that is purely my own has become immediately essential for me. It being in my apartment means that I can wander in any time of day or night and listen to a mix, tweak a few things, go for a bike ride, come back and tweak some more, go get a bagel, come back, take a nap, listen, and so on. Easy recall is an essential part of the equation as well…For the remote mixing I’m mostly occupied with these days, this all has proven more valuable to me than a perfectly-treated control room or expensive monitors.”
For Kyle “Slick” Johnson (Fischerspooner, The Hives, Modest Mouse), moving out of New York City and into a new space that made more sense for his lifestyle has easily been one of the biggest wins in recent memory.
“I moved to New York City in 2001 and immediately fell in love with it,” he says. “I spent the first several years trying to get my feet wet in the audio world doing live sound, sales and assisting at a small studio in Greenwich Village. The record industry was already in full nose dive mode while the NYC real estate market was flourishing. Over the years my client base and my skills in audio grew, but that really didn’t seem to matter very much to my life outside of audio. I was always broke and my prospects of ever owning a studio that was more than just a closet in some less than ideal neighborhood seemed like a pipe dream.”
“So with the encouragement of my (now) wife, we decided to move out of New York to Philadelphia. Oh man, that was the smartest thing I’ve ever done for my career. I now have an apartment, a private studio (three rooms!), a car, a savings account and a daughter, all of which I’m finally able to afford after scraping the bottom of my checking account for 8 years in New York.”
“I live in a nice neighborhood, and my studio is in an ‘up-and-coming’ neighborhood, but the cost of both of those places is likely about 1/4 of what I would be paying for the same places in NYC. Plus, nearly 50% of my clients still make the trip from New York to Philly to work with me. It’s only a 2 hour drive, and because I have a lower overhead here in Philly I can keep my rates affordable for most bands and I don’t have to take records purely because they have budgets. I get to work on music that I really like.”
#2: Microphones, Speakers and Instruments
It’s easy and fun for us engineers to geek out about our favorite magic boxes. But the electronic devices that make for the most obvious differences in the studio are usually those with moving parts. Transducers – those physical elements in microphones and speakers that change sound into electricity and vice versa – are still some of the hardest things for manufacturers to do well.
Adam Lasus (Helium, Clap Your Hands Say Yeah, Dawn Landes, Matt Keating), the bicoastal owner of Fireproof Recording and Room 17, says that if he were just getting started today, “the first thing I would invest in to really get better sounds and sonic depth is a good microphone. MJE/Oktavamod and ADK both have mics starting around $300 that really shine and sound as good as mics that are much more expensive.”
Eli Janney of Girls Against Boys and SonicScoop‘s InputOutput Podcast seems to agree: “Investing in a great vocal mic was a purchase I never regretted (Every project has vocals, right?). I actually have two: a Soundelux U99B, which is dark but amazing on the right vocal [and] a CharterOak S538B, a more modern sound but not over crispy, never fails to deliver during vocal tracking.”
Eli Crews mentions that “there are a handful of pieces of gear that immediately started making an imprint on the sound of my recordings — I believe for the better — the day I acquired them: Chandler Limited TG-1, original rackmount Sansamp, Eventide H3000, Lynx Auroras, UAD-2 plug-ins, RETRO Powerstrip, Coles 4038, Neumann M49, Royer SF-12, SoundToys software, etc.” But that’s not what he thinks of most.
“A few years ago I decided I mainly wanted to spend money on instruments, and have amassed a ’60s Ludwig drum set, an Estey folding pump organ from the 1920s, a Doepfer modular synth, a beat-down, barely working Optigan, and various old Moogs, ARPs, Rolands and Casios. Having these instruments available to bands significantly changed the sounds I was recording, and in the long run have had much more of an influence over ‘my sound’ than which mic or preamp I use.”
Another common answer revolved around studio monitors. Your speakers are your only meaningful link to the sounds you’re recording. As with microphones and instruments, there is tremendous variability among speakers. Although there are more good affordable models than ever before, this definitely remains a “get what you pay for” category, and an upgrade here is almost always money well spent.
#3 New Skills and Special Tools
Right alongside the environment and mission-critical gear like mics, speakers and instruments, are our own skills as listeners and manipulators of sound.
One of the most lasting parts of my own audio training was taking the time to learn to differentiate between frequency ranges, compressor settings, delay times, reverb types, audio codecs, mic placements, pitches, harmonies, and to continuously push the limits of my own hearing.
I’ve found that the best way to work these critical listening muscles is not just with studio work, but through deliberate practice and blind listening drills. I’d consider time spent on these exercises among the biggest wins of my entire career, second only to time spent listening to music and working with bands. A good critical listening program, whether it’s one you buy or one you design yourself, can be a lifelong “big win.”
Although you could spend a lifetime honing these fundamentals, there are always new application-specific skills to be learned as well, and they can open up tremendous new possibilities. For Geoff Sanoff, a producer/engineer and co-host of the InputOutput Podcast, sitting down to really master tuning programs was one of those game-changers. He recommends “learning how to use tuning plug-ins so you can’t tell you’re using a tuning plug-in.”
“It takes more time, but the result is simultaneously more natural and yet not gratingly out-of-tune. It also can save your ass when re-tracking isn’t an option. I’ve fixed guitars and pianos with Melodyne that would have been unusable, and I’ve used it to subtly correct singers whose vibe is not supposed to be perfect.”
Another purchase he’s never regretted is similarly specialized. Geoff gushes like Billy Mays when he offers that “iZotope RX2 is the best value function-wise for all manner of sonic restoration needs. Gets rid of guitar buzz, hum and unwanted pops as well as any other program under $5,000!” These are the kinds of tools that allow you to do jobs that might otherwise be impossible.
For Andrew Maury, figuring out digital gain-staging and saturation were huge skill adds that still pay off every day:
“Use simple gain plugins like crazy if you mix ITB. Everywhere. Let them do all the dirty work for your balancing and gain structure feeding plugins. It’s an equivalent practice to how analog console mixers use the line trim knob at the top of the channel, and they’re dirt cheap in CPU resources! 90% of your mix balance can come from well-placed, static gain adjustments in plugin chains. The result is that your output faders in the DAW become simple tweak handles, not heavy lifters.”
“As an engineer and mixer, also start thinking about “packing” your transients. There’s only so much room in the medium, and achieving “size” is no easy task. Especially in the context of rock music, you simply must introduce distortions and saturations to start shaving transients, which will buy you headroom to fit the whole thing up into the ceiling.”
“You don’t necessarily have to compromise the integrity of the audible signal to put a serious dent in a track’s technical dynamic range…Tasteful distortion can make something “louder” and meter lower. There are so many great analog modeling plugins (Slate VCC, UAD Studer, SoundToys, etc) that are out now which have blown open the door to make this process more and more believable ITB.”
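Both ideas above — static gain trims and transient shaving — can be sketched in a few lines of code. This is only an illustration: the drive amount and test signal are invented, and a plain `tanh` curve stands in for whatever saturation plugin you might actually reach for.

```python
import numpy as np

def db_to_linear(gain_db):
    """A static 'gain plugin': one dB trim as a linear multiplier."""
    return 10.0 ** (gain_db / 20.0)

def rms_db(x):
    """RMS level in decibels."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# A transient-heavy stand-in signal: a quiet body with sharp periodic peaks.
rng = np.random.default_rng(0)
x = 0.1 * rng.standard_normal(48000)
x[::4800] = 0.99

# Static gain staging: trims in a chain simply sum in dB (-6 + 3 = -3 dB).
staged = x * db_to_linear(-6.0) * db_to_linear(+3.0)

# Transient "packing": gentle saturation shaves the peaks while leaving
# the body of the signal (and its RMS level) nearly intact.
drive = 4.0
y = np.tanh(drive * x) / drive

peak_drop_db = 20 * np.log10(np.max(np.abs(y)) / np.max(np.abs(x)))
rms_drop_db = rms_db(y) - rms_db(x)
```

On this test signal the peaks fall by roughly 12 dB while the RMS level drops well under 2 dB — the track can then be turned up by the difference, which is the headroom trade Maury describes.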
Odds and Ends
Huge wins are just waiting to happen on the ergonomic end of the studio as well. Geoff Sanoff’s list includes “a patch bay for allowing me to incorporate analog gear and route things quickly and efficiently. Saving time means more brain power spent on music.” Other common answers include multi-channel headphone systems like those by Aviom and Hear, and the addition of a console or faderpack to an otherwise DAW-based studio.
There are some topics that engineers will quibble about until the early hours of the morning: analog summing, super-high sample rates, the relative importance of contemporary preamps and converters, this VCA compressor versus that one.
For every engineer on one side of these debates, there seems to be another crowing the opposite story. Some of these questions can potentially be settled with science, and some may forever live solely within the realm of preference and opinion.
Wherever they stand on those issues, it seems that almost every engineer can at least agree that the few indisputable essentials include rooms, mics, speakers, instruments and skills. Throw in a killer song, a good performance, and you have all the materials needed to make a great sounding recording.
When you focus first on upgrades in these few broad categories, you are deep in the realm of the big wins. And once you have these areas well-covered, you can go anywhere from there.
Agree? Disagree? Share some of your biggest wins in the comments section below, and tell us all about those essential upgrades you’d never second-guess.
Justin Colletti is a Brooklyn-based audio engineer, college professor, and journalist. He records and mixes all over NYC, masters at JLM, teaches at CUNY, is a regular contributor to SonicScoop, and edits the music blog Trust Me, I’m A Scientist.
If you read SonicScoop, chances are you already know a thing or two about compressors. You’re probably already familiar with basic controls like your threshold, which allows you to set the point at which a compressor “kicks in”, and your ratio, which allows you to adjust the amount of compression you’ll get.
Early in their training, most new engineers will also come to understand the concepts behind controls like attack, release and knee, which essentially adjust how swiftly your compressor reacts to signals that approach the threshold. (Even though it can take a while, sometimes years, to hear these variables well and to develop good instincts about tweaking them.)
But one of the last things that new engineers tend to explore when it comes to compression is sidechaining. For those who are unfamiliar with it, this is the process of using the output of one track to control the action of a compressor on a completely different track.
There was a time, decades ago, when many compressors lacked this function. But by the early 2000s, companies like dbx, Alesis and FMR had started to offer sidechain inputs on even their most affordable units. And today, a good DAW will allow you to add a sidechain input to almost any compressor, regardless of how simple its layout might be.
Sidechaining is a technique that can be used like a scalpel or a paintbrush, a hammer or a piece of fine-grain sandpaper. Today we’ll explore a few of its most common and potent applications.
In the beginning, one of the most common uses of a sidechain was a pragmatic one: Automatically reducing the level of music to make room for the human voice.
Today, most engineers mixing films and pre-recorded TV shows are likely to use volume automation to ride music levels, but sidechaining can still be handy, especially in live broadcast or event situations where music must make way for commentary.
Using a sidechain in this way is pretty simple. Just strap a compressor across your music track, and set it with a fairly low threshold, a high ratio, a fast attack and a long release time. But, instead of letting the compressor react to fluctuations in the music, you’ll use the sidechain input to force it to react to your dialogue tracks instead.
This way, whenever a voice enters the scene, the music is brought down, hard. If you set your release long enough, the background music will stay down between words and phrases, automatically making way for extended narration.
This isn’t the only way to use a sidechain to make way for the human voice. In dense pop production, some mixers like to use a much more subtle version of this setup to clear space for a lead vocal.
In this scenario you’d likely want to go with a lower ratio and a faster release, so that the competing bits of music duck ever-so-slightly, almost imperceptibly, and then come back up to full volume rather quickly.
Instead of using your vocal track to trigger a compressor that’s strapped across the entire music mix, you might target just an instrument or two that want to stay loud between vocal lines, but step just outside the spotlight whenever the singer makes an entrance. With a fast enough release, you can even set your compressor to let go in-between notes or phrases.
The goal with this approach is not to hear the music ducking, but to hear the vocal come through unobstructed – while leaving competing elements at an appropriately loud level in between. This can make for a subtly different effect than volume automation, as the volume dips and swells breathe with the rhythm of the voice. It can sometimes sound a bit more refined than automation, and it’s usually a little less tedious, too.
Punching Through a Pad
One of the other most common ways to use a sidechain compressor in a music mix is to allow fleeting instrumental elements to penetrate through lush synths, string pads, or ever-present guitar tracks.
There are times when you may find there’s an unacceptable tradeoff between crafting string pads and similar sounds to be appropriately full, impressive and thick, and keeping them from obscuring other parts. In these cases, you can slap a compressor onto the instrument in question, and use a sidechain to trigger that compressor whenever the instrument it is masking makes an appearance.
This way, you can give supporting sounds the expansive feel they deserve, and also let a fleeting instrumental element poke through, whether it’s an arpeggiated guitar, a busy bassline or the crack of a snare.
Clearing Room For The Kick
Perhaps the most common application — particularly in music with electronic elements — is to use a sidechain compressor to let the kick drum punch a little hole right through a bassline. To do this, simply insert your compressor on the bass, and use a bus to send your kick drum signal into the sidechain input.
Whether it’s a deep and viscous synth bassline or a super-busy bass guitar pattern, you can be quite subtle with this technique and still have it pay dividends. In some contexts, this approach can be far more powerful than EQing a kick drum to help it cut through the track. This way, there’s no need to compromise on the low end or to sculpt an overly “clicky” kick sound in order to let it poke through.
Letting it Pump
Of course there’s no law that says you have to be subtle. Whole genres of music have become obsessed with the radical use of this effect.
Set your ratio a little higher, your threshold a little lower, and your release a little longer, and you can end up with a kick drum sound reminiscent of the ones popularized by French house groups around the turn of the millennium. For an especially dramatic effect, don’t just use the sidechain to compress the bass track – you can use it to influence a compressor strapped across the entire mix.
Daft Punk’s “One More Time” and “Around The World” are often cited as some of the most iconic examples of a heavy-handed approach to the sidechained kick drum. (They used the sidechain input on a cheapo Alesis 3630.) By the mid-2000s, this kind of sound had become a significant flavor throughout American pop and EDM.
Used tastelessly, it can sound hopelessly dated. But for some types of music this fundamental technique borders on necessity. In EDM, it’s like a secret handshake. And in more experimental strains of rock, pop and Hip Hop, it can be used to tremendous effect without conjuring cliches.
UnGate It: Sidechain Compression in Reverse
Sidechains can be used to do more than just clamp down. Some engineers like to flip this arrangement around, using the sidechain to trigger a gate instead.
One popular application is to have your kick drum trigger a low frequency sine wave for extra weight and some synthy, low-end resonance.
To do this, place a gate over a signal generator set to output a low-frequency sine wave (preferably in tune with the track), and then route your kick drum channel through a bus so that it opens the gate.
You can take this effect even further by playing a long-toned bassline on a synthesizer instead of using a fixed generator. The result is a musical bassline synced perfectly to your kick.
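A bare-bones version of this kick-keyed gate is easy to sketch in code. The frequency, threshold and release values here are arbitrary examples: the gate opens instantly when the kick exceeds the threshold, then closes smoothly, letting the sine ring just behind each hit.

```python
import numpy as np

def gated_sine(kick, sr, freq=50.0, threshold=0.2, release_s=0.05):
    """Key a gate on a low sine wave from the level of the kick track."""
    t = np.arange(len(kick)) / sr
    sine = np.sin(2 * np.pi * freq * t)       # tune freq to the track's key
    gate = np.zeros(len(kick))
    a_rel = np.exp(-1.0 / (release_s * sr))
    g = 0.0
    for i, s in enumerate(np.abs(kick)):
        g = 1.0 if s > threshold else g * a_rel   # instant open, smooth close
        gate[i] = g
    return sine * gate
```

Mixed back underneath the original kick, the result is the extra low-end weight described above.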
Tame That Frequency: A Band-Sensitive Compressor
You can also use a sidechain to tune a compressor so that it’s most effective around certain frequencies. This is essentially the concept behind a de-esser. To use a compressor in this way, feed a heavily EQ’d version of the track in question into your compressor’s sidechain input.
For instance, if you had a bass with one particularly out-of-control note, or a singer whose voice sounds overwhelming in a certain range, you could use an EQ to emphasize those frequencies, and then send that signal into the sidechain input. Conversely, if the low frequency content of a signal is causing a compressor to go haywire, a sidechain EQ can ensure that it only reacts to mid-frequency content.
Fortunately, many modern-style plugin compressors, like Pro Tools’ Dynamics III, have a sidechain EQ feature like this built right in, and need no extra routing to achieve this effect. But if you want to use a vintage-style plugin or piece of hardware, getting up and running is not difficult at all.
If you think it through, you’ll come to realize that this kind of band-sensitive approach lies at the root of any multiband compressor. In fact, using sidechains and duplicate channels, you could feasibly create a multiband processor out of just about any plugin compressor you own.
Routing it All
Setting up a sidechain can seem daunting if you’ve never done it, but it’s really pretty easy. Most hardware compressors will have a simple 1/4” TRS sidechain input that you can feed from a bus or aux send.
Inside of a DAW like Pro Tools, things are just as simple and perhaps even quicker. To make a kick drum the sidechain input of a compressor that’s strapped across a bass track, simply instantiate a send on your kick drum track and route it to any unused bus – let’s say Bus 1 – just as if you were sending it out to a reverb.
Then, strap a compressor onto that bass track and set its sidechain input to bus 1. You can find this option next to the little key-shaped icon on the top left of a Pro Tools plugin, or next to the words “Side Chain” on the top right of a plugin in Logic. Then, feed an appropriate amount of signal and set your compressor to taste.
In the old days, sidechain inputs were largely seen as utility features, helpful in broadcast or for controlling sibilance in vocal tracks. But like always in the music world, once a tool is invented, someone’s going to find a way to use it creatively.
Applied with restraint, sidechains can be used to anchor the low end or to create multidimensional mixes, where subtle ducking helps usher sounds in and out of the foreground, creating a clear and powerful blend that constantly shifts in focus. Used with abandon, a sidechain can introduce radical and unforgettable pumping effects, creating balances that are otherwise impossible to achieve.
There are many cases in which creative use of a sidechain might be woefully ill-advised. (For instance on a jazz recording or a rootsy Americana album.) But for engineers who work across genres, it’s just another essential technique to keep ready in the toolkit.
PACE launched a new desktop-based version of their industry-leading license manager late Sunday night. Unfortunately, the new release left some iLok users with a bit more than “Zero Downtime.”
Professionals know to expect some degree of hiccups and incompatibility issues with almost any type of new software launch these days, but the new iLok license manager seems to have been particularly unready for prime time.
For those who were unlucky enough to find themselves downloading the new license manager on Monday morning, a bug caused some types of licenses to temporarily fail. This was especially a problem for users who had issues with their license for Pro Tools, which relies on iLok for startup.
Fortunately, within 24 hours, PACE had already clamped down on the issue, putting fixes in place late Monday evening. Tuesday morning, service on the website was sluggish as the system updated and many users worked to fix their issues, but at least a solution was in place.
I decided to take iLok at their word and risk my own personal authorizations for the sake of this write-up. After testing early Tuesday afternoon, I’m happy to report that despite the issues on Monday, the new system is now working pretty smoothly.
I did find that connecting to the server took longer than in the past. Instead of syncing within 2 or 3 minutes, it took more like 10 or 15. (I stepped away from the computer for a quick snack, and by the time I was back, the sync was done.) My first attempt at starting Pro Tools was buggy, but a simple restart took care of that, and I’ve been up and running without issue ever since.
For iLok clients who updated their system within the 24 hour window when there was a major issue, PACE recommends the following fix:
Steps you need to take once you see the license type of “License” instead of “Temporary”:
- If your iLok with the license had not been seen by the server, just the updating of the database record for your license will take care of the issue.
- If your iLok had been seen by our server and your license is no longer working, please plug in the iLok and deactivate each of the affected licenses. To do this, open the Detail pane, then select the license from the license grid at the top. Choose the Deactivate link. Once the licenses are deactivated, you may then activate it to any allowed location.
If you are still experiencing difficulties please use this link to contact us and we will work with you to fix your issues. We are monitoring our support tickets closely during extra long hours to right matters.
Some users who updated that first day report that they have also had to go through a tedious process of deleting corrupt files from their machines. To those who updated on the first day, iLok offered this public apology:
“We have let you down and we know it. We could have done a better job communicating that we were hard at work to fix the issues some of you have been having and for that we are truly sorry.
We handle tens of millions of licenses for users, some stretching back over a decade with legacy code. Whilst we have worked to ensure a seamless transition, some issues occurred.”
Fire at the Water Cooler
They say good news travels faster than bad news on social media. But warnings seem to travel even faster still.
Almost immediately, the story of iLok’s rocky launch began to spread through forum postings, and even more rapidly, through shares on Facebook and Twitter.
One popular messageboard already has 12 pages of fervent posts about the issue. (Oh, wait. Make that 13.)
But despite the clamor on the web, a representative from PACE confirmed for us that only a small number of users were affected:
“All of our support tickets to PACE on this issue totaled less than 75 and some of those were duplicates,” he said. “We strive to answer all support tickets as quickly as possible.”
That number seemed surprisingly small to me, especially when compared to the pitched tenor of criticism on the web. I wondered how many total users there were in the iLok system for a sense of scale:
“Total users/accounts in the system… just shy of 1 million. Tens of millions of licenses moving around, and that number grows exponentially,” he said, adding, “Yes, 75 is a small number but any number is unacceptable to us.”
Despite the actual service outage, which may have ruined a day’s worth of work for scores of users, PACE’s biggest problem following the launch of the new license manager may have been a lack of communication.
After the newly-introduced system was known to be experiencing issues, no message appeared on the company’s website advising new users to hold off on installing until the issues were resolved. The company seemed to take no initiative in making its customers aware of the issue, or of its timetable for completion.
This, coupled with the fact that the iLok website lists no contact information, is probably central to why PACE seems to attract such a significant amount of schadenfreude, despite offering a fairly reliable and reasonably-priced service that supports most of the major software developers in the industry.
Consumers, especially in the music world, just don’t feel very warmly about companies that seem like faceless monoliths. (Even if they so readily buy and rely on their wares without even thinking much about it.)
Of course, professional users know to never (as in never, ever) upgrade essential software either the first week it’s released (much less the first day) or immediately before a session.
With that in mind, many of iLok’s core users may be willing to accept a 24-hour startup SNAFU. But in the Internet age communication is expected. And it’s expected immediately.
Steven Slate, whose company relies on iLok for piracy protection, stepped up to issue a public statement of his own:
“I think PACE should have been much more up front yesterday in acknowledging the problems, and I’m glad that today they have finally taken measures to do so. That of course is not enough consolation for those who were unable to work yesterday. But at least it shows that they are taking some positive steps to communicate.
I can relate to PACE because like them, we are a small technology company that is trying to make a good product…Clearly, the system migration wasn’t tested enough before going live. This was a mistake, and now PACE, and anyone who is having problems, are paying the price for it. However, I can attest that fixes are already in place, and deactivating and reactivating licenses that say temporary seem to solve all issues.
Even before this new iLok client update error, there seemed to be an aura of hate against iLok and I truly don’t understand why. Because the iLok is the most convenient thing on the planet and I can’t live without it. And I’m talking as a USER now – not a developer. With the iLok, I never had to worry about switching systems, bringing plugins to other studios. I would be back into my session with all my plugins in minutes…
What happened yesterday is a real tragedy. But I hope more people will be forgiving, especially since in less than 24 hours there is already a solution, and realize that PACE isn’t the enemy. The enemy is SOFTWARE PIRACY. If you’re gonna be really angry at someone, be angry at the guys who are careless about the years of work and dedication from software developers and demolish their products by removing the much deserved revenue streams that they need to keep afloat.”
Personally, I like, use, and respect iLok as a product – even if I have had to begrudgingly accept that PACE is a small, Web 1.0-era type of company that likes to systematize and streamline customer interaction, keeping the consumer at a bit of an arm’s length.
Perhaps this experience will teach PACE that customer interaction is crucial in this day and age, and particularly in this small industry. (Even if it does cost good money to fund a more accessible customer support or PR team.)
With The Plugin Alliance and Waves becoming significant players in the piracy-protection field with new systems of their own, PACE may finally have a real incentive to step up iLok’s game on that front.
Fortunately, the company already appears to be listening. This new license manager, which had such an unfortunate launch, is actually a direct answer to market demands and pressure from their competitors:
For the first time, users will be able to license iLok protected software directly to their machines – with no dongle necessary. Yes: iLok has finally gone ahead and taken the iLok out of iLok.
(Of course, not every plugin developer may choose to allow this option, due to the increased security risk that goes along with any machine-based authorization system.)
They’ve also added a much-requested “TLC” option to their “Zero Downtime” coverage, which will officially insure lost and stolen iLoks for the first time.
So it’s clear that PACE does hear the market. They just have to do an even better job about responding to it in the future.
Justin Colletti is a Brooklyn-based audio engineer, college professor, and journalist. He records and mixes all over NYC, masters at JLM, teaches at CUNY, is a regular contributor to SonicScoop, and edits the music blog Trust Me, I’m A Scientist.
The best way to deal with a troublesome noise is to avoid recording it in the first place. In a controlled environment, like a recording studio or a film set, you’re blessed with a quiet space, clean power and revealing monitors, so that isn’t too difficult to do.
But these days, for better and for worse, more audio is being recorded in compromised environments than ever before, and at every level in the industry. Music is increasingly tracked in home studios where refrigerators hum, amps buzz and cars zip by outside windows; a growing amount of video is shot remotely on makeshift sets where booms and lavs won’t go, camera preamps hiss, and ambient noise can begin to overwhelm.
All this conspires to make audio cleanup a recurring task for engineers who rarely had to deal with these jobs in the past. Fortunately, the tools have been keeping up with the increased demand. On today’s market, you’ll find a few of the most powerful sonic-scrubbers ever devised, with prices that range from $100, right on up to near $10,000.
Some of them, like Sony’s SpectraLayers, will even let you “Unbake The Cake,” by removing not just broadband noise and ambience, but individual sounds and instruments from within a single track. We put a few of the most popular of these plugins through their paces to find out how they stack up.
In the Beginning
Not long ago, if you wanted to clean up noise your primary option would have been to employ simple filters, expanders and noise gates. A high-pass filter could allow you to fight rumble and plosives, a low-pass might help tame hiss, and a series of notch filters could help with 60-cycle hum. Basic expansion or gating could even push low-level noise down further, making the noise floor seem to disappear.
There is a series of tradeoffs inherent in any of these primitive solutions: Filters affect tone and rarely stand up to broadband noise, while conventional expander/gates are only really effective if the noise level is quiet enough to begin with, so that it’s masked by the desired signal once the gate opens.
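The basic expander/gate trick is simple enough to sketch in a few lines of code. Here’s a toy Python version (the frame size, threshold and ratio are arbitrary values of mine, not taken from any particular product) that attenuates any frame whose level falls below a threshold:

```python
import numpy as np

def downward_expand(x, threshold_db=-40.0, ratio=4.0, frame=256):
    """Toy frame-based downward expander: frames whose RMS level falls
    below the threshold get attenuated, pushing the noise floor down
    while leaving louder material untouched."""
    y = x.astype(float).copy()
    for start in range(0, len(y), frame):
        seg = y[start:start + frame]
        rms = np.sqrt(np.mean(seg ** 2)) + 1e-12
        level_db = 20 * np.log10(rms)
        if level_db < threshold_db:
            # For every dB below threshold, drop the output (ratio - 1) dB more
            seg *= 10 ** ((level_db - threshold_db) * (ratio - 1) / 20)
    return y

rng = np.random.default_rng(0)
hiss = 0.001 * rng.standard_normal(44100)               # quiet broadband noise
tone = 0.5 * np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)

print(np.abs(downward_expand(hiss)).max())       # hiss pushed way down
print(np.allclose(downward_expand(tone), tone))  # loud tone passes untouched
```

Note the tradeoff described above, baked right in: this only works because the hiss is quiet enough to trip the threshold while the tone is not.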
But the tools of the trade took a huge step forward when Dolby released the Cat43. It was an analog device that combined several frequency-selective gates, one each for bass, low-mids, high-mids and treble. It was a fairly complex circuit for its era, but devices like this one made reducing significant ambient and broadband noise a real possibility for the first time.
Although today’s best noise reduction tools are far more powerful, they rely on the same principles that let the Cat43 do its job. Where the earliest analog noise reducers made do with a few bands of frequency-sensitive downward expansion, today’s digital processors often comprise hundreds of bands working in concert.
Digital De-Noisers: Broadband
There are two major types of broadband de-noisers on the market today: Those with fader-based interfaces reminiscent of the Cat43, and those with graphical, noise-sampling interfaces that are only practical within the DAW domain.
Fader-Based Noise Reducers
The first category of fader-based noise-reducers, such as the Cedar Audio DNS One or the Waves WNS and W43, pay homage to the old-school processors like the Cat43. They rely on the ear more than the eye, inviting the user to manually set a threshold for each band individually, until maximum noise reduction is achieved without compromising the sound of the original signal in that frequency range.
These types of processors can be extremely transparent, and rarely lead to the strange burbling and resonant artifacts that can come with sloppy use of a graphical noise reducer. Their greatest advantage, however, lies in their lack of latency and processor load. They also take easily to automation and can be used freely as inserts, even in hardware-based mixing applications. But they’re also limited: in many cases, there’s only so much noise reduction you can do, and learning to set these types of processors up quickly and effectively can take a good bit of practice.
Still, the Cedar DNS series remains among the most popular noise reducers in broadcast and post circles thanks to its super-low latency, hardware integration, and a proven record for cleaning up dialog tracks in real-time.
Waves has recently taken a step into this world, releasing the WNS and the W43, their own versions of these fader-based, no-latency noise reducers. In practice, they might not be quite as effective as a Cedar system, but they carry 1/10th the price tag, and the WNS’ welcome “suggest” function can be a great shortcut and learning aid for new users.
Surprisingly, even once the learning curve is overcome, one of the most effective noise reducers from Waves turns out to be the NS-1 from their “Single Knob” series, street price: only $99. By simply turning up that single fader as far as possible before artifacting began to set in, there were many cases in which the NS-1 allowed me to obtain results that matched or surpassed its big brothers, and in significantly less time.
Graphical Noise Reducers
Far more popular in music and restoration are plugins in our second category: Graphical noise-reducers such as iZotope RX2, Wave Arts’ MR Noise, the Sonnox Oxford DeNoiser, and Z-Noise by Waves, all of which rely on “noise profiles.” They take after an earlier generation of tools popularized by Sonic Solutions, Sony, Digidesign, and BIAS, the now-defunct developers of SoundSoap.
These types of processors demand a different approach. When used in real-time, they tend to be resource hogs that add tremendous latency and suck up CPU power. They’re often at their best when used to render files, and are not an ideal choice as individual track inserts, especially where high track-counts are concerned.
Despite this drawback, these graphical noise reducers tend to be among the most powerful on the market. They allow you to “sample” the profile of your noise, automatically generating a custom, frequency-dependent threshold that will vary across the harmonic spectrum. Once you’ve captured this profile, you’re often able to manipulate a whole swath of variables, including attack and release, the amount of noise reduction overall, as well as side-chain EQ to help focus the activity of the noise reducer in one area or another.
Used carelessly, these kinds of processors can introduce zingy, warbling artifacts that can be worse than the noise itself. But with a careful touch, this class of plugins can bring down an incredible amount of noise without a trace. Although they’re capable of removing huge gobs of noise in a single pass, many users find they’re most transparent when applied less drastically across two or more stages of more subtle noise reduction.
Out of the three programs I tested intensively this month, iZotope RX2 proved to be the one to beat. Without touching a single variable, RX2′s De-Noise function was able to scrub away astonishing amounts of broadband noise with minimal artifacts. For even better results on tricky material, it offers a host of tweakable parameters, but more often than not, they were barely necessary. At only $300 for a whole suite of restoration plugins, it is a no-brainer.
In second place was Wave Arts’ MR Noise, which was just as effective and transparent as RX2 after a bit of futzing with the sidechain EQ. At only $250 for the whole suite, it’s a solid buy.
Waves’ Z-Noise found itself in a respectable 3rd place. It’s capable of getting many jobs done, but not quite at the scale or with the ease-of-use found in the best noise reducers.
Z-Noise’s default settings just aren’t what they should be. Even after sampling the noise profile, you have to go through the extra steps of setting the threshold and NR range before hearing any improvement, and you’ve got to play with the attack and release times to even approach the kind of results that RX2 and MR Noise deliver before any futzing. Even once it was set to the best of its ability, I found that Z-Noise couldn’t scrub out quite as much interference as was possible with comparable settings in RX2 or MR Noise.
At $500 and up for the single plugin and $1,100 for the entire Restoration package (Native), it’s hard to recommend Z-Noise with so many great alternatives out there – except as a welcome value-add to a larger bundle of some of the better-realized Waves plugs.
Hums, Clicks, Crackles, Pops and Plosives
These kinds of noise-reducers can do a great job of reducing broadband noise, but they’re practically useless for reducing intermittent noises like clicks, crackles, pops, plosives and clipping. They can also be pretty rotten at reducing the high-level hum caused by ground loops, amplifiers and electric guitars, which often occupy the same frequency ranges as the program material you’re looking to preserve.
Waves’ X-Click and X-Crackle did a commendable job of taming many high-level transient sound-bursts, as did the de-clicking and de-crackling modules from iZotope and Wave Arts. Each of their packages also included a hum-busting processor that essentially notches out a set of frequencies, such as 60 Hz and all the harmonics above it, perfect for taking care of ground noise.
(For those of you who only need a hum reducer, the most cost-effective option is probably McDSP’s NF575 hum filter, at only $130 Native, and as a welcome addition to their bundles. But similar results can be had with any bank of simple notch filters and a little bit of setup.)
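As a rough illustration of that DIY notch-bank approach, here’s a short Python sketch using SciPy. The filter count, Q and frequencies are my own arbitrary choices, and a real session would want them tuned by ear:

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

def dehum(x, fs, fundamental=60.0, n_harmonics=8, q=35.0):
    """Cascade narrow notch filters at the hum fundamental and its
    harmonics (60, 120, 180 Hz and so on up the series)."""
    y = x
    for k in range(1, n_harmonics + 1):
        freq = fundamental * k
        if freq >= fs / 2:
            break
        b, a = iirnotch(freq, q, fs)
        y = filtfilt(b, a, y)   # zero-phase, so the notches don't smear timing
    return y

fs = 44100
t = np.arange(5 * fs) / fs
hum = 0.2 * np.sin(2 * np.pi * 60 * t) + 0.1 * np.sin(2 * np.pi * 180 * t)
voice = 0.5 * np.sin(2 * np.pi * 1000 * t)      # stand-in for program material

mid = slice(fs, 4 * fs)   # high-Q notches ring, so judge the steady-state middle
print(np.sqrt(np.mean(dehum(hum, fs)[mid] ** 2)))    # hum nearly gone
print(np.sqrt(np.mean(dehum(voice, fs)[mid] ** 2)))  # program barely touched
```

That ringing near the edges of a clip, by the way, is one reason the dedicated hum filters earn their keep.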
Until very recently, pops and plosives were usually best taken care of by hand, through the judicious use of high-pass filters and gain rides on tiny snippets of audio. To this day, the biggest problem automated processors have with these low-frequency aberrations is not eliminating them, but identifying them in the first place. Specialized tools that help clean up these problem areas without affecting the surrounding audio are now available from the cutting-edge developers at Cedar, and even as part of Wave Arts’ specialized noise reducer package, Dialog.
iZotope takes a unique and especially transparent approach to removing plosives and intermittent interference in RX2: next-generation Spectral Analysis. So far, they’re the only company I know of that has included this type of feature in a DAW-based noise-reduction plugin.
Today’s spectral analyzers are a whole new class of multi-function audio tools. They do more than just provide visual feedback of the frequency distribution, essentially allowing the user to “unmix” audio tracks by zooming in and removing tiny portions of the source sound.
One of the earliest and most powerful consumer-facing spectral analyzers has been Sony’s SpectraLayers, which takes a Photoshop-like approach to audio.
With SpectraLayers, you can zero in on a single sound out of many embedded in one file (say, an ambulance siren among chirping birds, a honking horn in the middle of a stretch of dialog, or an out-of-tune horn in a music mix) and extract it to its own “layer,” separate from the main mix. From here, you can process it in isolation, separate it from the surrounding material, or remove it completely.
SpectraLayers is powerful, but it works only as a standalone app. RX2’s spectral functionality might not be quite as exhaustive as that of SpectraLayers, but it’s supremely user-friendly, comes embedded in a DAW-based plugin, and offers great new tricks for routine noise reduction.
Encounter a pop or plosive? Reach out, grab just the frequencies that are affected and mute them. Need to delete an intermittent word, click, squeak or breath but don’t have any room tone? RX2′s spectral “replace” function erases the offending sound and automatically fills it in with surrounding tone. No copying or pasting needed.
For the first time, these new types of processors allow us to effectively remove discrete noises that occur concurrently with our desired audio: phones ringing, sirens blaring, birds tweeting, horns honking.
These are tools that would have seemed like audio science fiction a generation ago. Although no noise reducer may be able to fix every problem, the processors around today have transformed once-impossible jobs into everyday realities.
I just hope my clients don’t learn to take them for granted. As sophisticated as these tools get, the most surefire way to end up with clean and impressive audio still remains “recording it that way to begin with.”
When Adam Lasus decided to partner up with Joe Rogers and Scott Porter at the new Room 17 in Brooklyn, it was something of a homecoming. Until high rents and new opportunities convinced him and his wife to move to LA in 2006, Lasus had run his Fireproof Recording Studio Ghostbusters-style, out of a converted 19th century firehouse in Red Hook. Between that space and an even earlier studio in Philly, Lasus had worked with a long line of indie rock artists like Helium, Yo La Tengo, Ben Harper, Dawn Landes, Matt Keating, and Clap Your Hands Say Yeah.
Although his own personal studio and Neotek Elan console still live on the West Coast, Lasus seems thrilled to be commuting back east, sometimes staying for a week or more in order to work on projects in this new room. When we met, he was in town to record a new solo album for a songwriter named Aaron Lee Tasjan. “In L.A. there are maybe 20, 30 really awesome indie bands doing great things,” Lasus says. “Here in Brooklyn there are that many on this block.”
Lasus is a youthful-seeming 44. He’s ginger-haired and gregarious, with a charming, almost boyish sense of enthusiasm for both his tools and for the people he records with. One of those people is Joe Rogers, a young label-owner, songwriter, and a former client who now runs day-to-day operations at Room 17 and engineers the bulk of the sessions. Rogers started putting out records over 10 years ago, working out of a makeshift studio in the Bronx, and has recorded with artists like The Shivers and Kelli Scarr.
The two of them sit together for an extended interview in a cavernous yet surprisingly well-controlled mix room, and occasionally finish each other’s thoughts and sentences. They share some central ideas: That trust and camaraderie are the most important aspects of the client/engineer relationship; That digital is fine but tape is more fun; And that smashing mic signals through cheap old transistor stereos is a badass thing to do.
Unable to make this meeting is a third partner, the musician and investor Scott Porter. Like Rogers, he’s a close friend and former client of Lasus’, who has made the transition from performer to producer/engineer in his own right.
Room 17 sits on a revitalizing Bushwick block, part of a once-industrial strip close to the border of East Williamsburg. The studio is located just down the street from local “DIY” venue The House of Yes, and not far from 3rd Ward, Shea Stadium, The Sweatshop, and essentially, the whole burgeoning Bushwick art and music scene.
As I walk toward their building, I pass an old minibus, parked about a dozen yards from their door. It’s spray-painted in technicolor graffiti and stuffed full of the Brooklyn equivalent of hippies (presumably psych-folk fans) brandishing iPhones and acoustic guitars. They’re perhaps indicative of this new Bushwick, although by no means emblematic of it.
As austere and industrial as the area might seem to the outside eye, the three studio partners still had a hell of a time finding a 10-year lease here (perhaps one of the only arrangements that really makes sense for a fairly high-cost, low-profit business like an affordable music studio.) New York landlords know the deal: Once the artists start moving in, residential rents start going up, and soon after, commercial rents will follow. In real life, just as in the online world, art and culture are perhaps among the biggest drivers of perceived value and economic growth. (If only more artists knew how to capitalize on that).
The inside of the studio mirrors the area itself. It’s a large warehouse space that blends thrifty professionalism with a sensible minimalist build. Rather than re-imagining the raw concrete space, the studio instead re-purposes it, keeping much of the site’s lofty, wide-open appeal intact.
Each of the rooms is huge, and somewhat spare, with stone floors and a few strategically placed carpets. But they are also unexpectedly well-balanced. There’s barely a parallel wall in the whole place, and the 14-foot-high ceilings are stuffed full of 6-12 inches of insulation, practically eliminating the need for additional trapping. Otherwise all that’s there is cement, glass and drywall, allowing the space to retain some subtle reflections that make the room sound airy and alive.
The main tracking space is enormous on its own, and it connects to two ample iso booths that are larger than some other studios’ live rooms. Even the control room by itself is bigger than many Brooklyn apartments. All these spaces are linked by immense glass doors, and downstairs there’s a makeshift echo chamber that sometimes doubles as an additional live room. Put together, it’s well over 3,000 square feet of recording space.
Gear at Room 17 is as distinctive as the space. The console is a rare Trident – an early 80 series refurbed with a newly upgraded master section. The main recorder is an equally unusual 2” Otari, once property of Manhattan’s legendary Unique Recording Studios, and it comes equipped with both the 24- and 16-track headstacks.
Naturally, there’s also a Pro Tools HD rig, and an island of rack gear is stuffed with some interesting and esoteric pieces from Valley People, Manley, ADR, TapCo, Focusrite, MXR, Allison Research and Symetrix. The mic locker is full of vibey old dynamics and some great-sounding, cost-effective mics from Peluso, Gefell, AKG, Oktava, Michael Joly Engineering and Mojave.
The idea here is to keep things affordable while offering a larger, less intimidating space than bands might otherwise find in a similar price bracket. To Lasus, one of the few challenges is helping the kinds of bands he loves working with understand that they can afford to work with him:
“A lot of bands will see something like Clap Your Hands Say Yeah on my discography and just assume we’re going to be too expensive,” he says when the subject of rates comes up. But what they tend to forget is that when Lasus recorded them, CYHSY were just like so many other Brooklyn bands: unknown and inexperienced weekend warriors, uncertain about just what to expect from some of their first real studio dates.
Lasus recalls giving their drummer Sean Greenhalgh a beer early on in their first session. He had been nervous about playing earlier in the day than usual, and that move seemed to set him at ease.
It was a way of communicating something Lasus tries to make clear in every session, one way or the other: Getting great recordings isn’t about judging the artists. It’s about understanding them. It’s about making them feel relaxed and capturing them in their most natural and un-reflexive state.
If there’s some deeper purpose to all Lasus’ high-spirited chatter and convivial energy, it’s probably that.
Music is a rare kind of art form that is made entirely out of vibration. It’s at once both ephemeral and yet inherently physical. We will never be able to reach out and grab it in our hands, but it certainly touches us in the most literal sense of that word. If you’re feeling poetic, you might even compare hearing itself to a specialized, hyper-sensitive form of touching; one that works across great distances.
Most of us already have some cursory understanding of how sound works. If you’ve gotten through most of high school, you probably know that sound travels as waves through air, liquids and solids. But it’s rare that we stop and think about exactly what that means, and what it implies. That’s unfortunate, because only by understanding this concept fully can we unlock the knowledge that’s key to clearing up some of the most pervasive questions and misunderstandings around sampling rates, room acoustics, equalizers – even about where music comes from and why it can be so mesmerizing.
If there’s a basic building block of all music and sound, it’s harmonic motion. Acoustician Dr. Dan Russell of Penn State has created dozens of free educational animations that help explain these concepts, and he makes them available to the public on his site. I use them regularly in teaching my college courses on audio. One of my favorites is among his simplest:
I could watch this thing all day.
To the left is a diaphragm – like a speaker, or the soundboard of a guitar.
As it vibrates, it pushes forward, compressing the air molecules. Then, it pulls back, rarifying the air. Repeat this back-and-forth movement enough times in a second and we have a frequency of motion that registers on the ear as sound.
At first glance, your eye may be tempted to follow the movement of the wave itself, from left to right: “Aha!” you might think. “So, these columns of compressed air molecules travel forward, emanating from the speaker, until they arrive at my ear, one after the other!”
But physics is often counter-intuitive, and that’s just not the way it works. The air molecules do not really travel from the speaker to your ear. That would be called “wind.” Instead, each one of those molecules just kind of hangs out around a general home-base and simply moves back and forth, back and forth. Almost like a pendulum.
Go on, scroll back up and take another look. But this time: don’t focus on the wave.
Instead, look at one single particle. Trace its movement with your finger.
Do you see him there, just kind of hanging out? Going back, and forth, back and forth, back and forth, like a little metronome?
That’s harmonic motion.
Instead of an original molecule from the left making it all the way to your ear, the force of the initial vibration moves from one molecule to the next, a bit like the executive’s clacking-ball toy, “Newton’s Cradle”.
(Except in the case of air molecules, they’re spread out, and they don’t physically collide. When they get too close, they actually repel each other. But that’s topic enough for a whole ‘nuther article.)
Much like a pendulum, the speed at which each molecule moves back and forth is not constant. When a pendulum, a speaker or a molecule nears the end of its “swing” in one direction, it becomes chock full of potential energy and, eager to fly in the opposite direction, accelerates towards center.
As it passes through the central resting point, it still has plenty of energy left and keeps on moving, gradually slowing down until it reaches the final extreme on the other side. There, with no energy left to keep going forward, and a bunch of new potential energy on board ready to steer it the other way, it begins hurtling back toward center again.
Graph out this kind of gradually changing speed on a piece of paper, and you’ll have a very familiar image. It looks a little something like this:
Ah, the sine wave. This natural speeding up and slowing down of harmonic motion is what gives the wave its undulating shape. If the speed were constant, we could draw it with straight lines, like a triangle. But that’s just not how vibration, and harmonic motion, work.
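If you’d like to see the numbers behind that claim, this little Python sketch (an illustration of mine, not one of Russell’s) traces a particle in simple harmonic motion. Its speed peaks as it crosses center and drops to nearly nothing at the extremes:

```python
import numpy as np

# A molecule in simple harmonic motion: its position traces a sine in time
f, amp = 2.0, 1.0                         # two full swings per second
t = np.linspace(0, 1, 1000, endpoint=False)
x = amp * np.sin(2 * np.pi * f * t)       # where the molecule is
v = np.gradient(x, t)                     # how fast it's moving

# Fastest as it passes through center, momentarily still at the extremes
print(abs(v[np.argmin(abs(x))]))   # near the peak speed, 2*pi*f*amp
print(abs(v[np.argmax(abs(x))]))   # near zero
```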
Charts like these are not too tricky to understand. If we track the movement of any individual molecule, the “up” position on this graph would represent the molecule moving as far as it can to the right, and the fully “down” position would indicate movement as far as possible to the left.
From here on out, things get a little “meta”. We can zoom out and look at the wave itself. “Up” represents compressed air, “down” represents rarefied air. Or we can look at the movement of the speaker: “Up” on this graph means the speaker pushes out, “down” means the speaker pulls back in. In an analog system, this same image could be used to indicate the fluctuation of voltage in a circuit, the change in magnetism on a piece of tape, or the ins and outs of the groove on a vinyl record.
Digital systems are a little different. To recreate this wave perfectly, all we need to know is where the molecule has been at more than two points in each cycle. From there, knowing what we do about the laws of harmonic motion, we can extrapolate where that molecule was between each of those points. Despite popular misconception, this sine wave would not be awkwardly mangled and re-drawn as a triangle or square wave. We know how molecules accelerate and decelerate. This is math that we can do. It is not an unknown.
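As a sanity check on that idea, here’s a small numerical experiment (my own sketch, not any converter’s actual code): sample a sine well above the two-samples-per-cycle minimum, then rebuild the curve between the samples with the textbook Whittaker-Shannon sinc interpolation formula, which is the same math a converter’s reconstruction filter approximates. The curve comes back as a smooth sine, not a triangle or a staircase:

```python
import numpy as np

# Sample a 1 kHz sine at 44.1 kHz, then rebuild it *between* the samples
# with the Whittaker-Shannon (sinc) interpolation formula
fs, f0 = 44100, 1000.0
n = np.arange(256)
samples = np.sin(2 * np.pi * f0 * n / fs)

# Evaluate at 500 instants that fall between the original sample times
# (kept away from the ends, since we only have a finite run of samples)
t = np.linspace(64 / fs, 192 / fs, 500)
recon = np.array([np.sum(samples * np.sinc(fs * ti - n)) for ti in t])
truth = np.sin(2 * np.pi * f0 * t)

print(np.max(np.abs(recon - truth)))   # small: no triangles, no stair-steps
```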
Of course, the animation above is a bit simple. First of all, sound propagates in all three dimensions, not in just one direction as we have here. By its very nature, sound wants to be “omnidirectional.” It is only through concerted design that we can effectively channel it one way or another. The stuff doesn’t simply go left-right, or even center-out. It vibrates every damn way it can.
But even though sound waves are a bit more complex than this in practice, this same fundamental kind of movement is still at play. In the early 19th century, a scientist named Joseph Fourier helped lay the groundwork for understanding that all complex molecular motion is basically built out of layers upon layers of these simple harmonic motions. More complex sounds – from the relatively pure tone of a flute to the overdriven chords of an electric guitar – are made out of what might be described as thousands of discrete sine waves – some of them harmonically related, some not.
A pure sine wave, where all molecules move back and forth together in perfect sync, is practically impossible to recreate in nature. Instead, what we get is a blend of molecules moving at different rates and at different times. We can see this effect in the motion of a guitar string, which does not vibrate at only one rate – but at several rates at once.
When we pluck the low E string on a guitar, we hear not only the “fundamental” pitch of about 80 Hz, but also mathematically-related harmonic overtones at 160 Hz, 240 Hz, 320 Hz, 400 Hz, 480 Hz, 560 Hz, 640 Hz and so on. (It’s actually more like 82.4 Hz and up from there, but I’ll spare you the awkward decimals.) In essence, the only things that separate this low E from the same E on a piano are the durations and proportions of these additional, harmonically-related vibrations.
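For the curious, those “awkward decimals” fall straight out of equal temperament: the low E sits 29 semitones below A440, and the overtones are simply integer multiples of that fundamental. A quick Python check:

```python
# The low E (E2) in equal temperament sits 29 semitones below A440
e2 = 440 * 2 ** (-29 / 12)
print(round(e2, 2))                       # the "awkward" 82.41 Hz

# And the overtone series is just integer multiples of that fundamental
overtones = [round(e2 * k, 1) for k in range(1, 9)]
print(overtones)
```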
What’s especially amazing about the relationship between fundamental pitches and their harmonic overtones is how, through natural law alone, they routinely fall in and out of phase with each other. This natural synchronization of vibration is what makes instruments sound so beautiful on the ear. And when you visualize it, the effect can be just as stunning:
Ok, now this I could really watch all day.
What you’re seeing here is the visual equivalent of a single, very pure note in action.
The longest of these pendulums is like the fundamental pitch, and the following pendulums are scaled proportionately shorter and faster, in essentially the same way that harmonic overtones are. This is music in motion.
On a real-world instrument, we’ll get some unrelated “inharmonic” overtones as well. These “impure” resonances are especially prevalent in instruments like distorted guitars, snare drums and wood blocks. In great doses they will obscure our sense of pitch. But these unrelated overtones are part of what makes instruments sound so damn interesting.
When we EQ sounds or treat instruments, we’re playing with these overtones – their proportions and their durations. It’s almost like creating alternate “timbres” of the mesmerizing visual pattern above by launching different pendulums at different times or from different heights; by futzing with the mathematical purity of their length; by repressing the movement of some pendulums and not others; by introducing unrelated “inharmonics”; or by making some pendulums more visible than others through changes in lighting.
Our relationship with this natural “harmonic series” is so ingrained that you can leave out the bottom pitch, and our minds will automatically fill it in. This is precisely what happens with smaller pianos, which often have soundboards too small to reproduce the deepest fundamentals.
Our ears might not “hear” the low fundamental in a literal sense, but our brains sure do. You can even try it yourself with a sine wave generator: Play your brain 110 Hz, 165 Hz, 220 Hz, 275 Hz, 330 Hz, 385 Hz, 440 Hz all at once and it will instantly go “Oh! I get it. 55 Hz. Low A,” and will fill it in without you doing a thing about it. You can’t help but hear the phantom fundamental.
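You can try the experiment in code, too. This Python sketch builds harmonics two through eight of 55 Hz, leaves out the fundamental entirely, and confirms that the signal itself contains no energy at all at 55 Hz. Play it back, and your brain supplies the low A anyway:

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs                              # one second of audio
partials = [110, 165, 220, 275, 330, 385, 440]      # harmonics 2-8 of 55 Hz
tone = sum(np.sin(2 * np.pi * f * t) for f in partials) / len(partials)

# With a one-second signal at fs, FFT bin k sits exactly at k Hz
spectrum = np.abs(np.fft.rfft(tone)) / len(tone)
print(spectrum[55])    # essentially zero: no actual 55 Hz in the signal
print(spectrum[110])   # strong: the lowest partial really present
```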
This harmonic series isn’t just where tone comes from. In it lies the very foundations of all music. If you were to play these overtones together at equal intensity, you would basically get a chord like this one:
(On piano, it would sound something like this.)
These first harmonics give birth to the western 8-tone scale. And if you zoom in on only the first, most prominent handful of these overtones, you have the raw ingredients to re-create the near-universal 5-note pentatonic scale. The truth is that the full vocabulary of music comes baked right into every note. It’s almost fractal in a way.
The appeal of these natural relationships – which stem from simple molecules vibrating in and out of phase – is inescapable for us. Bobby McFerrin demonstrates just how ingrained this series is, by using it to hack your brain:
It’s easy for us to lament the kids these days with their digital downloads and their streaming. Back in the days when tape and vinyl ruled, the stuff had substance, it had weight, it had tangibility. But to feel this way is to mistake the medium for the message.
There may always be a place for manufactured accessories to music, but for most of human history, we have experienced music not as a material product to be loaded on to tractor-trailers, but as pure vibration in the air. However we consume it, music will always be as physical as it is fleeting. It is as real as the matter around us, and as impossible to bottle as a stroke of lightning. The best we can do is to create devices that will measure it and approximate it in another form, whether that be the etchings on a wax cylinder or the ones and zeros of a solid-state hard drive.
Until we can tap audio straight into our neurons, we will always need to make molecules move in order to hear the stuff. In this way, music is, and likely always will be, an ever-evolving game that we play with the physical, natural world.
A couple of weeks ago, a good friend watched her laptop’s screen in horror as a complete stranger began uploading her entire concert from the night before onto YouTube.
She hadn’t seen this unknown cameraman, filming from the middle of the audience with a shaky, low-res cell-phone camera and capturing every moment: The mistakes, the tuning breaks between songs, the fleeting moments of awkward banter, and even a new unreleased song that the band was workshopping for the first time in front of an intimate audience.
It’s easy for many people to understand how being broadcast and exposed to the entire world against your will could make you feel violated and helpless. Our ability to share and broadcast music cheaply and easily may be among the great advances of the 21st century, but without consent, sharing just doesn’t feel right. This goes double when it’s on a huge commercial website, monetized without your permission and available for the entire world to see. There are laws against this kind of thing for a reason.
Some of us are more comfortable than others with the idea of our music being shared freely and indiscriminately – the good and the bad shows alike. Fortunately, it’s our right to have our own creations shared indiscriminately should we choose that path. But it’s also our right to maintain some control over what people can, and more importantly, can’t do with our work.
Even the Grateful Dead, who have always encouraged fans to record and share their performances, draw the line somewhere. It might sound surprising at first, but to them, this new model of sharing, whether on YouTube or on a pirate website, is antithetical to everything they stand for.
Their official policy is generous and free-spirited, but also clear-cut: “No commercial gain may be sought by websites offering digital files of our music, whether through advertising, exploiting databases compiled from their traffic, or any other means.”
That would clearly preclude YouTube, as well as any pirate website that sells advertising (and most do these days). Technology may change, but ethics don’t: “The Grateful Dead and our managing organizations have long encouraged the purely non-commercial exchange of music taped at our concerts and those of our individual members. That a new medium of distribution has arisen – digital audio files being traded over the Internet – does not change our policy in this regard.”
For almost a decade, musicians and fans alike have looked overwhelmingly to the positive side of a “free” and open musical culture. But if anyone and any company can use our music however they choose, then what rights do we lose? Do we lose the right to choose whether our music can be used in TV commercials, movie soundtracks or political campaigns? Do we lose the right to choose when and whether or not we will work for free? Do we lose the right, like the Grateful Dead, to demand that our performances never be monetized, whether directly or indirectly, through the sale of ads?
These concerns are not theoretical ones: David Byrne made headlines not long ago when he successfully sued former senatorial candidate Charlie Crist for using his hit song “Road To Nowhere” in a political ad without his consent. Tom Waits likewise successfully sued both Frito-Lay and Audi for using a Tom Waits imitator after he had refused to license his music in their commercials – at any price.
Those are just two stories of artists taking control from among countless thousands of examples. And you too can start taking control of how your work is shared and monetized, even online. It doesn’t even require the hassle and grand gestures of a lawsuit. You can do it from right in your bathrobe:
A piece of weak-tea legislation called the Digital Millennium Copyright Act (or “DMCA”) is what allows sites like Google and YouTube to get away with their “Share First, Ask Questions Later” policy. But that same bill also allows musicians and other content creators to have their work removed from these websites when it is posted without their consent.
In fairness to Google, they’ve been very good about increasing the effectiveness of the tools that allow artists to flag, control, monetize or even remove unapproved content from both YouTube and Google Search. What they’ve been less good about is spreading the word. I’m amazed how many regularly exploited artists are unaware that they actually have the power to do something about it.
In the case of YouTube, you can lay claim to any videos or tracks that belong to you right now, without getting up from your chair. It only takes a few minutes.
And, if you’d like to maintain a fan-powered presence on YouTube, you don’t have to have your music taken down entirely. Using the available tools you can even decide to leave your tracks up and instead have YouTube give you analytic data about your viewers, give viewers links to places where your music can be purchased, or even monetize your tunes directly, via advertising.
Of course, if you’d like to limit the amount of your material that appears on YouTube so that you can give your fans a real incentive to buy your music if they like what they hear, then you also have the option of removing the offending tracks or videos altogether.
Not long ago, this process was a real drag. As soon as you took down an unauthorized video in one place, it would just crop up again later in another. But more recently, YouTube launched a tool called Content ID, which allows you to identify your music just once, and have it be recognized in perpetuity. From there on out, you can have YouTube automatically block, track or monetize that music, no matter who uploads it and when.
This service is not restricted to the big labels. If you’ve had issues with your music being uploaded against your will in the past, you may be eligible to sign up for free. Not to be outdone, SoundCloud has also launched a content identification system of its own.
A Note About Fair Use:
Some might be concerned that these tools could be abused by blocking fair use. But in my personal experience, I have found this not to be the case. When we posted our recent “Studio as an Instrument” panel to YouTube, several American major labels started selling ads on our video, which included brief song snippets. This was done automatically by the Content ID system.
Obviously, I’m all for labels and artists getting their share, but fair use is fair use, so on principle, I contested the claims. Within a day, all the American labels had retracted their claims, basically saying: “Yeah, that’s obviously fair use.” (The only one that didn’t pull their advertising and claim was a label from Germany, where the concept of fair use does not exist.)
I had a similar experience with my own online reel as well, when SoundCloud automatically removed a song I had obtained the artist’s permission to include. I replied to the claim using their online dispute center, and within a day, the label had approved the use and the song was quickly restored.
This kind of protection is not limited to YouTube videos. The DMCA also allows you to have nefarious results removed from Google Search completely. Bear in mind that this won’t shut down the website in question (so direct links will still work), but it at least ensures that users won’t be able to find the stolen work through search engines.
This is great for cutting off websites that sell ads on your music without your consent, or give away torrents of your entire discography. I’ve used it successfully to take down links to unrepentant plagiarists and unauthorized monetizers of my articles as well as my music productions. (There were far more of each than I had expected.)
Today, I’m amazed at how many complete discographies and full albums I’m still able to find through Google Search and on YouTube, especially when blocking that kind of behavior has become so easy. The tools are there, but the word has just not gotten out.
Even if you don’t care about your sales and want your own music to be shared as widely and completely as possible, using these tools can still allow you to learn about and engage with your fans, or to stop unscrupulous companies from monetizing your work without your consent.
Remember that whenever unauthorized websites sell ads on the traffic generated by an artist’s entire discography, whether directly or indirectly, it adds to the bloated bottom line of technology companies while leaving artists, producers and engineers eating table scraps.
Free music can be great. I listen to plenty of it, as ethically as I can. I’ll also be the first to tell you that it’s a good idea for almost any artist to make some of their music available publicly and without charge. But “free” is, and should be, a choice. When that choice is taken away, it becomes a meaningless gesture.
Both the rights and the earning potential of so many artists have been sacrificed in the past ten years as extremely profitable technology companies have lobbied hard to turn ‘Copyright’ into a dirty word. But the truth is that Copyright is your creative bill of rights. It ensures that:
1) No one has the right to take your work and use it for his or her own financial gain without your say.
2) No one has the right to pressure you into working for free if you do not want to.
3) And no one has the right to take your art and use it to support his or her own political agenda without your agreement.
Stand up and respect yourself. If you haven’t set aside a moment to gain some basic semblance of control over your own music online, now is the time.
Bustling cities like New York, L.A., San Francisco and Nashville may boast more recording studios per square foot than just about anywhere else on earth. With such high concentrations of talented professionals, it’s not surprising that so many commercial records are made, at least in part, inside of one of these major markets.
Then there’s a second tier of studio towns – places like Chicago, Miami, Seattle, DC, Atlanta, Philadelphia, Portland, Boston and Austin – where recording culture is alive and kicking, although perhaps not quite as densely packed or competitive as it is in the big four.
But major cities aren’t the only places to make records. Artists from Led Zeppelin and U2 to Bon Iver and Beach House have long escaped into the countryside to complete their crowning works. With that in mind, this week we’ll look at three “recording retreats” – studios with onsite living accommodations that bring the luxuries of a metropolitan tracking room into quieter, more affordable, more scenic locales.
Echo Mountain Recording
Asheville, North Carolina
There’s a recording studio in the small city of Asheville, North Carolina, that sits nestled into the Blue Ridge Mountains, not far from where the French Broad River meets the Swannanoa.
This region, nicknamed “Land of the Sky,” has developed something of a hybrid culture over the years. The population is fairly small and spread-out, but the area is home to a startling number of transplants from the coastal cities.
These ex-pats from New York, California, Washington, and bustling cities all around the U.S. are often credited with giving the place its dynamism. But by and large, they come to the town to adapt, not to overturn, and so it’s become one of those rare places where cosmopolitan tastes meet homespun values. The whole city sits at the center of a culture that revolves in part around craftsmanship and art.
“I think the city of Asheville itself is such a big part of why people come here to record,” says Echo Mountain‘s chief engineer Julian Dreyer. “People will be playing a show in town, and come for a studio tour and they’ll say ‘Wow, there’s this great studio here, and the town is incredible. I just want to spend two weeks here and make a record.’”
“There’s a huge appreciation here for arts and crafts, and all these little communities of artists and musicians,” Dreyer says. “That attitude leads to stuff like great food and restaurants. The bar for that is set so high now that unless you’re on top of your game you just won’t survive. So it’s a city of not even 100,000 people, but we’ve got food here that would rival your most ‘hipster’ parts of Brooklyn.”
“And there’s probably more little breweries here per capita than almost anywhere,” he says. They’ve even been voted “Beer City USA” three years in a row, just narrowly beating out Portland, Oregon. “People here are so proud of Asheville that they get so pumped up to vote in that kind of thing.” And it shows: Dreyer has a slow-spoken manner and just the shadow of a Carolina drawl, but he livens up when he talks about Asheville even more than when he talks about microphones.
Of course there’s more to Echo Mountain Recording than just the town. It’s more than just a studio – almost a little musician’s complex in its own right, sporting four full-fledged production rooms, the largest of which – built into a deconsecrated old church – houses a drool-inducing Neve 8068 console, a Studer A800 reel-to-reel and a full-blown Pro Tools HD3 system.
This main space, as well as Echo Mountain’s newer API-based studio in the adjoining building, was designed by the legendary George Augspurger. Two smaller studios round out the space, offering even more affordable rooms for overdubs and the like. None of them are hurting for instruments or mics either, and vintage Telefunkens, AKGs and Neumanns float from room to room.
Records made at Echo Mountain earned three GRAMMY nominations and two wins this year, but don’t let names like Smashing Pumpkins, Steve Martin, T. Bone Burnett, War on Drugs, G Love, VHS or Beta, The Avett Brothers, Zac Brown or Band of Horses scare you away. They also spend a fair chunk of time recording new acts from out of town, as well as local and regional artists.
Saint Claire Recording Company
Lexington, Kentucky
“Our motto for the longest time has been ‘Relax, Record,’” says John Parks, co-owner of Saint Claire. “We want to get you out of the city and – hopefully – to turn off your cellphone and close your laptop.”
“Often, relaxation is the last thing that people think about when they’re recording,” Parks says, “but it’s actually a pretty important thing, I think. Is that 15th hour as productive as that 3rd hour in the middle of the day?”
This concept factored into most of the decisions that the Parks made when they built Saint Claire Recording Company, a 7,800-square-foot facility just five minutes outside of downtown Lexington, Kentucky.
For anyone who hasn’t been, these parts of Kentucky can be astonishingly picturesque, especially around sunset, as dusk gathers around the rolling hills. It’s long been the style in Kentucky to cut back the trees and nurture the local bluegrass for grazing, so that when you do catch a large black oak standing on the horizon, it’s silhouetted against the sky like an old watchman looking over the homestead.
“We thought that instead of building just another studio in Nashville, we could try and tap into that slower pace of life, and maybe help put Kentucky, and Lexington in particular, on the studio map.”
If the Parks’ goal was to take the accoutrements of a world-class SSL 9000J studio and put them into the context of small-town living, they have succeeded. But as quaint as Lexington might seem to a New Yorker, it’s certainly not the boondocks. It may only be the 62nd largest city in the U.S., but it’s the 10th most educated, with nearly 40% of residents in the city proper having earned college degrees.
It has its own attractions too: the bourbon trail, historic museums, and horse racing – particularly the Kentucky Derby – which takes place not far away in Louisville, KY, a place Parks describes as “like a metropolis” compared to the small-but-growing city of Lexington.
Saint Claire has become something of a destination for some busy coastal engineers including the legendary Tony Visconti, Neil Dorfsman, and our own Zach McNees. The clients they bring with them come from places as far-flung as Japan, Spain, Ireland and Canada, and to accommodate them all, Saint Claire has five bedrooms right on premises.
“When the client is here, we want them to treat it like it’s their house,” Parks says, “and when you shut the door behind you at the end of the day, you wouldn’t even know there’s a studio footsteps away.”
Since it attracts so many traveling producer/engineers, the studio’s house engineer, Tim Price, often finds himself putting on his assistant hat. It’s a role he’s equally comfortable with, having risen up from the ranks of intern at Saint Claire.
And although the recording space is well-separated from the living quarters, the studio itself was designed with special attention placed on sight-lines:
“When we were designing it I wanted to squeeze in as many separate isolation booths as we could,” Parks says. “We ended up with four. And with the way the windows are placed it’s the closest you ever might come to the feeling of playing live in one room, while still being able to turn up the amps nice and loud.”
But as much as it’s equipped for a full-on rock session, Parks says they attract more singer/songwriters. They’re often the ones, he says, that best understand the value of getting unplugged and closing the door.
Black Dog Recording Studio
Stillwater, New York
Luckily, New Yorkers don’t have to go far to get away from it all. Black Dog Recording Studio sits just outside of Albany, tucked into the foothills of the Adirondack Mountains, in the small town of Stillwater, about three hours’ drive up the Hudson River from Manhattan.
Black Dog sports a 600-square-foot live room, a 400-square-foot control room, and three isolation booths. There’s a three-bedroom, two-bathroom house available on the property, and the gearlist offers a tempting melange of top-shelf condenser and ribbon microphones, a unique mid-70s Sphere console, and some early American tube preamps from Collins, Gates and RCA, in addition to the more standard fare from API and Quad Eight.
Black Dog may be the youngest studio on this list, but its amenities keep growing: This spring, studio manager Seamus McNulty says, they plan to add a 2” Studer machine and some rustic cabins for extra lodging.
McNulty describes the Rod Gervais-designed live room at Black Dog as “bright and tight” – and ideal for recording a whole band live together on the floor if they choose. For those who want even more control, the three iso booths are ample, with the smallest of them capable of fitting a harpist. The space is rounded out with a small library of guitars, amps and keyboards, including an original B3, and a complete line of vintage Ampegs.
Despite its size, gear and proximity to the big city, Black Dog is a shockingly affordable room (one of the perks of setting up shop in a small town). The space has attracted its share of notable upstate acts like Ra Ra Riot, Sean Rowe and Railbird, and now, a growing number of New York City producer/engineers like Joe Blaney, Jonathan Jetter, and Andrew Maury, who gives the space rave reviews.
In the days of vinyl and tape cassettes, providing your listeners with information about your music was simple: Everything from song titles to songwriters, lyrics to album art, engineering credits to UPC codes, could be included in the sleeve or album sticker, and that was that.
Today, growth in the industry is driven by music downloads rather than physical sales, which continue their slow decline. But even as consumers increasingly turn away from physical media, we haven’t lost liner notes entirely. They’ve just begun to move onto our hard drives and into the cloud.
Current tools for sharing essential info, basic credits and album artwork on digital files can still be improved industry-wide. But as things stand, if your release is missing all of these things, the fault does not lie with the technology. And as complex as all the metadata options may sound, breaking them down into a few main categories can help bring the whole field into focus.
Today, we’ll be discussing CD-Text, ID3 tags and online databases – the three main vehicles for distributing the information and extras that you can provide with a digital release.
CD-Text
The first way of sharing album information with fans digitally has been around since 1996, and can be burned right into CDs. If you ever pop a disc into a car stereo, or a home DVD/CD player, and see the title of each song listed on your display, you can thank CD-Text for that.
The CD-Text protocol allows us to bake a wide variety of information right onto the disc: The names of artists, composers and arrangers, as well as titles of albums and songs, and even the boring-but-essential stuff like UPC codes for albums and ISRC codes for each song (which help with tracking sales and radio play).
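If you’re entering these codes yourself, it’s worth sanity-checking them first. An ISRC is always twelve characters: a two-letter country code, a three-character registrant code, a two-digit year of reference and a five-digit designation code. Here is a minimal validator sketch (the function name is my own):

```python
import re

# ISRC layout: country (2 letters), registrant (3 alphanumerics),
# year of reference (2 digits), designation code (5 digits).
ISRC_PATTERN = re.compile(r"^([A-Z]{2})-?([A-Z0-9]{3})-?(\d{2})-?(\d{5})$")

def parse_isrc(code):
    """Return the four parts of an ISRC, or None if it is malformed.
    Accepts both the dashed display form and the bare 12-character
    form that actually gets embedded on the disc."""
    match = ISRC_PATTERN.match(code.strip().upper())
    return match.groups() if match else None
```

For example, parse_isrc("US-S1Z-99-00001") returns ("US", "S1Z", "99", "00001"), while a stray character or a missing digit returns None.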
If you get your music professionally mastered, your mastering engineer can put this CD-Text information directly into a physical CD “Premaster” or a DDP file that you would send to a large-scale replication house.
But even if you’re just duplicating short-run copies at home or with a small-scale duplicator that can’t handle the DDP files used at big replication firms, that doesn’t mean you have to leave this information off of your release.
If you’re burning your own CDs from a set of raw WAV or AIFF files sent by your engineer, many simple consumer programs can include CD-Text these days. In the case of iTunes, all you have to do is check a box to enable CD-Text. For a bit more power and flexibility, there are affordable programs like Roxio Toast or the free “Burn” for Mac.
Even though sales of physical CDs continue to shrink, some people still prefer them, and those listeners add up to nearly half the total music-buying market. They’re an even more significant force than that if we’re talking about albums rather than singles. Those who still prefer CDs often listen to music on conventional disc players, and if you leave out CD-Text, you’re leaving out an essential perk for many of your listeners.
Artwork, album credits and other liner notes can’t fit into CD-Text, but the answer here is obvious: Fans of CDs like the format in part because of its physicality. All of this can be included in a physical booklet – so include one!
It’s also worth noting that CD-Text does not embed any information in the music files themselves. Rather, it is part of what you might call the container file for the CD. This means that if you import your CD into a computer, information that is included only via CD-Text may not make the transition, and you’ll be leaving the majority of new music fans without even the most basic information, such as song titles or artist and album name.
Online Databases (CDDB, AMG and more)
Computer-based music players and portable listening devices may not recognize CD-Text, but they have another way of finding and displaying the information – and even the artwork – associated with your music: These programs rely on comprehensive online databases to pull this data from the cloud and store it along with your files on your drive.
To provide this feature, iTunes uses Gracenote’s CD Database or “CDDB.” Windows Media Player provides a similar free service using AllMusic’s “AMG” database. There are also slimmer databases that are free for small-scale software developers to integrate into their programs, such as freedb and MusicBrainz.
Getting your art and information into these databases isn’t hard. If you’re releasing an album through an already-established label or a digital distributor like CD Baby or Tunecore, they’ll help you add your info to the major databases when you submit your music.
If you’re on your own on this front, you can enter tags and submit them to the CDDB easily through iTunes. To get recognized by Windows Media Player, you’ll have to mail a retail-ready physical CD to AllMusic. They take care of new submissions in 4-6 weeks.
ID3 Tags
These online databases use the same protocol as computer-based music players and portable devices like iPods: The data is stored using “ID3 tags,” which embed information in MP3s, AACs, and even uncompressed WAV files.
Unlike CD-Text, ID3 tags are written right into each file. This format has many of the same fields as CD-Text, plus a few more like “Album Artist” that are handy for keeping things organized inside of a large library. With ID3, you even have the option of including album artwork, which is impossible with CD-Text.
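“Written right into each file” is meant literally. Modern files carry the far more capable ID3v2 format (which is where artwork and long text frames live), but the original ID3v1 layout is simple enough to build by hand, and it makes the point: the tag is just 128 bytes of fixed-width text appended to the end of the audio data. A sketch, assuming plain ID3v1 without the v1.1 track-number extension:

```python
# ID3v1: "TAG" marker, then fixed-width text fields, then 1 genre byte.
FIELDS = (("title", 30), ("artist", 30), ("album", 30),
          ("year", 4), ("comment", 30))

def make_id3v1(genre=255, **info):
    """Build a 128-byte ID3v1 block to append to the end of an MP3.
    Fields longer than their slot are truncated; shorter ones are
    padded with null bytes. genre=255 means 'unset'."""
    tag = b"TAG"
    for name, size in FIELDS:
        value = info.get(name, "").encode("latin-1", "replace")
        tag += value[:size].ljust(size, b"\x00")
    return tag + bytes([genre])

def read_id3v1(data):
    """Parse the trailing 128 bytes of a file; None if no ID3v1 tag."""
    tag = data[-128:]
    if len(tag) != 128 or not tag.startswith(b"TAG"):
        return None
    out, offset = {}, 3
    for name, size in FIELDS:
        out[name] = tag[offset:offset + size].rstrip(b"\x00").decode("latin-1")
        offset += size
    out["genre"] = tag[127]
    return out
```

Because the tag sits entirely outside the audio stream, players that don’t understand it simply ignore those trailing bytes – which is exactly how it could be bolted onto existing MP3s in the first place, and why ID3v1’s cramped 30-character fields eventually gave way to ID3v2.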
To add ID3 tags to your releases, you can either work with a label or digital distributor like CD Baby or Tunecore, or add them yourself using a simple tag editor – many of which are free or affordable. Popular programs include MP3Tag (PC), Tagr, Tag, or Fission (Mac), ID3 Editor, Jaikoz and even iTunes (Mac/PC). You could also ask your mastering engineer to help with this. I do it as an added service for my clients all the time.
For some inexplicable reason, there are a few fields in CD-Text that are not included in ID3: chief among them are slots for UPC and ISRC codes, which can be used to help track sales and radio play.
Thankfully, ID3 incorporates an open-ended “comments” section that allows for the inclusion of this data as well as all sorts of extras, like web addresses, album credits, thank-you lists and the like.
In theory, there should be no limit to what you can add in the comments section, making it a near-perfect place to include digital liners. But in practice, some programs truncate the comments section. iTunes, for instance, will not let you include more than 255 characters in this field. And if you use another program to add more text, it will be chopped down to 255 characters when brought into iTunes.
(This is one of the last major issues with iTunes, along with Apple’s refusal to make sure that transparent, intelligent volume normalization is enabled by default. Fixing these two shortcomings would immediately help to slam the book shut on two of engineers’ favorite complaints: In one fell swoop, we could bring an end to the lack of proper digital accreditation and help bring the loudness war to a close once and for all. As a side note, some people will tell you that automatic volume normalization features like iTunes’ Sound Check degrade sound quality. Perhaps this was once true, but as things stand now, it is wrong. What these technologies actually do is simply and transparently turn down the volume on the loudest albums, providing a more seamless listening experience and a disincentive to make albums sound worse just so that they can sound louder.)
An Under-Explored Frontier: The Digital Booklet
So, if the biggest digital music retailer in the known universe has what is essentially a broken “comments” field, then what’s an artist to do about comprehensive digital liner notes?
Fortunately, there’s an alternative that has been available for several years now, but remains woefully underutilized. iTunes and Amazon now allow artists to include digital booklets of 4 pages or more along with their releases. It costs next to nothing to make these virtual liner notes available to your fans, and I recommend it to anyone who asks. (And even to people who don’t.) Unlike physical CD inserts, these digital booklets use a 4:3 ratio to take advantage of the full viewing area on-screen. Adapting the CD art you’re already using into this format is not difficult at all.
In an even more ambitious move, Apple announced a next-generation interactive booklet called the iTunes LP in 2009. Thanks to the fact that it was initially restricted to major labels only – as well as the fact that those labels were less than enthusiastic to participate – the format has yet to take off.
However, this new high-res, interactive take on metadata still has plenty of promise. And it’s now open to independent artists. Hopefully, at some point, they might lead the way in doing a lot more with it than the majors have. Compared to building an interactive app for new albums, such as Björk and Philip Glass have done, creating a stunning iTunes LP takes relatively little skill.
Like many good ideas, the possibility of engaging, comprehensive digital liner notes may have become feasible before the market was ready for it. But it seems likely that immersive, full-featured digital album art will someday become the norm. It’s certainly one of the things I miss most about physical formats.
Even if digital media can never completely reproduce the tactile satisfaction of a format like vinyl, if we can begin to offer even a sliver of that experience, by moving metadata out of the realm of tech geekery and into the realm of art, we’ll have gone a long way toward improving the experience of recorded music for countless millions of fans.
Towards the end of Dave Grohl’s directorial debut, the rock documentary Sound City, drummer Mick Fleetwood warns us about “the downside” to all the technological advances that have so changed the face of music production: That they might lead a person into “thinking that ‘I can do this all on my own.’”
“Yes, you can do this all on your own,” Fleetwood quickly concedes. “But you’ll be a much happier human being to do it with other human beings. And I can guarantee you that.”
Sound City is at its best whenever it takes this tone – which it does most of the time. Those of us who feared (like I did) that the film might come across as an ode to diamond-encrusted buggy whips can breathe easy.
That’s not to say that Grohl and his interview subjects – the likes of Tom Petty, Paul McCartney, Rick Rubin – don’t pine for increasingly impractical analog technologies that have been largely supplanted over the years. Or that they don’t sometimes look down their noses at the digital tools that have come to dominate music production. They certainly do both, from time to time.
But when they do, it’s largely because they’re out to promote the values that these outmoded technologies tend to reinforce: Practice, preparation, dedication, collaborative spontaneity and that in-the-moment experience of making inspiring music with inspired peers.
Despite its steadfast and somewhat conservative perspective on how music should be made, the tone of Sound City remains one of aspiration, inspiration and affection – never derision or condemnation. Even Neil Young who, now nearing his 70s, can be downright crotchety when it comes to audio technology, is made to seem accepting of other ways of working – even as he makes a curiously unstudied remark about the birth of the CD.
His is not the only small technical lapse that may raise eyebrows among sound engineers in the know. Immediately after extolling the virtues of the amazing ambient character of Sound City’s live room and how good it is for drums, the film cuts to the drum sounds on Fleetwood Mac’s 1975 release by way of example. Although it’s a damn cool sound, these are in fact some of the deadest, driest drum tracks imaginable, and could probably have been made just about anywhere, given enough baffling.
But these questionable moments don’t detract much from the movie at all. As much as Sound City pivots around changes in technology, it never obsesses over the geeky, techy details. For the most part, that’s actually a good thing. In addition to keeping the pace light and forward-moving, it allows the film a potential to reach beyond the market of a few tens of thousands of working musicians, engineers, and recording enthusiasts.
A brief cameo by that legendary designer of recording consoles, Rupert Neve, sets the tone in that department: Director Dave Grohl hams it up for the camera, nodding and smiling as if dumbstruck while Rupert Neve talks about his namesake console, which the film centers around. Grohl’s feigned ignorance is likely to comfort lay audiences as he pretends that basic audio terms like “microphone amplifier” and “crosstalk” are the very height of techno-babble.
This kind of self-effacing affability is part of what makes Grohl so likeable throughout Sound City. As much as he tries to make the studio and its vintage recording console the stars of the movie, it’s the personalities of the subjects that shine through. Perhaps his own, most of all.
Grohl can be both silly and sincere, sometimes at once. He has a cadence that borders on that of the ADD-surfer dude, and he seems unpretentious and un-self-serious, displaying the kind of understated confidence that comes along with knowing that you’re really damn good at playing the drums.
You don’t have to like the Foo Fighters to like Dave Grohl. And that’s a good thing, because as much as this is a story about a studio and a way of working, it’s also a personal story for Grohl. Nirvana’s Nevermind, the album that changed his life and the lives of so many others, was recorded there. And that story is tied up with the story of Sound City.
Although Grohl likes to wax poetic about how great the Sound City Neve console sounds, how magical the room was, and how their way was the best way to make real records, apparently the rest of the world didn’t think so for long stretches at a time. The truth is that the crusty old studio with the carpet on the walls was on the verge of going under more than once before it finally closed for business in 2011.
It had been on the verge of bankruptcy just before the Nevermind sessions came through. And it wasn’t until after that record shot past Michael Jackson and Michael Bolton on the way to #1 that the studio was hopping again.
Grohl romanticizes that console and that space, but in reality, it was the great music recorded within its walls that put the studio on the map to begin with. And after a long dark period, it was great music recorded there once more that made it a hot spot again. None of the gear had really changed.
The truth is that compared to the power of a great record, a good room and a great console have almost no power at all. Sound City’s many successes and failures are clear testament to that.
Although that point may have been lost on Dave Grohl at times, he does a surprisingly good job as both director and emcee. Paul Crowder’s editing and pacing are commendable as well.
The one place where the film gets just a touch self-indulgent is toward the very end when Grohl – rather than taking on the quixotic mission of trying to save Sound City Studios – simply buys their old console for himself and installs it in what’s essentially an oversized home studio. Here, Grohl collaborates with a string of A-list rock stars, to mixed results.
Some of the pairings are more awesome in theory than they could ever be in real life, such as when Sir Paul McCartney and Nirvana bassist Krist Novoselic swing by to join Grohl in writing a new rockish romp, reminiscent of Helter Skelter, right on the spot.
A jam session with Trent Reznor of NIN and Josh Homme of Queens of the Stone Age leaves the two seeming just a bit pompous compared to the down-to-earth Grohl, but the result is a downright memorable instrumental track, plus a few mixed words in defense of both digital tools and formal music training.
For me, the standout musical moment was an unexpected one: Lee Ving of Fear sings a bewildering punk rock tune at breakneck speed that sounds just a little bit like Nomeansno. Out of the entire movie, it’s probably the one song that Kurt Cobain would have really, really liked.
Even this whole section, the spottiest in the movie, is still a good watch. The only thing that really doesn’t work in the entire film is – ironically enough – the sound mix.
At times, the level fluctuations in Sound City are laughably ill-advised. I’ve never in my life found myself riding the volume control on my remote like I had to while watching this movie.
Perhaps those jarring jumps in loudness between music and dialogue were intended to be exhilarating. Maybe they even work inside of a movie theater. But seeing that the movie is playing in exactly two theaters worldwide, it’s safe to say that the majority of viewers have been watching at home, just like me. In this context, the rollercoaster levels are at times beyond awkward, even bordering on frustrating.
But these quibbles aside, Sound City is a surprisingly able debut. Regardless of whether you’re 100% sold on all of the film’s conclusions, it makes its case warmly and often, and it’s easy to recommend to any fan of rock music or recording technology.
At $7 to rent and up to $13 to download, the price is slightly higher than average, but it makes sense for a niche film like this one. Based on the overwhelmingly positive user reviews for the movie, it’s safe to say that most of the thousands of people who have ordered it so far have felt it was money well-spent.
In the end, it’s an uplifting movie, even if the moment of the studio’s shutting down strikes you as an honest tragedy. I found myself getting a little choked up as the original studio came to a close. And not just because it was so sad, but because the movie made it all seem so avoidable, as if it were merely principled stubbornness over technology and workflow that came between the studio and financial solvency.
As Sound City reached its heartbreaking nadir, my girlfriend turned to me and asked, “Why didn’t they just adapt? It seems like it would have been so much easier.” It’s a good question. And I didn’t have an answer. I still don’t.