About 5 minutes outside of the center of Providence, Rhode Island, there’s an old industrial mill town called Pawtucket. If that name sounds familiar, chances are that it’s because you’ve heard it on the TV show Family Guy, which takes place in a fictionalized version of the area.
But as far as anything most people would call “culture” goes, that’s about all that Pawtucket had contributed to the outside world for a long time. Since the 1700s, its exports had mostly been “things” like textiles, iron, and later, toys, when Hasbro moved into town.
To be fair, Pawtucket has been the birthplace of a couple of renowned musicians over the years. The early synthesizer pioneer Wendy Carlos and Americana guitarist David Rawlings come to mind, but both of them moved to New York City to find creative opportunities when they were getting started.
Today, that pattern is beginning to reverse.
The “SoHo Effect”
In 2004, The New York Times ran a story about an effort made by the town of Pawtucket to attract arts and artists into the community.
The old mills that had once contributed to a booming industrial revolution lay dormant and unused.
The town’s board, looking south to New York City, eventually realized that these were exactly the kinds of buildings that had powered the wildly profitable waves of gentrification that had swept through NYC over the past four decades.
By then, there was clearly enough of a pattern to try and emulate:
Step 1: Artists move in, renovate, and give the place some class.
Step 2: A few pioneering businesses set up shop and make the area even more interesting.
Step 3: Developers start getting involved and pitch the neighborhood to an even better-heeled, second wave of gentrifiers.
Step 4: Everybody makes money, and hopefully, no one worth keeping around gets priced out.*
(*This is the Very Tricky Part.)
In the early 2000s, the plan worked pretty well. When the Times profile was written, Pawtucket was offering loans to restaurateurs to open trendy eateries; architects and graphic designers from New York were moving into 7,500-square-foot former shoe stores to live and work; and huge warehouses were being converted into enormous, airy lofts that sold briskly to young professionals who would commute to Boston.
In 2008, PBS showcased a feature-length documentary called Pawtucket Rising, chronicling the town’s transformation from dusty run-down mill town to “Rhode Island’s Creative Community.”
Things slowed down a bit in the great recession shortly thereafter, and property values dropped slightly, much like they did everywhere else, but by then, Pawtucket had already been transformed.
Machines with Magnets
In 2003, producers/engineers Keith Souza and Seth Manchester teamed up at Souza’s all-analog recording studio in East Providence, RI, which he called “Machines with Magnets.” Soon after, they hooked up with some backers, bought an old industrial building in nearby Pawtucket, and began transforming it into something bigger and more ambitious. This new and current version of Machines with Magnets opened in 2006.
The building houses not only a full-fledged recording studio, but an 800-square-foot art gallery and a 2,000-square-foot performance space, complete with a full bar, and wired up for 32 channels of live recording. Equipment onsite includes a 26-channel Neotek Elite console, API DSM 24 custom sidecar, Thermionic Culture Fat Bustard 12-channel valve mixer, an Otari 2″ machine, and a long list of effects, mics, and instruments. Both owners live onsite in their own loft apartments.
“The studio supports the building for the most part,” says Seth Manchester.
“Shows make a little bit of money, but mostly it’s recording. I’d say it’s about 75% bands, and maybe 25% from film and voiceover work. That kind of sustains the whole building, unless you include the two living spaces which help cover the mortgage too.”
Yes, you read that correctly, New Yorkers: there are places in the world where recording studios support less profitable businesses like bars and art galleries, rather than the other way around.
Of course, like anything that sounds too good to be true, there’s a catch: as vibrant as the burgeoning local music scene may be, it’s not enough to keep the studio churning all by itself.
“It’s probably about 60% local, and maybe 40% bands from New York,” says Manchester. “We get a lot of bands from there. I think it’s because we’re a little bit of a getaway, and also that we’re cheaper than a lot of studios this size.”
Although New York City is about a 2.5-hour drive away, the sheer volume of musicians who live here (plus the staggering cost of real estate) conspires to make it so that clients from New York far outweigh those from Boston, which sits just 45 minutes to the northeast.
To date, Machines with Magnets has attracted a number of gritty and often un-categorizable bands from small labels like Thrill Jockey, Sub Pop, Warp, Luaka Bop, Western Vinyl and Load Records.
The Value of a Scene
Throughout most of the 20th century live shows acted as a bit of a “loss leader” for the sale of recorded music.
The truth is that good live shows are expensive to put on, they don’t scale up very easily, and if you want them to make a respectable amount of money, the unfortunate reality is that you kind of have to charge a lot more than some of your audience might be able to pay.
At Machines with Magnets the tradeoffs are similar. They keep shows inexpensive and infrequent – Seth Manchester says they have about 8 each month – but they’re also invaluable. Shows help create a scene, without which their business and their town just wouldn’t be the same.
Sometimes the shows even lead directly to studio work:
“The Skull Defekts played a show here,” Manchester says, “and when they found out their next stop on the tour got canceled, they just decided to do an album here the next day. They recorded here for 15 hours straight the day after their show and released it on Thrill Jockey.”
Sometimes, concerts can even act as a kind of live, in-person crowdfunding campaign. To that end, MWM has developed a program called “The People’s Recording Project,” in which they use shows at the venue to help local musicians supplement the cost of their productions.
And sometimes, shows are just damn good fun that can help bring an entire community together:
“I think the biggest we ever had was Dan Deacon, two years ago,” says Manchester. “It was a huge crowd – almost over capacity maybe. People were really excited. It was pulling from every demographic in Providence, from the college kids to the people who’ve been living in this town forever.”
“There’s a lot of opportunity in a place like this,” Manchester says. Then with a friendly laugh, he takes a moment to remind me of Patti Smith’s advice to young artists in recent years. (Namely: “Don’t Come To New York.”)
But in Providence and Pawtucket, things feel different: “You can buy and develop something here,” he says. “You can have time to work on art, to do what you want to do, and make a living at it.”
It seems to be working for Machines with Magnets. Seth Manchester and Keith Souza are keeping busy with progressively bigger bands from bigger labels, all while staying affordable and finding novel ways to support artists from their local scene. They’re even attracting work that would have once stayed local to New York City.
If you’re in a band, considering affordable options outside of the biggest cities, perhaps MWM is the kind of place you should add to your list. And if you’re a New York-based producer or engineer worried about losing work to these out-of-state usurpers? Go with them. Consider bringing your New York know-how and connections to smaller markets and help them elevate what they do, even if it’s just on one project at a time.
There’s a lot that nearby cities can offer to each other. And, just as it is with building new and vibrant neighborhoods, it’s up to the artists to lead the way.
A few years ago, Peterson Goodwyn of diyrecordingequipment.com was living in Milwaukee, WI, perhaps an unlikely place to pursue his dream of becoming a recording engineer.
Goodwyn and his fiancee had been bouncing from one post-college job to another, hoping to land some kind of meaningful and sustainable work. At a certain point, he says that “we kind of threw our hands up in the air,” and the two decided to move to Seoul, South Korea for a year and teach English.
Almost immediately, things changed.
In Korea, Goodwyn met other temporary expats, and discovered there were “thousands of people, like me, who all of a sudden had disposable income.” Luckily for Goodwyn, a good number of them got to thinking it might finally be time to make that record they were always talking about.
“I think I ended up recording something like 12 albums that year,” he says.
Discovering DIY Electronics
While in Korea, Goodwyn also began toying around with electronics in earnest.
His first venture into this world was with a DIY preamp kit by Hamptone. From there, he branched out, trawling the web for new kits and tutorials, and scouring through the open-air electronics markets that Seoul had to offer.
One of the things that disappointed him was how “dispersed and intimidating” all the available information could be. Taking a cue from some of the online open-source communities he loved, Goodwyn began creating a free, comprehensive database of the most popular resources and tutorials available for DIY gear geeks.
“I wanted to bring them all together in one place,” he says. “What are all the preamps that are out there? What are the compressor kits?”
The website he launched, diyrecordingequipment.com, began with “no commercial aspirations.” Eventually, that would change too. Today, selling entry-level recording kits is Goodwyn’s primary source of income. (Although he still gives away most of what he does for free.)
From Hobby to Job
One of the things that helps Peterson Goodwyn do his job is that he brings with him the optimistic and self-effacing zeal of a die-hard hobbyist.
If you visit his website today, you’ll find that it is still predominantly a free online resource geared toward getting people excited about – and comfortable with – the idea of building their own recording equipment. It even provides links to kits and products other than Goodwyn’s own. And to visit is to feel that this whole new world of circuits and resistor values is something accessible.
“I guess I look into the camera and speak to beginners in a way that others don’t,” he says. “Because I still feel like a beginner.”
“One day in 2011 I thought, ‘I know enough now – or at least I thought I did – to offer a re-amp kit.’ I kind of buried it on the website because I was very queasy about the whole idea of promoting myself or making any profit off of it. And then I got 5 orders, almost immediately. That was a revelation. I had bought enough parts for 3 kits, and I thought that would last a month.”
“Basically my job today is still a continuation of that: I choose a project I’d like to see offered in a beginner friendly way for a really good price; and get the information I need to complete that project.
“That’s pretty much the sum total of my electronics training. I mean, I have a broader base now. But I still come at it very much from the perspective of a musician and an engineer who dabbles in electronics.” That may be exactly what makes his site work.
More recently, Goodwyn has started developing a new project – a kind of ‘lunchbox-within-a-lunchbox’ called “Colour.”
“Sometimes we go to such great lengths to get that last 1% of color and tone,” he says, “harmonic distortion, transient shaping. You might run your signal out to a $15,000 preamp, that kind of thing. The idea here is to just focus on the parts of the circuit that impart that sound.”
To that end, the Colour unit itself is basically a blank chassis meant to sit inside of a 500 series lunchbox. It’s meant to allow DIY enthusiasts to just spend their time working on “the fun stuff.”
“The 500 series rack is by far the most cost-effective option,” Goodwyn says. “And this way, we free up designers from having to think about anything but the audio circuit. The chassis, the power supply, the front panels and IO – Basically all the not-fun stuff is taken care of for you.”
Inside this single-space 500 series unit are three slots, each of which can be filled with a custom-made DIY audio circuit called a “colour module.” Goodwyn plans to start out by offering a few module kits of his own – basically harmonic distortion units with a wet/dry control.
“For lack of a better term, we’ll be doing a tape-ish one; a tube-ish one; a rectifier kind of thing.” And in the spirit of open-source technology, other developers can create their own modules and sell their own kits as well. He says one designer is already at work on an “SSL talkback, ‘crush’ kind of compressor.”
The only catch is that it’s not available just yet. To get the project off the ground, Goodwyn plans on launching a crowd-funding campaign in the coming months. Until then, he’s posting periodic updates and sneak peeks of the circuit on his site.
Finding the Path
When Goodwyn talks about these new designs it’s surprising to think that he landed in this world by accident. These projects seem to consume so much of his mind that you’d imagine he was born to work on DIY kits.
“I’m having a blast,” he says. “Although sometimes in the back of my head I’ll think – wow, this isn’t what I was dreaming of doing. It’s not recording in the studio, exactly. Honestly I never thought I’d be working with circuits!”
“But just the other day I was in the studio, pressing record doing some edits, and I thought – ‘You know what? This is just a job too!’ It’s not necessarily better or more creative than designing circuits, or making how-to videos or building a kit.”
If anything, there was a time when tooling around with electronics and learning the craft of audio design was as important a part of the recording engineer’s job as anything else. Now, as budgets crunch down and musicians take an ever more active role in the recording process itself, that interest seems to be coming back.
An increasing number of young recordists are learning to work with circuits and signal paths in a hands-on, design-focused way, building their own gear from scratch. Diyrecordingequipment.com aims to serve as a doorway into that world.
All this may be a departure from what many young engineers expected their jobs might entail. But if this continues, in many ways, it signals a bit of a return to where the field began.
Justin Colletti is a Brooklyn-based producer/engineer, journalist and educator. He records and mixes all over NYC, masters at JLM, teaches at CUNY, is a regular contributor to SonicScoop, and edits the music blog Trust Me, I’m a Scientist.
A lot of competent audio engineers working in the field today have some real misconceptions and gaps in their knowledge around digital audio.
Not a month goes by that I don’t encounter an otherwise capable music professional who makes simple errors about all sorts of basic digital audio principles – the very kinds of fundamental concepts that today’s 22-year-olds couldn’t graduate college without understanding.
There are a few good reasons for this, and two big ones come to mind immediately:
The first is that you don’t really need to know a lot about science in order to make great-sounding records. It just doesn’t hurt. A lot of people have made good careers in audio by focusing on the aesthetic and interpersonal aspects of studio work, which are arguably the most important.
(Similarly, a race car driver doesn’t need to know everything about how his engine works. But it can help.)
The second is that digital audio is a complex and relatively new field – its roots lie in a theorem set to paper by Harry Nyquist in 1928 and further developed by Claude Shannon in the late 1940s – and quite honestly, we’re still figuring out how to explain it to people properly.
In fact, I wouldn’t be surprised if more people had a decent understanding of Einstein’s theories of relativity, originally published in 1905 and 1916, than of digital audio! You’d at least expect to encounter those in a high school science class.
If your education was anything like mine, you’ve probably taken college level courses, seminars, or done some comparable reading in which well-meaning professors or authors tried to describe digital audio with all manner of stair-step diagrams and jagged-looking line drawings.
It’s only recently that we’ve come to discover that such methods have led to almost as much confusion as understanding. In some respects, they are just plain wrong.
What You Probably Misunderstand About Bit Depth
I’ve tried to help correct some commonly mistaken notions about ultra-high sampling rates, decibels and loudness, the real fidelity of historical formats, and the sound quality of today’s compressed media files.
Meanwhile, Monty Montgomery of xiph.org does an even better job than I ever could of explaining how there are no stair-steps in digital audio, and why “inferior sound quality” is not actually among the problems facing the music industry today.
After these, some of the most common misconceptions I encounter center around “bit depth.”
Chances are that if you’re reading SonicScoop, you understand that the bit depth of an audio file is what determines its “dynamic range” – the distance between the quietest sound and the loudest sound we can reproduce.
But things start to go a little haywire when people start thinking about bit depth in terms of the “resolution” of an audio file. In the context of digital audio, that word is technically correct. It’s only what people think the word “resolution” means that’s the problem. For the purpose of talking about audio casually among peers, we might be even better off abandoning it completely.
When people imagine the “resolution” of an audio file, they tend to immediately think of the “resolution” of their computer screen. Turn down the resolution of your screen, and the image gets fuzzier. Things get blockier, hazier, and they start to lose their clarity and detail pretty quickly.
Perfect analogy, right? Well, unfortunately, it’s almost exactly wrong.
All other things being equal, when you turn down the bit depth of a file, all you’ll get is an increasing amount of low-level noise, kind of like tape hiss. (Except that with any reasonable digital audio file, that virtual “tape hiss” will be far lower than it ever was on tape.)
That’s it. The whole enchilada. Keep everything else the same but turn down the bit depth? You’ll get a slightly higher noise floor. Nothing more. And, in all but extreme cases, that noise floor is still going to be – objectively speaking – “better” than analog.
On Bits, Bytes and Gameboys
This sounds counter-intuitive to some people. A common question at this point is: “But what about all that terrible low-resolution 8-bit sound on video games back in the day? That sounded like a lot more than just tape hiss.”
That’s a fair question to ask. Just like with troubleshooting a signal path, the key to untangling the answer is to isolate our variables.
Do you know what else was going on with 8-bit audio back in the day? Here’s a partial list: Lack of dither, aliasing, ultra-low sampling rates, harmonic distortion from poor analog circuits, low-quality dither, low-quality DA converters and filters, early digital synthesis, poor quality computer speakers… We could go on like this. I’ll spare you.
Nostalgia, being one of humanity’s most easily renewable resources, has made it so that plenty of folks around my age even remember some of these old formats fondly. Today there are electronic musicians who make whole remix albums with Nintendos and Gameboys, which offer only 4 bits of audio as well as a myriad of other, far more significant issues.
(If you like weird music and haven’t checked out 8-Bit Operators’ The Music of Kraftwerk, you owe it to yourself. They’ve also made tributes to Devo and The Beatles.)
But despite all that comes to mind when we think of the term “8 Bits,” the reality is that if you took all of today’s advances in digital technology and simply turned down the bit depth to 8, all you’d get is a waaaaaaay better version of tape cassette.
There’d be no frequency problems, no extra distortion, none of the “wow” and “flutter” of tape, nor the aliasing and other weird artifacts of early digital. You’d just have a higher-than-ideal noise floor. But with at least 48 dB of dynamic range, even the noise floor of modern 8-bit audio would still be better than cassette. (And early 78 RPM records, too.)
Don’t take my word for it. Try it yourself! Many young engineers discover this by accident when they first play around with bit-crushers as a creative tool, hoping to emulate old video game-style effects. They’ll often become confused and even disappointed to find that simply lowering the bit count doesn’t accomplish 1/50th of what they were hoping for. It takes a lot more than a tiny touch of low-level white noise to get a “bad” sounding signal.
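If you’d rather try it in code than in a plugin, the experiment is easy to simulate. The sketch below (with hypothetical choices: a full-scale 997 Hz test sine sampled at 48 kHz) reduces the bit depth of a clean signal and measures how much noise that actually adds – and nothing else:

```python
import math

def quantize(x, bits):
    # Round a sample in [-1, 1] to the nearest of 2**bits uniform levels
    levels = 2 ** (bits - 1)
    return round(x * levels) / levels

def sine_snr_db(bits, sr=48000, freq=997.0, n=48000):
    # Signal-to-noise ratio of a full-scale test sine after bit reduction
    sig = [math.sin(2 * math.pi * freq * i / sr) for i in range(n)]
    err = [quantize(s, bits) - s for s in sig]

    def rms(xs):
        return math.sqrt(sum(v * v for v in xs) / len(xs))

    return 20 * math.log10(rms(sig) / rms(err))

for bits in (16, 8, 4):
    # Theory predicts roughly 6.02 * bits + 1.76 dB for a full-scale sine
    print(f"{bits:2d} bits: {sine_snr_db(bits):5.1f} dB SNR")
```

Even at 8 bits, the sine comes back with roughly 50 dB of signal-to-noise: hissy, but otherwise the same waveform – no blockiness, no video-game grit.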
The Noise Floor, and How It Affects Dynamic Range
This is where the idea of “dynamic range” kicks in.
In years past, any sound quieter than a certain threshold would disappear below the relatively high noise floor of tape or vinyl.
Today, the same is true of digital, except that the noise floor is far lower than ever before. It’s so low, in fact, that even at 16 bits, human beings just can’t hear it.
An 8-bit audio file gives us a theoretical noise floor 48dB below the loudest signal it can reproduce. But in practice, dithering the audio can give us much more dynamic range than that. 16-bit audio, which is found on CDs, provides a theoretical dynamic range of 96dB. But in practice it too can be even better.
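Those figures come straight from the rule of thumb that each bit is worth about 6 dB of dynamic range: each extra bit doubles the number of amplitude steps, and each doubling of amplitude is 20·log10(2) ≈ 6.02 dB. A quick back-of-the-envelope check:

```python
import math

def dynamic_range_db(bits):
    # Each extra bit doubles the number of amplitude steps;
    # each doubling of amplitude is worth 20*log10(2) ~= 6.02 dB
    return 20 * math.log10(2) * bits

for bits in (8, 16, 24):
    print(f"{bits} bits ~ {dynamic_range_db(bits):.0f} dB")  # 48, 96, 144
```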
Let’s compare that to analog audio:
Early 78 RPM records offered us about 30-40 dB of dynamic range, for an effective bit depth of about 5-6 bits. This is still pretty usable, and it didn’t stop people from buying 78s back in the day. It can even be charming. It’s just nowhere near ideal.
Cassette tapes started at around 6 bits worth of “resolution”, with their 40 dB of dynamic range. Many (if not most) mass-produced cassettes were this lousy. Yet still, plenty of people bought them.
If you were really careful, and you made your tapes yourself on nice stock and in small batches, you could maybe get as much as 70dB of dynamic range. This is about equivalent to what you might expect out of decent vinyl.
Yes, it’s true, it’s true. Our beloved vinyl, with its average dynamic range of around 60-70dB, essentially offers about 11 bits worth of “resolution.” On a good day.
Professional-grade magnetic tape was the king of them all. When the first tape players arrived in the U.S. after being captured in Nazi Germany at the end of World War II, jaws dropped in the American music community. Where was the noise? (And you could actually edit and maybe even overdub? Wow.)
By the end of the tape era, you could get anywhere from 60dB all the way up to 110dB of dynamic range out of a high-quality reel – provided you were willing to push your tape to about 3% distortion. Those were the tradeoffs. (And even today, some people still like the sound of that distortion in the right context. I know I do.)
Digital can give us even more signal-to-noise and dynamic range, but at a certain point, it’s our analog circuits that just can’t keep up. In theory, 16-bit digital gives us 96 dB of dynamic range. But in practice, the dynamic range of a 16-bit audio file can reach well over 100 dB – Even as high as 120 dB or more.
This is more than enough range to differentiate between a fly on the wall halfway across your home and a jackhammer right in front of your face. It is a higher “resolution” than any other consumer format that came before it, ever. And, unless human physiology changes over some stretch of evolution, it will be enough “resolution” for any media playback, forever.
Audio capture and processing however, are a different story. Both require more bits for ideal performance. But there’s a limit as to how many bits we need. At a certain point, enough is enough. Luckily, we’ve already reached that point. And we’ve been there for some time. All we need to do now is realize it.
Why More Bits?
Here’s one good reason to switch to 24 bits for recording: You can be lazy about setting levels.
24 bits gives us a noise floor that’s at least 144 dB below our peak signal. This is more than the difference between leaves rustling in the distance and a jet airplane taking off from inside your home.
This is helpful for tracking purposes, because you have all that extra room to screw up or get sloppy about your gain staging. But for audio playback? Even super-high-end audiophile playback? It’s completely unnecessary.
Compare 24-bit’s 144 dB of dynamic range to the average dynamic range of commercially available music:
Even very dynamic popular music rarely exceeds 4 bits (24dB) or so worth of dynamic range once it’s mixed and mastered. (And these days, the averages are probably even lower than that, much to the chagrin of some and the joy of others.) Even wildly dynamic classical music rarely gets much over 60 dB of dynamic range.
But it doesn’t stop there: 24-bit consumer playback is such overkill, that if you were able to set your speakers or headphones loud enough so that you could hear the quietest sound possible above the noise floor of the room you were in (let’s say, 30-50 dB), then the 144 dB peak above that level would be enough to send you into a coma, perhaps even killing you instantly.
The fact is, that when listening to recorded music at anything near reasonable levels, no one is able to differentiate 16-bit from 24-bit. It just doesn’t happen. Our ears, brains and bodies just can’t process the difference. To just barely hear the noise floor of dithered 16 bit audio in the real world, you’d have to find a near-silent passage of audio and jack your playback level up so high that if you actually played any music, you’d shear through speakers and shatter ear drums.
(If you did that same test in an anechoic chamber, you might be able to get away with near-immediate hearing loss instead. Hooray anechoic chambers.)
But for some tasks, even 24-bits isn’t enough. If you’re talking about audio processing, you might go higher still.
32 Bits and Beyond
Almost all native DAWs use what’s called “32-bit Floating Point” for audio processing. Some of them might even use 64 bits in certain places. But this has absolutely no effect on either the raw sound “quality” of the audio, or the dynamic range that you’re able to play back in the end.
What these super-high bit depths do, is allow for additional processing without the risk of clipping plugins and busses, and without adding super-low levels of noise that no one will ever hear. This extra wiggle room lets you do insane amounts of processing and some truly ridiculous things with your levels and gain-staging without really thinking twice about it. (If that happens to be your kind of thing.)
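A toy example shows why floating point buys this wiggle room. Ordinary Python floats stand in for a DAW’s floating-point mix bus here, and the `clip_fixed` function is an illustrative stand-in for a fixed-point bus with a hard ceiling – not any real DAW’s internal code:

```python
def clip_fixed(x):
    # A fixed-point bus hard-clips anything beyond full scale (+/- 1.0).
    # (Illustrative stand-in -- not any real DAW's actual internals.)
    return max(-1.0, min(1.0, x))

sample = 0.5
boosted = sample * 100.0      # +40 dB of gain: miles "over" full scale

# Floating point just carries the oversized value along; pull the
# fader back down and the original sample comes back exactly.
print(boosted / 100.0)               # 0.5

# On a fixed-point bus, the overload clips at 1.0, and that's permanent.
print(clip_fixed(boosted) / 100.0)   # 0.01 -- the waveform is mangled
```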
To get the benefit of 32-bit processing, you don’t need to do anything. Chances are that your DAW already does it, and that almost all of your plugins do too. (The same goes for “oversampling,” a similar technique in which an insanely high sample rate is used at the processing stage).
Some DAWs also allow the option of creating 32-bit float audio files. Once again, these give your files no added sound quality or dynamic range. All this does is take your 24-bit audio and rewrite it in a 32-bit language.
In theory, the benefit is that plugins and other processors don’t have to convert your audio back and forth between 24-bit and 32-bit, thereby eliminating any extremely low-level noise from extra dither or quantization errors that no one will ever hear.
To date, it’s not clear whether using 32-bit float audio files offers any real practical benefit when it comes to noise or processing power. The big tradeoff is that they make all of your audio files about a third larger than their 24-bit equivalents. But if you have the space and bandwidth to spare, it probably can’t hurt things any.
Even if there were a slight noise advantage at the microscopic level, it would likely be smaller than the noise contribution of even one piece of super-quiet analog gear.
Still, if you have the disk space and do truly crazy amounts of processing, why not go for it? Maybe you can do some tests of your own. On the other hand, if you mix on an analog desk you stand to gain no advantage from these types of files. Not even a theoretical one.
A Word On 48-bit
Years ago, Pro Tools, probably the most popular professional-level DAW in America, used a format called “48-Bit Fixed Point” for its TDM line.
Like 32-bit floating, this was a processing format, and it had pretty much nothing to do with audio capture, playback, or effective dynamic range.
The big difference was in how it handled digital “overs”, or clipping. 32-bit float is a little bit more forgiving when it comes to internal clipping and level-setting. The tradeoff is that it has a potentially higher, and less predictable noise floor.
The noise floor of 48-bit fixed processing was likely to be even lower and more consistent than 32-bit float, but the price was that you’d have to be slightly more rigorous about setting your levels in order to avoid internal clipping of plugins and busses.
In the end, the difference between the two noise floors is basically inaudible to human beings at all practical levels, so for simplicity’s sake, 32-bit float won the day.
Although the differences are negligible, arguing about which one was better took up countless hours for audio forum nerds who probably could have made better use of that time making records or talking to girls.
All Signal, No Noise
To give a proper explanation of the mechanics of just how the relationship between bit depth and noise floor works (and why the term “resolution” is both technically correct and so endlessly misleading for so many people) would be beyond the scope of this article. It requires equations, charts, and quite possibly, more intelligence than I can muster.
The short explanation is that when we sample a continuous real-world waveform with a non-infinite number of digital bits, we have to fudge that waveform slightly in one direction or another to have it land at the nearest possible bit-value. This waveform shifting is called a “quantization error,” and it happens every time we capture a signal. It may sound counter-intuitive, but this doesn’t actually distort the waveform. The difference is merely rendered as noise.
From there, we can “dither” the noise, reshaping it in a way that is even less noticeable. That gives us even more dynamic range. At 16 bits and above, this is practically unnecessary. The noise floor is so low that you’d have to go far out of your way to try and hear it. Still, it’s wise to dither when working at 16 bits, just to be safe. There are no real major tradeoffs, and only a potential benefit to be had. And so, applying dither to a commercial 16-bit release remains the accepted wisdom.
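A small simulation makes the point concrete (with hypothetical parameters: 8-bit quantization and a 997 Hz sine only half a step tall). Without dither, a signal smaller than one quantization step simply rounds away to silence; with TPDF dither, the same signal survives, buried in the noise but still there:

```python
import math
import random

random.seed(0)

LSB = 2.0 ** -7  # one quantization step of an 8-bit system

def requantize(x, dither=False):
    if dither:
        # TPDF dither: the sum of two uniform values, +/- 1 LSB peak
        x += (random.random() - random.random()) * LSB
    return round(x / LSB) * LSB

# A 997 Hz sine only half an LSB tall, sampled at 48 kHz for one second
sig = [0.5 * LSB * math.sin(2 * math.pi * 997 * i / 48000)
       for i in range(48000)]

def correlation(a, b):
    return sum(x * y for x, y in zip(a, b))

plain = [requantize(s) for s in sig]
dithered = [requantize(s, dither=True) for s in sig]

print(correlation(sig, plain))      # 0.0 -- the quiet signal is simply gone
print(correlation(sig, dithered))   # > 0 -- the signal is still in there
```

The undithered output correlates with the original not at all (every sample rounded to zero), while the dithered output still carries the sine – which is exactly why dithering extends usable dynamic range below the last bit.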
Now You Know
If you’re anything like me, you didn’t know all of this stuff, even well into your professional career in audio. And that’s okay.
This is a relatively new and somewhat complex field, and there are a lot of people who can profit from misinforming you about basic digital audio concepts.
What I can tell you is that the 22-year-olds coming out of my college courses in audio do know this stuff. And if you don’t, you’re at a disadvantage. So spread the word.
Thankfully, lifelong learning is half the point of getting involved in a field as stimulating, competitive and ever-evolving as audio or music.
Keep on keeping up, and just as importantly, keep on making great records on whatever tools work for you – Science be damned.
In July of 2010, my band Grandfather recorded its debut album, Why I’d Try, with Steve Albini at Electrical Audio in Chicago. It was a defining experience for us [documented here on SonicScoop]. The album gave us perspective on our music and solidified our ideas about recording and releasing music independently.
After a lineup change and a year of touring, we began writing our second album, In Human Form. This is the story of how it all came together.
THE DEMO TAPE
Inspired by the Albini sessions, I bought an old Otari MX5050 8 track reel-to-reel, and started learning tape. Tyler, our new bass player and I combined our studio gear and built a mobile, all-analog recording rig.
In February 2012, we recorded a demo tape of six new songs and passed them around. The demos caught the ear of producer Alex Newport, who turned out to be the perfect fit for what we wanted to accomplish on our second album.
The idea behind our first album was to record the band live with minimal overdubs, and to do it quickly and inexpensively. Albini was the perfect fit for this no-frills approach; we managed to record and mix an album in three days and release it by the end of the week.
On this album, we wanted to craft a more deliberate sonic-space for our songs, while retaining the sound and energy of our live performance. We also wanted to work with a producer who would be more hands-on in our development.
Alex fit the mold. First and foremost, we loved his work. We thought the drum sound on The Locust albums he produced was incredible. We also had a lot of respect for his role in the development of one of our favorite bands, The Mars Volta.
Alex and I spoke on the phone while we were on tour in Nashville. He explained that he was committed to capturing inspired performances, relying primarily on analog recording and mixing methods. His goal as a producer was to get the sound and performance right at the source, rather than edit and manipulate it in post-production.
He also said that he would be there to extract the best version of us, rather than try to alter our vision or turn us into something we’re not. The conversation flowed freely and I instinctively knew he was right for the job.
By April, we settled on a plan and a budget. We scheduled six days of pre-production, seven days of tracking at The Magic Shop in NYC and about two weeks at Alex’s personal mix/overdub room, Future Shock Studio, for December. With eight months ahead of us, we began to think about how to pull it off.
We wanted to completely devote ourselves to the writing process. Sharing a lockout with other bands would have made it impossible to spend the kind of time we felt was necessary to do our best work. We wanted to be able to write whenever we felt inspired, and practice every day leading up to the sessions.
Determined to make the album a reality on our own terms, we also had to find a way to save money to afford the studio costs. This would have been impossible if we had to pay rent on a private lockout in addition to separate apartments.
LIVING UNDER THE RADAR
We decided to look for a space where we could live together and practice around-the-clock. After viewing a couple of apartments with basements, we realized there was no way we’d get away with practicing in a residential neighborhood; we’re just too loud.
We figured out that the only way to pull it off was to find an industrial-zoned space. Through Craigslist, we found an abandoned office space on top of an art-shipping warehouse on the outskirts of Greenpoint. There were six “offices,” which gave each of us our own room, plus a practice/demo studio. The landlord assured us that we could practice there 24/7 without any noise complaints. The rent was also dirt-cheap.
The space was perfect, except for one thing: technically, you’re not allowed to live in a commercial space. We decided to take it anyway and live there under the radar.
Unfortunately, most of our attempts to make the space livable fell short. We built a makeshift shower, though it had to be removed when it began leaking on the tenants below. Our kitchen consisted of a hotplate and a microwave. We swapped beds for couches to hide the fact that we were living there, and devised a way to stash our belongings in the event that an authority showed up.
At times, the living situation drove us insane, though it forced us to escape into our music. The daily struggle was a constant source of inspiration.
We wired up our practice space to record every session, and spent the next five months manically writing, gathering hundreds of hours of music. We’d play at night, and sort through the recordings during the day. The easy access to our gear enabled us to work freely. We ended up demoing the entire album twice before pre-production, which allowed us to make sure all of our creative decisions actually worked the way we envisioned them in our heads.
We began pre-production in November. Alex decided to do two days with us at the beginning of the month, and another four days at the end of the month right before entering the studio.
The first two days were spent zooming in on our songs. Alex made suggestions for us to try out, including some changes to the arrangements and tempos. We asked Alex questions we had been debating amongst ourselves, and he asked us questions about our music that we hadn’t yet considered.
Following those sessions, we had three weeks to solidify our music and run the entire album every day before resuming with Alex at the end of the month. While we made some last-minute adjustments during the final four days of pre-production, those days served more as a way for us all to get on the same wavelength and into the mindset of making a record.
On December 4th, we entered The Magic Shop. We began working immediately, stringing guitars, changing drumheads and setting up our gear while Alex and his assistant set up mics. The amps were placed in isolation rooms and the drums were set up in the middle of the live room.
The majority of the first day was spent dialing in sounds and figuring out how to get the most out of the space. We decided to play live as a band with a focus on capturing the drum tracks, and to overdub bass, guitar and vocals later. Counterintuitively, this approach let us capture more of our live energy than trying to track the entire band with the intention of keeping everyone’s performance. Since Tyler, Josh and I knew we were going to redo most of our parts, we could play with intensity and not worry about making obvious mistakes.
If we played something phenomenally, we’d still have the option to keep it, though Phil was the only one under the microscope during the first two days of tracking.
This is when we began to truly understand Alex’s relentless pursuit of quality and attention to detail. Since he wasn’t going to chop things up, edit or quantize anything in post, he pushed Phil to get metronome-like precision, take after take. He also brought a couple of different snare drums, swapping them out for each song based on its sound and tempo.
By the fourth day, we began tracking bass and guitars. Again, Alex’s attention to detail was unwavering. He had us make adjustments to the tones on our amps and pedals for each section, getting the sound right at the source rather than relying on studio processing to change it later. If the intonation of a guitar was slightly off, he’d have us swap instruments or retune it for a particular part. Most importantly, if a take wasn’t stellar he’d ask us to do it over.
We trusted him completely, which let us focus on performing and not feel any pressure to make crucial decisions.
Days 6 and 7 were spent alternating between lead guitar and vocals. I would lay down a song or two, and then Josh would sing, switching back and forth so as not to tire ourselves out. We were resolved not to make any compromises on the quality of our performances, and inevitably ran out of studio time with a few vocal tracks unfinished. Alex and Josh made plans to complete them at Future Shock Studio later that week before beginning the mix.
To mix the record, we really just left Alex to do his thing, and exchanged some notes at the end of each day. He basically completed one mix a day, and managed to create a unique sonic-space for each song that reflected the feeling of the music and lyrics. He blew us away with his work; after the amount of time and energy we spent making the album, we were thrilled to hear the final versions of our songs. They were exactly what we had envisioned from the start.
Ultimately, Alex made us a better band by pushing us to our potential as writers, arrangers and performers, not by altering our sound and vision.
Alex mixed-down to both ½” tape and digital, though we all agreed the analog mix-downs sounded best. We mailed the tapes out to Howie Weinberg (Nirvana, Smashing Pumpkins, Soundgarden) in Los Angeles, to master the album in early 2013. We had to pinch ourselves when Howie agreed to work on our album. He had mastered most of our favorite albums growing up, and we couldn’t wait to run our music through his ears.
What’s great about Howie is that he doesn’t flaunt his technical knowledge or get into esoteric details. He’s simply got golden ears, a great feel for all types of music and tons of experience. Howie nailed it on the first pass.
Needless to say, we’re ecstatic to finally release this album. After considering a number of options, we’ve decided to self-release the album as a free digital download directly on our website, www.GrandfatherMusic.com.
If you’re reading this, you’ve probably read Justin Colletti’s recent article on Spotify rates. Or perhaps you remember Steve Albini’s article “The Problem With Music” from the early ’90s, or Courtney Love’s rant in Salon in 2000. Clearly, it’s always been a struggle for artists to make money off of album sales. Whatever the solution is for the music industry (and I think Justin is on to something), we have to do what’s right for our band at this moment in time.
Our goal right now is to spread our music so that we can sustain our band on the road as soon as possible. In order to facilitate that, we’re giving our music away for free, and asking that you simply share it and spread the word in return. Pay us your attention, and pay it forward.
Our intention isn’t to devalue our music by giving it away for free; it’s to give our music freedom because we value it being heard more than anything else. We hope that this album brings us one step closer to our goals as a band, and enables us to continue to make music on our own terms.
In Human Form is available to download at our website www.GrandfatherMusic.com
Thanks for reading, for listening and for spreading the word.
- Michael Kirsch, Guitar/Lyrics, Grandfather
Spotify made big news again in mid-July when producer Nigel Godrich and Radiohead’s Thom Yorke pulled their music from the streaming service.
Their move was not without precedent. Just a few months earlier, Jeremy DeVine, head of the indie record label Temporary Residence Ltd, came on to our InputOutput podcast to discuss his plans to withhold new releases from Spotify. Earlier this week, The Huffington Post confirmed that several other independent artists planned to follow suit.
Ever since then, a misleading and hopelessly outdated infographic that I thought I’d never have to see again has been resurfacing and making the rounds on social media. It claims that artists would need to rack up over 4,000,000 plays each month – more than 130,000 every day – just to make minimum wage.
How Much Does it Really Pay?
In reality, we’re continuing to see average gross payouts just shy of a half-cent ($.005) per play for ad-supported streams, about three-quarters of a cent ($.0075) for “unlimited” streams, and around one-and-a-half cents ($0.015) for “premium” streams.
This means that if you self-released your music and only attracted listeners on the ad-supported service, you’d need about 230,000 spins each month – about 7,700 plays every day – in order to earn minimum wage for just one person. Bleak, perhaps, but a far cry from 4 million.
On the premium service, you’d need more like 77,000 plays a month – or 2,600 plays each day – to crack that same nut. Not every band in the world is going to attract this much attention, but for many of the good ones, it is an achievable goal.
Although this revenue share is far better than the $0.00 offered by pirate websites, it remains an unworkable replacement for recorded music sales. Even at $.015 per stream, you’d have to listen to your favorite artist’s song 46 times in order for them to earn the same $.70 they would have ended up getting if you had bought that song on iTunes.
(Do me a favor real quick, and check your iTunes library to see just how many songs you’ve listened to that many times! The answer tends to be “not that many”.)
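For those who like to check the math, the figures above can be sketched in a few lines. The ~$1,150/month wage target is my own round figure (roughly full-time US federal minimum wage at the time), not a number from this article:

```python
# Back-of-the-envelope streaming math; the wage target is my assumption.
MONTHLY_MINIMUM_WAGE = 1150.00  # assumed round figure, USD
ITUNES_NET_PER_SALE = 0.70      # artist's ~70% share of a $0.99 download

rates = {
    "ad-supported": 0.005,
    "unlimited": 0.0075,
    "premium": 0.015,
}

for tier, rate in rates.items():
    plays_per_month = MONTHLY_MINIMUM_WAGE / rate
    print(f"{tier}: ~{plays_per_month:,.0f} plays/month "
          f"(~{plays_per_month / 30:,.0f}/day) to earn minimum wage")

# How many premium streams equal the net from one download?
streams_per_download = int(ITUNES_NET_PER_SALE / rates["premium"])
print("premium streams per download:", streams_per_download)  # 46
```

The ad-supported tier works out to the roughly 230,000 plays per month cited above, and the premium tier to about 77,000.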
Despite falling technology costs, musicians’ biggest expense in making music remains time. And costs there have not gone down as far as some might expect.
At this point, we haven’t even factored in the additional costs faced by artists who have labels, managers or more than one member — not to mention artists who expect to make more than minimum wage from one of their primary products!
Just as with iTunes or physical album sales, if you have a record label or management team, chances are you’ll owe them a portion of this revenue. Self-released artists with managers often expect to share 10% or 15% of their income. Traditionally, artists on big indie labels might share 50% of their recording revenue. And if you’re on a major label? Numbers vary widely, but chances are your net take will be significantly less than 50%.
So: How Much is Enough?
The good news is that despite all of this, we’re not too far off. It may take some kicking and screaming and full-throated advocacy, but it’s feasible that in time artists could be looking at fair rates for music streaming – whether it’s from Spotify or an alternative service.
If we could get rates up to just $.02 per play, streaming would start to become a pretty fair deal for artists. At that rate, you’d need a bit over 55,000 plays per month to crack minimum wage, or somewhere near 1,900 plays per day.
This sounds pretty reasonable to me – especially when you account for the fact that even people who don’t like your record still end up kicking you some coin. (If someone listens to your song for 10 seconds, hates it to pieces and writes the most scathing Facebook review in the history of the universe, you’d still get paid.)
Get the rate up to $.03 per play, and streaming arguably becomes a better deal for musicians than iTunes ever was. At this rate, you’d just need 39,000 plays a month or 1,300 plays each day. What’s more, it would take just 23 plays to equal one iTunes download. And once again, even people who hate your song still end up contributing to these play counts.
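The same quick arithmetic works for these proposed rates, again using my assumed ~$1,150/month wage figure rather than anything from this article:

```python
# Checking the $0.02 and $0.03 scenarios; the wage target is my
# assumed round number.
ASSUMED_MONTHLY_WAGE = 1150.00

for rate in (0.02, 0.03):
    plays = ASSUMED_MONTHLY_WAGE / rate
    print(f"at ${rate:.2f}/play: ~{plays:,.0f} plays/month, "
          f"~{plays / 30:,.0f} plays/day")

# Plays needed at $0.03 to match the ~$0.70 an artist nets per download:
print("plays per download at $0.03:", round(0.70 / 0.03))  # 23
```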
Is Raising Rates Even Possible?
With high-profile indie artists beginning to pull out of the service? Maybe so. But wait: By now, you’ve probably heard that Spotify isn’t even profitable. How is it supposed to find that extra cash?
Well, the reality is that Spotify isn’t profitable because the company’s CEO, Daniel Ek, doesn’t yet want it to be profitable. “The question of when we’ll be profitable actually feels irrelevant,” he said just last year. “Our focus is all on growth. That is priority one, two, three, four and five.”
(Consider it “The Amazon Approach”: Undercut everybody and become a near-monopolistic behemoth that the competition just can’t touch. Then start worrying about profit.)
With a few minor tweaks, the company could easily pay out higher rates or even become profitable quite soon. They’d just have to give up their goal of growing to a market-dominating size as swiftly as possible.
There is a legitimate question as to whether some artists have a slightly better deal with their labels or with Spotify than others do. (People who have exceptionally great contracts usually don’t like to discuss the details too openly. Such is the nature of leverage.)
But with that aside, the fundamentals of Spotify’s business model aren’t that cryptic at all: Basically, the pay-per-stream is calculated as a percentage of gross revenues, divided by the total number of plays across the service. (This is done separately for the ad-supported and premium streams.)
Spotify actually claims to pay out 70% of gross revenue, which is right on par with iTunes. So the problem isn’t so much the split – rather it’s the company’s income, when compared with the total number of streams.
Fixing this simple problem would require either raising income or lowering the number of streams. To do this, Spotify’s options are: A) Put caps on how much listeners can stream, B) Raise subscription fees, C) Increase advertising rates or the frequency of ads, D) Eliminate or restrict the ad-supported model, or E) Some combination thereof. That’s pretty much it.
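That revenue-pool model can be written in a couple of lines. This is my own stylized version of the mechanism described above, not Spotify’s actual accounting:

```python
# A stylized per-stream payout model: 70% of gross revenue, split
# evenly across every play on the service. (My simplification.)
def per_stream_rate(gross_revenue, total_streams, payout_share=0.70):
    return payout_share * gross_revenue / total_streams

# The same revenue pool pays half as much per stream when total
# listening doubles -- hence the options: raise income or cut streams.
print(round(per_stream_rate(1_000_000, 100_000_000), 6))  # 0.007
print(round(per_stream_rate(1_000_000, 200_000_000), 6))  # 0.0035
```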
If they were smart, Spotify could get creative with these fundamental options. Back in 2012, I suggested that they let artists cap listening on their albums after a certain number of plays. Then, they could allow listeners to “unlock” unlimited listening of an album by “tipping” the artist, say $5.
Not only would this be an immediate source of revenue and a way for fans to directly support their favorite artists, but it would also significantly lower the number of streams in the pool, raising pay rates across the entire service!
If Spotify doesn’t adopt creative ideas like these, some other company will before too long – and they’ll be the ones to attract all of the best artists.
Does It Pay To Protest?
As a music fan, I love the Spotify service. It’s convenient, it sounds great, it’s an insanely good value for the listener, and if you subscribe to the premium service, its payout rates are fairly ethical (although certainly still too low) at around $.015 per play.
Still, I’m glad to see some of my favorite musicians boycotting the company. In a market economy, valuing your own work enough to say “No, you can’t have it for anything less than a fair rate” is one of the most surefire ways to keep others from devaluing it.
But if you’re going to make demands, it’s a good idea to know what you’re demanding.
If you want my opinion, I’d say hold out for $0.03 per play. Once you account for all the people who’d never buy your album but end up kicking you some coin anyway, that’s arguably as good of a return as music sales ever were – and possibly better.
But if it were me? Honestly, I’d probably settle for an increase to $0.02 on the premium service to start. (So long as I could limit or block listening on the ad-supported service.)
A 30% raise would be a huge step in the right direction, and a potentially easy battle to win: Simply raise basic subscription fees from $9.99 to $13.99. Or, just create a new, slightly more expensive product tier for the highest-frequency users. Do either of those things, and the service is already there. Done. And that’s without implementing a single creative idea.
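A quick sanity check on that fee-hike arithmetic (my own calculation, holding the number of streams and the 70% payout share constant):

```python
# If subscription revenue scales with the fee and total streams stay
# flat, the per-stream rate scales by the same factor. (My assumption.)
old_fee, new_fee = 9.99, 13.99
old_premium_rate = 0.015

new_premium_rate = old_premium_rate * (new_fee / old_fee)
print(f"implied premium rate: ${new_premium_rate:.4f}/play")  # ~$0.0210
```

A $9.99-to-$13.99 fee increase is a 40% revenue bump per subscriber, which is more than enough headroom to lift the premium rate from $0.015 to $0.02.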
Artists have subsidized Spotify’s growth for quite a few years now. Perhaps today, with countless legal alternatives to piracy priced cheaper and made more convenient than ever before, it’s about time we took off the training wheels.
The only “public assistance” that a huge company like Spotify should get is a concerted effort to crack down on illegal and exploitative pirate sites.
Collectively, pirate sites rake in millions in advertising and pay out $0. Realistically, if we really want legal streaming services to pay well, we’ll all have to work together to clamp down on those kinds of services. This would increase the viability of legal competition to Spotify, giving artists more streaming providers to choose from, and increasing payouts.
We live in a market economy, and it’s about time to let Spotify sink or swim as a real business. If Spotify can be convinced to start putting greater limitations on the free service and begin paying out a fair and sustainable rate to artists, they’d certainly win me as a customer. As it stands now, that’s the only thing holding me back.