We haven’t changed the way we listen to recordings since the phonograph was invented in 1877. Yeah, stereo was a neat addition. And, sure, we have cool things now like digital compression and distribution. The fact that you can carry around thousands of songs in your pocket and download more while on the move is pretty outstanding. However, the track you listen to on your iPhone is just as static and invariable as the recording Thomas Edison heard.
What if the recordings we listened to were dynamic, changing in subtle (or dramatic) ways each time we listened? What if you could get updates to your favorite albums and songs? What would it mean to artists if they didn’t have to choose just one take of their solo, or one set of lyrics for a verse?
Apps give us a way to deliver such evolving music.
We all use apps for entertainment, communication, and productivity. We’re accustomed to the idea that as companies improve their technology, they push out software updates that enhance our experience.
Software can help us accomplish truly sophisticated tasks. In the music domain, we have powerful digital audio workstations (DAWs) like Pro Tools and Logic that capture, edit, and mix music digitally. Somewhat ironically, though, all that power goes into delivering a fixed, unchanging recording of a song.
Before recorded music, every listening experience was unique, because the only way to hear a piece of music was to have it performed live. To be fair, the likely reason we haven’t seen dynamic recorded music is that we’ve never had a delivery system capable of handling it. The idea of an evolving song experience on a cassette tape doesn’t make any sense.
But, if you think about delivering an album as an app, everything changes.
The collaboration between Björk and Snibbe Studio to produce Biophilia, the first app album, shows the potential. It’s an immersive experience that provides the kind of juice the music industry could use right now. By making the music interactive, it engages the listener and ensures no two experiences are identical. Artists at the cutting edge sense this creative power, and have begun exploring apps as vehicles for expression.
If apps as albums are to succeed, however, we can’t expect musicians to have the graphical skills of Snibbe Studio, or the resources to hire such immensely talented people. And although an immersive experience that demands all of your senses is fascinating, most of us listen to music while doing other things. The last thing we need is some ass driving down the highway trying to have a mesmerizing, hands-on experience with his music. You thought texting was bad.
We’re also not talking about loops here, or remixes as they’re commonly understood. Songwriters and composers use nuance and timing to convey emotion. They make choices. They don’t want a system that just roughly chops up a verse and sticks it where another verse might go, especially without their approval. They may want you to hear two different solos, but they don’t want you to have access to their mistakes. And they don’t want some terrible auto-remix being associated with them when it’s overheard at the lunch counter.
Artists and recording studios will need tools to make it practical to create an Album 2.0 culture. At this point, it’s pretty standard for at least one member of a band to have a DAW at home, even if it’s just GarageBand. And, obviously, almost every studio has a DAW. We’ll need software to help organize the tracks they record with their DAWs into packages that can be delivered as apps.

Think about marking several different solos in a timeline of a song, so that an app knows it could play either solo at a particular point. Or think about the nuance created if all the snare hits are identified and can then be randomly reorganized to give a different feel to the song each time you listen, making it seem like a live performance. In visual terms, picture MP3s as two-dimensional objects, while Album 2.0 songs are three-dimensional.

This really isn’t a wild idea. DAWs already use the concept of non-destructive editing, meaning that the other take is still there, buried under the take you decide to keep. Album 2.0 simply means you can use anything from that stack, rather than limiting yourself to one “perfect” take.
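To make the timeline-of-alternate-takes idea concrete, here is a minimal sketch of how such a song might be represented and played back. Everything here is invented for illustration (the `Section` class, `build_playthrough`, the file names); it’s a thought experiment, not any real player’s API:

```python
import random

class Section:
    """One stretch of a song's timeline, with artist-approved takes."""
    def __init__(self, name, takes):
        self.name = name    # e.g. "verse 1", "guitar solo"
        self.takes = takes  # only takes the artist has approved

def build_playthrough(sections, rng=random):
    """Pick one approved take per section, yielding a unique rendition."""
    return [(s.name, rng.choice(s.takes)) for s in sections]

song = [
    Section("intro",       ["intro_take1.wav"]),
    Section("verse 1",     ["verse1_takeA.wav", "verse1_takeB.wav"]),
    Section("guitar solo", ["solo_bluesy.wav", "solo_shreddy.wav"]),
    Section("outro",       ["outro_take1.wav"]),
]

# Each call assembles a fresh rendition; sections with one take stay fixed,
# so the artist still controls exactly what can vary and what cannot.
playthrough = build_playthrough(song)
```

The key design point is that variation only happens where the artist supplied multiple takes, which addresses the "no access to their mistakes" concern above.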
We’ll also need a basic framework for describing an Album 2.0. If it’s to be scalable, there needs to be a supported architecture for the apps. It may be open-source code that studios or artists download as an Album 2.0 template. It may be that the woman in your band who has the DAW will also learn some essential coding. It’s really not that hard to imagine.
As for distribution, most of it is already in place with the various app stores. A band updates its Album 2.0, and Apple or Google pushes the update out to listeners. The listener gets a new vocal, fixing that horrible pitchy moment the singer hated in the first release. All that remains to be developed is a sensible way of managing big data on the servers and an efficient system for transferring those audio files and their metadata.
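One way the "efficient transfer" piece might work is a delta update: the app compares content hashes in its local copy against the new release and fetches only what changed. A hedged sketch, with invented file names and hashes:

```python
def files_to_fetch(local_files, remote_files):
    """Return filenames whose content hash is new or changed upstream."""
    return sorted(
        name for name, digest in remote_files.items()
        if local_files.get(name) != digest
    )

# Hypothetical state: the singer re-recorded solo_1 and added solo_2.
local  = {"verse1_A.wav": "abc123", "solo_1.wav": "def456"}
remote = {"verse1_A.wav": "abc123", "solo_1.wav": "ffff99", "solo_2.wav": "0a0a0a"}

# Only the changed vocal and the new solo need to cross the wire;
# the unchanged verse is never re-downloaded.
needed = files_to_fetch(local, remote)
```

An album update then costs bandwidth proportional to what the artist actually changed, not the whole album.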
Updates to albums also have immediate financial benefits for artists. They give the artist or record label options such as in-app purchases or subscription models: instead of a single transaction when a user downloads an album, each song can keep producing new revenue. In a world where content is king, updates also provide a constant pipeline to keep fans engaged. Whole new forms of social media and “gamification” could be built around this relationship.
What about the nostalgia implicit in hearing the exact same version of a song you heard on a first date twenty years ago? Well, maybe the original release is the default setting in the app. You know that moment when you hear a song for the first time, and it blows you away, and you stay in your car until it finishes playing? Now imagine having that “first kiss” thrill over and over as your favorite song evolves, continually surprising and delighting you.