There is a quote that is often thrown around in film and game sound circles about the rule of 100%. I believe the idea originates with Ben Burtt, but it is often repeated by various respected sound designers, especially in film. I’m paraphrasing, but it goes something like this…

“Everyone on a film assumes it is 100% of their job to tell the story: the composer will write music that hits all the major plot points and moods, the writers will cover 100% of the story in dialogue, and the sound designers will cover every single moment with effects to carry 100% of the movie/game/whatever.” – I actually found a better reference to this in Randy Thom’s essay Designing a Movie for Sound (section: ‘opening a door for sound’).

At the end of a production this feels very true, and it feels especially true when you are sitting in a final mix, trying to figure out what the heck you are going to get rid of in the moment-to-moment mix. What is important at any given moment? This, in film, is where the collaboration with the director kicks into high gear and the ‘audience’s experience of the story’ really gets into the veins of the soundtrack – a final mix is, if you like, the ‘implementation’ of the story via the soundtrack. The decisions about what has prominence at any moment are made through discussion, and this is certainly easier in film due to the linearity of the medium: sometimes music is foregrounded, sometimes sound fx, sometimes (most often) dialogue. In video game mixes, the experience can be completely different depending on the team involved, the size of that team, and the scope of the project. Sometimes it is one person mixing the game and making all of these decisions, but at least with the knowledge of what the game design and experience needs to convey. On bigger projects it might be a small, multidiscipline group of directorial leads who sit together and talk through the decisions – either way, the process is complicated by technology and workflow.

I like the idea of sound, music and fx contributing to the storytelling in equal measure. This is certainly more appealing than each of these elements attempting to provide 100% coverage, creating a logjam, and leaving it to the final mix to sort out the priorities at each moment. I’ve heard of sound editors in film even providing more than 100% coverage by having multiple different ‘options’ available on the dubbing stage.

Now, this is an idealized and utopian scenario, and every project makes different demands of each of our three main threads of sound. But perhaps, at least as a starting point and as a way of thinking about what sound will contribute to your project, it works better to break the soundtrack down into three chunks that are each ‘ideally’ responsible for 33%.

33% of the soundtrack will be about music moments.

33% will be about sound moments.

33% will be about dialogue moments.

It is an oversimplification, and perhaps the practicalities of budgeting and rework make this a difficult proposition, but it is a better starting point than the 100% rule, which creates that logjam at the back end. Thinking about these numbers at the beginning of a project, rather than the 100%, is a more realistic guideline for everyone involved. It should also encourage more forethought and planning as to ‘whose moment’ each moment is, and it might kick into gear some early mapping of a project in terms of FX, Music and VO. All of these elements simply can’t be going all the time, so these kinds of decisions do need to be made.

Perhaps an even simpler pre-check before commissioning any sound work should be along these lines…

Should it make a sound? (Yes / No)

Should it have a music cue? (Yes / No)

Should it be conveyed through dialogue? (Yes / No)

The emphasis here is on reducing the overall amount of sound, rather than increasing it.
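None of this needs tooling, but to make the pre-check concrete, here is a minimal sketch (in Python, with invented beat names and field names – not from any particular pipeline or middleware) of how those three yes/no answers could be logged per moment, and how the ‘everything is a yes’ logjams could be flagged before any content is commissioned.

```python
from dataclasses import dataclass

@dataclass
class SoundPreCheck:
    """One row of the pre-check for a single moment in the project."""
    beat: str
    needs_sfx: bool       # Should it make a sound?
    needs_music: bool     # Should it have a music cue?
    needs_dialogue: bool  # Should it be conveyed through dialogue?

def flag_logjams(checks):
    """Return the beats where all three answers came back 'yes' --
    the moments most likely to cause a fight at the final mix."""
    return [c.beat for c in checks
            if c.needs_sfx and c.needs_music and c.needs_dialogue]

# Invented example beats, purely for illustration.
plan = [
    SoundPreCheck("door reveal", needs_sfx=True, needs_music=False, needs_dialogue=False),
    SoundPreCheck("ambush", needs_sfx=True, needs_music=True, needs_dialogue=True),
    SoundPreCheck("quiet aftermath", needs_sfx=True, needs_music=True, needs_dialogue=False),
]

print(flag_logjams(plan))  # ['ambush'] -- everything said 'yes', so something has to give
```

The point is not the script itself but that the moments where all three answers come back ‘yes’ are exactly the ones worth arguing about early, rather than at the final mix.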

Leaving the ‘what plays and what doesn’t play’ decisions to a final mix makes a lot of work for yourself in those crucial few weeks at the end, and the finished project will sound, more often than not, like ‘music was turned down here’ and ‘sound effects were turned down here’, rather than the coordinated orchestration of specifically written and implemented music, vo and fx to be found in, for example, The Last of Us. In this game, no one element feels as though it is trying to overpower the others; they seem to be very much working together, and the more you think about this (because it isn’t something you notice while you are playing and enjoying the experience), the more you realize that it has all been very carefully thought out in advance and didn’t just happen to ‘come together’ at the last moment.

I like the idea of a composer setting out with the knowledge that their contribution is going to be only a third of the entire soundtrack. Similarly, I like the idea that writers are starting out with the notion that one third of the experience is going to involve spoken dialogue. I like the idea that, as content creators, we can fully expect, from the outset, to let go of 66% of the responsibility for carrying everything on our shoulders. It is also just good, common-sense editorial practice.

Being the sole audio developer at an indie studio, and having a background as an audio director, I tend to think of any project immediately in terms of it being my responsibility to cover 100% of the soundtrack (foley, fx, ambience, music, vo). But it is only when I start to think about actually creating the content that I realize it isn’t anywhere close to 100% of my own sound or music work that is going to do this, but the work of many collaborators. It is very important, I realize, to define the scope of what is needed at the point of delegating out the work, as well as a schedule for its completion and integration. On any project where I have contributed sound or music myself, I find I have a hard time ‘removing’ things at the mixing stage. I’m just too attached to them. I can see the amount of work that has gone into them, and it is natural to resist decisions whereby that content is effectively removed or demoted, even when it is for the good of the project.

This is why I believe we have so much to learn from watching and listening to mixers. There is a useful, Eno-like idea that in attempting to mix a project you wear the ‘mixer’s hat’, not the sound designer’s hat, not the friend-of-dialogue (writer’s) hat, and not the composer’s hat. That is no longer your role. It is in wearing the mixer’s hat that you are allowed to remove yourself from the work done up to that point on every element of the content, and effectively make cold, hard decisions about what is needed, what can be pushed to the foreground, and what can be removed. Mixing is a very subtle art in that decisions don’t need to be black and white (“either there is music or there isn’t music”); several things can co-exist up to a point – music can be ducked out of the way yet still be audible, as can backgrounds and fx. A massive part of that subtle art is also political (though it doesn’t really need to be). However, it is at this point of ducking things that you realize a far better approach would have been to design the music to get out of the way at that particular moment in the first place. Predicting these moments where possible will enhance the interrelationship between the three major food groups of a soundtrack (leading to a more cohesive and telepathic whole), and it will make for a better experience for the audience/player. It will also make ‘mixing’ so much easier… another way to think of good planning is as “mixing in pre-production”.
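Middleware like Wwise or FMOD typically handles this kind of ducking with built-in bus rules, but purely as an illustration of the ‘ducked, yet still audible’ idea, here is a rough Python sketch assuming a simple priority order (dialogue over fx over music) and per-update gain smoothing – the names and numbers are mine, not anyone’s shipping mix.

```python
# A rough sketch of priority-based ducking: anything below the highest-priority
# active thread is pushed down to a reduced (but still audible) level, and gains
# glide toward their targets rather than snapping. All values are illustrative.

PRIORITY = ["dialogue", "sfx", "music"]  # highest priority first
DUCKED_GAIN = 0.35                       # ducked threads stay audible, just lower
SMOOTHING = 0.1                          # fraction of the remaining gap closed per update

def target_gains(active):
    """active: the set of threads currently carrying something important."""
    gains = {}
    ducked = False
    for thread in PRIORITY:
        gains[thread] = DUCKED_GAIN if ducked else 1.0
        if thread in active:
            ducked = True  # everything lower in priority now gets ducked
    return gains

def smooth(current, target):
    """One mixer update: move each gain a step toward its target."""
    return {t: current[t] + (target[t] - current[t]) * SMOOTHING for t in current}

# Example: a dialogue line starts over music -- music eases down but never vanishes.
gains = {"dialogue": 1.0, "sfx": 1.0, "music": 1.0}
for _ in range(20):
    gains = smooth(gains, target_gains({"dialogue", "music"}))
print(round(gains["music"], 2))  # easing toward 0.35 instead of cutting to silence
```

Even in a toy version like this, the design questions are the ones raised above: the ducked level and the priority order are decisions better made (and, better still, composed for) long before the final mix.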

This is something I’m trying out when initiating new projects. I’m hoping to think about the final effect, and the final mix decisions, long before we actually arrive there – and in reality, the closer we get to a final mix, the closer we get to determining exactly what is required of each of the three components of the soundtrack. Some of the most useful ways of doing this I’ve found are narrative, or gameplay, dynamics maps (detailed here http://www.gamasutra.com/view/feature/132531/dynamics_of_narrative.php ) – these will give an idea of what is needed from each of the three elements, though they are like graphic scores that allow a great deal of interpretation from the artist charged with creation. At the very least, understanding the fundamentals of the dynamics involved in a project will give rise to healthy discussions about whose responsibility it is to, say, carry action scenes, as opposed to ambient scenes, exploration, and moments of ambiguity. Shifting the focus of sound work toward understanding the interrelationships between the three main threads of a soundtrack much earlier in a project is where I see so much scope and opportunity in development right now, no matter what the technology or delivery mechanism for the game.
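As a rough illustration of how lightweight such a map can be, here is a toy sketch (beat names and intensity numbers invented for the example) that reads a per-beat 0–1 intensity for each thread and reports whose moment each beat is, flagging the beats where more than one thread is fighting to lead.

```python
# A toy 'dynamics map': for each beat, a rough 0-1 intensity per thread.
# Beat names and values are invented; a real map would come out of the
# narrative and design documentation, not a script.
dynamics_map = [
    ("exploration", {"music": 0.2, "sfx": 0.6, "vo": 0.1}),
    ("ambush",      {"music": 0.8, "sfx": 0.9, "vo": 0.2}),
    ("confession",  {"music": 0.4, "sfx": 0.1, "vo": 0.9}),
    ("aftermath",   {"music": 0.7, "sfx": 0.3, "vo": 0.0}),
]

for beat, intensity in dynamics_map:
    lead = max(intensity, key=intensity.get)                 # whose moment is it?
    crowded = sum(v > 0.6 for v in intensity.values()) > 1   # more than one thread trying to lead
    note = " (crowded -- decide early who gives way)" if crowded else ""
    print(f"{beat:12} lead: {lead}{note}")
```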