While the arms race of each successive console generation tantalizes consumers with higher quality entertainment experiences, defining quality itself has become more and more tricky. Is it simply a case of (for sound at least) higher sample rates? More fidelity in the surround field? Playing back more voices simultaneously? Higher resolution DSP effects? Consistency? Fewer glitches and bugs? More convincing (and convincingly captured) performances from actors?

It does begin to blur around the edges as you realize that this is perhaps one of the broadest and most subjective categories to talk about. Yet it is fundamental to how we navigate, describe (and judge) increasingly expensive (and often complex) entertainment experiences within our industry. Quality, you soon realize, doesn’t apply exclusively to big budget games; it also applies to much smaller titles, and even down to simple interfaces. Perhaps it helps to think not about the end result, the objective final output of the game, but about the overall experience, and to that end, perhaps the ‘quality’ of the processes that go into creating those experiences requires more examination and investment (beyond the unsatisfactory notion of ‘quality’ as simply a shaded area occupying the intersection of features, budget and time).

I’ve been thinking a lot about this lately (too much, hence the overflow into the written word), and my own ad-hoc definition of “QUALITY”, in a game production context, might shed some light (or maybe raise more questions) on how to evaluate (and produce) the ‘intangible’ notion of ‘quality’ (note: this is not really about ‘polish’, which I consider to be an endeavor almost exclusively achieved and performed in post-production) – it is actually informed and tracked across several quite different areas.

1) Quality of Interaction (Communication and Collaboration): Ensuring collaboration is happening at the high level (between leadership/studio culture/project management) and at the low level (between coders and implementers), and that it is happening both vertically (intra-discipline) and horizontally (inter-discipline).

2) Quality of Implementation: use of, and access to, material; ease and speed of implementation (tools & pipelines); expertise; iteration time (refinement and enrichment).

3) Quality of Input (Source Assets) and Output (Signal/Data Path): Correctly isolated (or environment specific) recordings (or synthesis) at the highest sample rates and bit depths, plus an I/O signal path (easily re-configurable mixer hierarchies and parameterization of sound) with controllable, carefully measurable, predictable and trackable output levels. Having this I/O in place allows both upwards and downwards SCALABILITY to different (or newly emerging) platforms.

In combination, I reckon these three areas invariably allow the delivery of refined ‘high-quality’ features and experiences. I’d also like to suggest that these areas are not limited to console development (although that is the source of current questions about what ‘next-gen’ actually is/means), but can apply to any technical system where the delivery devices are cyclical and incremental.

Perhaps quality is more simply about how well we are able to convey an idea and an experience to a user, and about making the distance between the user and the experience as small as possible, such that, in the end, the technology all but disappears.

The day-to-day work of audio can be very detail oriented, and it is easy to get lost in this forest of sound molecules. Solutions to many day-to-day issues often rely on decision making of a broader kind, and audio work can be as much political as it is creative, social or technical. Wrangling resources, ensuring that important production information and risks are on everyone’s radar, selling features and ideas, haggling for more time or budget, and communicating across disciplinary voids can require a fair degree of entrepreneurial flair.

I’ve been thinking a lot about some general audio pillars within game development, so I thought I’d have a go at throwing together some very high-level pillars for game audio which read, to all intents and purposes, like a kind of manifesto promise. The thought here is to provide high-level transparent goals for the audio department within a development environment, and to serve as a series of checks and balances by having a longer term strategic outlook (without that we are marooned in the reactionary and short-term, arguably heading in no direction in particular). This also serves to hold audio accountable to some tangible realities and deliverables: if things aren’t moving in the direction outlined, then, during regular check-ins, course correction can be applied.

Four Strategic Long-Term Audio Pillars

A Focus on Polish is a Focus on Solid Communication.

Whatever the project, polish is one of the most fundamental areas of audio work (it is the reason we focus so much on having good quality source assets and geek out about microphones, and also why we focus so much on the idea of a ‘signal path’). Call it post-production, or mixing, or whatever: the process of removing any unwanted jagged corners, cuts, glitches, or sounds that grab the attention at the wrong time or don’t help the experience is universal to every single sound project. In practice this means being visible about, and scheduling, audio post-production time, and making familiar, contributory appearances at scrums. But in order for sound to actually polish something effectively, the work in other areas of production (animation, scripting, world building etc.) has to have been somewhat ‘locked down’. This is an increasingly difficult subject in today’s fast-moving, ‘never-finished’ digital production domain, but one thing these changes have emphasized above all others is that communication is critical. Iteration, visibility and ‘connectedness’ to the team’s thinking and planning are important to providing polish in the digital production domain. Using continual verbal, visual, and written comms is absolutely essential to keeping everyone in the loop on what is happening. Polish is as much co-ordination as it is technical or aesthetic choices, and co-ordination is a political endeavor.

Grow, Nurture, and Invest in the Audio Team

Audio teams are often the smallest in the building. They are outnumbered by Art, Design and Tech departments. They can appear to others to be a black box, where no-one understands the processes and voodoo that goes on in sound-proofed rooms. But we are just like any other department. There is nothing special about team audio; we may see things differently, have different connections to the team, different needs and different skill-sets, but fundamentally, we are exactly the same. In the early days of game development (which these still are) audio often needs to shout that bit louder for equality and representation on the team, and to get a seat at the table as a ‘principal collaborator’ rather than an end-of-production ‘service provider’. Everyone on the team will be trained in, and versed in, the language of collaboration and innovation. They will know who to go to, how to present, how to prototype an idea and set goals; they will have resources at their disposal, and they will be encouraged to push forward and improve every aspect of their craft and process – removing every element of drag, friction and resistance from their work. Career paths will be clear, transparent and on par with other disciplines in the studio culture. Members of the team will have autonomy to control their own growth and path. The audio budget will always be discussed and adjusted to fit the requirements of the project, with a focus on VALUE.

Early (and Continued) Involvement for Audio

Involvement in the earliest genesis discussions of a project. Early involvement with script development, pre-vis work and prototyping, as well as with early scheduling and budgeting. Simply put, “Audio is another Art Department.” The sound team will be able to participate in design discussions, or be empowered to create those opportunities and discussions where they do not yet exist.

Tools & Tech: Put Designer/Implementer UX before Player UX (the player comes 2nd – the only way to truly put the player 1st)

Push the technology and pipelines in a meaningful, useful and positive direction. Alleviate the designer/implementer’s struggle. The primary goal is to support the person using the tools and give them a frictionless experience (removing fatiguing, repetitive heavy-lifting tasks) when integrating audio into the game. From small standalone batching scripts and tools, to game engine and audio engine tools & pipelines, the experience of integrating sound should be simple, straightforward, painless and easy to communicate to others. Focusing tools and processes on the user, allowing audio designers to quickly implement assets, switch them and tune them at run-time, is a priority for changing the collaborative nature of review sessions. This in turn allows the audio designers to focus more clearly on the ‘player’s experience’ rather than wrestling with their own technical issues.

Every studio culture is different, and has a unique approach that solves design and production problems for a unique product line-up. For some audio departments these problems were solved long ago, while at others they are much worse (no audio tools, no audio programmer support or resources, and woefully underdeveloped pipelines) – yet every time, audio finds a way to struggle on, smash through that which resists, and make things work and happen. This is really a hopeful push for a broader, more long-term strategic vision – to build resourceful and confident teams with an elevated view of what is in front of them (and behind them), rather than teams fixated on the short-term problems immediately in front.


“Towers. Open Fire.” – Burroughs

I’ve been meaning to post on an, admittedly experimental, method of Agile development that I’ve been working with for the last few years, but have never quite got around to documenting. It’s not for everyone, but then neither is Agile. I want to see if anyone else is working with and attempting to formalize adaptive iterative methods like this, and secondly I want to see if there are audio designers or directors who are working with Agile methods in a more ‘by the book’ manner and finding them to be successful. Writing (and re-writing!) this post has led me to some quite deep pondering on the Agile process and how to re-think it specifically for audio, and I’ve come to the conclusion (for now) that by thinking about tasks differently, by scaling them, you can have structure, trackability and freedom. My guess is that most audio folks work this way anyway, though perhaps think of it in different terms.


Obviously different teams, cultures and software products have different interpretations of Agile development techniques, and implement them differently based on what works well for that particular team. I’ve always found the by-the-book Scrum and Kanban techniques to be dry and process-driven, where the most important thing always seems to be following the rules of the game, rather than working collaboratively and openly in more informal discussion groups.

Kanban seemed like a better philosophical approach, more geared towards open, collaborative, x-discipline game development; however, I’ve found both to be relatively short-sighted, without enough focus on ‘the big picture’. There is also the big question of who is running the show: the scrum master, PM, or game director can often wield quite a lot of power simply by cutting user stories based on priority, rather than on a collective gut-feel for whether something is truly worth doing or not. This, I guess, is why these methods work well for more structured, incremental software development, and perhaps not so well for the awkward, uneven and chaotic world of game development.

This brings me specifically to audio in an agile world, where you may suddenly find yourself in the following situation without much planning or thought about the approach… Let’s say we have a large team. From my own experience, this usually looks like this: the team is broken down into cells, all doing their daily morning stand-ups. As an audio team, or even a single audio resource on a project, there is instantly a problem here when several cells are meeting at the same time, and the audio resources are spread so thinly as to not be able to show up in the different cells. Shift the timing of the cell meetings to accommodate this, and you have the problem of being in meetings all morning.

These morning stand-ups DO work well when the team is small, meaning around 10–15 people, so that a single morning stand-up can happen and audio can be at the table to update and be updated without cell-meeting burnout. One of the things you quickly learn is to keep these things short, entertaining and meaningful. So audio updates tend to be just that, maybe without the entertaining and meaningful parts, but we do our best.

So why does this work better at the small team level? On larger teams, the ways in which different members of the audio department work within a cell may be quite different, and may even vary at different times during production. Some may choose to detail all their work down to the smallest element, while others may simply work with broad strokes at a higher level. I find agile methods like Scrum tend to focus most heavily on the small details, which is where I can find them dry and tedious, and the part I like least is that I tend to see teams and groups getting ‘lost in the details’. By the same token, they also tend to ‘leave out’ the bigger picture stuff… so I’ve always been trying to find a way of simultaneously having a good handle on both. Scrums, for me at least, can often feel like something is missing in terms of focus.

Perhaps now is also a good moment to take a step back from the problems of that daily stand-up moment, and think about what is going on at a higher structural level: the actual scheduling process. This is critical in determining when the team, and when audio, will be allowed to be ‘agile’, and when they have to morph into a beast of delivery and rigidity. I’ve found working backwards from a ship date to be the absolute best way of knowing where you are and where you should be right now. Milestones are still very necessary and meaningful from an audio viewpoint, perhaps one of the few barometers for where the entire project needs to be. Audio, being last in the production dependency baton-race, will typically indicate where things need to be complete by, and by scheduling things like a final mix, or a sound beta and sound alpha period, all the relevant ground spikes are hammered into the schedule and you can begin to build up a pretty decent skeleton of where things need to be and by when.
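The backwards-from-ship-date idea above is easy to sketch in code. The phase names and durations here are purely illustrative assumptions (real ones depend entirely on the project), but the mechanism is the point: start at the ship date and hammer each ground spike in by walking backwards.

```python
from datetime import date, timedelta

def backward_schedule(ship_date, phases):
    """Walk backwards from the ship date, giving each phase an end
    date and a start date based on its duration in weeks. Phases are
    listed closest-to-ship first; result is returned chronologically."""
    schedule = []
    end = ship_date
    for name, weeks in phases:
        start = end - timedelta(weeks=weeks)
        schedule.append((name, start, end))
        end = start  # the next phase back must finish where this one starts
    return list(reversed(schedule))

# Hypothetical phases and durations, for illustration only.
phases = [
    ("Final mix", 2),
    ("Sound beta (polish & removal)", 4),
    ("Sound alpha (all content in)", 6),
]

for name, start, end in backward_schedule(date(2024, 11, 1), phases):
    print(f"{start} -> {end}  {name}")
```

Because every date is derived from the ship date, moving the ship date (or re-estimating one phase) regenerates the whole skeleton, which is exactly the “where should we be right now” barometer described above.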

Project Level View

Example schedule of high-level tasks

Certain audio tasks may not appear to be very compatible with agile methodology at all, at least not in the widely accepted ways they are done inside most development studios. Dialogue production, for one example, has a myriad of dependencies and conditions that need to be met at various milestones: narrative script, cut-scene script, AI script writing, filename and production-script preparation, casting, booking, placeholder recording, implementing, recording, editing, implementing, reviews, re-writes, re-records, re-edits, mastering, mixing – these dependencies are so waterfall-like (Gantt-like) that it is almost impossible to see where agile development can fit into the process once it gets started. It kind of lives outside that whole ‘fast iteration’ cycle – even more so with mo-cap. Writing tends to be agile, as it turns in tandem with the twists of the game design – but once you get into the solidified waters of the ‘script’, things get locked down pretty tight. One small but significant area where I have found fast iteration can be applied to dialogue is during the actual session, where actors can improvise and provide re-writes on the fly. This is not agile development, but straight-up improv within the confines of the ‘dialogue line’ or ‘event’. It is a philosophy you have to be prepared for and ready to embrace, but it can be awesome and create some really compelling performances. So, in that tiny ‘dialogue recording’ window, you have an opportunity to be agile. (As an aside, I think the further we can get away from actors ‘reading scripts’ in video games, and towards a place where they ‘learn lines and improv scenes’, the more compelling the performances will ‘read’ to audiences – iterative and improv-based dialogue sessions are one way to do this, but the window of opportunity to capture this is very small, precisely because it is one of the only periods of freedom inside an industrialized Gantt production process.)
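The waterfall character of dialogue production can be made concrete by treating those stages as a dependency graph and asking for a valid ordering. The graph below is a simplified, hypothetical subset of the stages listed above (the edges are my assumptions, not a canonical pipeline); Python’s standard-library `graphlib` does the topological sort.

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph: each stage maps to the set of
# stages that must be finished before it can start.
dialogue_pipeline = {
    "casting": {"narrative script"},
    "booking": {"casting"},
    "placeholder recording": {"narrative script"},
    "production script prep": {"narrative script"},
    "recording": {"booking", "production script prep"},
    "editing": {"recording"},
    "implementing": {"editing"},
    "mastering": {"editing"},
    "mixing": {"mastering", "implementing"},
}

# static_order() yields one valid waterfall ordering of the stages.
order = list(TopologicalSorter(dialogue_pipeline).static_order())
print(order)
```

The long dependency chain is the point: almost every stage is blocked by an earlier one, so the only slack for fast iteration is inside a single node (the recording session itself), which matches the observation above.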

So, dialogue production is not really something that plays nicely with a by-the-book agile methodology. It would be awesome if it could, and if there were systems whereby actors could always be available to come in and instantly change content on a weekly basis, but it is so industrial a process that this is probably not something we’ll see for a while, and certainly less likely at the AAA end of development.

Music production, similarly, can be a process of industrial production based on several iterative stages, from pre-production sketches to delivering finished pieces mission by mission, milestone by milestone, with feedback provided at various stages and a brief post-production phase (of mixing and mastering) at the end; or it can be a more ongoing iterative process for smaller titles with less music – an approach which can produce music of finished quality almost at the beginning of the process, and constantly change throughout production right up until the end. The methods of working, and the production pipelines implied by different styles of music (electronic versus orchestral, for example), largely determine the process and the agility inside that process – so here, agility and iteration times are influenced more by the chosen production style than by a team’s desired way of working. But, as with dialogue, these kinds of productions are likely to include periods, or levels, of both agile and waterfall.

Depending on the scale of production, and here is where I finally get into my idea of a different way of thinking about and scheduling agile tasks, items such as ‘music production’ and ‘sound effects production’, at the highest level, are often better represented on a schedule by several very long-term time boxes. Essentially these huge time-boxes are large spaces of time dedicated entirely to producing and iterating on content until time either runs out, or a part of the process feels finished. The advantages of these ‘large, broad iteration cycles’ (or timeboxes) are many. They simplify the audio production task definitions for non-audio PMs and other disciplines. They show clearly where the agile tasks rigidify into drop-dead delivery dates (so we don’t open the audio up to endless noodling when the clock is running out). They also free up the producers of this content to iterate and experiment with these broad categories in their own way, at their own pace, they can procrastinate, knuckle-down, explore new ideas, essentially have complete ownership over those areas of the game sound. Part of this is to never have anything in the game as ‘final’ or ‘finished’ until the final mix. This leaves flexibility and honesty in the schedule and also allows big decisions to be made at the final mix (where very often sounds are replaced based on new unforeseen contexts that have arisen late in development) and major stakeholders are present for the audio sign-off.

I’ve broken this down elsewhere, but as an audio lead, my high-level audio schedules usually consist of two kinds of tasks. Long-term iterative tasks, and short term tasks. Short term tasks may be things like developing specific audio features for the game on the programmer side, or those waterfall tasks described for dialogue above such as ‘casting’ and ‘recording’. Long-term iterative tasks are those ongoing areas that will always be changing as development trundles along, yet they remain as high-level as possible in terms of description, purely to allow the PMs and scrum tasks to have insight without worrying about every single piece of minutiae about the work.


So, there is a difference here that needs to be addressed, and that is one of TASK SCALE.

I’ve found a useful way of thinking about tasks, and the resolution of tasks, is as a kind of atomic scale, like zooming in on that photo in Blade Runner. I reckon there are (at least) three useful levels at which to consider tasks…

1) The Molecular/Atomic Level, the detailed / nitty-gritty level – individual wave files, event triggers, volume and pitch attenuations, all the tweaking-knob-twiddling detail of any particular task; the smallest components.

2) The Object Level, the level at which you can give things names that other disciplines can relate to – weapon x, music cue y, location z.

3) The Project (or ‘Feature’) Level, as described above, the large high-level PM way to look at and break down tasks, music, ambience, prop effects, UI sound, mix – big things that you can talk about holistically – the ‘Features’ that make up the back of the box, exec summary type-stuff.
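These three levels can be sketched as a simple tree, where each node knows its scale and only the Project and Object levels are ever surfaced to a cross-discipline tracker. Everything here (the `Task` class, the example names) is an illustrative assumption, not a real tool or project breakdown.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str
    scale: str                      # "project", "object", or "molecular"
    children: List["Task"] = field(default_factory=list)

def trackable(task):
    """Yield only the tasks worth surfacing in a cross-discipline
    tracker: project and object level, never molecular detail."""
    if task.scale in ("project", "object"):
        yield task
        for child in task.children:
            yield from trackable(child)

# Hypothetical breakdown for illustration only.
weapons = Task("Weapons audio", "project", [
    Task("Pistol", "object", [
        Task("fire_01.wav pitch variation", "molecular"),
        Task("reload event trigger", "molecular"),
    ]),
    Task("Shotgun", "object"),
])

print([t.name for t in trackable(weapons)])
# -> ['Weapons audio', 'Pistol', 'Shotgun']
```

The molecular children still exist (the owner iterates on them freely), but the filter keeps them out of stand-ups and burndown charts, which is the division of visibility argued for below.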

The Molecular/Atomic Level is not something I think works well with tracking at all, so I don’t think it really works with Agile methods; it is too granular (and, in fact, too agile), and it is always the kind of stuff that discussions need to take ‘off-line’ and collaborate on elsewhere. This is the level of detail that I hinted at earlier as being something an individual has ownership of and autonomy over – but the ability to ‘scale up’ and talk about and update on those details at the Object Level IS important…

The Object Level. This is a category of tasks that I believe can be tracked nicely with Agile methodology (or any other tracking method), and lends itself well to x-discipline group exposure. Discussions are tangible and not too technical, details can be figured out offline, and progress can be tracked, e.g. ‘The sound for 10 weapon classes is on schedule for the end of the week’, ‘The music cues for missions x, y and z are now implemented and ready for feedback/testing’.

The Feature/Project Level. This is a PM, or Audio Director / Lead / Exec level perspective and often requires a solid knowledge of where everything is on the object level (depending on your role). When things change on this level, they ripple down to the levels below in a big way, via either extra polish time, additional levels, objects or animations, or reduced scope.

All these TASK levels relate to one another, but depending on another SCALE – the scale of production – the amount of sound resources required to handle them effectively changes a lot.

There are two ways a project and its resources can be scaled. A project can either have DEPTH, or BREADTH.

A racing game with very few tracks but a massive number of vehicle types could be considered a game with shallow feature sets and DEEP content. A third-person linear adventure game, with lots of variety in the different locations and player activities throughout the experience (I’m thinking Uncharted, Arkham Asylum etc.), could be said to be a game with a BREADTH of features and content, but only one or two DEEP mechanics. An open world game such as GTA, Assassin’s Creed or Saints Row could be said to have both BREADTH (tons of features like driving, shooting, hand-to-hand combat, navigation) and DEPTH (tons of variety in vehicles, weapons, mission types and locations). A mobile title such as Angry Birds or Candy Crush would, by contrast, have comparatively shallow Depth and Breadth. (There is also the factor of TIME, which for now I am not considering, but which will make a large impact on the amount of sound personnel required.)

This is where sound personnel matters a lot, and how they are organized to handle TASK SCALE levels becomes important. An open world title might require several sound designers and implementers with ownership over several broad areas of the game, all handling both Molecular and Object Level tasks and require an Audio Lead to monitor and connect that team to the Project Level Tasks. In a small mobile game studio, one person may be handling all of the task levels themselves.

Attacking a project on all of these levels is kind of what I mean by “Attack on All Fronts”. When a painter or sculptor works, there is often a quick and meditative (subconscious) interplay between the molecular-level detail and the big-picture level, going back and forth very quickly and rapidly iterating using the material they work with. This is how working on a mobile title can feel when you are a single-person audio department. The Object Level completely disappears, and there feels like little need to track anything, or communicate anything, because the work is so fluid. This can lead to problems in communication, and obviously game development is very different to painting in that there is a team involved in the creation process. For larger teams, that middle step of thinking about the Object Level tasks becomes a pivotal part of the process, where high-level and detail-level thinking can interface, and I believe that level is one of the best, and most collaborative, places to discuss the project. Sometimes it is easy to lose sight of the process and the levels of tasks and responsibility…

At certain points in full production, for long, long periods, we often find ourselves simply attacking stuff. As it gets added, we attack it. As it changes, our sounds become out of date, and we attack it again. This is a full-frontal assault on all aspects of the game from all angles – weapons, HUD, foley, ambience, music, voice, UI – and that ongoing, iterative approach can become the fog of war that is “Attacking on All Fronts”. This can be a long period of iteration, and I think it can be crucial in producing good work, provided that the relationship between personnel and task scale is maintained. Through iteration, the more something changes, the better it gets; sometimes it gets worse before it gets better, and sometimes it just gets cut. In almost all cases, it is better for the project.

Up at the Project Level it is an incredibly open and deliberately anti-detail process. This allows you to attack the project in clever ways, via scope for example. If you have ownership over weapons and UI at the Object Level, then your long-term task, for the entirety of production, is to attack those areas as they change on the x-discipline level. The freedom in how to approach that iteration lives at the Molecular Level. Molecular-level scope is also left totally open and free for the person running that section. In the end, a focus on the end user at all levels provides the incentive for ways to either add to or remove from the work required. (Removal of sound is a huge area of sound design and iteration, and not one that you’d instantly think of as something you’d schedule for. It’s almost like scheduling anti-work.)

Attack on all fronts at all task levels is closer to the process of a painter slowly building up paint on a canvas, or a sculptor working constantly on a sculpture: the feedback is very instant, and as things change, the overall ‘work’ begins to form. It changes as it forms, and it is important that the big picture is tracked as much as the details.

Up at the Project Level, these long-term tasks will eventually turn into short-term polish tasks, or post production tasks, whereby they are pre-mixed, finalized and everything starts to solidify and get ready for the final mix. As long as these dates are clear, I find you can work in a totally flexible and free manner until you need to switch gears.

Once post-production, or sound alpha, or sound beta is reached (terminology differs here between developers), the Attack on All Fronts approach continues, but it moves into a different gear: polish and removal. Removal, again, is as big a part of the iterative process, particularly in terms of Object Level areas like weapons, but when all these different areas come together and are presented in context with one another, a new level of removal needs to take place. This is where things again really get focussed on the player, and the end user experience. Any clutter or sound that is getting in the way of the experience is removed, or diminished, or mixed in such a way that the experience becomes more honed and focussed. This is essentially the job of the final mix, but a process of sound effects replacement, premixing, polish and cutting, all informed by a period of intense scrutiny and reviews, also takes the foreground.

Defining the periods of rigidity and the periods of agility at the Project Level seems to be very important in being able to both control and let go of the game development process for audio. To deliver, and also to maintain a degree of freedom and opportunity inside various tasks, feels important. I think that most tasks, even those as rigid and industrial as dialogue production, can be broken up into short-term and long-term tasks, with corresponding levels of freedom, experimentation, detail and overview.

Big-picture tracking is a fairly difficult thing to quantify and relate to; it is based on constant review and constant iteration, and a great deal on a ‘feel’ for when something is right in x-discipline context, rather than on just running out of time (although that can put an end to the process too) or being buried in checking small tasks off a checklist. The structures I’ve tried to pin down here are ways of having STRUCTURE via the schedule and tasks at the Project Level, TRACKABILITY at the Object Level, and FREEDOM via the open nature of tasks at the Molecular Level.

As already stated, game development projects tend to be very uneven and even ‘chaotic’. I like some of the agile systems as loose frameworks for certain kinds of tasks and for inter-departmental awareness and communication. Finding a balance that works for the project, culture and personnel you have often means a lot of mixing and matching, and a high degree of trial and error to see what does and doesn’t work. But I think one step towards that balance is identifying task scale – something it can feel easy to get lost inside, given the complexity and constantly changing elements of production.

(excerpt from the afterword of ‘Game Audio Culture’)

It is no longer enough to simply have a good sounding game. It is no longer enough to be able to produce great sounds, or great music, or great speech. This is the basic starting position that I believe sound has explored for the majority of the 20th century, and from which our industry now has the opportunity to grow. Sound, indeed any discipline, should now be approached from a completely fresh starting point – from day one, as an integral part of the design process. There are no excuses: if this is not how your organization is set up, then it is up to you to start the process as soon as you possibly can. In the 21st century the sound artist, no matter what kind of game or product they are working on, is to be a true multidimensional problem solver and innovator. This imperative is everywhere we look today; the primary thrust of technology is to enable collaboration, visibility and transparency – clearly it is trying to fix something that is broken. With mixing, it is no longer enough to simply mix a game; the final physical act of moving faders comes when the opportunities for mix decisions are almost all closed off – in order to truly influence mix decisions, to nurture mix moments and strategies from concept through to final, you need to be there at the beginning. With sound design, with composition, with dialogue, every area under the sound umbrella works in exactly the same way. Sound is a by-product of design decision making; there is little room afterwards for maneuverability – and it is the opportunities for amazing sound design that are most lamented through this segregation/waterfall approach. Every area of specialization will need to undergo this transformation. Sound, art, design and technology all form the moving and interrelated parts of a user experience (is it almost time to rebrand ourselves as UX designers?) – this context is how sound must be able to think of itself, and all the interactions of the sound designer must fulfill and resonate among these inter-dependencies and interrelationships. Our responsibility is to be mutually accountable for all the other disciplines’ successes and/or failures, and they similarly for ours. The incredible sound and co-ordination in ‘The Last of Us’ wouldn’t have been possible without the opportunities provided for the sound team by design, art and the creative director, but it was the sound team’s opportunism and ability to rise to those challenges that made this one particular example (as I write this) shine out above all others so far in 2013.

How do we do this? As sound designers, as audio directors, as freelance content creators? It isn’t something any of us have much experience in, and I say this because every single game, every single team and every single opportunity is completely new and different… and it really should be approached in this way. Though I do believe the way to start down this path is simple. It is all about the relationships and trust that we have with other people within a team, creative or otherwise. These relationships are entirely defined by trust; this is never about talent, and rarely about experience – unless it is the experience of letting go – and the collaborative motivation is one which we can foreground above all else, and learn to foreground on a daily basis. The sooner we become integrated, as early as possible, into the veins of the process, and become as trusted a design collaborator as possible, the better for not just the craft of sound, but for the craft of interactive design as a whole. Being a sound designer isn’t about making great sound; it is about making great games, simply by using sound to help solve design problems.

Naming conventions are one of the cornerstones of an organized project, and whatever your discipline, a solid, sensible naming convention will save your skin as you hit full-on production. But just because you established one early in development doesn’t mean it won’t be challenged. What happens when your naming convention is backed into a corner by the changes and pressures of real-life production: changes that demand fast file searching and a flexible approach that won’t break when new assets are thrown at it? The problems of an impractical naming convention are especially evident in dialogue file naming, where I’ve found a couple of simple approaches to be extremely useful.

Firstly, I’ve found it incredibly useful to think about how filenames actually appear when sorted in a file browser or folder hierarchy. For example, this list of cutscene lines between two characters will display poorly for anyone who actually uses these files…
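Since the original example list isn’t reproduced here, the following invented filenames (hypothetical characters and names, not from the original post) illustrate the pattern: a character-first convention, shown in the order the lines are spoken in the script.

```python
# Hypothetical cutscene dialogue filenames (invented for illustration),
# named character-first and listed in the narrative order of the script.
script_order = [
    "anna_cutscene01_line01.wav",
    "bill_cutscene01_line02.wav",
    "anna_cutscene01_line03.wav",
    "bill_cutscene01_line04.wav",
]

# the display order in a file browser will NOT match this narrative order
assert sorted(script_order) != script_order
```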

When these lines are taken off their script page and placed into an OS file browser or other software, whether for editing, implementation or debugging, they will arrange themselves like this…
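Using the same invented filenames, a lexicographic sort (which is what an OS file browser does) illustrates the problem:

```python
# The same hypothetical filenames sorted lexicographically, as a file
# browser would display them: each character's lines clump together
# and the back-and-forth narrative order is lost.
script_order = [
    "anna_cutscene01_line01.wav",
    "bill_cutscene01_line02.wav",
    "anna_cutscene01_line03.wav",
    "bill_cutscene01_line04.wav",
]
displayed = sorted(script_order)

# all of anna's lines now come before any of bill's
assert displayed == [
    "anna_cutscene01_line01.wav",
    "anna_cutscene01_line03.wav",
    "bill_cutscene01_line02.wav",
    "bill_cutscene01_line04.wav",
]
```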

Nasty! Not being able to hear or see these files in the narrative order in which they will be played back makes them very difficult to work with. Narrative order should always, where possible, be considered as a way of prioritizing and expressing a naming convention, especially when you are dealing with tens of thousands of files. Simply leading with the line number would solve this:
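With invented names again, leading with the cutscene and line number makes the sort order and the narrative order identical:

```python
# The same hypothetical lines renamed with the line number leading:
# lexicographic order now equals narrative order.
narrative = [
    "cutscene01_line01_anna.wav",
    "cutscene01_line02_bill.wav",
    "cutscene01_line03_anna.wav",
    "cutscene01_line04_bill.wav",
]

# sort order == playback order, so browsing the folder reads like the script
assert sorted(narrative) == narrative
```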

The same can be said of context grouping for AI dialogue. Grouping can be based on context, character name, variation number and so on. Here, I usually go with character name first, as this is how much of the dialogue I have worked with has been organized and prioritized. For example:
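A hypothetical sketch of this character-first scheme (character and context names are invented, not from the original post), with the pattern character_context_variation:

```python
# Hypothetical AI dialogue filenames: character name leads, then
# context, then variation number.
ai_lines = [
    "guard_alert_01.wav",
    "guard_alert_02.wav",
    "guard_idle_01.wav",
    "medic_alert_01.wav",
    "medic_idle_01.wav",
]

# a lexicographic sort keeps each character's content together,
# sub-grouped by context, then by variation
assert sorted(ai_lines) == ai_lines
```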

Leading with the character name gathers all of that character’s content together first, and the context of the AI dialogue then ensures the lines are grouped accordingly. This not only makes it easier to quickly move around large chunks of data, and large subsets of that data, but also allows quick scrubbing around a file directory for the required files.

Aside from thinking about filenames at this organizational, narrative or contextual level of hierarchy, one of the most important elements of filename design and information hierarchy has been to build change and iteration into the numbering itself. This is a very simple file-naming trick which I picked up from the awesome Creative Director John Howard when we collaborated on Prototype 2.

The notion is deceptively simple. Knowing in advance that lines of dialogue will constantly be added as production moves forward, having to rename a whole section of the game’s dialogue because a new scene or line has been inserted can be quite destructive to production, and to an implementer’s ability to make those additions and changes without resistance from the naming framework. The way around this is, rather than numbering 01, 02, 03 (or even 001 with leading zeros), to allow an extra digit in all numerical file naming and count in tens, like this…
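A sketch of the idea with invented names, counting line numbers in tens so that gaps remain for later insertions:

```python
# Hypothetical dialogue numbering with an extra digit, counting in
# tens (010, 020, 030...) rather than ones, leaving deliberate gaps
# for lines that will be added later.
cutscene_010 = [
    "cutscene010_line010.wav",
    "cutscene010_line020.wav",
    "cutscene010_line030.wav",
]

# sort order still matches narrative order
assert sorted(cutscene_010) == cutscene_010
```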

This convention now allows us not only to add new lines in between the lines we already have, without renaming any of the existing ones, but also to easily add entirely new cutscenes. Here is how we add a couple of lines…
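Continuing the invented example, two new lines can be slotted in between the existing ones without touching any existing filename:

```python
# Hypothetical insertion of two new lines between existing ones: the
# original files keep their names, and the sort order stays correct.
original = [
    "cutscene010_line010.wav",
    "cutscene010_line020.wav",
    "cutscene010_line030.wav",
]
added = [
    "cutscene010_line011.wav",  # new line between 010 and 020
    "cutscene010_line025.wav",  # new line between 020 and 030
]

combined = sorted(original + added)
assert combined == [
    "cutscene010_line010.wav",
    "cutscene010_line011.wav",
    "cutscene010_line020.wav",
    "cutscene010_line025.wav",
    "cutscene010_line030.wav",
]
```

The same digit of headroom works at the cutscene level: scenes 010 and 020 leave room for nine new scenes between them.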

Similarly, if we have two cutscenes, 010 and 020, we can easily accommodate nine brand-new cutscenes between them with this future-proof naming convention.

This small addition to the numerical values used in naming dialogue files has made life a lot easier during iterative development across several titles now, and I’m happy to recommend it and pass it along. Obviously, these examples are not only relevant to dialogue; they can easily be applied to any large dataset, such as interactive music cues.

Happy file hunting!

I’m not interested in sound design any more.

That is to say, sound design for its own sake, edification, gratification or aggrandizement; the single-dimensional approach, to the exclusion of all else. I am interested in design as a whole: in how all the elements, the craft and the process of a soundtrack can become an un-subtractable piece of the total design. I’m extremely and endlessly fascinated by this wider design discipline, in presentation, in timing, in how the game, cut-scene or gameplay is perceived, as a whole, by a player. For many years, perhaps understandably as disenfranchised members of production teams, we sound folks have elevated ourselves and our teams under the banner of ‘50% of the experience is sound’, and while this elevation has served its purpose in revealing the importance of sound to an experience, it is merely didactic…

100% of the experience is the experience.

Sound is nothing without collaborators, nothing without an artistic direction, without a narrative, without contexts and structures that run far deeper than the sonic; we need something for our echo to bounce back from. Likewise, a narrative is nothing without voice performance, weighted gameplay nothing without a musical heartbeat, a visceral jaw-dropping moment nothing without purposefully tuned and timed sound effects. Sound can help solve problems, fix holes, mend and bend intentions, but only when it is fully involved.

We are heading into the post-sound-design era. This is an era no longer obsessed with the ‘neglected’ art of a film or game soundtrack; an era in which sound designers are comfortable and confident, and fulfill their role as co-designers on a product team. The focus of any sound designer’s work should be as a principal collaborator, not just on the overall project, but artistically, technically, socially and politically in the development of company culture. We should expect and ask for nothing less than the full, unequivocal involvement and respect afforded to art directors, design directors or technical directors. We need to move beyond, in our thinking and in our daily transactions, the disciplinary segregation of end-of-the-production-line thinking, and push confidently into the open role of collaborator, be that with clients, departments or our audiences.

This move has been made possible, in both film and games production, by a shift in attitude towards collaboration with sound, and towards the necessity of improving the overall design of audio-visual productions. Neglecting sound in almost any (major) discussion is a neglect of the fundamentals of good design, and not only of good design, but also of good business.

This era is also about moving away from the self-imposed limits of technology, towards a more balanced approach in which technology (or, the skills necessary to produce the work required) plays an equal role to aesthetic concerns and social skills. These three parts (art, tech, social) form the key skills required by a designer of any discipline working in any medium today. Young sound designers need to learn quickly to move and think beyond the latest technology, as every day there will be something new to distract you from the actual work of ‘designing’ sound to solve the specific design problems on your productions.

In the end, the work is about people, relationships, teams, networks, communities, and mutual accountability.

I fully anticipate this post-sound-design era to be exemplified by the upward movement of ‘sound designers’ into much broader creative roles on teams, becoming creative directors, producers and directors themselves. This feels like the natural next step in the evolution of fully integrated design and practice. In the past, wherever there has been a lack of understanding, missed opportunities or a lack of innovation, there have always been those who stepped up to fill the void and lead by example. This may not be everyone’s journey or destiny, but I hope it is the destiny of the art forms we love, and many of us will play, or are already unwittingly playing, a key role through the games we help design.