Blue sky out here this morning, so I'm having a Blue Sky day. *Update at foot of the article…! For the last couple of years now I've been using Wwise to mix games. The run-time environment and routing is fast, intuitive and extremely flexible. Bus hierarchies and behaviours are some of the best control mechanisms I've found anywhere in any toolset for live mixing, and the suite of debug tools, like the profiler and the Voice Monitor, offers essential visualization of data flow: memory, voice resources, loudness and so on. The Voice Monitor window in combination with the capture log is fantastic; however, with a few extra features, it could really change video game mixing workflows. When you start a capture while connected to a live game, the Voice Monitor visualizes all the triggered events in the form of a timeline, and it is this timeline visualization of events that is the key to further developing this window's functionality…


The resulting timeline is a history of all the events being started or stopped: when they were triggered, how long they lasted, whether they faded in or out, as well as their voice volume at that specific time. It is essentially a timeline recording of the game sound, and as you play through a connected game, every event that the audio engine triggers is reported and shown in the window. Let's imagine for a second that Audiokinetic extended this feature and allowed users to 'record', save (and even edit) these as 'capture sessions' (a new folder in the 'Sessions' tab could be home to these). This would allow multiple users to exchange these files and play them back in their Wwise project. A saved 'capture session' would become a kind of 'piano roll' style event recording of a game playthrough, almost like a MIDI file, and here is where the real benefits for mixing come in: different people inside (or outside) the developer, with play styles different from the audio implementer's or mixer's, could play through the game, and a mixer could then mix the game effectively for all of these styles by playing back their performances inside Wwise and tweaking accordingly. This is a kind of 'performance capture' of the player.
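
To make the idea concrete, here is a minimal sketch of what such a capture session might look like as data: a list of timestamped audio commands that can be replayed in order. Everything here is hypothetical and purely illustrative – the structs and the ReplayCapture function are invented for this post, not part of the Wwise SDK or authoring tool.

```cpp
#include <chrono>
#include <string>
#include <thread>
#include <vector>

// One entry in a hypothetical capture session: the moment an audio
// command fired, which command it was, and the parameters it carried.
struct CaptureEntry {
    double      timeSeconds;   // offset from the start of the capture
    std::string command;       // e.g. "PostEvent", "StopEvent", "SetRTPC"
    std::string name;          // event or parameter name, e.g. "Play_Footstep"
    double      value;         // RTPC value, fade time, etc. (0 if unused)
};

// A capture session is just the recorded command list, ready to be
// exchanged between users and replayed against a project.
struct CaptureSession {
    std::vector<CaptureEntry> entries;  // assumed sorted by timeSeconds
};

// Replay the session in real time by re-issuing each command at its
// original offset. 'issue' stands in for whatever the tool or sound
// engine would actually do with the command (post the event, set the
// RTPC, and so on).
void ReplayCapture(const CaptureSession& session,
                   void (*issue)(const CaptureEntry&)) {
    using clock = std::chrono::steady_clock;
    const auto start = clock::now();
    for (const CaptureEntry& entry : session.entries) {
        const auto due = start + std::chrono::duration<double>(entry.timeSeconds);
        std::this_thread::sleep_until(due);  // wait until the original trigger time
        issue(entry);                        // re-fire the command
    }
}
```

Replaying the same session against a tweaked project is what would let a mixer iterate on someone else's playthrough without that person in the room, which is exactly what makes the MIDI-file analogy useful.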

Player Styles

I used to see the impact different play styles had on a mix quite often while working on open-world games such as [Prototype], Scarface and [Prototype 2]. We'd spend a lot of time mixing the game with the audio team, slowly creeping around inside the game environment, making sure everything that needed a sound had a sound and that it was all balanced. We also made sure that we were following what we thought were all the correct mission objectives and getting to all the way-points as directed to complete the missions. The game started to sound pretty good and pretty polished. Enter our QA lead to drive the game for a mix review with the leads or for a demo of the game to journalists. This guy played totally differently to us, moved through levels completely differently, had a completely unique, insane combat style, and every time he played he'd miss out all the low-level subtle stuff we'd spent a lot of time on and go for big, flamboyant, flourish-filled ways of showing off the game. Jaws dropped. We could never play the game this way AND mix at the same time. This highlighted that mixing open-world games (or really *any* game) wasn't going to be as straightforward as we'd first thought – sure, we *knew* there wasn't just one path through the experience, but we didn't know just how different that path could be.

I've often wondered how we could best capture and replicate different play styles without having to constantly pester people to come in and play for us while we tune the mix. Imagine we could spend a day getting 10–15 players with varying play styles to go through levels in the game while we recorded their playthroughs as event-based capture sessions – sessions that let us later replay those event logs in real time, see and hear exactly which events were triggering, scrub back and forth, zoom in on details, zoom back out to consider the big picture of the mix, and re-engineer and re-mix the audio. It would enable us to mix faster, mix for more interactive, player-focussed outcomes, and not have that odd feeling in our guts that the next time we see our shipped game in a Twitch stream, it might sound kind of 'not good'.

Video Sync

Another missing piece of this proposed capture system would be the ability to drop a video file of the same captured gameplay onto the Voice Monitor timeline and slide it around to sync up with the event log. Saving out these capture sessions would then also include a reference to a video file, so that whoever was tasked with mixing or pre-mixing the level or project would have both the visual reference from the game and all the audio inside the Wwise project to work with and tweak. (Capturing video of iOS output is now easy thanks to the updates in the latest OS X, and capturing from console or screen-capturing via a computer is also fairly trivial these days. I don't anticipate Wwise being able to capture video itself, but it would be great to be able to drop in and sync up a video on the Voice Monitor timeline.) Another thing this kind of capture-session playback mixing would enable is the ability for developers to hand capture sessions over to outside studios, which, with access to the relevant Wwise project, could then hear the game in a calibrated environment – particularly valuable if those facilities are not available at the development location. Allowing fresh sets of ears to assess a game's mix is an essential aspect of game mixing, and using these forms of captured gameplay would be an extremely cool way to achieve this.
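
As for the video sync itself, the mechanism could be as simple as storing the video path plus a single offset in the capture session. A tiny sketch of that idea – the names here (VideoReference, VideoTimeFor) are invented for illustration and are not part of any Wwise toolset:

```cpp
#include <string>

// Hypothetical addition to the capture session from the earlier sketch:
// a reference to the gameplay video and the offset the mixer dialled in
// when sliding it into sync on the timeline.
struct VideoReference {
    std::string path;          // e.g. "captures/qa_lead_playthrough.mov" (placeholder)
    double      offsetSeconds; // video time that corresponds to capture time 0
};

// Map a position on the event timeline to the matching position in the
// video, so scrubbing the capture also scrubs the footage.
double VideoTimeFor(const VideoReference& video, double captureTimeSeconds) {
    return captureTimeSeconds + video.offsetSeconds;
}
```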

There are also obvious benefits for extremely large teams who need to co-ordinate a final mix across multiple studios, something that could perhaps be achieved more easily by passing around these Wwise event capture sessions. There are certainly proprietary solutions that already take this recording and playback approach, not just for audio but for all debugging; alas, these only benefit those with access inside their organizations. Similar reporting and visualization exists in other third-party engines too, and as these systems are already mostly developed along the right lines, with some extra functionality they could accomplish some of what I am thinking about here.

So, I don't think it will be long before these kinds of log recording, playback and editing features become a reality within interactive mixing and game development.

*Update: unknown to me, literally while I was writing this blog post, this kind of feature became available over in FMOD Studio. From Brett… "This is possible with FMOD Studio 1.06 (Released 10th April 2015). You can connect to the game, do a profile capture session, and save it. To begin with, the mixed output (sample accurate pcm dump) is available, and you can scrub back and forth with it while looking at all events and their parameter values. The next bit is the bit you're after though. You can click the 'api' button in the profiler which plays back the command list instead of the pcm dump, so you get a command playback (midi style). If you want, you can take that to another machine that doesn't have the game, and change the foot steps for a character to duck quacks if you want. You can do anything with it." From the 1.06 release notes:

* Added support for capturing API command data via the profiler, when
  connected to game/sandbox
* Added support for playback of API captures within the profiler, allowing
  you to re-audition the session against the currently built banks
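
Brett's description is about the tool side; on the game side, all the profiler needs in order to connect and capture is that the FMOD Studio system is initialized with live update enabled. A minimal sketch of that setup against the public FMOD Studio C++ API – error checking omitted, and the bank and event names are placeholders rather than anything from a real project:

```cpp
#include <fmod_studio.hpp>

int main() {
    // Create the FMOD Studio system and initialize it with live update enabled,
    // which is what allows the Studio tool's profiler to connect to the running
    // game and record a capture session. (Error checking omitted for brevity.)
    FMOD::Studio::System* system = nullptr;
    FMOD::Studio::System::create(&system);
    system->initialize(512,                          // max channels
                       FMOD_STUDIO_INIT_LIVEUPDATE,  // allow tool connection / profiling
                       FMOD_INIT_NORMAL,
                       nullptr);

    // Load the built banks; the strings bank lets us look events up by path.
    // Bank file names are placeholders for whatever the project actually builds.
    FMOD::Studio::Bank* masterBank  = nullptr;
    FMOD::Studio::Bank* stringsBank = nullptr;
    system->loadBankFile("Master Bank.bank", FMOD_STUDIO_LOAD_BANK_NORMAL, &masterBank);
    system->loadBankFile("Master Bank.strings.bank", FMOD_STUDIO_LOAD_BANK_NORMAL, &stringsBank);

    // Trigger an event as the game normally would; with the profiler connected,
    // this command ends up in the capture and can be re-auditioned later.
    FMOD::Studio::EventDescription* description = nullptr;
    system->getEvent("event:/Footsteps", &description);  // hypothetical event path
    FMOD::Studio::EventInstance* instance = nullptr;
    description->createInstance(&instance);
    instance->start();
    instance->release();  // the event keeps playing; we just drop our handle

    // Pump the system as a game loop would, once per frame.
    for (int frame = 0; frame < 600; ++frame) {
        system->update();
    }

    system->release();
    return 0;
}
```

The 'api' capture Brett describes records exactly these kinds of calls as they happen, which is what makes it possible to play them back later against the currently built banks.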

(excerpt from the afterword of ‘Game Audio Culture’)

It is no longer enough to simply have a good-sounding game. It is no longer enough to be able to produce great sounds, or great music, or great speech. This is the basic starting position that I believe sound has explored for the majority of the 20th century, and from which our industry now has the opportunity to grow. Sound, indeed any discipline, should now be approached from a completely fresh starting point – from day one, as an integral part of the design process. There are no excuses: if this is not how your organization is set up, then it is up to you to start the process as soon as you possibly can. In the 21st century the sound artist, no matter what kind of game or product they are working on, is to be a true multidimensional problem solver and innovator. This imperative is everywhere we look today: the primary thrust of technology is to enable collaboration, visibility and transparency – clearly it is trying to fix something that is broken. With mixing, it is no longer enough to simply mix a game; the final physical act of moving faders comes when the opportunities for mix decisions are almost all closed off – in order to truly influence mix decisions, to nurture mix moments and strategies from concept through to final, you need to be there at the beginning. With sound design, with composition, with dialogue, every area under the sound umbrella works in exactly the same way. Sound is a by-product of design decision-making; there is little room afterwards for maneuverability – and it is the opportunities for amazing sound design that are most lamented under this segregated, waterfall approach. Every area of specialization will need to undergo this transformation. Sound, art, design and technology all form the moving and interrelated parts of a user experience (is it almost time to rebrand ourselves as UX designers?) – this is the context in which sound must be able to think of itself, and all the interactions of the sound designer must fulfill and resonate among these interdependencies and interrelationships. Our responsibility is to be mutually accountable for all the other disciplines' successes and/or failures, and they similarly for ours. The incredible sound and co-ordination in 'The Last of Us' wouldn't have been possible without the opportunities provided for the sound team by design, art and the creative director, but it was the sound team's opportunism and ability to rise to the challenges that made this one particular example (as I write this) shine out above all others so far in 2013.

How do we do this? As sound designers, as audio directors, as freelance content creators? It isn't something that any of us really have much experience in, and I say this because every single game, every single team and every single opportunity is completely new and different… and it really should be approached in this way. Though I do believe the way to start down this path is simple: it is all about the relationships and trust that we have with other people within a team, creative or otherwise. These relationships are entirely defined by trust; this is never about talent, and rarely about experience – unless it is the experience of letting go – and the collaborative motivation is one we can foreground above all else, and learn to foreground on a daily basis. The sooner we become as integrated as possible, as early as possible, into the veins of the process, and as trusted a design collaborator as possible in the development process, the better – not just for the craft of sound, but for the craft of interactive design as a whole. Being a sound designer isn't about making great sound; it is about making great games, simply by using sound to help solve design problems.

I’m not interested in sound design any more.

That is to say, sound design for its own sake, edification, gratification or aggrandizement – the single-dimensional approach, to the exclusion of all else. I am interested in design as a whole: how all the elements, the craft and the process of a soundtrack can become an un-subtractable piece of the total design. I'm extremely and endlessly fascinated by this wider design discipline – in presentation, in timing, in how the game, cut-scene or game-play is perceived, as a whole, by a player. For many years, perhaps understandably as disenfranchised members of production teams, we sound folks have elevated ourselves and our teams under the banner of '50% of the experience is sound', and while this elevation has served its purpose in revealing the importance of sound to an experience, it is merely didactic…

100% of the experience is the experience.

Sound is nothing without collaborators, nothing without an artistic direction, without a narrative, without contexts and structures that run far deeper than the sonic – we need something for our echo to bounce back from. Likewise, a narrative is nothing without voice performance, weighted gameplay nothing without a musical heartbeat, a visceral jaw-dropping moment nothing without purposefully tuned and timed sound effects. Sound can help solve problems, fix holes, mend and bend intentions, but only when it is fully involved.

We are heading into the post-sound design era. This is an era no longer obsessed with the 'neglected' art of a film or game soundtrack. An era in which sound designers are comfortable, confident and fulfill their role as co-designers on a product team. The focus of any sound designer's work should be as a principal collaborator, not just on the overall project, but artistically, technically, socially and politically in the development of company culture. We should expect and ask nothing less than the full, unequivocal involvement and respect afforded to art directors, design directors or technical directors. We need to move beyond, in our thinking and in our daily transactions, the disciplinary segregation of end-of-the-production-line thinking, and push confidently into the open role of collaborator, be that with clients, departments, or our audiences.

This move has been made possible in both film and games production by a shift in attitude towards collaboration with sound, and by the necessity of improving the overall design of audio-visual productions. Neglecting sound in almost any (major) discussion is a neglect of the fundamentals of good design – and not only a neglect of design, but also of good business.

This era is also as much about moving away from the self-imposed limits of technology and towards a more balanced approach, where technology (or the skills necessary to produce the required work) plays an equal part to aesthetic concerns and social skills. These three parts (art, tech, social) form the key skills required by a designer of any discipline working in any medium today. Young sound designers need to learn quickly to move and think beyond the latest technology, as every day there will be something new to distract them from the actual work of 'designing' sound to solve the specific design problems on their productions.

In the end, the work is about people, relationships, teams, networks, communities, and mutual accountability.

I fully anticipate this post-sound design era to be exemplified by the upward movement of ‘sound designers’ into much broader creative roles on teams, becoming creative directors, producers and directors themselves. This feels like the natural next-step for the evolution of fully-integrated design and practice. In the past, wherever there has been lack of understanding, missed opportunities, lack of innovation, there have always been those who stepped up to fill the void and lead by example. This may not be everyone’s journey or destiny, but I hope it is the destiny for the art forms we love, and many of us will play, or are already unwittingly playing, a key role through the games we help design.