
It is Monday morning, so I thought I'd put together something I've been meaning to do for a while: a process document detailing some of the high-level decision making and processes that go into the creation and implementation of sound for a game, from asset to code.

[Figure: Audio Iteration Process flowchart]

Doing this highlights the importance of a generalist skill set in game audio (whether you are looking to get into game audio or to grow existing skill-set areas). Not only are there groups of very specific processes – the recording and editing block at the top of the document (in RED) and the implementation block towards the bottom (in GREEN) – but the model also depends on a complete interconnectedness of social relations and collaboration (decisions, reviews, communication) in order to work at all.

Now, this document was put together with sound design and implementation in mind, but I think it is every bit as applicable to MUSIC and VOICE production. I also think that viewing the processes and decision making like this makes it very clear where our production and collaboration processes can be improved (fewer implementation steps across separate pieces of software is always a goal). A voice workflow, for example, often iterates at the RECORDING stage (getting many takes of the same lines in different ways to give more choices later on) rather than at the REVIEW stage (although callbacks and re-writes have become more commonplace), meaning that hearing voice IN CONTEXT and making review and direction decisions is less driven by a context-led rationale than it is in sound FX design. There are many industrial reasons for this difference, but opening up the FX iteration path visually certainly lets us see where we might innovate and improve some of the more rigid industrial structures that are imposed, rather than designed.

Another area I wanted this to highlight is the ITERATION process. This is the most fundamental part of the whole thing; in fact, it is the REVIEW & ITERATION cycle that drives the whole model. Until you get a sound into the game, triggering and playing back, you can never know whether it is doing its job. Nine times out of ten it is not, or it could be improved with a tweak of some kind; there is always something that needs doing. Sometimes that means re-recording new material, which sends you back to the beginning of the process. Sometimes it means revisiting assets in the sound library, and sometimes it comes down to tweaking in the run-time realm of the game and audio engine. The idea is that the more this cycle repeats, the less often you have to revisit the areas near the beginning of the process, and the more time you can spend refining the run-time game parameter side. All iteration aims to refine what is there, and the sooner you can get ANYTHING into the game, the sooner you can start working at the run-time end.
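As a side-note, the more of this tuning you can push to the run-time end, the cheaper each lap of the cycle becomes. Here is a minimal C++ sketch of that idea (hypothetical names throughout, not any particular engine's API): trigger parameters live in a plain text file that the game re-reads, so a sound can be tweaked while the game is running rather than re-exported and rebuilt.

#include <fstream>
#include <iostream>
#include <string>

struct FootstepTuning {
    float volume = 1.0f;    // linear gain applied at trigger time
    float pitchVar = 0.0f;  // random pitch variation in semitones
};

// Re-read the tuning file each update; fall back to defaults if missing.
FootstepTuning loadTuning(const std::string& path) {
    FootstepTuning t;
    std::ifstream in(path);
    if (in) in >> t.volume >> t.pitchVar;
    return t;
}

int main() {
    // In a real engine this would sit inside the audio update loop.
    FootstepTuning t = loadTuning("footstep_tuning.txt");
    std::cout << "volume=" << t.volume << " pitchVar=" << t.pitchVar << "\n";
}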

Another thing to note is that there is no 'finished?' or 'complete?' stage in this process. That is simply because I don't think the process ever really ends until the game is ripped out of your hands. It constantly gets 'closer' to finished, particularly the more time you spend in the latter trigger stages of the flow, but it never really ends. Another reason is that the game itself is changing underneath your feet, so sounds and implementations are often required to change to 'keep up' with the current architectural and optimization snapshot of the game.

I was also writing about a hybrid procedural audio model on Friday. That is not accommodated in this flow, but it would either form a new path of procedural sound object creation and testing (replacing the RED path) or become part of the implementation (GREEN) path – ideally replacing the recording and editing stage entirely and shifting heavily towards a more implementation- and iteration-based flow.

I made the document in Lucidchart. It is awesome, free and very easy to use.


Sample-based vs Procedural: it's not quite as dramatic as an all-out death match between these two approaches and philosophies, even though the temptation is to see things in either/or, black-and-white terms.

One thought: procedural audio, even though it has been around for a while now, is still fledgling. Even though there are inherent 'cost' savings to using this method for sound generation and propagation (particularly in games with huge amounts of content), finding a home in a largely risk-averse entertainment software industry is a big ask while the applicable approaches still feel fundamentally 'experimental'. The thing I've come to realize, perhaps somewhat later than everyone else (and perhaps because of the 'either/or' polemics), is that a lot of the techniques and tools we are using are already in transition to a more procedural status.

This is just a quick categorization attempt that I wanted to get down before it evaporates with the rest of my thoughts and doodles on a Friday morning…

The Sample-Based Approach

Relying entirely on streaming or preloaded sample-based assets sitting on a disc.

(Most games of the PS2/PS3 generation and some mobile games today)

Re-triggering of pre-recorded material, usually wave file assets.
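To make the distinction concrete, here is a minimal sketch of the sample-based idea (all names hypothetical, not any particular engine's API): the run-time's only job is to pick one of several pre-recorded variations and re-trigger it; all of the sonic character was baked in offline.

#include <cstdlib>
#include <iostream>
#include <string>
#include <vector>

// Stand-in for the engine's voice allocation / streaming call.
void playSample(const std::string& wavePath) {
    std::cout << "triggering " << wavePath << "\n";
}

int main() {
    std::vector<std::string> footsteps = {
        "footstep_grass_01.wav",
        "footstep_grass_02.wav",
        "footstep_grass_03.wav"};
    // Random (or round-robin) selection to avoid the machine-gun effect.
    playSample(footsteps[std::rand() % footsteps.size()]);
}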

The Procedural Approach

Moving the sound generation effort from the disc (and the streaming throughput bandwidth) to the processor.

Synthesis-based sound objects, acoustic models, grain players, noise-shaping and DSP-intensive work – in essence, everything is generated at run-time, based on (hopefully) elegant, efficient and simple real-time models.

(Currently fringe-aesthetic games, some music-based games)
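By contrast, a toy example of the procedural idea (a sketch under my own assumptions, not production code): wind-like sound generated entirely at run-time by shaping white noise with a slowly modulated one-pole low-pass – no wave assets involved.

#include <cmath>
#include <cstdlib>
#include <vector>

const float kPi = 3.14159265f;

// Generate n samples of gusting, wind-like noise at run-time.
std::vector<float> windBuffer(int n, float sampleRate) {
    std::vector<float> out(n);
    float lp = 0.0f;
    for (int i = 0; i < n; ++i) {
        float noise = 2.0f * std::rand() / RAND_MAX - 1.0f;
        // A slow LFO sweeps the smoothing amount – the "gust" parameter.
        float gust = 0.5f + 0.45f * std::sin(2.0f * kPi * 0.25f * i / sampleRate);
        lp += (1.0f - gust) * (noise - lp);  // one-pole low-pass
        out[i] = lp;
    }
    return out;
}

int main() {
    std::vector<float> buf = windBuffer(48000, 48000.0f);  // one second at 48 kHz
    (void)buf;  // would be handed off to the mixer in a real engine
}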

For me, the process of just writing these two (admittedly loose) definitions down made me realize that any proposal to exclusively use either of these models would need to be either a) aesthetically niche or b) technically or artistically constrained in some way. And even though I tried to say definitively which games use these approaches, I think I'm on unsafe ground with my generalizations. It also made me realize that, of course, there is already a ton of crossover between these categories in most proprietary sound engines, and certainly inside middleware audio solutions. A purely sample-based approach is probably getting quite rare these days. So, are we in the midst of a hybrid approach without even really realizing it?

Hybrid Procedural Approach

(Most console games today)

A fundamentally sample-based approach, but one that goes much further on the implementation side than simple triggers: breaking sounds down into constituent molecules (granular) or small recognizable chunks (automatic weapons); parametrization of sound; sound 'shaping' in the form of procedural DSP used for 'additional layers' like reverbs, filters and flutter; perhaps a SoundSeed Air implementation in Wwise, but as a subtly mixed-in 'layer' rather than supplying the overall effect.
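A quick sketch of what I mean (hypothetical code, not any particular middleware's API): the body of the sound is still a pre-recorded sample, but a procedural noise layer is mixed in underneath, scaled by a game parameter – the way a SoundSeed-style layer might sit under a recorded wind bed.

#include <cstddef>
#include <cstdlib>
#include <vector>

// Mix a subtle procedural layer under a decoded sample buffer.
std::vector<float> renderHybrid(const std::vector<float>& sample,
                                float intensity) {  // 0..1 game parameter
    std::vector<float> out(sample.size());
    for (std::size_t i = 0; i < sample.size(); ++i) {
        float procedural = 2.0f * std::rand() / RAND_MAX - 1.0f;
        // The sample carries the character; the procedural layer sits underneath.
        out[i] = sample[i] + 0.1f * intensity * procedural;
    }
    return out;
}

int main() {
    std::vector<float> recorded(48000, 0.0f);  // stand-in for a decoded wave asset
    std::vector<float> mixed = renderHybrid(recorded, 0.6f);
    (void)mixed;
}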

We are using procedural techniques and technologies more and more in the form of reverbs and DSP effects. But we are also thinking more procedurally about sound in our implementation, even if sample-based playback material is still the starting point and raw material. My feeling is that we have often moved towards this without even realizing the big picture. Could this slow-bleed approach eventually end up with interactive sound designers working entirely with acoustic models and unique sound-object-based propagation? Perhaps for certain genres and platforms. But it is difficult to imagine a move away from a hybrid position into exclusivity, though I can see certain projects leaning one way or the other.

Chances are if you work in game audio, you are already working in a hybrid procedural audio world.
