(first published on Gamasutra)
A tweet from sound designer Kelly Pieklo – about making the transition from linear to non-linear sound design, and about how sound designers get to determine the parameters that can drive, control and transform the sound elements in a game – got me thinking.
For ease of writing, I’m taking the term ‘parameter’ to refer to all the various elements of game data that can be mapped onto audio – including states, triggers, switches and variables.
In film sound, there isn’t really a concept of parametric data from other departments that sound designers can use to drive their sounds. Perhaps the closest analogy is an OMF of temp picture cuts, which the sound editors can import into their sessions to keep up to date with scene and shot changes during post-production. More often, the ‘parameters’ are supplied by the director, and they are not tangible programmatic variables that change over the course of the movie, but ideas that need to be interpreted by the sound designer and implemented through more abstract methods.
Imagining some well-known movie plots with parameters that control their overall sound is a fun proposition. How would the movement towards Kurtz in Apocalypse Now affect the sound – perhaps via a ‘Distance to Kurtz’ parameter? In The Conversation, we could have a parameter for Harry Caul’s ‘paranoia level’. I’m sure these are too high-level to function as-is, but once parameterized there is something we could do with those ideas, and it is a great starting point: taking the main thread of a plot or narrative and breaking it down into more interactive ways that the sound can be affected overall.
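As a thought experiment only, here is a minimal sketch of what a ‘Distance to Kurtz’ parameter might look like if it were mapped onto a few audio controls, the way a real-time game parameter drives a mix. Every name and value here is invented for illustration; nothing below comes from an actual title or middleware API.

```python
def mix_for_distance_to_kurtz(distance_km, max_distance_km=500.0):
    """Map a hypothetical 'Distance to Kurtz' parameter onto a few
    illustrative audio controls. 0 km means we've arrived."""
    # Normalise to a 0..1 'proximity' value, clamped at both ends
    proximity = 1.0 - min(max(distance_km / max_distance_km, 0.0), 1.0)
    return {
        # Jungle ambience grows denser and more oppressive as we approach
        "ambience_density": round(proximity, 2),
        # A low drone of unease swells from silence up to full level
        "drone_volume_db": round(-60.0 + 60.0 * proximity, 1),
        # The space turns increasingly dreamlike and washed-out
        "reverb_wet": round(0.1 + 0.7 * proximity, 2),
    }
```

The point isn’t the specific curves – it’s that once the narrative idea is expressed as a number, the whole mix can respond to it continuously rather than at scripted moments.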
I was wondering how feasible it might be to have not only the technical and obvious parameters that we deal with most of the time in game sound, but also a whole new group of more abstract parameters reflecting things like how the player feels (gathering biometric data from players has been discussed a fair bit recently) or, in more narrative game experiences, how the character ‘feels’.
I recall a feature we implemented in the open-world Scarface game that did something like this: Tony Montana’s ‘Rage Meter’. If you built this meter up enough, you unlocked the ability to enter ‘Rage Mode’, at which point Tony could go into a blind rage in first person for a limited amount of time. Now, this wasn’t really a fully scalable parameter with many gradual nuances, but more a switch mechanism for a gameplay mode – the interesting thing is that it was directly mapped onto how the character felt and behaved, and as the emotional state of the main character changed, altering his point of view, so too did the sound, music and dialogue employed during that mode. Sounds were pitched down and filtered, with weapon sounds pushed forward in the mix; dialogue switched to utterly insane swearing (as opposed to the regular, conversational swearing that denoted ‘normal’ gameplay); and music switched to the atonal Giorgio Moroder synth washes that occurred in the same scenes in the motion picture.
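Structurally, a mode like this is just a binary switch that re-routes a whole set of audio decisions at once. A minimal sketch of that shape, with entirely invented names and values (not the shipped game’s data or any real engine API), might look like this:

```python
# Settings for 'normal' gameplay: everything at neutral defaults
NORMAL = {
    "world_pitch_semitones": 0.0,
    "world_lowpass_hz": 20000.0,   # filter fully open
    "weapon_gain_db": 0.0,
    "dialogue_pool": "conversational_swearing",
    "music_set": "score_normal",
}

# Settings for the rage mode: pitched down, filtered, weapons forward
RAGE = {
    "world_pitch_semitones": -3.0,
    "world_lowpass_hz": 2500.0,
    "weapon_gain_db": 6.0,
    "dialogue_pool": "insane_swearing",
    "music_set": "atonal_synth_washes",
}

def audio_settings(rage_meter, threshold=100):
    """Switch the entire soundtrack treatment when the meter is full."""
    return RAGE if rage_meter >= threshold else NORMAL
```

The contrast with a continuous parameter is clear: one emotional state flips many audio behaviours together, but with no gradations in between.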
Narrative, emotional, or point-of-view parameters might be challenging to figure out, but I think there are lots of opportunities to think more abstractly, and less technically, about game parameters. This recent talk by Randy Thom at the Mix Magazine Immersive Sound Conference gives plenty of food for thought in this direction, particularly about point of view.
Game parameters and switches are mostly the servants of reality, believability and simulation: time of day, relative distances, footstep surface type, speed, height, density and so on. These technical parameters, while entirely necessary, are really just the foundation of believability for sound integration and synchronization with the game engine. In an open-world or simulation title, there are likely to be many more of these kinds of ‘reality’-based parameters and switches.
(above – all of the parameters in my current project are technical, or based on simulation)
It may not be practical to parameterize the emotional vectors of a game narrative, or even necessary. Perhaps music ‘states’ are the best example of something already a little more abstract and closer to the emotional pulse of the game – states are the most likely thing to drive music or atmospheric transitions, and as such they offer some tantalizing ways to start thinking about also affecting sound and dialogue. Perhaps when a music state changes from ‘calm’ to ‘fear’ there are a great many more opportunities to alter the way the sound and dialogue are presented to the player too. Maybe, without realizing it, music states are mapping the narrative and emotional beats of the game for us, and tapping into these states to make changes in the rest of the soundtrack is one of the biggest opportunities for much deeper game sound integration.
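The idea of ‘tapping into’ music states could be sketched as a simple publish/subscribe arrangement: when the music state changes, any number of other audio systems get the chance to react. The class, state names and handlers below are all hypothetical, purely to show the shape of the idea.

```python
class MusicStateBus:
    """Broadcasts music state changes to any interested audio systems."""

    def __init__(self, initial_state="calm"):
        self.state = initial_state
        self._listeners = []

    def subscribe(self, callback):
        # callback receives (old_state, new_state)
        self._listeners.append(callback)

    def set_state(self, new_state):
        old_state, self.state = self.state, new_state
        for callback in self._listeners:
            callback(old_state, new_state)


# Illustrative handlers: ambience and dialogue ride along with the music
applied = []

def retune_ambience(old_state, new_state):
    if new_state == "fear":
        applied.append("duck_ambience")

def reselect_dialogue(old_state, new_state):
    if new_state == "fear":
        applied.append("switch_to_whispered_lines")

bus = MusicStateBus()
bus.subscribe(retune_ambience)
bus.subscribe(reselect_dialogue)
bus.set_state("fear")  # one state change ripples through the soundtrack
```

In other words, the music system has already done the hard work of reading the game’s emotional beats; the rest of the soundtrack only needs a way to listen in.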