Here are the things that I think went well:
-the tool does randomization really well.
I was able to use relatively few samples to create good and diverse sounds for everything I was charged with creating. I made heavy use of pitch randomization alongside sample randomization to liven up all of my weapons and foley, and I can already hear a pretty marked difference from certain games on the market with more straightforward soundtracks. I have noticed that games like Call of Duty 3 tend to take the pitch randomization a little too far in spots, so I generally stuck to the concept of pushing it until it's gone a little too far and then backing it off a notch.
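The scheme above can be sketched in a few lines (plain Python standing in for the tool's behavior, not the XACT API; the cent range and the no-repeat rule are my own illustrative choices):

```python
import random

def play_variation(samples, last_index=None, pitch_range_cents=250):
    """Pick a random sample (never the same one twice in a row) and a
    random pitch offset, in cents, around the original pitch."""
    choices = [i for i in range(len(samples)) if i != last_index]
    index = random.choice(choices)
    cents = random.uniform(-pitch_range_cents, pitch_range_cents)
    playback_rate = 2 ** (cents / 1200.0)  # cents -> resampling ratio
    return index, playback_rate
```

Converting cents to a resampling ratio with 2^(cents/1200) is the standard relation; ±250 cents is just a placeholder range to push until it's too far and then back off.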
-limiting the number of instances and crossfading worked well.
One of the biggest obstacles in creating a mix for an interactive platform is retaining focus when multiple important events are going on. I'll pick on Call of Duty 3 again (PS2 version), though it may be the port, and I'm picking on it because I'm addicted to the game. 🙂 In CoD3, when there are multiple machine gunners firing simultaneously, the mix just tends to wash out into a sea of rifling bullets. There's no punch because there's no room for punch (and because all punch comes from negative sonic space). In my mix I made heavy use of the limit instances property of the weapon cues, and with very brief fadeouts I was able to make rapid-fire weapons that didn't wash into their own reverb tails. This also controlled my headroom nicely, since there wasn't much additive gain happening.
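A toy model of the instance-limiting idea (again not the XACT API; the oldest-voice stealing policy and the fade time are assumptions for illustration):

```python
class InstanceLimiter:
    """Cap simultaneous instances of a cue; when the cap is hit, the
    oldest instance is released with a brief fadeout instead of letting
    instances pile up into a wash."""

    def __init__(self, max_instances=3, fade_ms=15):
        self.max_instances = max_instances
        self.fade_ms = fade_ms
        self.active = []  # oldest first

    def trigger(self, instance_id):
        """Register a new instance; return the instance being faded out,
        if any (a real engine would ramp its gain to zero over fade_ms)."""
        faded = None
        if len(self.active) >= self.max_instances:
            faded = self.active.pop(0)  # steal the oldest voice
        self.active.append(instance_id)
        return faded
```

The very short fadeout is the important part: it keeps the cap inaudible while preventing each new shot from stacking on top of the previous ones' tails.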
-engine RPCs worked well
I'm still no master of video game engine design, but I'm happy with what I have working right now. My workflow was generally to design an 'idle' track and master it to 0 dB VU, then create a 'friction' track and do the same. The friction track was a bow wash for boats, wind for aircraft, and running dirt for trucks and tanks. Within the XACT engine I'd set the RPC so my startup idle sat at -8 on my meters and my friction track at -infinity. As my 'speedfactor' variable was increased by the game, my 'idle' track would increase in volume until it maxed out at 0 VU, and my 'friction' track would also rise into audibility and mix in to taste. Pitch would increase and affect the entire sound instead of each individual track. Of course these weren't linear curves, and some vehicles withstood more pitch manipulation than others; a motorcycle will rev higher than a tank, for instance. My general philosophy of setting my samples at a standardized place in the analog VU world meant that each curve could be applied to a large number of very different vehicles and cause the effect and mix to sound right across the board.
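The idle/friction crossfade can be modeled as two piecewise-linear RPC curves evaluated against the speedfactor variable. This is a hand-rolled sketch, not the tool's evaluator; the breakpoints are invented, with -96 dB standing in for -infinity:

```python
def rpc_volume(curve, x):
    """Evaluate a piecewise-linear RPC curve given as (variable, dB)
    points sorted by variable; values outside the curve clamp to the
    endpoints."""
    if x <= curve[0][0]:
        return curve[0][1]
    for (x0, y0), (x1, y1) in zip(curve, curve[1:]):
        if x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return curve[-1][1]

# Idle starts at -8 dB and rises to 0 dB as speedfactor goes 0 -> 1;
# friction fades up from silence. Breakpoints are invented for the sketch.
IDLE = [(0.0, -8.0), (1.0, 0.0)]
FRICTION = [(0.0, -96.0), (0.3, -96.0), (1.0, -6.0)]
```

Because every source track is mastered to the same 0 VU reference, one pair of curves like this can be reused across very different vehicles and still mix correctly.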
-integration and testing
Once the sounds were mapped by the programmers into the game, the process of integration and testing ran relatively smoothly. I don't have the hooks built into the game that would allow me to connect the XACT tool directly to it, but I've been tweaking parameters, rebuilding, and rebooting the game with immediate results, so that's been fine.
-to sum up-
the tool does the things that it says it will do pretty well: good randomization and control of cue instance limits, good RPC control, and straightforward integration once it's been mapped into the game.
Now that that’s out of the way, here are the things that I either had issues with or would like to see improved:
-documentation
In my opinion this is the single most pressing need the tool has. The best practices section of the C++ programmer's manual spells out which parts should fall into the programmer's domain and which should fall into the sound designer's domain. Nowhere, however, does it spell out the specific places where the programmer will touch the things the sound designer has done, and this is a glaring defect. I'll list here the things that I think the sound designer has to create and the programmer has to manipulate in some way:
-sound bank names and indexes
-wave bank names and indexes
-cue names and indexes
-relationship of cues to soundbanks, wavebanks, and variables
-stop loops on looping cues
-variables that control interactive cues
-dsp functionality for reverbs
that's 10-15 different ways the programmer will have to put his hands on the sound designer's work. We need comprehensive documentation functionality that spells all of these things out, please. The best practices part of the manual says that we should get together with the programmer and figure lots of this stuff out beforehand. That'd be very cool, but I think we need a fallback mechanism for when things aren't quite so utopian.
Pretty please. I'd like an Excel spreadsheet, but I'll take a txt file. Really. Please.
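As a sketch of the kind of export I mean, here's roughly what a handoff dump could look like. The `project` dict is entirely hypothetical, since the tool exposes no such structure; the point is only the shape of the output:

```python
import csv
import io

def export_handoff(project):
    """Dump the programmer-facing names and indexes (sound banks, wave
    banks, cues, variables) to CSV text. `project` is a made-up dict
    mapping category -> ordered list of names."""
    out = io.StringIO()
    w = csv.writer(out)
    w.writerow(["type", "name", "index", "notes"])
    for kind in ("soundbanks", "wavebanks", "cues", "variables"):
        for i, name in enumerate(project.get(kind, [])):
            w.writerow([kind[:-1], name, i, ""])  # singular label per row
    return out.getvalue()
```

Even a flat table like this would tell the programmer every name and index he has to touch, without anyone spelunking through the project by hand.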
-quicker confirmation of which RPC a sound is attached to
I kind of have to take it on faith that my mouse stopped at the right spot when I let go of the drag-and-drop. If I'm not sure, it's several clicks before I can be sure. Documentation of the sort pleaded for above would help with troubleshooting this type of thing as well.
-search function
as projects get bigger, a search function becomes more and more necessary. I'd like to be able to search for samples, sounds, and cues if possible.
-RPC grids and snap to grid
sometimes I want things to be a little more exacting than my floating mouse allows me to get. Also, sometimes I want things pretty linear, and sometimes I'd just like a frame of reference for how far I've gone with my line drawings.
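The snapping itself is trivial, which is part of why its absence stings: quantizing a dragged value to the nearest grid line is one line (the step size here is arbitrary):

```python
def snap(value, step=0.05):
    """Quantize a dragged point to the nearest grid line."""
    return round(value / step) * step
```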
-Different RPC default for Orientation Angle variable.
In an ideal world the Orientation Angle RPC would open up with a grid, a line drawn with three knuckles instead of two (the third knuckle being dead center), and a mirroring capability. Then as I draw a volume rolloff on the left side, the right side would automatically update to mirror the curve. Maybe a checkbox to turn this functionality off, but I'd default it to on myself.
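The mirroring behavior I'm describing could work like this sketch, where the last left-hand knuckle is assumed to be the dead-center one:

```python
def mirror_curve(left_points):
    """left_points: (angle, dB) knuckles from the far left up to and
    including the center knuckle. Returns the full curve with the right
    half reflected about the center angle."""
    center_angle = left_points[-1][0]
    right = [(2 * center_angle - a, db) for a, db in reversed(left_points[:-1])]
    return left_points + right
```

Drawing a rolloff on the left and getting the right half for free is the whole feature; the checkbox would simply skip the reflection.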
-folders
I'd really like to see folders inside of the soundbank and wavebank lists, and possibly inside the RPC lists as well. No folder structure, no search function, and no documentation makes for a lot of hunting around and time lost.
-auditioning reverb presets
It's nice that we've got this fairly wide bank of reverb presets, but it's a huge pain to hear what one sounds like. Currently we have to 1) have a sample created, 2) program that sample into a sound, 3) assign that sound to an RPC, 4) set up a reverb DSP, 5) pick a reverb preset, 6) draw a curve in the RPC that sends to the reverb, and 7) play the sound. I'd love a little reverb click that I could use when setting up a verb initially to see whether I'm going to like it or not. This would also be very useful for tweaking verb parameters.
-occlusion/obstruction
There's no way for the sound designer to tackle this one right now. This would be a good thing to take out of the programmer's hands and move into the studio.
-basic sample editing
I don't terribly mind using SoundForge to manipulate samples, but a rudimentary sample editor that would just allow me to trim heads and tails and maybe do a fade-in/fade-out would be incredibly time-saving.
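Even something on the order of this sketch would cover it: drop samples below a silence threshold from the head and tail, then apply short linear fades to kill clicks (the threshold and fade length are placeholder values):

```python
def trim_and_fade(samples, threshold=0.001, fade_len=64):
    """Trim near-silence from the head and tail of a float sample list,
    then apply linear fade-in/fade-out over `fade_len` samples."""
    start = 0
    while start < len(samples) and abs(samples[start]) < threshold:
        start += 1
    end = len(samples)
    while end > start and abs(samples[end - 1]) < threshold:
        end -= 1
    out = samples[start:end]
    n = min(fade_len, len(out) // 2)
    for i in range(n):
        g = i / n
        out[i] *= g          # fade in
        out[-1 - i] *= g     # fade out
    return out
```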
-dynamics processing
The single most egregious programmer audio error is using up all of the headroom too quickly. This is further complicated by interactive play, where events can either pile up or thin out in unpredictable ways. I know that implementing a limiter would run the risk of creating further loudness problems, but put in the hands of an audio person, maybe it could be used for good instead of evil. Compressors for categories would be cool as well. I like audio, and the most important audio manipulation is dynamics processing.
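By a limiter I mean nothing fancier than this per-block sketch: scale a block down just enough that its peak sits at a ceiling (about -1 dBFS here). A real one would want lookahead and release, so this is only the shape of the idea:

```python
def limit_block(samples, ceiling=0.891):  # 0.891 ~ -1 dBFS
    """Hard-limit one block of float samples: unity gain when the peak
    is under the ceiling, otherwise scale the whole block so the peak
    lands exactly on the ceiling."""
    peak = max((abs(s) for s in samples), default=0.0)
    if peak <= ceiling:
        return samples[:]
    g = ceiling / peak
    return [s * g for s in samples]
```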
-meters
There are no meters anywhere. I'd love to see exactly how far away I am from digital clipping at any given point. Currently I just run the digital output of the PC back into my DAW and use that for peak metering, but I'd prefer to see it within the tool.
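For reference, the peak reading I'm after is simple to compute from a float block: 0 dBFS is full scale, and the negative of the reading is the headroom left before digital clipping:

```python
import math

def peak_dbfs(samples):
    """Peak level of a float sample block in dBFS (full scale = 1.0)."""
    peak = max((abs(s) for s in samples), default=0.0)
    return -float("inf") if peak == 0.0 else 20.0 * math.log10(peak)
```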
-to sum up-
the program lacks basic documentation features and basic organizational features, and it is not generally very ergonomic. Documentation issues are probably the biggest barrier to this tool's success, followed collectively by the poor organizational and ergonomic setup. There's no dynamics processing, no meters, no occlusion/obstruction, and no search.