Creating A Dynamic Reverb System in Unreal Engine 4

Introduction

I recently completed the final technical project for my master’s degree, the aim of which was to build a system capable of producing convincing reverberation at runtime. The idea came from a discussion I had with a developer about procedurally generated levels in 3D games. We discussed the difficulty of using traditional methods such as reverb volumes when the level geometry is unknown until runtime, and the idea of using ray casting to approximate room dimensions and calculate early reflections occurred to me. This approach is not a new idea by any means, with Google Resonance, Steam Audio and many others offering similar functionality. I wanted to see if I could create such a system entirely within Unreal Engine 4, without any reliance on third-party plug-ins or software. It was also imperative that the system be parametrisable, as sound design in interactive games is often about so much more than simulating real life. I wanted to allow a sound designer using the system to over- or under-emphasise reverberant characteristics to suit the needs of their project. Here’s what I came up with.

Demonstration

The video above shows the system at work in a test level I created. It demonstrates how the reverb system responds dynamically to changes in room dimensions and even construction materials. Now we’ll take a look under the hood to see how it works.

Overview

The system can be broadly split into two parts: the early reflection system, which models the first few discrete echoes the player hears, and the late reflection system, which models the tail of the reverb. The new audio engine in Unreal Engine 4 allows both of these components to be modelled using realtime audio effects: multi-tap delays are used for early reflections, and the reverb effect is used for late reflections. The parameters for these effects are set using a series of ray casts (or line traces, to use Unreal Engine terminology) that ascertain the volume, construction materials and paths of reflection within the space. The level of late reflections also increases as the player moves further from the emitter, for added realism.

Late Reflections

The first batch of line traces is emitted in 360 degrees around the player. These traces are used to estimate the volume of the space and to ascertain the material types of the walls, floor and ceiling. Upon colliding with the level geometry, the physical surface material types are stored, along with data such as the impact points and impact normals.
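
The system itself is built entirely in Blueprints, but the logic of this first trace pass translates roughly to the C++ sketch below. This is purely illustrative: the class name, member arrays, trace count and trace length are assumptions, the dedicated ‘reverb’ channel is assumed to be mapped to ECC_GameTraceChannel1, and the vertical traces used for the floor and ceiling are omitted for brevity.

```cpp
#include "SoundListener.h" // hypothetical header declaring ASoundListener and its members
#include "Engine/World.h"
#include "PhysicalMaterials/PhysicalMaterial.h"

// Sketch of the first trace pass (the real system is Blueprint-based).
// ImpactPoints, ImpactNormals and SurfaceTypes are illustrative member arrays.
void ASoundListener::TraceRoomBounds(const FVector& ListenerLocation)
{
    const int32 NumTraces   = 16;      // assumed horizontal trace count
    const float TraceLength = 5000.f;  // assumed 50 m maximum room radius (Unreal units are cm)

    FCollisionQueryParams Params;
    Params.bReturnPhysicalMaterial = true; // needed to read the physical surface type

    for (int32 i = 0; i < NumTraces; ++i)
    {
        const float Angle = 2.f * PI * i / NumTraces;
        const FVector End = ListenerLocation +
            FVector(FMath::Cos(Angle), FMath::Sin(Angle), 0.f) * TraceLength;

        FHitResult Hit;
        if (GetWorld()->LineTraceSingleByChannel(Hit, ListenerLocation, End,
                ECC_GameTraceChannel1 /* assumed custom 'reverb' channel */, Params))
        {
            // Store impact point, normal and surface type for the later passes
            ImpactPoints.Add(Hit.ImpactPoint);
            ImpactNormals.Add(Hit.ImpactNormal);
            SurfaceTypes.Add(UPhysicalMaterial::DetermineSurfaceType(Hit.PhysMaterial.Get()));
        }
    }
}
```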

Each physical surface type is mapped to an absorption coefficient, which essentially dictates how much sound energy is absorbed by the surface. The system averages the absorption coefficients of all surfaces hit during the first batch of line traces, then uses this average in conjunction with the volume of the space to calculate the reverberation time using Sabine’s equation. This is then used to set the reverb time parameter on the reverb submix effect. The high frequency decay and high frequency gain parameters are also scaled according to the average absorption coefficient, which makes more reflective surfaces sound brighter.
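
For reference, Sabine’s equation relates reverberation time to room volume and total absorption:

$$ RT_{60} = \frac{0.161\,V}{A}, \qquad A = \sum_i S_i \alpha_i \approx S_{\mathrm{total}}\,\bar{\alpha} $$

where $V$ is the room volume in cubic metres, $S_i$ are the surface areas in square metres and $\alpha_i$ their absorption coefficients. The system approximates the full surface-by-surface sum using the averaged coefficient $\bar{\alpha}$ and the dimensions estimated from the line traces.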

Early Reflections

Early reflections are calculated by taking the initial batch of line traces and ‘bouncing’ them around the environment. When a line trace overlaps a sound emitter, a valid path of reflection is established. The relevant delay time is then calculated and stored in the emitter, and these delay times are used to set the multi-tap delay effects. Panning is preserved by using the initial line trace angle, meaning that the system also adapts to player rotation.
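
As an illustration of the delay-time step, one plausible calculation is simply the reflected path length divided by the speed of sound. The sketch below assumes the bounce points for a single trace have been accumulated in order; the function name and structure are hypothetical, as the real calculation lives in the Blueprint.

```cpp
// Turn one valid reflection path into a delay time for a single delay tap.
// PathPoints runs from the listener, via each bounce point, to the emitter.
float ComputeReflectionDelaySeconds(const TArray<FVector>& PathPoints)
{
    const float SpeedOfSound = 34300.f; // ~343 m/s expressed in Unreal units (cm) per second

    float PathLength = 0.f;
    for (int32 i = 1; i < PathPoints.Num(); ++i)
    {
        PathLength += FVector::Dist(PathPoints[i - 1], PathPoints[i]);
    }
    return PathLength / SpeedOfSound; // seconds, used as one tap's delay time
}
```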

Using The System

Using the system is fairly straightforward, as it essentially functions using only two Blueprint classes. The Sound_Emitter BP class is used for all sound sources, and the Sound_Listener BP class should be added to the level once. The Sound_Listener automatically tracks the player’s location and emits the line traces. The Sound_Emitter should be used in place of the AmbientSound actor. It contains an audio component for playing back sounds and a collision sphere for detecting line traces, and it also stores and sets all parameters relating to early reflections. Upon adding a Sound_Emitter to a level, the designer can easily set the sound to be played using the details panel. Attenuation settings can then be set in the usual manner within the sound cue.

To function correctly, the system requires a dedicated collision channel called ‘reverb’. All walls, floors and ceilings should be set to block this channel, the collision spheres on the sound emitters should be set to overlap it, and all other actors should be set to ignore it. This ensures that the reflection paths and room size estimation are not affected by props and other actors. Custom collision channels can be configured within the project settings.

The user must also configure the required physical surface types within the project settings, and assign each of these an absorption coefficient within the Coefficients dictionary contained in the Sound_Listener BP. These surface types can then be added to physical materials and assigned to the level geometry in the usual manner.

The system will also function alongside the traditional method of using fixed audio volumes, and can be activated and deactivated globally by calling the events Activate Reverb System and Deactivate Reverb System in the Sound_Listener blueprint.

Reverb Priority

All emitters within audible range are sent to the late reflection submix; however, only four emitters can have early reflections simulated simultaneously. This is because each emitter needs its own dedicated multi-tap delay effect in order to adequately simulate the early reflections, so the number of emitters that can use this part of the system must be limited. Allocation is handled by a priority system whereby the designer can set an integer priority for each emitter in the details panel. When more than four emitters collide with the early reflection line traces, the four with the highest priority are used. This system is also dynamic, so if an emitter loses its priority it fades itself out of the early reflection system without audible artefacts.
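
The slot allocation amounts to a sort-and-truncate. Below is a minimal C++ sketch of the idea, assuming a hypothetical ASoundEmitter class exposing its integer Priority; the real logic is implemented in the Sound_Listener Blueprint.

```cpp
// Pick the (at most) four highest-priority emitters for early reflection slots.
TArray<ASoundEmitter*> SelectEarlyReflectionEmitters(TArray<ASoundEmitter*> Candidates)
{
    const int32 MaxSlots = 4; // one dedicated multi-tap delay effect per slot

    // TArray::Sort dereferences the pointers, so the predicate compares the emitters themselves.
    Candidates.Sort([](const ASoundEmitter& A, const ASoundEmitter& B)
    {
        return A.Priority > B.Priority; // highest priority first
    });

    if (Candidates.Num() > MaxSlots)
    {
        Candidates.SetNum(MaxSlots); // emitters dropped here fade themselves out
    }
    return Candidates;
}
```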

Customisation

The overall reverb time can be scaled by setting the public variable RT60Scale in the Sound_Listener. This acts as a multiplier, so a value of 1 leaves the reverb time unchanged and 0.5 results in reverb times half as long. The number of times the line traces bounce around the environment can be set using the Max Order Of Reflection public variable on the Sound_Listener. Higher numbers of bounces will identify more valid reflection paths at the expense of additional CPU time, so this can be tweaked to find an optimal compromise based on the target hardware. The balance of early reflections can be set using the ERTrim Global DB public variable in the Sound_Listener, which allows early reflections to be boosted or attenuated globally in decibels. A value of 0 leaves the balance unchanged; every +6 dB doubles the level and every -6 dB halves it.
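
The trim follows the usual decibel-to-amplitude relationship:

$$ g = 10^{\,\mathrm{ERTrim_{dB}}/20} $$

so +6 dB corresponds to a linear gain of roughly 2.0, and -6 dB to roughly 0.5.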

Future Developments/Improvements

Currently the system updates at a fixed rate of 0.5 seconds. Whilst this seems adequate, it would be good to allow the designer to set a custom update rate to allow for better optimisation. Similarly, there are probably optimisations to be made in the number and distribution of line traces. I plan on allowing the designer to set the number of line traces using a single variable, then have the system calculate the relevant trace angles automatically for an even distribution (see the sketch below). It would also be beneficial to expose the variables for the reverb scaling over distance, as this would give the designer finer control over the overall reverb mix. Having this scaling follow a customisable curve rather than purely linear scaling would allow an even greater level of control. Making the system aware of the type of sound the emitter is playing could be beneficial as well, as it would allow content such as dialogue to receive a lower mix of early reflections to improve intelligibility.

One key benefit of building the entire system within Unreal Engine 4 is that anyone with knowledge of Blueprints could conceivably modify the system to suit their needs, making it almost infinitely expandable and customisable. Many of the planned improvements simply involve making certain parameters more accessible, intuitive or user friendly to aid in this process.
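
One common way to spread an arbitrary number of trace directions evenly over a sphere is a golden-angle (Fibonacci) spiral. The sketch below shows how that planned feature might look; it is not part of the current implementation.

```cpp
// Possible approach for the planned automatic trace distribution: a Fibonacci
// sphere gives a near-even spread of directions for any requested trace count.
TArray<FVector> MakeTraceDirections(int32 NumTraces)
{
    TArray<FVector> Directions;
    Directions.Reserve(NumTraces);

    const float GoldenAngle = PI * (3.f - FMath::Sqrt(5.f)); // ~2.39996 radians

    for (int32 i = 0; i < NumTraces; ++i)
    {
        const float Z      = 1.f - 2.f * (i + 0.5f) / NumTraces; // evenly spaced in [-1, 1]
        const float Radius = FMath::Sqrt(1.f - Z * Z);
        const float Theta  = GoldenAngle * i;
        Directions.Emplace(Radius * FMath::Cos(Theta), Radius * FMath::Sin(Theta), Z);
    }
    return Directions;
}
```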


Creature Sounds in MAX/MSP

Recently I undertook a project to create a system in MAX/MSP that could manipulate a vocal input in real time to create creature sounds. All of the creature sounds in my current sound design showreel were created using this system, which I shall explain here.

Demo Video

Scrubbing convolution

The primary inspiration for the system was Dehumaniser by Krotos. One of the features I was keen to emulate was the ability to convolve human voices with animal sounds in real time. This, however, posed a problem, as the vocal input is a continuous signal, whereas the animal samples it is being convolved with are of finite length. One solution is to simply loop the animal samples, but this approach is not particularly responsive to the input. The solution I arrived at involved a combination of granular synthesis and convolution. A granular synthesis engine scrubs through the animal sample, with the position of the playhead governed by the RMS amplitude of the voice input: the louder the input, the further towards the end of the sample the playhead moves. This approach proved effective for two reasons. Not only does the scrubbing method ensure that there is always material to convolve the voice with, but many animal calls naturally start soft and get progressively louder, which leads to a good amount of dynamic tracking between the input and the animal sample. Three samples can be loaded into the system simultaneously to allow for blending of different animal calls.
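
At its core, the scrubbing idea is just an envelope follower mapped to a normalised playhead position. The actual system is a MAX/MSP patch; the C++ below is purely illustrative, and the smoothing coefficients and input gain are assumptions.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Maps the loudness of a block of voice input to a playhead position in the
// animal sample (0 = start, 1 = end), smoothed so it doesn't jitter per grain.
struct ScrubFollower
{
    float smoothed = 0.f;   // smoothed RMS estimate
    float attack   = 0.05f; // assumed smoothing coefficients
    float release  = 0.005f;

    float process(const std::vector<float>& voiceBlock, float inputGain = 4.f)
    {
        float sumSquares = 0.f;
        for (float s : voiceBlock) sumSquares += s * s;
        const float rms =
            std::sqrt(sumSquares / std::max<std::size_t>(voiceBlock.size(), 1));

        // One-pole smoothing: fast when the input gets louder, slower on decay
        const float coeff = (rms > smoothed) ? attack : release;
        smoothed += coeff * (rms - smoothed);

        // Louder input pushes the playhead towards the end of the sample
        return std::clamp(smoothed * inputGain, 0.f, 1.f);
    }
};
```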

Creating Flexibility

In order to ensure the system was capable of creating the largest possible range of sounds, a modular approach was taken to the design. At the heart of the system is a routing matrix that allows the user to route any of the various modules to any other, in whatever order they see fit, and a mixer for combining multiple signal chains. The mixer also contains a number of master effects such as stereo width enhancement, compression, delay and reverb. Each module features an independent send level for both the delay and reverb effects for maximum flexibility.

Distortion

The system features a parallel distortion processor that allows for overdrive, bit crushing and waveshaping via either Chebyshev polynomials or a transfer function drawn by the user. Very useful for adding some grit!
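
As a quick illustration of the Chebyshev option: feeding a sine wave through the n-th Chebyshev polynomial produces its n-th harmonic, so a weighted sum of polynomials acts as a simple harmonic shaper. The sketch below shows the technique in C++; the weights and structure are illustrative, not a transcription of the patch.

```cpp
#include <cstddef>
#include <vector>

// Waveshape one sample with a weighted sum of Chebyshev polynomials.
// harmonicWeights[n-1] controls the level of the n-th harmonic for a sine input.
float chebyshevShape(float x, const std::vector<float>& harmonicWeights)
{
    float tPrev = 1.f; // T0(x)
    float tCurr = x;   // T1(x)
    float out   = 0.f;

    for (std::size_t n = 1; n <= harmonicWeights.size(); ++n)
    {
        out += harmonicWeights[n - 1] * tCurr;
        const float tNext = 2.f * x * tCurr - tPrev; // T(n+1) = 2x*Tn - T(n-1)
        tPrev = tCurr;
        tCurr = tNext;
    }
    return out;
}
```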

Additional Modules

There are too many modules to cover in detail here, but to summarise, the others include:

  • Resonators based on the Karplus-Strong model to emulate the vocal resonances of alien creatures
  • A ring modulator for robotic/synthetic sounds
  • Pitch shift to easily alter the perceived size of the creature
  • An input responsive formant filter to add a more vocal quality to processed sounds
  • A vocoder with blendable carrier waveforms
  • A frequency shifter to allow for Darth Vader-esque vocalisations
  • An additional granular synthesis module with an emphasis on creating highly warped sounds
  • A recording module to allow the user to easily capture sounds for export to a DAW etc

Future Developments

I am currently working on creating a stable standalone version of the system, which I am hoping to post here for free download at some point soon. Stay tuned!

Meridian Line

I’ve been working on a very interesting project called Meridian Line with Archway Interactive for the last few months, and thought it might be time to share a few updates. I can’t be too specific about the exact nature of the game; however, what I can say is that it will be a navigational thriller based around urban exploration in an underground rail network. The main point of reference, both visually and sonically, is the London Underground, and so I’ve been attempting to procedurally recreate the sounds of London tube trains. This has been a very interesting and enjoyable process that has thus far involved recording food processors down 6 feet of cardboard tubing, squeaky wheels on a toy Batmobile, catering trolleys, and electromagnetic interference from a laptop playing YouTube videos via the pickups of an eight-string guitar. So lots of fun, as you can probably imagine. If you’re wondering about the button in the photo above, it’s an actual London Tube train button repurposed via Arduino to be the demo restart button.


As a part of the UK Games Fund’s pitch development programme we were given the opportunity to demo our work at EGX last September, so I went along with company director Dave Smith (pictured left) to help with running the stand. The weekend was an absolutely fantastic experience and a great chance to meet other developers from all areas of the industry.

Business cards for EGX 2018: Never underestimate the effectiveness of a good marketing gimmick!

Following on from this we were nominated for the Pitch Development Programme Award and attended the UK Games Fund Awards in Dundee. In the end the award was very deservedly won by Ocean Spark Studios for their RPG Tetra, and for the amazing outreach work and tuition they offer as part of the Ocean Spark Academy. A good night was had by all! I’m looking forward to seeing what 2019 brings.

iMUSE and the Future of MIDI in Game Audio

The Composer’s Dilemma

When creating music for interactive games, the composer must take into account a number of additional considerations that do not usually present themselves when composing linear music. In linear music the composer is able to dictate the exact form and structure of the music, controlling the length of each musical section to ensure the piece flows and evolves in an effective manner. With interactive music, control of the musical structure is often handed over to the player, meaning that the composer must create music with this in mind. For example, a game may feature a section in which the player explores the level before entering combat. Clearly these two scenarios will require different musical accompaniment, with an effective and seamless transition between the two. Sweet (2015, p. 166) notes that the most effective video game scores serve to enhance the entire experience of playing the game, stating that “any music that does not support this goal will remind the players of the meta-reality instead of providing them with a cohesive narrative experience”. The dilemma for the composer in this situation is that the length of time the player spends exploring before entering combat is entirely under the control of the player themselves. As Collins (2008, p. 3) states: “While they are still, in a sense, the receiver of the end sound signal, they are also partly the transmitter of that signal, playing an active role in the triggering and timing of those audio events.” This means that the musical accompaniment must be flexible enough to allow for a number of eventualities, from the impatient player who rushes through the area to the player who stands still and listens to the score play out.

The solution to this issue is to create a dynamic music system that is able to respond to the player’s actions in real time. Many approaches to this kind of dynamic scoring currently exist, but the focus of this article will be one of the earlier systems, which in some ways remains unsurpassed even to this day.

Ambisonic Audio and Virtual Reality

The Sonic Demands of Virtual Reality

With the mainstream adoption of virtual reality technology, there is an increased demand for a system that allows the accurate spatialisation of multiple sound sources in three dimensions. It must account for head movement to ensure that sound sources move realistically relative to the player. Such a system must also be scalable, so that it does not limit the number of sounds that can be spatialised, and computationally efficient. This article will examine a particular implementation of ambisonics that allows all of these criteria to be met.

Ambisonics – An Introduction

Ambisonics is a method of creating three-dimensional audio playback via a matrix of loudspeakers. It works by reproducing or synthesising a sound field in the way it would be experienced by a listener, with multiple sounds travelling in different directions. A sound field can be thought of as the superposition of an infinite number of plane waves, meaning that theoretically any sound field can be recreated via an infinite number of loudspeakers placed in a sphere around the listener. In practice, a close approximation of the sound field can be reproduced using a finite number of loudspeakers. This differs from conventional surround systems such as 5.1 in that the channels present in the ambisonic B-Format are not routed directly to discrete loudspeakers. B-Format signals are an encoded representation of the sound field that can be decoded for playback on a speaker array of any size, provided that there are at least as many speakers as there are channels in the B-Format audio stream. The larger the number of loudspeakers in the array, the more accurately the sound field is reproduced. Mono, stereo and 5.1 mixes are also easily decoded for conventional reproduction systems (Noisternig et al 2003, p. 1).
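
For reference, the standard first-order B-Format encoding of a mono signal $S$ arriving from azimuth $\theta$ and elevation $\phi$ is:

$$
\begin{aligned}
W &= \tfrac{1}{\sqrt{2}}\,S\\
X &= S\cos\theta\cos\phi\\
Y &= S\sin\theta\cos\phi\\
Z &= S\sin\phi
\end{aligned}
$$

where $W$ carries the omnidirectional pressure component and $X$, $Y$, $Z$ the three figure-of-eight velocity components; the decoder then derives the individual loudspeaker feeds from these four channels.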

The Benefits of Procedural Sound Design

Game Sound Vs. Film Sound

There are a number of important distinctions to be made between sound design for linear media such as film and sound design for interactive media. The first, and possibly most important, distinction is the fact that in film a given sound effect can be considered a single, discrete event that occurs at an exact moment in time. No matter how many times the film is replayed, the sound will always play out in the same way. In interactive media, control of the same event may be passed on to the player, and as such it may be repeated many times in a number of different contexts. This creates the potential for highly repetitious sound design, which in many cases is undesirable for reasons that shall be discussed shortly.

The Need For Procedural Sound Design

At this point it is important to draw a distinction between non-diegetic ludic sounds that are designed to notify the player about the state of the game, and diegetic sounds that arise from player interactions with the environment. Take, for example, the sound played in Metal Gear Solid (Konami 1998) when the player is spotted by an NPC guard (see video below).

This short fanfare is heard by the player, but is not heard by the player’s character or any of the NPCs, and as such is considered non-diegetic. Its sole function is to notify the player that they have been spotted and must take evasive action to avoid being killed, making it a ludic notification sound. This sound needs to be consistent every time it is played in order to clearly communicate this message to the player, making variation undesirable. Now consider the sound that is played when the player fires a gun in Battlefield 1 (Electronic Arts 2016). This sound exists within the game world, can be heard by the player character, and most importantly is created as the result of a simulated physical process.

Inside The Loop: Audio Functionality in INSIDE

Playdead’s 2016 release Inside makes use of audio in a number of interesting ways, but this article will focus on the sequence involving distant explosions and shockwaves. During this section, a regularly repeating explosion in the background creates a rhythmic shockwave that moves from background to foreground, and the player must avoid being caught out in the open when the shockwave arrives. Failure to find cover results in a swift and graphic death, as would be expected, but the manner in which Inside respawns the player is where it differs from many other games. Rather than simply reverting the state of the entire game back to the way it was at the last checkpoint, Inside allows the repeating sound of the shockwave to continue, and respawns the player at an appropriate time within the cycle. This audio-led system has a number of implications regarding the function of audio within this section of the game, which will be detailed in this article.

Ludic Functionality

The most immediate function that the audio serves is to instruct the player on how to approach this section of gameplay. The sound of the shockwave is loud and has a large amount of low frequency content. It also causes the ambient sound of the level to duck when it occurs, which increases the perceived loudness of the sound. This all communicates to the player that the shockwave is a threat to them, even before it has resulted in death for the first time, and prompts the player to ensure they are behind cover when the shockwave arrives.

The shockwave is preceded by a rising sound, similar to that of a falling explosive shell, and then a muffled explosion. The delay between the explosion and the shockwave also conveys some spatial information about the game environment, as it creates the impression that the explosion must be occurring very far away for the shockwave to take so long to arrive. This is of course physically impossible; in reality the sound of the explosion and the shockwave are one and the same, but the effect is very convincing nonetheless.

This sequence of sounds allows the player to time their movement between cover objects and avoid a fail state (death). As the cycle of warning sound, explosion and shockwave repeats, it establishes a rhythm in the player’s mind. The player may then rely on this internal sense of rhythm to overcome the puzzles in the section, rather than simply relying on a more literal interpretation of the sound cues to avoid danger. If the player were simply waiting for the sound of an incoming explosive shell to denote imminent danger before moving to safety, some of the section’s puzzles would be much more difficult. One in particular involves using a heavy door as movable cover, and due to its momentum (or lack thereof) the player must pre-empt the warning cue with their movement input in order to be successful. These kinds of puzzles are solvable only by paying attention to the rhythmic nature of the audio, so a respawn system that breaks the rhythmic continuity could easily lead to player frustration or confusion. By keeping the rhythmic continuity intact throughout death and respawning, Inside not only avoids player frustration but also continually conditions the player to listen to the audio cues and respond accordingly. This appears to have been one of Playdead’s key motivators in designing this section. Sound designer Martin Stig Andersen states in Inside: A Game That Listens (2016) that a similar section in their previous release Limbo was compromised by the fact that the audio would not loop continuously, making the puzzle harder to solve. When creating Inside, they designed this section from the ground up to allow for continuously looping audio.

The rhythmic nature of the gameplay allows for parallels to be drawn with rhythm action games such as Guitar Hero and Rock Band. In Inside, however, it is not the player’s reflexes and hand-eye co-ordination that are challenged through the use of rhythmic gameplay, but rather their ability to plan and judge speed, distance and positioning. Technically the sound of the shockwave could also be considered a game mechanic in and of itself, given that the player is attempting to avoid being destroyed by a very large wave of air pressure.

Immersive Functionality

The constantly looping audio cues in this section also have implications for player immersion. Collins (2008, p. 134) states that “Audio plays a significant role in the immersive quality of a game. Any kind of interruption in gameplay – from drops in frame rate playback or sluggish interface reactions – distracts the player and detracts from the immersion and from audio’s playback – particularly interruptions in music such as hard cut transitions between cues”. If the audio were to simply stop at the point of death and start again when the player respawned, it would create exactly the kind of discontinuity that Collins mentions here and have a detrimental effect on player immersion. Ermi and Mayra (2005, pp. 7-8) delineate immersion into two main categories: sensory immersion, created by the graphics and audio of the game, and challenge-based immersion, created by the player applying their skills to overcome the game’s challenges. Death and respawn mechanics present a unique challenge in terms of preserving both sensory and challenge-based immersion. Upon dying, the player has triggered a fail state and no longer has physical control of the character. This interrupts the challenge-based immersion until the player respawns and regains agency. Furthermore, the game will usually have to reset itself back to a previous checkpoint, which most likely means fading the audio out and fading the screen to black, breaking sensory immersion through a lack of audio-visual input. By continuing to loop the shockwave sound effects even after the player dies, Inside retains auditory immersion throughout, keeping the player immersed until they are respawned and challenge-based immersion can resume.


This section of the game is not solely notable for its sound design, as the music implementation also plays a key role. There is a puzzle part way through the section where the player is required to synchronise a piece of cover that moves in a circular pattern with the rhythm of the shockwave. If the player is successful, the cover will move across a ladder at just the right time to protect the player as they ascend it. By its very nature the puzzle can be a difficult one to solve, and the game provides a clear signifier to the player when they achieve the solution by removing the shockwave sound design entirely and replacing it with a musical cue that follows the same rhythmic pattern. First and foremost this notifies the player that they have solved the puzzle, but it has a much more profound effect in terms of player immersion. The game makes an explicit shift from realistic sensory immersion to abstract sensory immersion. With the more literal sounds of the threat removed, the player is now focussed solely on timing their movements to the rhythm rather than taking cover from an explosion. Grau (2003, p. 13) states that “…immersion is mentally absorbing and a process, a change, a passage from one mental state to another. It is characterized by a diminishing critical distance to what is shown and an increasing emotional involvement in what is happening.” Inside almost forces the player into a deeper state of immersion by engineering this change from sound effects to emotionally engaging music in a very sudden and unexpected way, inducing a different mood. As Collins (2008, p. 133) states, “Mood induction and physiological responses are typically experienced most obviously when the player’s character is at significant risk of peril… In this way, sound works to control or manipulate the player’s emotions, guiding responses to the game.” The timing of this shift in mood occurs when the player is in the middle of a fairly stressful section. The only way out of this hostile environment is to progress, and at the exact moment the musical cue enters, the player is halfway up a ladder, exposed until the last moment when the cover swings into place to protect them. Due to the stressful nature of this predicament, the mood induction can be highly effective, arriving at the precise moment the player experiences the relief of solving the puzzle and the endorphin rush of a reward for competency.

As the player progresses from this point, higher pitched reverberant synth layers gradually fade into the mix, drawing the player deeper into the abstract soundscape and even further from the literal auditory events of the scene. Once the player has completed the section, the pulsing, rhythmic synth layers representing the danger of the shockwave fade out entirely, leaving only the nebulous higher pitched synth layers. This serves to notify the player that the danger has passed, and that they can proceed normally from here on. Once the player enters the elevator at the very end of the section, the music fades out entirely, before the shockwave sounds return suddenly and cause the elevator to crash into the water below. This final subversion of expectations catches the player unaware and serves to bring them back to the game’s more literal reality very suddenly. This unexpected tonal shift creates feelings of stress in the player, setting them up for the next mood induction which comes in the form of a claustrophobic underwater gameplay section.

In conclusion, Inside utilises audio in this gameplay section to imbue one fairly simple mechanic with a number of different functions. By giving priority to the looping audio and ensuring that all other gameplay elements conform appropriately, Inside is able to effectively communicate the game mechanics whilst retaining and deepening player immersion, even through repeated player deaths.

REFERENCES

Andersen, M. (2016) Inside: A Game That Listens. [Online Video] Available at https://www.youtube.com/watch?v=Dnd74MQMQ-E [Accessed 4th Feb 2018].

Collins, K. (2008) Game Sound: An Introduction to the History, Theory and Practice of Video Game Music and Sound Design. Cambridge (MA): The MIT Press.

Grau, O. (2003) Virtual Art: From Illusion to Immersion. Cambridge (MA): The MIT Press.

Ermi, L. and Mayra, F. (2005) Fundamental Components of the Gameplay Experience: Analysing Immersion. DiGRA ’05 – Proceedings of the 2005 DiGRA International Conference: Changing Views: Worlds In Play.


An Introduction


For the last decade or so, I have worked as a recording engineer and music producer. During that time I have been fortunate enough to work with a number of incredibly talented artists. From assisting on sessions for Peter Gabriel, Tom Jones and Kanye West and recording and mixing for The Fall, to producing local artists such as Fruitless Forest and Suburban Symphony, it’s been a truly amazing experience. After I graduated from the Tonmeister programme at the University of Surrey I went on to work at Real World Studios in Bath, before setting up Hilltown Studios in Colne, Lancashire.

Recently, however, I have felt in need of a new challenge. I have been fairly obsessed with video games ever since my parents bought me a Sega Master System as a child. The idea of exploring virtual worlds and controlling the events unfolding on my TV screen utterly captivated me, and as technology has advanced, the medium of interactive games has become steadily more immersive and complex. The mainstream adoption of VR technology fascinates me, and seems to be leading us closer to completely immersive experiences that up until now existed only in science fiction. The role of audio in games has always been extremely important to me. When I think back on my most memorable gaming experiences, they are always tied to sound in one way or another. The warp sound from Myst, the sound of the chainsaw from Doom, the alert fanfare from Metal Gear Solid and the infamous “You Died” sound from Dark Souls are etched deep into my subconscious. The soundtrack to Riven still fills me with a sense of awe, and Aeris’ theme from Final Fantasy 7 still makes me want to cry like a baby. I realised I wanted to learn how to create and implement sounds like these.

I got my chance to be a part of this process when I was asked to compose the soundtrack for a game called Beyond Flesh and Blood, a third person shooter in which the player rampages around a post-apocalyptic vision of Manchester killing hordes of rebel forces in a remote controlled mech. The developer Pixel Bomb Games initially only contracted me to write and produce the music for the game, but towards the end of the game’s development they also asked me to implement the sound effects and music within Unreal Engine. Relishing a challenge, I accepted, although at the time I had no idea how the engine worked. After consuming all available online resources within the space of a couple of weeks I was (almost) ready. I quickly set about adding reverberant volumes to create a palpable sense of space, randomising and revising existing sounds to make them more appropriate, adding attenuation to create depth, and recording additional alien slime noises with the help of a bucket full of chicken soup and a sink plunger. It was a steep learning curve, but a lot of fun at the same time. The combination of creativity and overt technological geekery was the reason I got into sound engineering in the first place, and the experience I had working on Beyond Flesh and Blood satisfied both of those cravings in a way I hadn’t yet experienced. It dawned on me that this was something I could specialise in and make a career out of, and have a damn good time doing it too.

And so today I began my studies on the MSc – Sound and Music for Interactive Games (or SMINT, as it appears to be affectionately known – other brands of breath freshener are available). It is my hope that the things I learn and the people I meet over the course of the next year will allow me to pursue a career in video game sound design. Ideally I would like to specialise in sound design whilst keeping my compositional abilities sharp enough to be useful if the opportunity arises. One area I am particularly interested in is the use of HRTFs and head tracking to create true three-dimensional audio for VR experiences. Sound does so much to give us a sense of place in the world, and I believe that recreating believable sonic environments will be pivotal in the long term success of virtual reality. I recently downloaded the Google Resonance SDK and had a play around with it in Unity, and it’s capable of some remarkably convincing results. I’ll be updating the blog with more detailed thoughts on that and many other things over the course of the year. I’ll also be using it as a method of tracking my progress along the Dunning-Kruger curve, which should be a whole world of soul-destroyingly enlightening fun. Onwards!

Links:

My Discogs Page – Past Credits

Beyond Flesh and Blood OST