RSNO New Bands


This post was originally published on the RSNO blog. It is an introduction to the first full project I have produced since joining the Royal Scottish National Orchestra.

RSNO New Bands is a new programme of activity designed to get groups of musicians to incorporate orchestral instruments into their own original material. Digital Projects Coordinator at the RSNO, Neil Cullen, explains.

Using a concert last October in Paisley Abbey with The Twilight Sad and Admiral Fallow as a starting point, Royal Scottish National Orchestra musicians are mentoring fledgling artists on how best to orchestrate for a variety of genres. Last year’s concert was organised as part of Paisley’s Spree Festival, and John Logan, Head of Brass at the Royal Conservatoire of Scotland, was in charge of delivering arrangements of both bands’ songs for full symphony orchestra.

On the morning of the concert, John Logan led an introductory workshop for three young bands from the Glasgow area, with input from Admiral Fallow’s Phil Hague (who also handled some of the night’s arrangements), Sarah Hayes and Kevin Brolly. Also in attendance at this first workshop were Ruth Rowlands, Davur Magnussen and John Whitener, who play Cello, Principal Trombone and Principal Tuba respectively with the RSNO and would go on to lead the individual workshops with each band.

After the seminar on the morning of the concert, each band had an individual workshop in December with Ruth, Davur and John at the University of the West of Scotland. These workshops will continue until June 2014, with a performance planned for later in the year in which each band will perform the fruits of their labour from the New Bands programme with the RSNO.

The bands taking part in the workshops are Marie Collins, Les Rusty Scruffs and Junkie Romance, about whom you can find out more in the next episode of the New Bands video blog series.


RSNO Composer Lab: Anna Meredith

In October 2013, I started work as Digital Projects Coordinator at the Royal Scottish National Orchestra. The RSNO’s Composer Lab concerts took place recently, featuring music from living composers who could explain their own pieces to the audience.

Anna Meredith presented the concerts, and one of her own pieces was performed in an interestingly interactive way. Having written a dissertation on adaptive music, I found the process fascinating. The crowd made decisions on how the piece should be played, affecting the tempo, dynamics and instrumentation in the same way parameters would in a game’s code. These binary choices from the audience, “faster or slower”, “louder or quieter”, could, albeit in a rudimentary way, be compared to the logic statements used to build a reactive game audio engine.
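To make the comparison concrete, here is a minimal sketch of how binary audience choices might drive playback parameters in a reactive audio engine. The parameter names and adjustment amounts are hypothetical illustrations, not anything from the actual performance:

```python
# Hypothetical sketch: binary audience votes mapped onto playback
# parameters, the way a game audio engine maps game state to sound.

def apply_vote(params, choice):
    """Adjust playback parameters based on a single binary vote."""
    if choice == "faster":
        params["tempo_bpm"] *= 1.1      # speed up by 10%
    elif choice == "slower":
        params["tempo_bpm"] *= 0.9
    elif choice == "louder":
        params["gain_db"] += 3.0        # +3 dB roughly doubles power
    elif choice == "quieter":
        params["gain_db"] -= 3.0
    return params

# Start from neutral values and fold in a sequence of crowd votes.
params = {"tempo_bpm": 120.0, "gain_db": 0.0}
for vote in ["faster", "louder", "faster"]:
    params = apply_vote(params, vote)
```

In a real engine the same if/else logic would sit inside the audio update loop, reacting to game state rather than a show of hands.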

I spoke to Anna about the concert for the video above, in which she explains its premise and the crowd interaction elements.



This Monday, my second contribution to Designing Sound (the premier online source for articles and features relating to sound design) went live: an interview with Raymond Usher, an independent audio director who was with Rockstar North from back when they were DMA Design and worked on most of the pre-PS3/360-generation GTA games. It can be viewed by clicking the picture above or linking from here.

The interview covers Raymond’s experience working on the Grand Theft Auto series, his work on Realtime Worlds’ Crackdown and his independent audio outsourcing company Euphonious. It also covers his views on the impact of audio middleware such as FMOD and Wwise, as well as a discussion of Raymond’s research into the importance of audio to the overall experience of a video game.

Raymond Usher started work in the games industry in 1992 as a sound designer at DMA Design, working on a number of titles, most notably the original Grand Theft Auto games. He became the company’s senior audio programmer and, through the release of GTA III, was part of its metamorphosis into Rockstar North. Raymond stayed at Rockstar until the end of development on GTA: Vice City before moving to the newly formed Realtime Worlds to act as Audio Director on Crackdown. After that company’s collapse, Raymond founded Euphonious, an independent audio production company providing direction, sound design, audio programming and music licensing services for developers. A double BAFTA award winner for Vice City and Crackdown, Raymond has been at the forefront of game audio for over 20 years.



Recently I made my first guest contribution to Designing Sound, the web’s number one source for articles and features relating to sound design. The article, an interview with Trevor Cox for the site’s acoustics month, can be read by clicking the picture above or here.

Trevor talks about the Institute of Acoustics, the Good Recording Project and his work with the BBC, and gives some general advice on acoustically treating an audio monitoring environment.

Trevor Cox is a professor of acoustic engineering at the University of Salford. He has presented a range of science documentaries for the BBC, designed the acoustics of various rooms worldwide, is the co-author of a book on absorbers and diffusers and has previously been the president of the Institute of Acoustics.



Over the summer I will be taking part in this year’s Dare To Be Digital competition as Scottish Ambassador to the Chinese entry. This means I take the fifth spot on their team, handling the game’s sound design and music composition as well as some programming and game design; in a team as small as five, there is a lot of room to be cross-disciplinary. The role involved a trip to Beijing to meet the team and work on the concept before flying back and beginning the competition on the 12th of June. As part of the development process we need to maintain a team blog documenting our progress throughout the competition. As the only native English speaker on the team, that task falls to me, so I’ll update this post with all of the weekly team logs. Our game is titled The Unknown.


Week One: Into The Unknown

Week Two: United By Unity

Week Three: Green Screen Dreams

Week Four: All Merging Together

Week Five: Push It To The Limit

FascinatE Project


For three months leading up to summer, I will be working with the University of Salford on an aspect of the EU-funded FascinatE Project. FascinatE is a €9.5m project involving a group of partners from across Europe including the BBC, Technicolor, TNO and ARRI, amongst others. FascinatE stands for Format-Agnostic SCript-based INterAcTive Experience, and the project is looking at broadcasting live events in a way that gives the viewer a more interactive experience no matter what device they are using to view the broadcast.

The FascinatE project will develop a system to allow end-users to interactively view and navigate around an ultra-high resolution video panorama showing a live event, with the accompanying audio automatically changing to match the selected view. The output will be adapted to their particular kind of device, covering anything from a mobile handset to an immersive panoramic display. At the production side, this requires the development of new audio and video capture systems, and scripting systems to control the shot framing options presented to the viewer. Intelligent networks with processing components will be needed to repurpose the content to suit different device types and framing selections, and user terminals supporting innovative interaction methods will be needed to allow viewers to control and display the content.

Joining Ben Shirley and Rob Oldfield (the UoS team responsible for a major proportion of the project’s audio component), I will be working on further development of an audio object extraction system. A large focus of spatial audio research at the moment is object-based audio (similar to that currently used in video games) and how the technique could be applied to linear media, or more specifically, television broadcast. Extracting audio objects in a live context is a tricky proposition. On a small set, actors could be close-mic’d and local GPS could be used to provide positional data; in a live sports set-up such as football, however, another approach must be taken. Instead, pertinent on-pitch audio events such as ball kicks or whistle blows must be triangulated using the existing large-aperture microphone array infrastructure (12 pitch-side microphones).
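The core idea behind triangulating an event from a microphone array is that the sound arrives at each microphone at a slightly different time, and those time differences of arrival (TDOAs) pin down the source position. The sketch below illustrates this with a simple grid search over the pitch; the microphone layout, grid search and all names here are my own illustration, not the project’s actual implementation:

```python
# Illustrative sketch: locate an on-pitch sound event from its arrival
# times at pitch-side microphones, by searching the pitch for the point
# whose predicted time differences of arrival best match the measured ones.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def locate_event(mic_positions, arrival_times, pitch=(105.0, 68.0), step=0.5):
    """Grid-search candidate (x, y) positions on the pitch and return
    the one whose predicted TDOAs best fit the measurements."""
    mics = np.asarray(mic_positions, dtype=float)
    times = np.asarray(arrival_times, dtype=float)
    best, best_err = None, np.inf
    for x in np.arange(0.0, pitch[0] + step, step):
        for y in np.arange(0.0, pitch[1] + step, step):
            dists = np.linalg.norm(mics - [x, y], axis=1)
            pred = dists / SPEED_OF_SOUND
            # Compare relative delays (subtract mic 0's) so the unknown
            # emission time of the event cancels out.
            err = np.sum((pred - pred[0] - (times - times[0])) ** 2)
            if err < best_err:
                best, best_err = (x, y), err
    return best
```

A production system would use a closed-form or least-squares TDOA solver rather than a grid search, but the geometry being exploited is the same.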

Part of what I will be developing is a way to reject audio events detected outwith the boundaries of the pitch, primarily crowd noise. The detection algorithm that listens for ball kicks can currently produce false detections due to drumming in the crowd, and similarly, crowd whistling can be mistaken by the application for referee whistle blows. If the application were able to determine which audio events occurred on the pitch and render the audio output accordingly, these false detections could be avoided.
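In essence, once an event has been localised, the rejection step is a simple spatial gate: keep detections whose estimated position falls within the pitch, discard the rest. This sketch uses standard pitch dimensions and an assumed error margin of my own choosing, not the project’s actual parameters:

```python
# Illustrative sketch: reject detections (e.g. crowd drumming or
# whistling in the stands) whose triangulated position falls outside
# the pitch boundary.

PITCH_LENGTH = 105.0  # metres, a typical football pitch
PITCH_WIDTH = 68.0

def is_on_pitch(x, y, margin=2.0):
    """Accept an event only if its estimated position lies within the
    pitch (plus a small margin to allow for localisation error)."""
    return (-margin <= x <= PITCH_LENGTH + margin and
            -margin <= y <= PITCH_WIDTH + margin)

# Example: the second estimate sits well into the stands and is dropped.
detections = [(52.5, 34.0), (-15.0, 70.0), (90.0, 10.0)]  # (x, y) in metres
on_pitch = [d for d in detections if is_on_pitch(*d)]
```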

The final demo of FascinatE will be hosted at MediaCityUK on the 30th of May, when all of the partners will descend upon Manchester to bring the constituent parts of the project together into one cohesive whole: a glimpse at the future of interactive broadcast.

Somethin’ Else + Papa Sangre

Recently I visited the offices of Somethin’ Else, the largest independent radio production company in the UK. As well as radio, Somethin’ Else are very active in digital media, producing apps and video games for mobile devices. I was there to talk with Digital Project Manager Nev Daniels about my work with binaural audio.

Somethin’ Else are responsible for creating Papa Sangre, a binaural, audio-only game for iOS devices. The first-person thriller was built on what the team boast is the first real-time 3D audio engine implemented on a mobile device. Papa Sangre was supported by Channel 4’s now defunct 4ip. After gaining plaudits for Papa Sangre, the team started working on a successor, The Nightjar, an audio-only sci-fi thriller starring Benedict Cumberbatch, which went on to be nominated for two BAFTA awards. The Nightjar is an interesting beast, more an interactive radio play than a video game in the conventional sense, blurring the lines between the two mediums.


After chatting about my experience with the Android binaural system, Nev mentioned that Somethin’ Else are interested in allowing outside developers to work with their engine. At some point in the future, once some other projects are finished, I plan to develop a prototype based on a game concept I have been putting together. For now I’ll say no more, but it’s something I’ve been mulling over for quite a while, and the prospect of actually putting it into production is incredibly exciting.


The BBC R&D Project

The project that I’ve been working on with BBC R&D, which started to take shape last summer, has recently been completed along with my master’s degree in Audio Production. The thesis project was part of the BBC’s Audio Research Partnership; I was selected as the first master’s student to take part and the first student to work full time from within the R&D department at MediaCityUK.


Binaural audio has benefited from a resurgence of interest recently, as outlined in Chris Pike’s recent blog on the BBC R&D site. As well as giving a good general background on research efforts and applications of binaural audio, Chris’ blog describes my own work and its place within the R&D audio team’s overall work on spatial audio. Chris supervised the project, which acted as the final thesis for my master’s and involved the development of a virtual surround sound application for mobile devices with head tracking. Binaural rendering was used to create five virtual speaker positions that could play back standard-format 5.1 content as currently produced by the BBC, and head tracking was achieved through a Bluetooth connection to an inertial measurement unit (IMU) attached to the headphones.
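The head-tracking idea can be sketched simply: the five virtual speakers stay fixed in the room, so when the listener turns their head, each speaker’s azimuth relative to the head is the nominal speaker angle minus the yaw reported by the IMU. The binaural renderer then picks HRTFs for those relative angles. The code below is a minimal illustration of that angle bookkeeping only, using the standard 5.1 azimuths; the renderer itself and the IMU interface are assumed, not shown:

```python
# Illustrative sketch: virtual 5.1 speakers held fixed in the room,
# with each speaker's azimuth recomputed relative to the tracked head
# orientation before binaural rendering.

# Nominal 5.1 azimuths in degrees (0 = straight ahead, positive = right),
# per the standard ITU layout.
SPEAKERS = {"L": -30.0, "C": 0.0, "R": 30.0, "Ls": -110.0, "Rs": 110.0}

def relative_azimuths(head_yaw_deg):
    """Return each virtual speaker's azimuth relative to the current
    head orientation, wrapped into [-180, 180) degrees."""
    out = {}
    for name, azimuth in SPEAKERS.items():
        out[name] = (azimuth - head_yaw_deg + 180.0) % 360.0 - 180.0
    return out

# Example: turning the head 30 degrees right puts the Right speaker
# dead ahead and the Left speaker at -60 degrees.
rel = relative_azimuths(30.0)
```

The renderer would then convolve each 5.1 channel with the HRTF pair for its relative azimuth, updating as new yaw readings arrive over Bluetooth.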

As binaural audio currently relies on the use of headphones, it could be argued that mobile devices are the perfect platform for this type of spatial audio, given the inherent use of earphones when consuming media on tablets, smartphones and the like. Mobile devices are becoming more powerful, and hence more capable of handling the computational complexity of binaural rendering; more importantly, their popularity has grown enormously in the past few years.

Last week I wrapped up the project with a lunchtime presentation to the rest of the BBC R&D team at the Dock House North lab. Now that the master’s thesis is done and dusted, I can’t help but look to the future. I am going to start regularly updating this blog again with my monthly activities in the audio and gaming worlds, as well as highlighting some of the work of others that catches my eye. Hopefully this will be the final incarnation of a blog which has gone through various transformations over the past couple of years.