Saturday 16th December was the second of the gig nights my colleagues at Red Wall Studios and I have put on under the Red Wall Live brand. This was, however, the first to include a fully licensed bar. I have held a Personal Licence for over 10 years, so I might as well put it to good use.
The bands for the evening were Vice, Kaine and Midnight Prophecy, and the night was hosted by Matt Jones (Twisted Illusion), who also played an acoustic set. It was a very successful evening and a pleasure to engineer and work with all the bands. Here is my view of the event:
The events take place in our main Live/Rehearsal Room, with a temporary bar in the Chill Out area. We will be promoting monthly events starting in the New Year (the next is planned for 27th January 2018). These will be intimate affairs of around 40 tickets per event, with the main aim of showcasing the local bands/musicians/acts which use our Recording and Rehearsal facilities.
If you would like more information, or to register interest in performing at a future event then get in touch with Red Wall Live.
In an earlier blog I discussed what Dolby Atmos is and how it works. Over the past few weeks I have been exploring more of what Dolby Atmos can do. Here I will share with you a short experimental binaural soundscape I have created.
Dolby Atmos is based upon a 7.1.2 channel bed, with up to 118 additional audio objects which can be positioned within the space. My experimental piece uses two audio objects moving around the room: horror voice and sound effect samples. Each object is sent to a reverb, which makes up the 7.1.2 channel bed of the room. The screenshots below show the Pro Tools session:
The top screenshot shows the Dolby Atmos Panner automation and the movements within the Dolby Atmos Monitor application. The bottom screenshot shows the mixer page with reverb settings, including positioning and divergence amount for each of the two audio objects within the room.
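The panner automation above is essentially a time series of positions per object. As a rough illustration of the idea (my own sketch in Python, not Dolby's metadata format or toolchain), a circular fly-around path for one object could be generated like this, using a normalised room model where x/y run from -1 to 1 and z from floor (0) to ceiling (1):

```python
import math

def circular_pan(duration_s, rate_hz=10.0, height=0.5, revolutions=1.0):
    """Generate (time, x, y, z) pan points for an object circling the room.

    Coordinates use a normalised room model: x and y in [-1, 1]
    (left-right, back-front) and z in [0, 1] (floor to ceiling).
    Illustrative only; real Atmos panner metadata differs.
    """
    points = []
    steps = int(duration_s * rate_hz)
    for i in range(steps + 1):
        t = i / rate_hz
        angle = 2.0 * math.pi * revolutions * (t / duration_s)
        x = math.cos(angle)   # left (-1) to right (+1)
        y = math.sin(angle)   # back (-1) to front (+1)
        points.append((t, x, y, height))
    return points

# One 8-second revolution at half room height, 10 points per second
path = circular_pan(8.0)
```

Each point could then be written as automation breakpoints; the panner interpolates between them.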
A really useful feature of working with Dolby Atmos is the ability to easily monitor both binaurally and through the speaker configuration of the studio you are working from; this is easily configurable within the Dolby Atmos Renderer application. In this case I was monitoring the 7.1.2 mix in Virtual Reality (VR) mode over headphones, while being able to switch to the studio's 5.1 surround speaker setup. I was very impressed with how everything sounded through the 5.1 speaker configuration.
If you would like to have a listen to my short piece, put on your headphones and click below.
A few days ago I was hired to undertake some mastering for a new client. Today saw the release of the new and improved FabFilter Pro-L 2. I own all the products made by FabFilter, so was eligible for a 70% discount: an ideal opportunity to give it a whirl.
I have used the original Pro-L on pretty much everything I have worked on, and I was impressed with the update. There are some nice improvements. There is a new True Peak limiting mode with improved oversampling options. Display modes and metering have been improved, including loudness metering with support for the EBU R128, ITU-R BS.1770-4 and ATSC A/85 standards. Surround sound support has also been added, including Dolby Atmos 7.1.2, although I have yet to try this out.
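The idea behind True Peak metering is that the analogue waveform can peak between sample points, so the signal is oversampled before the peak is measured. A minimal numpy sketch of the principle, using band-limited FFT oversampling (my own illustration in the spirit of the BS.1770 true-peak measurement, not FabFilter's implementation, which uses filter-based oversampling):

```python
import numpy as np

def true_peak(x, oversample=4):
    """Estimate the true (inter-sample) peak of a signal by
    zero-padding its spectrum, i.e. band-limited 4x oversampling,
    then taking the absolute maximum of the upsampled waveform."""
    n = len(x)
    spectrum = np.fft.rfft(x)
    padded = np.zeros(n * oversample // 2 + 1, dtype=complex)
    padded[: len(spectrum)] = spectrum
    # irfft normalises by output length, so restore the amplitude
    upsampled = np.fft.irfft(padded, n * oversample) * oversample
    return np.max(np.abs(upsampled))

# A sine at fs/4 whose samples all miss the waveform peak:
fs = 48000
t = np.arange(480) / fs
sine = np.sin(2 * np.pi * (fs / 4) * t + np.pi / 4)
print(np.max(np.abs(sine)))   # sample peak, ~0.707
print(true_peak(sine))        # true peak, ~1.0
```

The sample peak under-reads by about 3 dB here, which is exactly the situation a True Peak limiter is designed to catch.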
When the song is released I shall update this blog accordingly. In the meantime, if you would like to learn more about the FabFilter Pro-L 2 see the video below.
Having decoded the SoundField ST450 recordings to 5.1, I needed to position the other microphones' recordings within the mix. I have been using the ambiX ambisonic plugin suite for a while and would like to demonstrate some of its uses. The ambiX suite uses convolution of head-related transfer function (HRTF) databases and loudspeaker impulse responses (IRs) to create ambisonic and 360 audio content. It is also possible to use the ambiX suite with HRTF databases or IR presets you have created yourself. This is something I am exploring further, but I need to understand MATLAB in more detail. If you are interested in creating your own databases, the AES69-2015 standard is a good read.
Using the Binaural Decoder within the ambiX suite, I was able to simulate 12 virtual loudspeakers in a dodecahedron array. The screenshot below shows the pan positions of the five microphones used.
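Conceptually, the binaural decode stage takes the signal feeding each virtual loudspeaker, convolves it with the left/right HRIR pair for that speaker's direction, and sums the results into two ears. A bare-bones numpy sketch of that final stage (the HRIRs here are placeholder arrays for illustration, not a real measured database):

```python
import numpy as np

def binauralise(speaker_feeds, hrirs):
    """Render virtual loudspeaker feeds to binaural stereo.

    speaker_feeds: (n_speakers, n_samples) decoded speaker signals.
    hrirs: (n_speakers, 2, hrir_len) left/right impulse responses,
           one pair per virtual loudspeaker direction.
    Returns a (2, n_samples + hrir_len - 1) binaural signal.
    """
    n_spk, n_samp = speaker_feeds.shape
    out = np.zeros((2, n_samp + hrirs.shape[2] - 1))
    for s in range(n_spk):
        for ear in (0, 1):
            out[ear] += np.convolve(speaker_feeds[s], hrirs[s, ear])
    return out

# Placeholder example: 12 virtual speakers, trivial one-tap "HRIRs"
feeds = np.random.randn(12, 1000)
hrirs = np.zeros((12, 2, 64))
hrirs[:, :, 0] = 1.0 / 12
binaural = binauralise(feeds, hrirs)
```

With real HRIRs, each speaker's contribution carries the interaural time and level differences for its direction, which is what creates the impression of the dodecahedron array around the listener.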
The ambiX Encoder is a spherical panner. The Rode NT4 was positioned high above the conductor's head during recording, so I have positioned it with left/right at -/+90 degrees azimuth and 60 degrees of elevation to give some stereo height within the mix. The Rode NT2-A mono mic was also high up and behind the conductor, so it is central with 75 degrees of elevation. The pair of AKG C214s were stereo room mics and have been positioned accordingly within the mix.
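Under the hood, a first-order spherical panner of this kind reduces to four gains per mono source. A small sketch of the maths, assuming the ambiX conventions (ACN channel order W, Y, Z, X with SN3D normalisation) and using the NT2-A position above as the worked example:

```python
import math

def ambix_encode_fo(azimuth_deg, elevation_deg):
    """First-order ambisonic encoding gains in ambiX convention
    (ACN channel order W, Y, Z, X; SN3D normalisation) for a
    mono source at the given direction."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = 1.0                             # omnidirectional component
    y = math.sin(az) * math.cos(el)     # left-right figure-of-eight
    z = math.sin(el)                    # up-down figure-of-eight
    x = math.cos(az) * math.cos(el)     # front-back figure-of-eight
    return [w, y, z, x]

# The mono Rode NT2-A: central (0 degrees azimuth), 75 degrees elevation
gains = ambix_encode_fo(0.0, 75.0)
```

Multiplying the mono signal by these four gains gives its B-format contribution; the decoder and HRIR convolution stage then take care of the binaural rendering.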
I did experiment with higher order ambisonics and more virtual loudspeakers (up to 50); however, due to the natural room reverb within the open recordings, this caused phase issues during convolution with the impulse responses.
If you would like to listen to the audio, please click below (headphones required):
I’ve been aware of the Facebook Spatial Workstation since attending a conference on spatial audio earlier this year. I tried it, but it never really interested me as I didn’t have access to 360 video cameras to create video content. It has since been updated considerably and is now included within the latest version of Avid Pro Tools, although the HD version is required to use it effectively. I decided to take another look.
My fondness for Reaper increases daily due to its flexibility for undertaking fun tasks. The download of the Facebook Spatial Workstation plugins includes a template Reaper project, with easy-to-follow instructions to get started. The Facebook Spatial Workstation can be used to create 360 audio alone, or to incorporate 360 audio with 360 video content.
For this new exploration into the capabilities of the Facebook Spatial Workstation I acquired a free 360 video from here. The video included some sound of the sea, which I kept, but I added some birds and an aeroplane. The images below show the positioning of these sounds relative to the video. The sea is fairly obvious, but I positioned the birds above the cliffs. You can also see the pan automation lane for the aeroplane flyover.
Along with these sound effects, I recorded some speech of my own. This is head-locked as the viewer moves around within the 360 video. Incorporating and encoding all these elements was a fairly simple task. Clear instructions for full use of the Facebook Spatial Workstation are available here. An image of the FB360 Encoder is shown below. The focus section at the bottom enables overall specification of the viewing angle and level reduction.
Having used the Facebook Spatial Workstation with more enthusiasm, I feel it can be a very creative tool. The videos can be encoded into a variety of formats, depending upon the platform on which you would like to share the content. Mine was created for Facebook, and if you would like to check it out, please see below.
I have been looking into a few of the available options for online collaboration within audio production. There are numerous ways to share files and projects between collaborators, such as Dropbox and the like. However, I am more interested in what is available for real-time, or live, collaboration between studios and artists.
One product I have been aware of for a while is Source Connect. This is advertised as an industry-standard ISDN replacement. Source Connect is used to connect studios around the world, primarily to record voice talent and Automatic Dialogue Replacement (ADR). It works as standalone software or can be integrated into any DAW. It does appear to be a very good product; however, it is quite specific in its uses. It is also quite expensive and, as it does not yet support 64-bit on Windows, it is not something I am keen to purchase.
Another product I have been looking at recently is VST Connect from Steinberg. There are two versions: VST Connect Pro, and VST Connect SE, which is included within the Cubase Pro DAW software. The main differences are that VST Connect Pro supports up to 16 tracks, compared to just stereo, and also supports higher-quality uncompressed audio transfer. VST Connect can transfer audio, MIDI and video between the host studio and an artist/musician anywhere in the world over an internet connection. To connect to the host studio (which will be running Cubase Pro), the collaborating musician needs the free VST Connect Performer application running on their computer. Once the connection is made between host and client, audio, MIDI and video communication is established. As a Cubase 6.5 user of many years, VST Connect could be one of the many improvements in Cubase Pro which may soon tempt me to upgrade.
The Avid Pro Tools S6 control surface is based upon a modular design, which enables it to be fully customized to the user’s studio requirements. The recent Pro Tools HD 12.8 update introduced support for Dolby Atmos mixing and audio rendering. The Avid Pro Tools S6 control surface should give deep integration with the software for parameter control, manipulation and speed of mixing. A promotional video by Avid, demonstrating some of the new improvements in the Pro Tools HD 12.8 update with Dolby Atmos support, is shown below:
Dolby Atmos is a form of three-dimensional audio designed to give the listener a fully immersive experience. Conventional stereo and surround sound setups (such as 5.1) place the audio around the horizontal speaker plane; Dolby Atmos adds height channels and audio objects to create an audio atmosphere. Audio objects can be placed anywhere around the listening position to enhance the experience for the listener. Below is a video by Dolby explaining in detail the principles of how Dolby Atmos works:
During this semester of the MSc Audio Production I shall be undertaking some mixing in Dolby Atmos for video and audio projects. I will post further updates as I begin to use the Avid Pro Tools S6 for mixing the projects over the next few weeks.
It’s been fantastic to be involved with Local Music Live over the past few months. Big congratulations to Feliicia Eliza on being chosen as the winner. Also, a big thank you to everyone who entered and submitted some great new music. Hopefully there will be many more competitions and events in the future.
Today my colleagues at Red Wall Studios and I began our part of the prize-winning package: a fully recorded, mixed and mastered single, undertaken here at Red Wall Studios. Two days have been allocated for completion, and all instrumentation was tracked today.
The instruments recorded and microphones used were as follows:

Drum kit:
- Kick – AKG D112
- Snare Top – Audix i5
- Snare Bottom – Shure SM57
- Floor Tom – Audix D4
- Rack Tom – Audix D2
- Hi-Hats – AT2020
- Overhead Left – SE1A
- Overhead Right – SE1A
- Overhead Centre – Shure PG81
- Kit Mic – Rode NT2
- Room Mic – SE2200a

Bass – DI

Electric Guitar – DI

Acoustic Guitar:
- Body – AT2020
- Neck – SE1A
- DI
Below are photos of the drum kit with microphone placement and tracking in the control room.
It was a pleasure to work with Feliicia and her band, and it was a thoroughly enjoyable day.