Tech Focus: Immersive Sports Sound, Part 2 — A Variety of Formats for Live Audio in Venues

So far, only one sports arena, Memphis’s FedExForum, has tested an immersive system

Dolby Atmos has become virtually synonymous with immersive audio for broadcast, having been named the primary immersive-audio format in the ATSC 3.0 specification. As immersive sound makes inroads into live sound, however, a broad array of options is available for its implementation.

Click here for Tech Focus: Immersive Sports Sound, Part 1 — Venue Systems May Be the Next Logical Step.

Netherlands-based Astro Spatial Audio’s SARA II Premium Rendering Engine harnesses Auralite 3D technology, developed with the Fraunhofer Institute for Digital Media Technology (IDMT), to make sophisticated, fully object-based immersive audio accessible for live-sound applications. CPU-based and running a Linux ecosystem, each SARA II engine offers up to 64 MADI or 128 Dante configurable network pathways at 48-kHz/24-bit resolution. All paths are assignable to at least 32 audio input channels, which can be rendered to as many as 128 independently processed sound-source outputs (point-source or plane-wave).

True object-based immersive audio is achieved with 40 synchronization updates per second per object, and advanced algorithms are applied to fast-moving objects to prevent audible errors, all at a latency of 5 ms. SARA II provides browser-based access to an easy-to-operate graphical user interface, with simultaneous control from multiple devices: mixing consoles, digital audio workstations, and Windows, Linux, or Mac computers, as well as tablets and phones running Android or iOS. Control from third-party systems is handled via Open Sound Control (OSC) or MIDI.
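To make that third-party control concrete, here is a minimal sketch of an OSC sender built on the open-source python-osc library. The renderer address, port, and the “/object/3/xyz” path are illustrative assumptions, not Astro Spatial Audio’s documented namespace; only the 40-updates-per-second pacing comes from the figures above.

```python
# Minimal sketch: streaming position updates for one sound object over OSC.
# The IP, port, and address path are assumptions for illustration; consult
# the renderer's OSC documentation for the real namespace.
import math
import time

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.50", 9000)  # renderer IP/port (assumed)

UPDATE_RATE_HZ = 40  # matches the 40 synchronization updates/second cited above

# Sweep object 3 in a slow circle, one position message per tick.
for tick in range(400):
    angle = 2 * math.pi * tick / 400
    x, y, z = 5.0 * math.cos(angle), 5.0 * math.sin(angle), 2.0  # metres (assumed)
    client.send_message("/object/3/xyz", [x, y, z])
    time.sleep(1 / UPDATE_RATE_HZ)
```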

France-based L-Acoustics’ L-ISA (“Immersive Hyperreal Sound”) is an object-based audio-mixing and -distribution format. It is backed by extensive simulation, prototyping, and listening work that yielded three proprietary algorithms for establishing direct spatial, three-dimensional connections between objects and sound for audiences of any size. L-ISA tools give the mixing engineer five parameters for each sound object: pan, width, distance, elevation, and an additional aux send. Each parameter can be controlled in real time through a range of third-party solutions that are part of the L-ISA ecosystem. The format has been applied in theaters, houses of worship, museums, and other types of venues, but no sports venues are reported to have deployed it.
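For a sense of scale, those five per-object parameters amount to a very small data record per source. The sketch below models one such object in plain Python; the field names come from the article, while the numeric ranges are assumptions for illustration.

```python
# Sketch: one mix object carrying the five L-ISA parameters named above.
# Field names follow the article; the value ranges are assumptions.
from dataclasses import dataclass

@dataclass
class SoundObject:
    pan: float        # horizontal placement, assumed -1.0 (left) to 1.0 (right)
    width: float      # apparent source size, assumed 0.0 to 1.0
    distance: float   # perceived depth, assumed 0.0 (near) to 1.0 (far)
    elevation: float  # vertical placement, assumed 0.0 to 1.0
    aux_send: float   # level of the additional aux send, assumed 0.0 to 1.0

# A console or third-party controller would update these fields in real time.
lead_vocal = SoundObject(pan=0.1, width=0.3, distance=0.2, elevation=0.4, aux_send=0.5)
lead_vocal.pan = -0.25  # e.g., a fader move nudges the object stage-left
```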

Germany-based d&b audiotechnik’s Soundscape creates immersive sound using the company’s DS100 Signal Engine and two software modules, En-Scene and En-Space. The former is a sound-object positioning tool that allows individual placement and movement of up to 64 sound objects in a space, such as a stage, so that each sound object corresponds both visually and acoustically. The latter is an in-line room-emulation tool that creates and modifies reverberation signatures for a space; these signatures are emulations derived from acoustic measurements of seven internationally renowned performance venues and convolved within the audio processor. A temporary Soundscape system was installed in July in Memphis’s FedExForum, home of the NBA’s Grizzlies.
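En-Scene object positions can likewise be driven externally over OSC. The sketch below follows the address pattern of d&b’s published DS100 OSC protocol as we understand it, but treat the exact path, port, and coordinate convention as assumptions to verify against current documentation.

```python
# Sketch: placing one En-Scene sound object on the stage over OSC.
# Address pattern, port, and normalized coordinates are assumptions;
# check d&b's current DS100 OSC documentation before relying on them.
from pythonosc.udp_client import SimpleUDPClient

ds100 = SimpleUDPClient("10.0.0.20", 50010)  # DS100 IP/port (assumed)

MAPPING_AREA = 1  # coordinate-mapping area configured on the DS100 (assumed)
OBJECT_ID = 12    # one of the up-to-64 En-Scene sound objects

# Move the object to normalized stage coordinates (x=0.25, y=0.75).
ds100.send_message(
    f"/dbaudio1/coordinatemapping/source_position_xy/{MAPPING_AREA}/{OBJECT_ID}",
    [0.25, 0.75],
)
```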

Meyer Sound’s Galaxy multichannel processor includes an expandable audio matrix typically used to distribute sound coherently to a group of PA loudspeakers, so that one or two channels of sound are delivered to the PA components and listeners get a uniform experience from a single direction. The company’s Spacemap Go spatial sound-design and mixing application leverages Galaxy’s processing power to deliver sound from up to 32 sources in many directions simultaneously and to move those sources dynamically. Immersive-sound systems can be designed to provide the multichannel coverage required for large venues, such as sports venues. A designer could, for instance, move sources along trajectories within the coverage area so that an audio channel does “the wave” in a sports stadium; the Meyer system could manage 32 simultaneous waves running at different speeds and in different directions.
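The trajectory math behind that example is straightforward. The sketch below, with an invented zone count and invented speeds, computes which speaker zone each of 32 waves occupies at a given moment; routing those positions to actual outputs would be the processor’s job.

```python
# Sketch: 32 independent "waves" circling a stadium bowl, each with its own
# speed and direction. The zone count and speeds are invented for illustration.
NUM_ZONES = 64   # speaker zones around the bowl (assumed)
NUM_WAVES = 32   # matches the 32 simultaneous sources cited above

def zone_for_wave(wave: int, t: float) -> int:
    """Speaker zone occupied by wave `wave` at time t (in seconds)."""
    speed = 0.05 + 0.01 * wave              # revolutions per second, per wave
    direction = 1 if wave % 2 == 0 else -1  # alternate the travel direction
    phase = wave / NUM_WAVES                # stagger the starting points
    position = (phase + direction * speed * t) % 1.0
    return int(position * NUM_ZONES)

# At t = 10 s, report where each wave sits around the bowl.
for w in range(NUM_WAVES):
    print(f"wave {w:2d} -> zone {zone_for_wave(w, 10.0):2d}")
```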

California-based Spatial’s Reality engine is built on an object-driven platform that renders realistic, believable soundscapes in real time, with natural physics and complex object behaviors that allow dynamic, around-the-clock experiences. Spatial Studio is a creation environment offering detailed control of fully visualized experiential scenes: creators can watch the whole scene unfold on the 3D canvas and fine-tune object position, size, motion, and behaviors while listening to a real-time preview. Spatial has no current sports-venue implementations. However, the Conservatory of Recording Arts & Sciences (CRAS), which developed Dolby’s broadcast-audio training module, has added Spatial to its curriculum with Spatial Studio 101, a two-day foundational course, offered in person and remotely, that teaches sound-design engineers how to build immersive soundscapes using Spatial Studio’s features.
