NBA Summer League Tests Out, Refines Audio Workflows
New mic arrays and ways of mixing them are a focus
For years, the NBA’s Summer League has been the organization’s Skunk Works: a place to develop new broadcast techniques and technologies relatively free of formal contractual constraints. Among the innovations trialed have been Noah Basketball’s shot-tracking system and Sony Sports’ Hawk-Eye tracking system.
Lately, the experimentation has reached into virtual territory, with the NBA testing cloud-based production workflows for alternative feeds and multi-language broadcasts, using such platforms as Microsoft Azure and Evertz DreamCatcher BRAVO. Over time, various camera, sensor, and microphone technologies have been tested and refined during Summer League events, with the aim of improving broadcasts, officiating, player training, and game-data analysis.
This year’s edition, which finished up recently at the Thomas & Mack Center and Cox Pavilion on the University of Nevada, Las Vegas campus, was no different. And when it came to audio, there were at least a couple of technical three-pointers.
Array Microphones
Shure was able to further refine its multichannel mic-array system, which it has been developing at Summer League over the past several years. An evolution of its corporate-meeting products, the planar arrays place multiple transducers in a fixed arrangement, with each transducer's lobe remotely steerable over a digital network (in the case of Summer League, a Dante network). The units were deployed on each basket stanchion, providing near full-court coverage. Audio was piped to the production truck but was recorded to multitrack rather than used in ESPN's broadcasts. The recordings, notes Shure Associate Director, Global Product Management, Pro Audio, Bill Oakley, can be used later for further research into creating more immersive audio mixes for basketball broadcasts.

Shure’s Bill Oakley: “The [mic-array system] is a digitally controlled product on a Dante or other digital audio network, which opens up a slew of options and possibilities.”
“Basically,” he continues, “we wrote the software platform to be specific for this application, this market, while reutilizing the mechanical aspects as a way to be quicker to market.”
Expected to be released this fall, ahead of the late-October start of the new NBA season, the broadcast version will incorporate Shure's IntelliMix DSP, which offers echo cancellation and noise reduction, tweaked to add more functionality and control.
“It’s still a work in development,” Oakley explains. “As you can imagine, in the broadcast or live space, the noises are much more variable [than in corporate-meeting environments], and it’s not going to be the same constant tone, per se. We have our initial offering in the array today, but there are more plans and development happening to improve that and tailor it for [sports] applications.”
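Shure hasn't published how IntelliMix adapts to variable noise, but the general principle Oakley alludes to, tracking a shifting noise floor and attenuating material that doesn't rise above it, can be sketched in a few lines. This is a toy illustration with made-up parameters, not Shure's algorithm:

```python
def noise_reduce(samples, block=256, alpha=0.05, margin=2.0):
    """Toy broadband noise reduction: track a slowly adapting noise-floor
    estimate per block and attenuate blocks that don't rise clearly above
    it (a downward expander). Generic illustration only."""
    floor = None
    out = []
    for i in range(0, len(samples), block):
        blk = samples[i:i + block]
        level = sum(abs(x) for x in blk) / len(blk)
        if floor is None:
            floor = level
        # adapt faster when the level drops, so the floor estimate
        # tracks the quietest recent passages rather than the crowd peaks
        rate = alpha if level < floor else alpha * 0.1
        floor += rate * (level - floor)
        gain = 1.0 if level > margin * floor else 0.25
        out.extend(x * gain for x in blk)
    return out
```

A real broadcast DSP works spectrally, per frequency band, and with far more sophisticated adaptation; the point here is only the variable-noise-floor idea.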
Array-type sound capture is certainly broader in range than the typical shotgun-mic assembly used for end-zone capture. Although the specifications of the broadcast version are still under wraps, Oakley notes that the audio quality comes from a planar array's ability to form a tight directional lobe, with better side/rear rejection than a long shotgun offers, and from packing eight such lobes onto a single network cable in a fraction of the physical space an equal number of shotguns would occupy.
“By reducing the ambient sound contributing to the noise floor,” he explains, “the lobe channel ends up with the targeted sound being more present.”
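The steerable lobe Oakley describes comes from beamforming: the array sums its transducers after per-element delays, so sound arriving from the target direction adds coherently while off-axis sound adds incoherently. A minimal delay-and-sum sketch for a uniform linear array, with hypothetical geometry and far less refinement than a commercial planar array, looks like this:

```python
import math

C = 343.0    # speed of sound, m/s
FS = 48000   # sample rate, Hz

def steering_delays(n_mics, spacing_m, angle_deg):
    """Per-mic delays (in samples) that align a plane wave arriving from
    `angle_deg` (0 = broadside) across a uniform linear array."""
    theta = math.radians(angle_deg)
    delays = []
    for m in range(n_mics):
        # path-length difference for mic m relative to mic 0
        tau = m * spacing_m * math.sin(theta) / C
        delays.append(round(tau * FS))
    return delays

def delay_and_sum(channels, delays):
    """Sum the channels after compensating each one's steering delay.
    Sound from the steered direction adds coherently; off-axis sound
    adds incoherently, producing the directional lobe."""
    n = len(channels[0])
    base = max(delays)
    out = []
    for i in range(n):
        acc = 0.0
        for ch, d in zip(channels, delays):
            j = i + d - base
            if 0 <= j < n:
                acc += ch[j]
        out.append(acc / len(channels))
    return out
```

Re-steering the lobe is just recomputing the delay set, which is why a digitally controlled array can be re-aimed remotely where a physical shotgun would need an A2 on the floor.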
Automation Is Key
Much of the potential for these systems, however, lies in their capacity for automation. Automixing is already in common use for specific broadcast-sports applications, such as managing multiple announcers, whether via dedicated processors and plugins, such as the Dan Dugan automixer plugin from Waves, or integrated into consoles, such as Calrec's Artemis and Apollo desks.
The concept can be taken beyond its current use cases for broadcast sports. “Instead of having one shotgun that picks up the entire key (and of course all the ambient noise that comes with it),” Oakley explains, “you can fan out five, six, or seven virtual shotguns that have a tighter pattern, higher ambient rejection, and then you can leverage the auto mixers.”
He notes that pairing arrays with automation multiplies the benefit of each, “while also trying [to minimize] the number of faders A1s need to manage. Engineers already have enough inputs to deal with; they don’t need more. As action moves between the different virtual shotguns, only those tiny lobe shotguns are being turned on, and you still have that one fader hitting the desk and controlling your levels accordingly.”
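The gain-sharing approach Dugan popularized can be sketched simply: each channel's gain is its share of the total short-term level, so the summed gain stays at unity and the active lobe dominates without hard gating. A minimal single-fader illustration (not any vendor's actual implementation):

```python
def gain_share(levels, floor=1e-9):
    """Dugan-style gain sharing: each channel's gain is its share of the
    total short-term level, so the gains always sum to unity and the
    loudest channel(s) dominate without abrupt gating."""
    total = sum(levels) + floor  # floor avoids divide-by-zero in silence
    return [lv / total for lv in levels]

def automix_frame(frames):
    """Mix one block of multichannel audio: estimate each channel's mean
    power, derive sharing gains, and sum to a single output, so the A1
    rides one fader instead of one per lobe."""
    levels = [sum(x * x for x in ch) / len(ch) for ch in frames]
    gains = gain_share(levels)
    n = len(frames[0])
    return [sum(g * ch[i] for g, ch in zip(gains, frames)) for i in range(n)]
```

In the Summer League scenario, each "channel" here would be one steered lobe of the array; the automix collapses them into the single fader Oakley describes.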
At a time when broadcast budgets are in flux, automation’s potential is being eyed carefully. But even beyond that, the shift to immersive audio formats is increasing channel counts, and automation can help manage those broader soundscapes. Systems like Lawo Kick, which can automatically track the path of soccer balls in play, are already in use. Shure’s array prototypes have also been tested by MLB and golf broadcasters, in addition to the NBA.
Oakley emphasizes that the intent of further developing sound-capture automation isn’t to reduce staff but to make A1s’ current workloads easier, in terms of fingers on faders as well as field logistics. “You could never steer shotgun microphones as efficiently as you can digitally controlled arrays without having to send a couple of A2s out there to move them around.”
But, he stresses, the ultimate goal is better sound quality, something more achievable with arrays. “We’re a microphone company. We have musicians and people who are passionate about this, so the audio quality obviously was the first thing.
“But a very close second,” he continues, “was that this is a digitally controlled product on a Dante or other digital audio network, which opens up a slew of options and possibilities. With REMI, [for example] the ability to have someone 2,000 miles away not only mix the audio but also adjust microphones is a totally different workflow, adding more control to the engineers. So, good news: we haven’t replaced anyone’s job; we’ve just simply made the tools to do their job better.”
Incorporating NBA Player Tracking Data
EDGE Sound Research was another repeat camper at Summer League, there to find ways to further automate the effects mixes of the game. Its tests used a variety of microphone combinations along with NBA Player Tracking Data to automatically generate a submix, dynamically adjusting based on the positional data of players and other objects on the court. Another test explored a more hands-on approach, with the audio objects made available to a submixer for manual control — allowing greater creative input and flexibility in the mix.
“Our big focus,” says EDGE Sound Research Co-Founder/CEO Valtteri (Val) Salomaki, “was to be able to live-test our Virtual Sound Engine solution that we’ve been building here over the last couple of years, and the first tool in there we call Audio Focus. What Audio Focus does is take in any of the object-tracking data and use all the microphones around the perimeter and automatically lock onto any of the objects on the field of play and mix based off those objects. Instead of mixing based on channel count, it’s mixing based off the best combination of all microphones to capture player one, player two, etc., and the ball, and separate those out as output tracks.
“[It’s] able to give the A1s more of a superpower,” he continues, “because they don’t have to worry about the submix on the court but have flexibility to choose what part of the field of play is being captured and emphasized. After everything is captured and automated on that side, we can apply different types of digital-signal processing to clean up those feeds and separate those feeds.”
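EDGE hasn't disclosed how Audio Focus weighs its microphones, but the basic idea of deriving a per-object submix from positional data can be sketched as distance-weighted mic selection. Everything below, the function names, the inverse-square weighting, the proximity clamp, is a hypothetical stand-in:

```python
import math

def mic_weights(mic_positions, target_xy, power=2.0):
    """Weight each perimeter mic by inverse distance to a tracked object
    (player or ball), normalized so the weights sum to 1. Hypothetical
    stand-in for tracking-driven mic selection."""
    dists = [math.dist(p, target_xy) for p in mic_positions]
    raw = [1.0 / max(d, 0.5) ** power for d in dists]  # 0.5 m proximity clamp
    total = sum(raw)
    return [w / total for w in raw]

def object_submix(frames, mic_positions, target_xy):
    """Per-object output track: a weighted sum of all perimeter mics,
    refreshed each block as new positional data arrives."""
    w = mic_weights(mic_positions, target_xy)
    n = len(frames[0])
    return [sum(wi * ch[i] for wi, ch in zip(w, frames)) for i in range(n)]
```

Each tracked object gets its own call with its own coordinates, yielding the separated output tracks Salomaki describes, which a submixer can then ride manually or leave automated.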
The focus on broadcast-sound objectification, Salomaki adds, is an outgrowth of the company’s original mission, which was to combine both auditory and physical sensations of sound into a palpable experience, deployed initially at MLB parks to enhance the game experience for hearing-challenged fans in the venue.
That continues to be a goal, but the potential for broadcast-audio quality is exponentially greater. “What we realized,” he says, “is that, instead of trying to condition signal all the way at the end of the signal chain, if we can automate the signal chain at the very beginning, there’s so much more flexibility to have next-generation content. [It’s] not just for what we want for our embodied sound technology but for what broadcasters want to be able to leverage, what social-media teams want to be able to leverage, [and] what next-gen content wants to leverage. The big bottleneck in the signal pipeline right now is that everything goes into one mix and everybody gets that one mix.”
These focused mixes, created using algorithms rather than AI to minimize processing latency, would be used for replay applications, says Salomaki.
The NBA Summer League has become an essential off-season learning and experimentation session, one that can change how fans experience the game. “By integrating intelligent software with advanced audio hardware,” says Barney Carleton, associate VP, broadcast planning and strategy, NBA, “we’re redefining how live basketball is experienced — not just today but for the future.” The league, he adds, focused on enhancing basketball-specific sounds — sneaker squeaks, ball bounces — while reducing extraneous and distracting sounds on broadcasts.
“The innovations,” he points out, “enable real-time, data-driven audio mixing that adapts dynamically to the action on the court, creating a more immersive and authentic soundscape. The NBA 2K26 Summer League provides an ideal environment for this kind of innovation — with a high density of games and a collaborative atmosphere that encourages experimentation and rapid iteration at scale.”
