NAB 2024: How Generative AI Is Changing the Game

Panel of NFL, PGA TOUR, AWS execs see ‘enormous implications’ in AI deployment

On a Monday panel during NAB 2024, representatives of the NFL, the PGA TOUR, and AWS delivered a detailed vision of the ways they are using generative AI today or intend to use it tomorrow.

The Main Stage presentation at NAB 2024, “How Generative AI Is Changing the Game,” was moderated by TNF broadcaster Kaylee Hartung and featured a panel of experts from across sports and technology, with a packed house in attendance. (Credit: AWS for M&E)

Leading off the discussion, AWS VP, Artificial Intelligence Products, Matt Wood laid out two key reasons the latest generation of AI and machine learning is so exciting. “First,” he said, “there’s a step-function change in the type of problems you can solve. The type, the complexity, the amount of data you can use, the context and understanding you can put around those challenges is just way greater, way larger, much, much larger than has ever been possible before.

“The other,” he continued, “is that there’s just a broad accessibility improvement to these technologies. It has never been easier or faster to actually use machine learning for any task. You no longer need deep data-science expertise. You no longer even need to train your own model: you can just take something that’s out there and start using it, start tinkering with it, start tuning it, and then take it into production very, very quickly.”

Wood set some concrete, even stunning, expectations for the productivity boost companies will see from the “wealth of workloads” that stand to benefit from gen AI. “Today,” he said, “I think of it as a technology which is going to enable us in the next 12-24 months to increase our productivity by about 10X — which is pretty remarkable. Today, I think, we’re maybe at 1.2X or 1.3X. Over the next five years? I think that the utility is going to be more like a 100X improvement.”

The Rise of Agent-Based Workflows

Wood outlined three key growth areas for generative AI: media management, assistants, and agents.

The implications for media management, particularly metadata tagging, are enormous. Much of what the PGA TOUR and the NFL had to say on this topic is noted below.

The second area is obvious and already pervasive. The industry is going to see the rise of AI assistants that can make work easier on numerous fronts.

But the final bucket of use cases Wood outlined seemed to offer the greatest potential for breakthrough innovation: agent-based gen-AI workloads.

“Today,” he said, “these agents can complete tasks which are in the ballpark of about five minutes in length. You can just offload and have it do it pretty well. Anything longer than that today, kind of, isn’t possible. But, over the next year, two years, 10 years, we’re going to be able to offload more-complicated tasks. We’ll hand more long-running tasks to the AI. It won’t be just five minutes, it’ll be 15 minutes, and then five hours, and then five days. That’s how you go from where we’re at with 1.2X today to 10X productivity in the next year or two.”

NFL Deputy CIO Aaron Amendolia explained how the application of optical tracking and AI to game operations has been transformational on multiple fronts, not only for fan engagement.

“We started Next Gen Stats in 2017 with sensors that players wear,” he said, “collecting data live, low latency, then drawing insights for our broadcasters, storytellers, and also game operations. We layered on the Player Health and Safety Initiative in 2019. We’re bringing that data and that insight straight to the fan. If they’re at the game, they can hold their phone up and get an AR overlay of what’s happening on-field. These are the types of experiences we’re building for fans.”

And he sees more ahead: “More cameras, more sensor data, more points of data around the game itself. We’re trying to build these systems that have multiple layers of value, and they coordinate multiple types of AI, computer vision, machine learning, generative AI, and an orchestration layer on top of it, often with natural-language processing.”
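In code terms, the layered system Amendolia describes might look something like the minimal Python sketch below: specialized components (tracking stats, computer vision, generative AI) sitting under a natural-language orchestration layer. Every name and routing rule here is a hypothetical stand-in, not the NFL’s actual system.

```python
# Hypothetical sketch of an orchestration layer over multiple AI components.
# Component names and routing rules are illustrative assumptions only.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Request:
    text: str      # natural-language ask from a producer or analyst
    game_id: str

def tracking_insights(req: Request) -> str:
    return f"[player-tracking stats for game {req.game_id}]"

def vision_clip_search(req: Request) -> str:
    return f"[computer-vision clip search for: {req.text!r}]"

def generative_summary(req: Request) -> str:
    return f"[gen-AI narrative summary of game {req.game_id}]"

# Keyword routing stands in for the NLP intent classifier that would sit
# on top of the stack in a production system.
ROUTES: Dict[str, Callable[[Request], str]] = {
    "stats": tracking_insights,
    "clip": vision_clip_search,
    "summary": generative_summary,
}

def orchestrate(req: Request) -> str:
    for keyword, handler in ROUTES.items():
        if keyword in req.text.lower():
            return handler(req)
    return generative_summary(req)   # default to a generative answer

print(orchestrate(Request("find a clip of the fair-catch signal", "demo-game")))
```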

‘You Can’t Metadata Tag for That’

“If you’re doing metadata tagging manually,” Amendolia noted, “you’re never going to be able to discover everything. Looking at generative AI, natural-language search, indexing with computer vision, our archives of media and content — including what’s coming in and what has happened in the past — we can tap into so much more detail than with traditional tagging.”

Citing a search example (Patrick Mahomes, in the rain, with a Pepsi sign behind him, and a fan in face paint), he said, “You can’t metadata-tag for all those contexts.”

He also expanded on the training and fine-tuning that can be done so that these models begin to truly understand the game. “We can train the AI how to recognize certain football actions that you can’t tag for. I want to see [for example] a fair-catch signal with this particular yard line, this player, but also I want to see it at a night game in this particular state.”

Once that model is built, these search contexts can be used in the various interfaces across the organization rather than being siloed to one specific role.

Said Amendolia, “That’s going to expose the value of our content at a greater scale across the organization. We can build in automation in the future based on that, because the amount of data coming in at scale is massive.”
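To make the fair-catch example concrete, here is a small, entirely hypothetical sketch of that kind of compound query: a fine-tuned action recognizer supplies labels that manual tagging never could, and those labels are filtered alongside conventional metadata (yard line, time of day, venue). All classes and names below are illustrative assumptions.

```python
# Hypothetical compound search: a learned action label plus metadata filters.

from dataclasses import dataclass

@dataclass
class Clip:
    path: str
    player: str
    yard_line: int
    night_game: bool
    state: str

def recognize_action(clip: Clip) -> str:
    """Stand-in for a fine-tuned vision model that labels football actions."""
    return "fair_catch_signal" if "fc" in clip.path else "other"

archive = [
    Clip("fc_001.mp4", "Player A", 20, True, "OH"),
    Clip("run_002.mp4", "Player B", 35, False, "TX"),
]

# Combine the model's action label with conventional metadata filters.
results = [
    c for c in archive
    if recognize_action(c) == "fair_catch_signal"
    and c.yard_line == 20 and c.night_game and c.state == "OH"
]
print([c.path for c in results])
```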

Wood emphasized the importance of this level of depth in metadata tagging: “I think the level of detail that’s available in the tags is far bigger and more expansive than ever before. We were trying out a system the other day and had an athlete running, and we asked the AI to tag it. What we got back was incredibly detailed. It described the type of concrete, what the athlete was wearing; it described not just the sneakers but the brand and the make and the color of the sneakers, which it identified as a brand-specific color.”

Amendolia added the dimension of multimodal capabilities. “Also, [with] the multimodal-search capability — audio plus visual plus data — you can expose the spirit and emotion of a play: I want to find a hard-hitting, aggressive play; I want to see that sense of focus and determination on the players’ faces. It can return that. You can’t metadata tag for that.”
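In practice, search like that is typically built on multimodal embeddings: text, video, and audio are encoded into a shared vector space, so a query like “hard-hitting, aggressive play” matches clips by semantic similarity rather than by pre-assigned tags. A minimal sketch follows; the embed_* functions are hypothetical stand-ins for a multimodal encoder, and the random vectors exist only to make the example runnable.

```python
# Hypothetical embedding-based multimodal search over a clip archive.

import numpy as np

def embed_text(query: str) -> np.ndarray:
    """Stand-in text encoder; a real model maps meaning, not hashes."""
    rng = np.random.default_rng(abs(hash(query)) % (2**32))
    v = rng.normal(size=512)
    return v / np.linalg.norm(v)

def embed_clip(video_path: str) -> np.ndarray:
    """Stand-in video+audio encoder into the same vector space."""
    rng = np.random.default_rng(abs(hash(video_path)) % (2**32))
    v = rng.normal(size=512)
    return v / np.linalg.norm(v)

# Index the archive once: one vector per clip.
archive = ["clip_001.mp4", "clip_002.mp4", "clip_003.mp4"]
index = np.stack([embed_clip(p) for p in archive])

def search(query: str, top_k: int = 2) -> list[str]:
    q = embed_text(query)
    scores = index @ q                      # cosine similarity (unit vectors)
    best = np.argsort(scores)[::-1][:top_k]
    return [archive[i] for i in best]

print(search("hard-hitting, aggressive play with determined faces"))
```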

Live Production in the Cloud: Time, Economic, Environmental Savings

AWS Global Head of Sports Julie Souza noted innovations from all the leagues, including the NHL: “Behind the cameras, so to speak, we are seeing the growth of live cloud production technology. The NHL did this again on March 22. They went live-to-air with a broadcast of an NHL game that was produced remotely.

“That has enormous implications,” she continued. “The savings add up on multiple levels. First, I understand from a league colleague that it can take days — maybe three, four days — to set up to produce a live show. But live cloud technology brings that to just a couple of hours.

“But, and I love this fact,” Souza continued, “the traditional method of production for that game would have emitted more than 2.05 metric tons of carbon dioxide. That’s [the equivalent of] planting 34 trees and having them grow for 10 years.”
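As a back-of-envelope check, Souza’s tree equivalence lines up with the EPA’s greenhouse-gas equivalency estimate of roughly 0.060 metric ton of CO2 sequestered by one seedling grown for 10 years; that figure is our assumption, since the panel did not cite a source.

```python
# Back-of-envelope check of the tree equivalence.
co2_saved_tons = 2.05       # CO2 from traditional production of one game
co2_per_tree_tons = 0.060   # one seedling grown 10 years (assumed EPA estimate)
print(round(co2_saved_tons / co2_per_tree_tons))  # -> 34 trees
```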

Every Shot Live, Perfect for Live Commentary, AI Dubbing

PGA TOUR SVP, Digital Operations, Scott Gutterman offered other examples of how generative AI is opening new fan-engagement opportunities on a global scale. “We have an initiative called Every Shot Live that we started four years ago at the Players Championship. We’re the first golf entity to ever show every single shot from the first drive to the last putt.”

That means that, during a tournament, the PGA TOUR puts out 48 streams across Thursday and Friday, with about 24 of them running at the same time. Those streams run alongside the television and ESPN+ broadcasts, but they carry no commentary.

“Just natural sound,” noted Gutterman, “and you can hear the players. These feeds go all over the world. It’d be nice to have commentary on all of them, but it’s not feasible to find commentators to sit on all 24 different streams.”

The TOUR is evaluating a range of generative-AI technologies to determine what more can be done to drive those individual streams. “Can we use generative-AI voiceovers?” he asked. “And, if a network in Japan wants to see Hideki Matsuyama, can we deliver that to them, and can we do it in Japanese?”
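Gutterman did not describe an implementation, but conceptually such a pipeline chains commentary generation, translation, and speech synthesis. A hedged sketch follows, with every function a hypothetical placeholder rather than any real TOUR system or vendor API.

```python
# Hypothetical AI-dubbing pipeline: shot data -> commentary -> translation -> TTS.

from dataclasses import dataclass

@dataclass
class Shot:
    player: str
    hole: int
    club: str
    result: str   # e.g. "12 ft from the pin"

def draft_commentary(shot: Shot) -> str:
    # A real system would call a generative model here.
    return (f"{shot.player} hits {shot.club} on hole {shot.hole}, "
            f"finishing {shot.result}.")

def translate(text: str, target_lang: str) -> str:
    # Placeholder for a machine-translation call.
    return f"[{target_lang}] {text}"

def synthesize_voiceover(text: str) -> bytes:
    # Placeholder for a text-to-speech call returning audio bytes.
    return text.encode("utf-8")

shot = Shot("Hideki Matsuyama", 17, "a 9-iron", "12 ft from the pin")
jp_audio = synthesize_voiceover(translate(draft_commentary(shot), "ja"))
```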

Some of this made the panel moderator, Thursday Night Football Sideline Reporter Kaylee Hartung, a bit uneasy. “What about the incredible context and history that some of the commentators have — [legendary sportscaster] Jim Nantz and his 37 Masters [tournaments], for example?” she asked. “Won’t that be lost with all these new tools?”

Gutterman had a thoughtful response: “For us at the TOUR, it’s more about empowering the broadcasters. Imagine giving Jim Nantz the opportunity to know, Hey, we’re going to follow these six players today. We’re going to have five talking points after every one of those players finishes a hole. Jim Nantz and [commentator/retired pro golfer] Trevor Immelman will see those opportunities come up, and, if they want, they can work them into the broadcast.”

Going Multimodal: Like Biology and Emergent Properties

Asked to tell broader stories about the impact of gen AI, Wood offered a multimodal story involving biology and emergent properties. “For sure, there are some really interesting properties of these machine-learning models. The bigger they get, the more modalities you mix together, the more some of these properties start to emerge.”

In biology, he explained, predicting how a protein will fold into its stable state has been an intractable problem. However, today’s AI and machine-learning models are doing it in about 10 seconds. “And it looks like an emergent property of these multimodal models is that they are detecting the energy function of the most stable state of the protein.”

His advice for sports-rights owners and distributors: “I’d be creating ever larger models, capturing multiple modalities to be able to encourage those emergent properties to manifest, so that you can channel them into entirely new products and experiences for your fans.”

Amendolia ended the talk on an insight that resonated with the panelists and indeed the entire audience: “The one thing that’s different about this particular moment is that this technology is for everyone in your organization. It is not just for the technologists. It’s not just for the data scientists. Everyone in your organization’s going to use AI in the future to do their job, to be more efficient, to produce new and novel things. It’s going to get involved in every nook and cranny of your business.”
