Special Report: Will the Cloud Change Everything for Live Sports Production?

The Fall 2023 edition of the SVG SportsTech Journal is now available! Among the plethora of content in this free, digital edition is a special report on cloud production. Live sports productions, particularly smaller shows, have begun to embrace cloud-based production, whereby camera signals are sent directly into the cloud rather than being cut on site. In this piece, facility providers discuss the latest developments in cloud-based production and how your organization can take the plunge.

The conversation below was conducted during the SVG Remote Production Forum in L.A. earlier this year and features Dave DeBuhr, Account Manager, Professional Networked Solutions Division, Sony Electronics; Michael Kidd, Director of Architecture, Disney Entertainment & ESPN Technology; Angus Mackay, Product Marketing Manager, Matrox; Costa Nikols, Global VP of Sales Enablement, Riedel Communications; and Corey Smith, Senior Director, Digital, Software Engineering, CBS Sports and Paramount+.

SVG: Michael, what does the cloud mean for ESPN live production?

Michael Kidd, Director of Architecture, Disney Entertainment & ESPN Technology

Kidd: That's way too big a question, so I'm going to narrow it way down. We're just going to talk about remote production a little bit. We're looking to enable remote production with more capability, more flexibility, more agility, and more capacity than we've ever had before. Our strategy is looking at the cloud as part of that solution, whether it's live switching in the cloud or adding cloud aspects to a more traditional production. Those pieces are going to help us figure out how we do all of these events at a quality level that we've come to know and expect.

SVG: Michael, at ESPN you have thousands of these events going on. How does the cloud make economic sense?

Kidd: I think most of the people in this room have some idea that the notion of remote production means cost savings somewhere, whether we’re not sending people to the place or we’re not renting as much gear or we’re capitalizing some of that. You can only build so much capability to do remote production concurrently.

If you build a room or two, you will come to the day where you want to do your third or your fourth production that day and you need a control room from somewhere. And that's where this occasional-use, not-every-day capacity gives you the get-out-of-jail-free card to be able to say, yes, we can do this.

Smith: Last Thursday, we had our UEFA soccer — I think it was week two now — but we had 16 different soccer matches happening within a 12-hour period of time. You couldn’t build that kind of capacity in a control room situation in a facility because it would be way too much money and because it’s occasional use, half the space is going to go dark most of the year.

Mackay: We have a customer who in January will go to air with our new framework, so they will be receiving camera signals, switching, applying graphics, and then putting it to air in a fully cloud-based system. We feel that the industry expected this to take two to five years.

Kidd: It absolutely works today. That said, the people you have who know how to make television may not yet know how to make television in the cloud. There are a lot of extra skills. There's a lot of skilling up. You may not have a DevOps guy yet, but you're going to want one. The technology is all there, but there is a little bit of a paradigm shift in terms of the skill set you need to make it work.

SVG: Corey, can you discuss the launch of the Golazo Network?

Corey Smith, Senior Director, Digital, Software Engineering, CBS Sports and Paramount+

Smith: Sure. I'm proud to be able to run a technology team that focuses on core television technology based in the cloud. In April, we launched our first digital linear network, the Golazo Network. This digital linear network is 100% cloud-based: master-control operations, packaging, playout, asset management, clipping, captions, and distribution are all in the cloud.

We have three different live desk shows every single day that originate out of Stamford, CT, and those shows are basically contributing to the live productions, which are in the cloud. The operators are completely remote. My engineering staff is completely remote. I work out of my house in Orange County, CA; my DevOps guy is in Seattle; and my technical PM, who actually runs a lot of the day-to-day operations for the Golazo Network, is in Austin, TX. So our operators, engineering staff, and basically everyone who touches cloud for CBS Sports work in a completely distributed model.

The cloud for us is also a way not just to do cool and unique things like the Golazo Network, a first-of-its-kind soccer network here in the U.S.; it also gives us the ability to expand both our ground-based and our facility-based operations, because we're not going to build more control rooms. We're going to migrate a lot of those workloads to the cloud because that's where it makes financial sense. We'll take the workloads that are the common denominator: MCR-type operations, clipping, editing, cloud-based craft edit, that kind of thing. We're moving a lot of that to the cloud, but we're there to support our existing professional sports broadcast-engineering crews that go out to all of our NFL games, NCAA games, PGA, and more as they move to hybrid operations.

SVG: Following up, there is a lot of talk around cloud and “spinning up and spinning down” a service. And it sounds very simple. What kind of preparation did it require?

CBS Sports' Golazo Network, a first-of-its-kind soccer network in the U.S., is run largely via the cloud.

Smith: We spent about four months building infrastructure, and we were able to launch around the fifth month. A lot of it was stacking storage and stacking the boxes in the environment for playout, asset management, et cetera, and then tuning it all to make it work. Cloud routing was a major component of this. We're SRT at our core, so anything coming in from the ground is typically going to be a backhaul from our facilities. But once it's in our core, it's really all SRT routing in and out of the components, for live recording, caption pass-through, and distribution. We had to build a facility in the cloud that mimicked what we have in, say, Fort Lauderdale, a broadcast center in New York, or even Stamford, CT.

So, we had to sit down and figure out what we needed: storage, asset management, captions, and the ability to switch the feeds and do live transport. We worked with a number of our partners to craft and build our infrastructure in the cloud, which includes routing, storage for live content, live subclipping of content, playout, and live captions in the cloud 24 hours a day, seven days a week.
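To make the SRT-at-the-core idea concrete, here is a minimal sketch of relaying a backhauled feed through a cloud routing point. This is an illustration, not CBS's actual stack: the endpoints are hypothetical, and the use of ffmpeg (built with libsrt) as the relay is our assumption. The feed is accepted as an SRT listener and handed to a downstream component as an SRT caller, with no re-encoding:

    import subprocess

    # Hypothetical endpoints: accept the backhauled contribution feed as an
    # SRT listener, hand it to a downstream packager as an SRT caller.
    INGEST = "srt://0.0.0.0:9000?mode=listener&latency=2000"
    PACKAGER = "srt://packager.example.internal:9010?mode=caller"

    # ffmpeg with libsrt can relay the MPEG-TS stream without re-encoding;
    # this is routing, not processing.
    subprocess.run([
        "ffmpeg",
        "-i", INGEST,     # wait for the incoming feed from the ground facility
        "-c", "copy",     # pass the essence through untouched
        "-f", "mpegts",   # SRT carries MPEG-TS
        PACKAGER,
    ], check=True)

In a real core, many of these relays run concurrently, one per route, with the routing layer deciding which callers each listener feeds.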

Then we took those workloads and built those services in the cloud in a way that was scalable. We run our AWS environment in five regions around the world. Our backups are in Dublin, Ireland, and Oregon. Our primary [compute] is in Virginia. We also have operations in Frankfurt and São Paulo, Brazil, to support our sports teams down in Latin America.
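For orientation, the five regions Smith names map naturally onto AWS region codes. The codes and the toy failover logic below are our inference, a sketch rather than CBS's actual configuration:

    # Region roles as Smith describes them; the AWS region codes are inferred.
    REGIONS = {
        "primary": "us-east-1",        # Virginia: primary compute
        "backup_us": "us-west-2",      # Oregon
        "backup_eu": "eu-west-1",      # Dublin, Ireland
        "europe": "eu-central-1",      # Frankfurt
        "latam": "sa-east-1",          # Sao Paulo, for Latin American sports
    }

    def active_region(primary_healthy: bool) -> str:
        """Toy failover: a real system would use health checks and routing
        changes, but the principle of a designated backup is the same."""
        return REGIONS["primary" if primary_healthy else "backup_us"]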

As for the spinning up and down of workloads, that is really our packaging systems for the soccer matches, the NFL games, the NCAA games that you actually see on, say, Paramount+. That's all packaged in our cloud master-control environments, and they feed that content to Golazo Network as an affiliate. So again, the traditional philosophy of how television works in a facility is very much alive and well in our cloud-based environment. We have to adhere to the same principles anyway to make it all work.

SVG: Michael, what has been your experience at Disney?

ESPN has undertaken several fully cloud-based live-game productions in 2023, including a Davidson–George Mason Atlantic 10 Conference basketball game in Fairfax, VA (remote operators shown here in Bristol).

Kidd: What Corey is saying tracks very closely with our experience as well. This is hard. This is a lot of work. You have to put in a lot of effort upfront to figure out how this is going to work, and it gets more complex the more vendors you have and the more different pieces you have that work in different ways. For some of the events that we've attempted to do in the cloud (we've done a couple on air), it took a day or two. We're going slowly; we're trying to figure out how it works. We don't have the level of DevOps orchestration and automation to make this repeatable and easy yet, but I certainly think that's achievable. We just have to put in the effort to get there.

SVG: Where are the gaps and the holes in terms of products and services? Or are they all there and the industry just needs better interoperability?

Kidd: I think part of it is that a lot of the solutions available to us in this space are systems that used to run on premises; then they figured out they could run in a virtual machine, which means they can go in the cloud now. That works, but it's different. Once you turn it on, it works, but getting it spun up, and getting it turned back off so it will come back on again, is real work. This is not stuff you do in your data centers every day; you leave that stuff running. But in the cloud, that costs money. So we have to figure out how to get that level of orchestration and automation, and, as we keep saying, as the vendors keep enabling us, it's just going to get better.
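A minimal sketch of what "spin it up, turn it back off" orchestration can look like, assuming EC2 instances tagged per production (the tag key, region, and function names here are hypothetical, and a real control room would orchestrate far more than compute instances):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    def production_instances(production: str) -> list[str]:
        """Find every instance tagged for a given show (tag key is hypothetical)."""
        resp = ec2.describe_instances(
            Filters=[{"Name": "tag:Production", "Values": [production]}]
        )
        return [
            inst["InstanceId"]
            for res in resp["Reservations"]
            for inst in res["Instances"]
        ]

    def spin_up(production: str) -> None:
        # Start the control-room stack ahead of air time.
        ec2.start_instances(InstanceIds=production_instances(production))

    def spin_down(production: str) -> None:
        # Stop it after the show so idle capacity stops billing.
        ec2.stop_instances(InstanceIds=production_instances(production))

The hard part Kidd describes is not these calls; it is making every application in the chain survive the stop/start cycle and come back in a known-good state.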

SVG: Angus, what is Matrox’s take?

Angus Mackay, Product Marketing Manager, Matrox

Mackay: Everything everyone is talking about is exactly what we're seeing, and it's really interesting. So, there's lift-and-shift, and there's the package-or-template approach: we'll create a recipe for each type of programming that we want to do, and maybe we can just copy-paste, copy-paste, copy-paste. Where we're approaching things is creating a system. When we talk about the lack of compatibility between different vendors in the cloud, it is usually because they're still speaking video rather than data or IP.

How do we get those streams, keep them in sync, and do switching and all this kind of stuff? From our standpoint, we want to create a framework that is IT-native. And because we're an OEM supplier, we can give technology to integrators, broadcasters, or anybody. That offers a really interesting opportunity to create best-of-breed products, and then possibly some media services that might come along and complement them if you need a special little additional function.

Using a completely IT-native architecture, we can build this cloud thinking into the system. And we can approach it a couple of different ways. First, at a performance level, where the system provides more processing to meet the on-air needs. Or you could set a hard limit, and then we'll ask you to work within that, depending on the cost structure you want to put in place.

We want to create a framework and a technology system that is going to enable all of this to happen much more easily and much more flexibly.
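As a toy illustration of the two modes Mackay describes, scale to the on-air need versus cap the resources and work within them, here is a hypothetical policy object (the names are ours, not Matrox's API):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ProcessingPolicy:
        """Two ways to bound a cloud processing framework."""
        mode: str                        # "performance" or "hard_limit"
        max_vcpus: Optional[int] = None  # only meaningful for hard_limit

    on_air = ProcessingPolicy(mode="performance")                 # scale as needed
    budgeted = ProcessingPolicy(mode="hard_limit", max_vcpus=64)  # fixed cost ceiling

The design choice is the same one Kidd and Nikols raise on cost: performance mode buys headroom at an unpredictable price, while a hard limit trades flexibility for a knowable bill.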

SVG: Dave, can you share the Sony and Nevion perspective?

Dave DeBuhr, Account Manager, Professional Networked Solutions Division, Sony Electronics

DeBuhr: Sony and Nevion have spent an enormous amount of time and resources developing automation and orchestration systems to be able to try and simplify and automate those processes because these are complex systems.

A lot of the development back in Japan surrounds AI for automation: auto clipping, auto replay, auto highlights, stuff like that. We want to use the power of AI to offload some of what's required once a production starts.

The other thing that we really harp on is the importance of all the vendors playing nice together, using open standards, and not being locked into a particular vendor's licensing model. That will ease the transition into the cloud and give the best chance of all these different devices working together as people plan a production.

SVG: What about the debate of public vs. private cloud and defining what exactly those terms mean?

Costa Nikols, Global VP of Sales Enablement, Riedel Communications

DeBuhr: I see the public cloud as being something like AWS Elemental, where you can run resources and run software loaded on there for your application. I look at the private cloud more as data centers.

We've got a big customer that's an esports producer. They've got some big data centers with high-speed networks connecting them together, and they run software both in the public cloud and on their own infrastructure. But that's all managed by them locally. They've got their own data centers, their own infrastructure. They own the hardware. They own the processes.

Kidd: Public cloud is pretty easy to define. There's a rate card, but you have no idea what it's going to cost you. And private cloud is sort of the opposite: you have no idea how much it costs to run by the hour, but you know how much money you spent building it. But having that general compute, having that networking where you can string things together dynamically and have a system be one thing one day and something else the next, that flexibility is what, to me, defines cloud.

Nikols: I agree with that. You have your options of having your own private cloud on-prem or in a co-location facility, and you can certainly manage the costs, versus the public cloud, where it's a black art to figure out exactly what it'll end up costing.

But what's the difference between public and private? With AWS, they offer a lot more security than an on-prem private cloud. They are providing a level of infrastructure, security, maintenance, and patching on the platform that you're not paying for directly. You're also not paying for power, you're not paying for real estate, you're not paying for HVAC.

How often am I going to be operating the equipment? Am I running it a few hours a week, or is it 24/7? If you're looking at balancing out that magic mix against a $45,000 Capex investment, how long will you run it for, and how long is the depreciation cycle? If you're using the standard depreciation cycle of five years, you need to be running less than 22 hours a week in order for the cloud to make sense. If it's more than 22 hours a week, you have to look at some of the other factors involved in the decision, and that's the flexibility and the ease that the cloud brings. So it is really a complex decision, and the best thing I can say is: sit down with your vendors, sit down with your engineers, figure out the workflows you need, and then try to map as much as you can upfront to figure out the benefits.
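Nikols's back-of-envelope math can be reconstructed in a few lines. The hourly cloud rate below is our assumption, chosen so the break-even lands near the 22 hours a week he cites; a real comparison would also fold in the power, real estate, HVAC, and staffing he mentions:

    CAPEX = 45_000          # the on-prem investment Nikols cites
    YEARS = 5               # standard depreciation cycle
    weekly_ownership = CAPEX / (YEARS * 52)   # ~$173 per week to own

    CLOUD_RATE_PER_HOUR = 7.90                # hypothetical hourly rate

    break_even = weekly_ownership / CLOUD_RATE_PER_HOUR
    print(f"Ownership ~${weekly_ownership:.0f}/week; "
          f"cloud wins below ~{break_even:.0f} hours/week")
    # -> Ownership ~$173/week; cloud wins below ~22 hours/week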

Smith: And you never know what the actual costs are going to be at the end of the day until you actually have it running. The public cloud is really expanding your data-center footprint without the headache of the Capex side. And the way we've developed our cloud infrastructure, we're not tied directly to specific native components within the cloud providers. We're trying to bridge the gap between being fully dependent on a vendor and being agnostic to that vendor.

As to the gaps mentioned earlier, we can all probably agree that at different times of the day each cloud provider has their own different gaps, whether it’s networking, compute availability, or GPU availability. And I think one of the strongest use cases around the GPU stuff is like, let’s diversify away from the existing cloud vendor for GPU. Let’s get more diversification on some of that.

Maybe even the hardware vendors themselves could offer services up in, say, an AWS environment where they're actually racking and stacking hardware, but virtualizing it as a SaaS-based service and offering it as part of the marketplace service rollout. I don't have to invest in the hardware, but I can buy a service that runs on that hardware without having to fork over the expense of buying it, powering it, racking it, et cetera.

Mackay: What is cloud? Does it just mean someone else's computer? Does it mean software-defined? Are we talking about someone else's computer elsewhere, or software-defined infrastructure, or maybe just the internet?

I think the term can mean different things to different people. And really, the value in all of this is the IT architecture that powers it as you move away from that lift-and-shift and instead make something inherently IT-capable. Then you solve that problem of compatibility, and you solve the problem of the best-of-breed product not having the feature you're looking for: you hire a developer and have that person create the missing link. You can have best-of-breed products from your traditional vendors, but then you have the flexibility of hiring your DevOps team, or an external integrator, or your nephew in the basement, and saying: create me this special little piece of code that I need to glue this all together. I think that's really where we're headed. Consider: we're saying infrastructure as a service, so let's have our infrastructure as a service, put point products on top, make it all speak together, and have it inherently resilient and redundant, because that's what software is. Then we start to really cook with gas.

SVG: What are some key skill sets or hires to make sure that you make the right play and don't find yourself halfway up the river without a paddle?

Kidd: I don’t know. Would you rather hire a cloud guy and teach him broadcast or would you rather hire a broadcast guy and teach him cloud?

Smith: I think I would hire a cloud guy and teach them broadcast, to be honest. All of the staffing we have in our future is really like this: if it's a master-control operator, we want them to come from a master-control environment because the tool sets are the same. If it's running cloud-based infrastructure, I want them to come with an IT background.

Kidd: I think you're absolutely right. The younger generation, if you will. How old am I now? It's hard enough to find broadcast engineers as it is at this point. You're going to be much better off finding someone who understands cloud technologies, teaching them that 2110 is just a lot of bits and it's going to be okay, and getting them used to the broadcast stack.

Smith: There’s not a lot of broadcast engineering talent that’s coming out of schools anymore. I mean, it’s almost a lost art. So, we’re going to have a pretty interesting time as an industry when a lot of the older folks are nearing retirement and punching out.

Check out more from the Fall 2023 edition of the SVG SportsTech Journal HERE, including special in-depth reporting from the 2023 FIFA Women’s World Cup, five industry White Papers covering a wide variety of pressing topics affecting the sports-production industry, a recap of SVG’s recent events, and updates from more than 250 SVG Sponsors.
