Live From Tokyo Olympics: Inside Look at NBC’s IBC Innovation Efforts

The pandemic may have delayed the Olympics by a year, but it didn’t stop NBC Olympics’ commitment to innovation. In fact, it accelerated it: any innovations at the NBC Olympics IBC plant that the team wanted in place for the Beijing Games in February 2022 had to be ready for Tokyo 2020. NBC Olympics’ Todd Donovan, VP, engineering technology; Kevin Callahan, manager, system design engineering; and Lukas Zahas, senior manager, broadcast technology, discussed how the broadcaster’s IBC plant evolved during the pandemic.

From left: Kevin Callahan, Lukas Zahas, and Todd Donovan inside the NBC Olympics production-control room at the IBC in Tokyo.

Can you talk a bit about the one-year delay and how that impacted the facility here at the IBC?
Donovan: When the Games were postponed, everything was on the boat on its way to Tokyo, so the boat came back. While it was in transit, we spent some time rethinking the two-Games model and what technology we would need to get us through the Tokyo Games and set us up where we wanted to be for the Beijing Games, because there would be no time to retool.

One of the big things to rethink was the dual-format model: producing a 1080i SDR signal for a large audience and a separate 1080p HDR signal for a smaller audience didn’t seem like the best answer. And, based on our experience with Notre Dame Football and some other projects, single-stream production seemed right. So the team spent a lot of time retooling. Fortunately, with the IP routing environment, the VSM software layer, and the fact that we had just rebuilt the place, it wasn’t a horrific turn, but we also didn’t expect to get there so quickly.

Our summer was [spent] getting our primary workflow ready for 1080p HDR, and the big retooling from here will be getting ready for 50 Hz in Beijing. But it has been tested, and we think that will be fine, so it will just be the usual maintenance on the system and updating the things we decided not to touch during the Games, like moving to the latest software versions.

The facility looks the same as it has for the past few Olympics. Can you give an overview of the core and how that has changed?
Zahas: The core is a Cisco Nexus 9K fabric with mostly Grass Valley gateway hardware. There is Grass Valley control with what is now called Orbit, and that does the orchestration. [The user controls] all of it via VSM. It’s sort of a hybrid monolithic leaf/spine topology: some devices are connected directly to the core and some via leaf switches. That’s the gist of the core.

And, to your point, it’s a good thing that it looks the same. The user sees a control panel, they punch a button, and they expect to see a source.

Donovan: We’re using the same Lawo VSM router presentation layer, and Lukas has a lot of sophisticated automation and workflows built into that. When someone showed up in PyeongChang [South Korea] in 2018 and touched the router panel, things happened. Here, touching the router panel is very similar. There is some extra nomenclature to handle the HDR, and the names of the sports have changed, but it looks and feels very similar. The underpinnings are different.

That was an essential decision — to not affect operations — but also VSM was a recent investment we were quite happy with and wanted to continue. On the routing side, we were already using Grass Valley XVP cards to process SDI in and out, and now we are using XIP cards for an IP topology. A lot of the work with GV on workflows and capabilities was baked into the first project. So, while the underpinning changed, the logic didn’t, and we wanted to capitalize on that in making the next big step.

OBS also underwent a lot of changes with IP, HDR, and UHD. How did that impact your operations?
Zahas: It has been straightforward because they are relying on baseband. They give us a 1080i baseband signal over fiber, and, for the UHD, they also give it to us over fiber via 12-Gbps SDI. We immediately downconvert that to 1080p HDR.

We don’t have any IP handoffs between us and them. That may happen in the future, but, certainly for today, it’s much easier to hand off an SDI signal and be done with it.

There is also a big facility in Stamford, CT, that this facility works closely with. How did that impact decision making?
Donovan: We were building a brand-new IBC after PyeongChang, and we had the opportunity to do that. But, in the year that the Olympics were delayed, there was a lot of investment in the Stamford facility. Sure, there is a lot of 1080i there, but there was a pivot to more IP technology for routing, and more capability was built.

Callahan: The year gap changed the timetable. Initially, the plan was to get a lot of IP experience with the Tokyo facility. When it was delayed, we advanced the timeline for implementing a new GV router in Stamford with a Cisco leaf/spine architecture. We started to get more mileage out of that system than out of the Olympic system. So there were things we learned on the Olympic build that helped us change our design for Stamford, and then lessons from the Stamford build fed back into the Olympic build. Stamford is becoming more 1080p-capable, with eyes toward 4K and other advanced formats that the IP router can handle going forward.

Anyone looking toward the IP, UHD, and HDR world will be having the same sorts of conversations you had about current vs. future needs. Any advice on how they can have those discussions?
Donovan: Well, it’s a moment in time, and you try to look down the road to see where we’re going to be. At NBC Olympics, the conversations are about whether an investment is for Beijing, Paris, or beyond 2024. In Stamford, it is about what level of investment to make based on business going forward.

It’s always a balance between being advanced and on the leading edge technically and being able to deliver new things to the production community that enhance and reinforce storytelling. At the same time, you want to minimize technical risk, which is completely at odds with being on the leading edge.

For example, the Olympics are a highly watched show, so risk tolerance is obviously extremely low. At the same time, we can’t just do HD anymore; that would be like continuing to just do analog. So we tried to find test events, learn what people are doing in the industry, see what can be done better. We also talk to manufacturers and put it all together to come up with something we are confident about.

We all agree that, while we have a lot of new technology here, like the IP router and the HDR workflows, it was all technology we felt pretty good about because we had some degree of experience with it. We had never put this equipment and this workflow on the air in this environment, but we had done similar versions of it, like Notre Dame Football, which was a huge learning opportunity. And we had logged weeks and weeks of test time on the IP routing system over two summers. Those are all things that minimize the risk, put better images on TV, and help bring the experience up. And then OBS kicks up what it is offering the broadcasters of the world, so it’s a constant push and pull.

It seems that you have workflows here that are designed to make it so that editors and storytellers don’t have to try to understand wide color gamut, HDR, IP, or 5.1.4 audio.
Donovan: The system must pass everything along, whether it’s 10-bit processing, wide color gamut, HDR, or the 16 channels of audio. In some areas, the user must just let it flow past them so that it gets to the user who needs it and knows what to do with it.

The infrastructure is much wider in terms of audio and video capacity, but that doesn’t mean that every person must be burdened with all that knowledge. That has helped a lot: just move the media.

In the past year, there has been a lot of talk about virtualization and new workflows. Where do you see technology headed in general?
Callahan: In terms of virtualization of commodity hardware into VM stacks, I think we are getting there with appliances that can spin up multiple instances of a processor, whether it is a video proc, up/downconverter, etc. We’ll investigate it when we have an application for it. But we aren’t to the point where you schedule and command and control that virtual hardware. Yes, you can deploy VM for 15 multiviewers, but you still need an engineer to go and load a different VM and change all the routes and NMOS handlers. I hope where we’re heading is buying a blade of processors that, if you buy the licenses, can be up/down/crossconverters or color correction. Color correction is something you still need to bring down to baseband to do it in a logical way, and we would love to just put it in virtual hardware that we can dynamically spin up as needs arise.
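Callahan’s “blade of processors” idea can be sketched roughly as a pool of generic slots that take on a licensed role only when spun up. Everything below, names included, is a hypothetical illustration of the concept, not any vendor’s actual API.

```python
class ProcessorBlade:
    """Hypothetical pool of generic processing slots on one blade."""

    def __init__(self, slots: int, licenses: set):
        self.free = slots          # unused hardware slots
        self.licenses = licenses   # roles this blade is licensed for
        self.running = []          # roles currently spun up

    def spin_up(self, role: str) -> bool:
        """Claim a slot for a role, if licensed and capacity remains."""
        if role not in self.licenses or self.free == 0:
            return False
        self.free -= 1
        self.running.append(role)
        return True

    def spin_down(self, role: str) -> None:
        """Release a slot so it can be reused for another role."""
        self.running.remove(role)
        self.free += 1


blade = ProcessorBlade(slots=4, licenses={"upconvert", "downconvert", "color_correct"})
blade.spin_up("color_correct")   # succeeds: licensed, slot available
blade.spin_up("multiviewer")     # fails: no license for this role
```

The point of the sketch is the missing layer Callahan describes: today an engineer reloads a VM and re-patches routes by hand, whereas the goal is for the scheduling system to make these `spin_up`/`spin_down` calls dynamically as needs arise.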

There have been some big advances in our industry, similar to what you have done here, that didn’t go too smoothly. How did you prevent that?
Donovan: That’s the downside of trying to do something new and different because there isn’t all the reporting behind it or experience. You’re kind of out there and exploring together with vendors and each other. And the trick is balancing where the right place is to use the amazing new technologies to their fullest but also be able to train hundreds of users to come in and operate the broadcast center. You need to get them trained and get it reliable so that they can focus on the storytelling and not on “Is my tool going to work?”

One way is to try it on smaller shows, audiences, and efforts, and, over time, it will become a main workflow in a huge daypart or on NBC primetime. That’s one of the great things about the Olympics: it’s a huge project, and there is always someplace to try something out.

You mention working with vendors. What are your thoughts on interoperability?
Zahas: We don’t want to gloss over the complexity of it and the difficulty of this. We had a lot of challenges, and we hoped that NMOS would make interoperability smooth. Unfortunately, it’s less than smooth right now, and it takes a lot of pushing to get the vendors to work together to fix things, because, when it comes to interoperability, there is little that works at 100% on the first try. It’s important to have vendors who are willing to work with you to get things working, because it’s difficult right now.

The video-transport part of it is straightforward. It’s the control and getting different systems to talk to each other and then control of destination devices that is hard. NMOS still has a lot in the spec that is left up to interpretation: a manufacturer can be compliant with the spec, but you must get vendors to work it out so that their equipment talks to each other.
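Zahas’s point about interpretation gaps is easiest to see in a connection request. The sketch below builds the body of an IS-05 staged-connection PATCH for a receiver; the top-level field names follow the AMWA NMOS IS-05 schema, while the sender ID and multicast address are invented for illustration. Which transport parameters a given device actually accepts, and how it activates them, is where vendors diverge in practice.

```python
import json


def staged_patch(sender_id: str, multicast_ip: str, port: int) -> str:
    """Build an IS-05 staged-connection PATCH body for a receiver.

    Field names follow the AMWA IS-05 Connection API schema, but the
    transport parameters a device honors vary by vendor, which is
    exactly the interoperability gap described above.
    """
    body = {
        "sender_id": sender_id,
        "master_enable": True,
        # Take the route as soon as the PATCH is accepted.
        "activation": {"mode": "activate_immediate"},
        "transport_params": [
            {
                "multicast_ip": multicast_ip,
                "destination_port": port,
                "rtp_enabled": True,
            }
        ],
    }
    return json.dumps(body)


# Hypothetical sender ID and multicast address, for illustration only.
patch = staged_patch("9f0e7f2a-1c34-4d7e-9b0a-2f3d4e5f6a7b", "239.100.9.1", 5004)
```

In a working plant, a body like this would be PATCHed to the receiver’s `/x-nmos/connection/v1.0/single/receivers/{id}/staged` endpoint; a device can be spec-compliant and still reject or ignore parts of it, which is why the vendor-to-vendor work Zahas mentions is unavoidable.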

 
