Beijing 2022 Reflections: OBS CTO Sotiris Salamouris Discusses 5G, Virtualized OB Vans, and Other Innovations
The Beijing Paralympics conclude later this week, marking the end of a difficult effort by the team at Olympic Broadcasting Services (OBS) and rightsholders around the globe to juggle COVID protocols, production workflows, and technical innovation. Sotiris Salamouris, CTO of OBS, and his team oversaw a number of innovations at this year’s Games, and he discussed them with SVG late last week.
I wanted to roll through some of the technologies and get your thoughts on how things went. Let’s start with 5G, which was used at the alpine events. How did that work out?
It went really well. We were more aggressive with 5G than we were in Tokyo, where it was essentially a couple of cameras supporting the opening-ceremony operations. In Beijing we deployed close to 30 cameras on 5G, some of them fully live and some in what I would call near-live ENG mode.
For the alpine events, it proved helpful in areas that were difficult to reach: we used it at two or three handheld-camera locations along the course, in parallel with our “legacy” RF systems that work in COFDM. Over 5G we had bit rates of 40+ Mbps, higher than a typical COFDM system, which is usually around 30 Mbps. So, we were able to get an increase in overall quality.
Another good use case was the opening ceremony, where there are always some unknown factors and last-minute surprises from the creative team. There were some changes to the fireworks that we found out about a couple of weeks before the show, and [we needed to change a camera position]. But with the closed-loop management system for the Games, we could not really engineer any of our more standard solutions, that is, getting fiber connectivity or establishing a dedicated RF link, so we decided to use 5G [to get the signals from the camera], and it worked great. So, we learned that 5G can be deployed at the last minute with excellent quality and very high bit rates, and that it can be easily integrated into our workflow.
That is really where the technology works very nicely: as a last-minute transmission path over an essentially public network that you can integrate into your production.
Do you think 5G could be used to cover an entire alpine course or would there be issues around bandwidth, latency between cameras, etc.?
Well, all of those are addressable problems, and the question is what kind of capacity and capabilities you have. The Olympics is a high-end event, and we have far more capabilities than other events might have. So, we do have the capacity to cable very remote locations, and we essentially built a mini fiber network to cover the whole course, which is close to 2.5 kilometers in length. But even in our case there are course or competition changes that are difficult to accommodate at the last moment, and that is where you can use 5G and absolutely rely on it. It lets you replace a cabled camera at the last minute.
One consideration is that you can’t plan to use it entirely on your own. Because we are talking about a public network, you do need a partnership with a telco operator you can plan with. The nice thing is you don’t have to build it yourself, but you do have to consider what quality guarantees you need, because spectators and other people may want to use the same network. But the technology does work, and the bit rates and quality were what we expected.
You mentioned working in the closed loop, which was designed to minimize the risk of COVID cases. How did that impact your ability to problem-solve for production changes that might occur?
It was complex, and it did require a lot of detailed planning by the organizers, which they did very thoroughly. I have to give accolades to the whole Organizing Committee and the Chinese authorities for managing this process. They didn’t want any kind of COVID spread from the people coming from abroad to the local population, and thus the typical back-of-house locations that we need access to were part of the loop. Within that area, however, we could move very quickly and there were no issues, as the bubble included the venues, transfers between venues, and accommodations.
Where we were most affected was the period before the closed loop was established at the end of January, because a lot of our installation happens before that. In response, however, the committee set up many “mini bubbles” in various locations during that period, corresponding to the specific areas of interest to us, and we had all the support we needed to do our work properly. But that required a lot of resources from the Organizing Committee and a lot of coordination.
By the way, during the Olympics we did have a very small crew outside the loop. Among them was the quite important aerial team for all the helicopter coverage, as they had to constantly interface with the authorities, the heliport facilities that were outside the loop, and so on.
So, shifting back to technology, what were some highlights?
In Tokyo we introduced so many new things, like the move to UHD and IP, that it almost looks like we didn’t do anything new in Beijing. But that is not really the case, as we brought UHD, HDR, and of course the IP transition to the Winter Games, which is a very different environment in how it is set up. Proving that those things can work in both environments was innovation number one: for the first time, we had a full 4K HDR backbone at a Winter Games.
And then there was 5G, which we discussed, and the virtual OB van, which was a pet project of our whole Engineering team. It started, like all nice projects, in a bar, where I had a very relaxed discussion with Iddo Kadim, Intel’s director of Olympic Technology, about where broadcast technology was going and things we could do together in the context of the Olympics.
We all know about the transition to IP and the flexibility it gives us, because a lot of our systems essentially become pure software. That brings a lot of advantages in how you build those systems, and, in our case, the number one challenge is how to build a complex infrastructure in the limited time we always have before every new edition of the Olympic Games.
So, moving to IP gives us the ability to move to software, but what would be the next stage? For us, that could be a full live broadcast workflow for sports that is all software, running on commercial-off-the-shelf (COTS) hardware. It is very logical to see this as the next step, and even the traditional broadcast equipment providers have started realizing it. They understand that the days of bespoke broadcast hardware technologies may be nearing their end, but that does not mean the end of their contribution to our workflows. They will still deliver the software and the actual running applications we need, but perhaps without the traditional bespoke hardware, using more standard IT, or better termed ICT, hardware and services.
So, we decided to build a project to see how this would work in the context of an Olympics. It involved Intel as well as key providers of the broadcast systems you would expect to find inside an OB van or flypack. We took those hardware components and replaced them with software running on standardized server and network hardware, in an architecture exactly like you would find in any ICT data center, including of course those supporting all the public clouds out there.
The key objectives were around the use of commodity hardware and commodity ICT architectures, both of which you can find in abundance and on very competitive commercial terms. We wanted to take it seriously, so our goal was actual coverage of an Olympic sport with dedicated cameras and a full production gallery. And we wanted exactly the same operational surfaces and operational practices the operators are used to. That meant a normal vision-mixer desk, normal LSM replay controllers from different vendors, a normal audio desk: normal operating surfaces that people are used to working on. We didn’t want to push the production team to work on something they were not comfortable with; the idea was that, from the production perspective, everything should feel the way it usually does. It is true that you can use computer interfaces for most of the typical systems you operate in a production gallery, but the professional operational surfaces have several advantages in terms of productivity and accuracy, which are essential in high-end live sports coverage. All these desks and operational hardware, however, are really “dumb”: they only provide the buttons and levers required for the detailed, sensitive handling by the human operators. All the real processing happens on the servers running the application software of the vision mixer, the replay platform, the audio mixer, the vision control, and so on.
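As a rough illustration of that “dumb panel, smart server” split, here is a minimal, hypothetical sketch in Python: a control desk only serializes button presses and sends them over the network, while a software vision mixer running on a COTS server holds and changes the actual program/preview state. The port, the JSON message format, and all names are invented for illustration and do not reflect OBS’s or any vendor’s real control protocol.

```python
import json
import socket
import threading
import time

MIXER_PORT = 9000  # hypothetical control port, chosen for illustration


def run_software_mixer() -> None:
    """Software vision mixer: all switching state lives on the COTS server."""
    state = {"program": 1, "preview": 2}
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", MIXER_PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    with conn:
        for line in conn.makefile():
            event = json.loads(line)          # e.g. {"button": "preview", "input": 5}
            if event["button"] == "preview":
                state["preview"] = event["input"]
            elif event["button"] == "cut":    # swap the program and preview buses
                state["program"], state["preview"] = state["preview"], state["program"]
            print("mixer state:", state)      # real video switching would happen here


def panel_press(sock: socket.socket, **event) -> None:
    """The control desk is 'dumb': it only serializes a button press and sends it."""
    sock.sendall((json.dumps(event) + "\n").encode())


if __name__ == "__main__":
    threading.Thread(target=run_software_mixer, daemon=True).start()
    time.sleep(0.5)                                # give the mixer time to start listening
    panel = socket.create_connection(("127.0.0.1", MIXER_PORT))
    panel_press(panel, button="preview", input=5)  # select camera 5 on preview
    panel_press(panel, button="cut")               # take it to program
    time.sleep(0.5)                                # let the mixer process before exit
    panel.close()
```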
Were the COTS servers at the venue?
Yes. There was a rack with nine almost identical servers, and we used six cameras, although the capacity was up to 12. There were no issues for the production team, and the result was excellent in terms of visual quality, operational ease (the same as with the legacy approach), and stability.
We had zero issues or glitches for the more than a week that we used the system in full operation, which was really surprising even to us; we would certainly have been happy with some expected teething problems, but the overall stability was remarkable. The only limitation was that we could not do more than 1080p, so the next step will be to do it in native UHD. According to all of the vendors involved, that will be doable within the year. So now it’s just a question of how we use it going forward.
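For a sense of why the step from 1080p to native UHD is a bigger lift for an all-software gallery, here is some back-of-the-envelope arithmetic (our own assumptions, not OBS figures): uncompressed 10-bit 4:2:2 active picture at 50 frames per second, ignoring blanking, audio, and transport overhead.

```python
# Rough per-feed active-picture bit rates, assuming 10-bit 4:2:2 video
# (20 bits per pixel) at 50 frames per second.
BITS_PER_PIXEL = 20
FPS = 50

def active_picture_gbps(width: int, height: int) -> float:
    return width * height * BITS_PER_PIXEL * FPS / 1e9

hd = active_picture_gbps(1920, 1080)    # ~2.07 Gbps per 1080p50 feed
uhd = active_picture_gbps(3840, 2160)   # ~8.29 Gbps per 2160p50 feed
print(f"1080p50: {hd:.2f} Gbps, 2160p50: {uhd:.2f} Gbps, ratio: {uhd / hd:.1f}x")
```

Roughly four times the data per camera has to move through the same servers and network fabric, which is why the vendors’ UHD roadmap matters.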
Is your hope to not have to be shipping containers of equipment for weeks, if not months?
We still have a need for live production units, but there are never enough of them for our needs, which is why we are using more and more flypacks; the ratio keeps tilting toward flypacks.
The problem is the amount of pre-planning needed to match the particular, existing capacity and capabilities of an OB van to our needs for specific sports. The same, or worse, happens when we are planning to use a “standard broadcast design” flypack system. If the flypack already exists (as part of the inventory of one of our partners), the process of matching it to the sport is the same as for an OB van. If the flypack is new, the process is even more complex, since it starts from scratch.
The problem with all these processes is that they do not “scale”: each OB van and each flypack, whether existing or new, has to be dealt with independently of the rest by dedicated, experienced resources from both OBS and our selected partners. It is a lengthy and expensive process that has remained unchanged across all the past Olympics.
The vision is that this may no longer be necessary if the required hardware is standardized COTS and all that differentiates the systems covering, let’s say, volleyball from those covering hockey or weightlifting is their software configuration. In this way, we could end up with a factory-like preparation process that scales greatly and helps us reduce both planning time and, later, installation and commissioning time. You assemble exactly the same hardware systems, following standardized ICT architectures, and then configure them using existing provisioning frameworks and scripts. Of course, this is only possible if you move away from the legacy “broadcast hardware appliance” approach to one based on a standardized ICT architecture using COTS hardware.
So, the idea of a standardized system with software is that we can manufacture, configure, and commission five or six or many more of them side by side, and it becomes just a matter of software to configure them for this sport or that. In addition, by using COTS systems we reduce the reliance on specialized hardware that might not be available on time, or that may be committed to something else and take a lot of time to plan around. There is a huge industry delivering ICT servers and network elements, and it dwarfs the current industry of legacy broadcast hardware appliances.
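A hypothetical sketch of that “identical hardware, per-sport software configuration” idea follows. The sports, channel counts, and field names below are invented for illustration and are not OBS’s actual provisioning data or tooling.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SportProfile:
    cameras: int
    replay_channels: int
    audio_mix: str          # e.g. "5.1.4" immersive or "stereo"
    uhd_hdr: bool

# Only the profile changes from sport to sport; the rack hardware stays identical.
PROFILES = {
    "volleyball":    SportProfile(cameras=12, replay_channels=8,  audio_mix="5.1.4",  uhd_hdr=True),
    "ice_hockey":    SportProfile(cameras=16, replay_channels=10, audio_mix="5.1.4",  uhd_hdr=True),
    "weightlifting": SportProfile(cameras=6,  replay_channels=4,  audio_mix="stereo", uhd_hdr=False),
}

def provision(rack_id: str, sport: str) -> dict:
    """Turn a sport profile into a provisioning request for one identical COTS rack."""
    p = PROFILES[sport]
    return {
        "rack": rack_id,
        "services": {
            "vision_mixer": {"inputs": p.cameras},
            "replay":       {"channels": p.replay_channels},
            "audio_mixer":  {"format": p.audio_mix},
        },
        "video_format": "2160p50 HLG" if p.uhd_hdr else "1080p50 SDR",
    }

if __name__ == "__main__":
    # The same rack hardware, configured for two different sports.
    print(provision("rack-01", "volleyball"))
    print(provision("rack-02", "weightlifting"))
```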
Another benefit is that we can very easily expand outside the “perimeter” of the typical OB van or flypack, because what you essentially have is a private mini cloud attached to a gallery: hardware servers running broadcast application software. There is also nothing stopping us from making this a hybrid solution, connected to a public cloud so it can be linked, in terms of workflow, to an IBC or any other facility. You get a lot of flexibility you don’t have with previous solutions.
You mentioned the cloud…how did things go with OBS Cloud?
There was a big move to the cloud in two directions: using the cloud as infrastructure for broadcast workflows, for both us and the RHBs, and building full turnkey cloud applications, running on OBS Cloud, that the Rights Holding Broadcasters can use from anywhere in the world to access our full original content, both live and post-produced.
Regarding the first aspect, infrastructure, there was an even tighter integration of the available Alibaba Cloud components with the connectivity offered by OBS, as part of the broadcast telecom and IT network of the Games engineered by OBS. This allowed for much higher scalability without compromising cost-effectiveness, which is the biggest challenge when it comes to using the cloud for intense live broadcast workflows like the ones at the Olympics.
Regarding the second aspect, the cloud-native applications that OBS offers, the big difference between Tokyo and Beijing was the explosion in use of the Content+ and Live Cloud applications. With the two combined, any Rights Holding Broadcaster can access essentially all 6,000 hours of originally produced content, live or post-produced, from anywhere in the world using just standard IT hardware and internet access.
When it comes to Content+, the difference between Tokyo and Beijing was that in Beijing the RHBs could use Content+ to access not only our short-form content but also all the live competition feeds.
That means that while the competition was going on, broadcasters could access the live sessions, browse them, and start clipping content out of them for highlights and their own replays. With this option available, RHBs did not need any presence in the host city, either in the venues or at the IBC, to build their own post-produced content; they could access all our original content from anywhere in the world with essentially zero delay from the moment we made it available.
The other thing we’ve done is begin delivering our signals for retransmission via the cloud: essentially our IP VandA feeds, which were delivered compressed inside the IBC but are now also available globally. In Tokyo we did this almost experimentally, for one public broadcaster in Israel, which carried UHD (!) successfully for the whole duration of the Games. They received the UHD content over the public cloud from Alibaba, as part of OBS Cloud, with the public internet as backup.
This solution exploded after Tokyo, and in Beijing we had 21 Live Cloud packages, which was more than the VandA package delivery within the IBC. That was quite important in Beijing because of the increased need by many broadcasters for remote production back in their own territories. What we really found astonishing was the stability of this type of global live content transmission, something that even a few years ago was considered science fiction: the solution performed defect-free for the whole duration of the Games, even for UHD transmissions, where each of the VandA feeds was transmitted at a bit rate of roughly 100 Mbps.
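Some back-of-the-envelope arithmetic on what that implies, under the simplifying assumption that each of the 21 Live Cloud packages carried a single UHD VandA feed at roughly 100 Mbps (the actual composition of each package is not specified in the interview):

```python
# Aggregate cloud delivery rate and per-feed daily volume under the
# stated assumptions (21 packages, ~100 Mbps each, one feed per package).
PACKAGES = 21
MBPS_PER_FEED = 100

aggregate_gbps = PACKAGES * MBPS_PER_FEED / 1000          # ~2.1 Gbps sustained
tb_per_feed_per_day = MBPS_PER_FEED / 8 * 86_400 / 1e6    # MB/s * seconds -> TB, ~1.08 TB/day

print(f"aggregate: {aggregate_gbps:.1f} Gbps, per feed: {tb_per_feed_per_day:.2f} TB/day")
```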
And what was your take on UHD, HDR, and immersive sound?
Of course, we didn’t have any concerns about whether our setup would work or about its performance; the workflow was the one we had already used for Tokyo. On the other hand, there were some implementation challenges with the different type of content, especially with HDR, because you have a lot of new parameters to consider with snow. But all of those were very nicely addressed, and I think we proved that this type of workflow, with one live chain for both 4K HDR and HD SDR, works very, very well in the Winter Games environment too.
And we’re very proud because we not only produced HDR at a quality that made broadcasters happy, a real high-end product, but we also managed to improve the quality of the HD. I would like to make a very definite statement comparing with the previous Summer and Winter Games in Rio and PyeongChang, where we had the standard HD live signal workflow with HD cameras, mixers, and so on: the HD 1080i output derived from the UHD was visually better than the native HD we produced in Rio and PyeongChang. That is because we derive the HD from HDR using our own look-up tables and developments, which provide a visual quality that is apparently beyond what the standard HD SDR live workflow can achieve. Comparing the HD output in Tokyo to Rio showed it was better, and that also has to do with the way we paint our cameras, which is much more flexible than in the standard HD workflow.
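To illustrate the kind of LUT-based downconversion being described, here is a minimal sketch in Python, assuming a pre-computed 3D LUT that maps an HDR source toward SDR. The nearest-neighbour lookup, the 33-node identity LUT, and the function names are simplifications for illustration (production LUT application typically uses trilinear or tetrahedral interpolation), not OBS’s actual look-up tables or tooling.

```python
import numpy as np

def apply_3d_lut(image: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Apply a 3D LUT to an RGB image using nearest-neighbour lookup.

    image: float32 array of shape (H, W, 3) with values in [0, 1] (HDR source).
    lut:   float32 array of shape (N, N, N, 3) mapping source RGB to SDR RGB.
    """
    n = lut.shape[0]
    # Scale pixel values to LUT grid indices and snap to the nearest node.
    idx = np.clip(np.rint(image * (n - 1)).astype(int), 0, n - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

if __name__ == "__main__":
    # Toy identity LUT with 33 nodes per axis, a common LUT resolution.
    n = 33
    grid = np.linspace(0.0, 1.0, n, dtype=np.float32)
    r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
    identity_lut = np.stack([r, g, b], axis=-1)

    hdr_frame = np.random.rand(2160, 3840, 3).astype(np.float32)  # fake UHD frame
    sdr_frame = apply_3d_lut(hdr_frame, identity_lut)
    print(sdr_frame.shape)  # (2160, 3840, 3)
```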
So, when people say there isn’t a benefit to UHD acquisition because they ultimately broadcast in HD, you would disagree.
There clearly is a benefit to HD from shooting in UHD. It is a very welcome side effect of moving to UHD that, honestly, we did not originally expect, but in hindsight we are more than happy to have confirmed it. Of course, this alone is not a reason for a broadcaster that currently only delivers in HD to move to UHD live production. If, however, they are already on a migration path, it is good for them to know they can benefit from originating in UHD HDR right away. In addition, there can always be archival interest, which is another reason to originate in UHD HDR. The combination of these two factors, the immediate increase in HD quality and the archival value of the produced content, may be a good incentive that expedites some broadcasters’ migration to UHD HDR.
Of course, when it comes to the Olympics, our reasons for moving to UHD HDR go beyond these two factors. The Olympics is “prime content par excellence,” and so are the expectations of the world audience. Even if today most viewers are not often exposed to UHD HDR live sports content by their sports content provider, that does not mean they are not already building expectations for higher quality. Let’s not forget that more and more other types of prime content are available in UHD HDR from most of the big streaming providers, which is educating viewers to expect better and better visual quality, and it is really a matter of time until these viewers demand the same from all the prime content they consume, sports being a big part of it. As OBS, we want to make sure our broadcaster partners already have the ability to satisfy such expectations.
What do you think will be the legacy of the Beijing Games?
The legacy of these particular Games is that under pressure we can do miracles, and I will give you a small example of the challenges we faced. We know now that, in the eyes of the audiences, it was an excellent product in terms of visual quality and storytelling, and that was a joint, super-intense effort from OBS and the individual broadcasters. We not only had to solve a lot of problems that were negative side effects of the pandemic, plus, of course, the challenge of having two Olympic Games just a few months apart; as though that were not enough, we also had to deal with the explosion in COVID cases [ahead of the Games], which was a nasty surprise.
We always have a detailed plan for when our experienced Games-time staff will arrive and start working before the Olympics, but because of positive cases we ended up with almost 15 percent of our personnel missing their arrival dates. A big number could not fly and had to wait until their infection was over; thankfully, we didn’t have any serious cases. And then another two or three percent tested positive in Beijing and had to isolate for several days.
We really lost a big number of workdays from our very important and highly skilled technical personnel before the Games. But we still managed to pull through, and that was the result of a lot of pre-existing and quite sophisticated contingency planning. As it turned out, though, more important than anything was the spirit, dedication, and experience of everyone who joined the operations, whether on time or delayed due to COVID, and who did virtually everything under the sun to guarantee that our project would be a full success. We are greatly indebted to all these marvelous broadcast professionals.