SVG Tech Insight: Addressing Key Challenges for Live Production in the Cloud
This spring, SVG will be presenting a series of White Papers covering the latest advancements and trends in sports-production technology. The full series of SVG's Tech Insight White Papers can be found in the SVG Spring SportsTech Journal.
Use of cloud elastic compute has become pervasive in many industries: banking, publishing, retail, and transportation logistics, to name a few. Closer to home in our industry, elastic compute has revolutionized the way media is distributed to consumers, making media distribution more efficient, more flexible, and more scalable than ever. Yet certain areas of the media and entertainment value chain have not been able to embrace the advantages of elastic compute.
In particular, with its demanding requirements for low latency, high signal counts, and real-time signal processing, live production has, until recently, made very rare use of cloud compute. This is not due to a lack of potential business incentive. With its spin-up/spin-down, pay-as-you-go models, elastic compute would seem well suited to the uneven usage patterns of live sports coverage. However, there are reasons why such workflows are only now starting to materialize: significant challenges must be overcome to make truly agile live production workflows a reality on elastic compute. Here we examine those challenges and approaches for overcoming them.
A key historical impediment to cloud-based production has been latency. By definition, cloud data centers are often geographically separated both from the sporting venue where an event is occurring and from the locations where operators are performing their production duties. Physics dictates the speed of light over fiber, and it cannot be cheated. Nor is propagation the only source of latency: compression delays, processing delays, and the delays introduced by the protocols required to protect signal integrity when contributing video over networks with packet loss all add to total signal latency. For live production workflows, that total can become significant. Some of these delays can be reduced through judicious choice of codec and other means, but they cannot be eliminated. This has been a significant impediment to realizing live production workflows on cloud compute.
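To put the physics in perspective, a quick back-of-the-envelope calculation shows the floor that distance alone places on latency. This sketch assumes light propagates through fiber at roughly two-thirds the speed of light in a vacuum, about 200,000 km/s; the distance figure is illustrative, not from the text.

```python
# One-way fiber propagation delay: the irreducible latency floor set by physics.
# Assumption: signal speed in fiber is ~200,000 km/s (about 2/3 of c),
# i.e. roughly 200 km per millisecond.

FIBER_KM_PER_MS = 200.0

def one_way_fiber_latency_ms(distance_km: float) -> float:
    """Propagation delay only; excludes codec, processing, and protocol delays."""
    return distance_km / FIBER_KM_PER_MS

# Hypothetical example: a venue 2,000 km from the cloud region.
print(one_way_fiber_latency_ms(2000))  # 10.0 ms each way, before any other delay
```

Note that this is only the floor; compression and protection protocols typically add far more than the raw propagation delay.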
However, a series of recent technical breakthroughs has allowed systems to overcome the very real challenge of latency. At its highest level, the concept is simple: latency is a function of physics, and we can't cheat physics, but we can manage it. Using timestamps derived from GPS-locked time sources, we can maintain alignment of sources as they traverse even the public internet with its multiple varying hops. We can align those signals, presenting them in multiviewers for operational staff as a coherent set of signals time-aligned to the exact frame. We can provide bandwidth-efficient yet low-latency monitoring of those multiviewers back to operators so that they have all of the video/audio information they need to make production decisions. Importantly, in addition to reducing media signal latency, we can minimize the propagation delay of critical control signals such that button pushes on control interfaces happen instantaneously, from the perspective of human perception. And finally, when a feedback loop back to the venue is required (say, IFB audio or video to drive the stadium display), we can deploy processing over COTS compute (or a nearby cloud data center) in proximity to the venue, minimizing the latency due to physics, which is directly proportional to distance.
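The alignment idea described above can be sketched in a few lines: buffer incoming frames from each source, keyed by their GPS-derived capture timestamp, and release a coherent set only once every source has delivered its frame for that instant. This is a minimal illustration of the principle, not any vendor's implementation; the class and method names are hypothetical.

```python
from collections import defaultdict

class FrameAligner:
    """Sketch of timestamp-based source alignment: frames from multiple
    sources are buffered by capture timestamp, and a frame-aligned set is
    released only when every source has delivered for that timestamp."""

    def __init__(self, sources):
        self.sources = set(sources)
        self.buffers = defaultdict(dict)  # timestamp -> {source: frame}

    def ingest(self, source, timestamp, frame):
        self.buffers[timestamp][source] = frame
        # Release as soon as the set for this timestamp is complete.
        if set(self.buffers[timestamp]) == self.sources:
            return self.buffers.pop(timestamp)  # coherent, time-aligned set
        return None  # still waiting on at least one source

aligner = FrameAligner(["cam1", "cam2"])
aligner.ingest("cam1", 1000, "frameA")            # cam2 not yet arrived -> None
aligned = aligner.ingest("cam2", 1000, "frameB")  # set complete, released
print(aligned)  # {'cam1': 'frameA', 'cam2': 'frameB'}
```

A production system would also need to handle late and lost frames (e.g., releasing an incomplete set after a deadline), which this sketch omits.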
Using these techniques, production teams on five continents successfully produced live content throughout the bulk of 2020. Many of these shows spanned vast distances (oceans) separating the contribution sources from the production crews, with 100% of the production processing (video switching, graphics, clip playback, audio mixing) happening in the public cloud, often with camera counts ranging from four to 24. Which brings us to the next challenge.
To effectively address the requirements of a live production, any solution must be able to support the number of simultaneous I/O signals a production requires. This, of course, varies widely, from small shows such as a four-camera radio show to Super Bowl class mega events with over 100 cameras. Historically, this has presented a challenge for production based on elastic compute: both in terms of sending and receiving signals to and from the cloud, and in terms of processing signals once they land in the cloud.
With advanced coding techniques, such as HEVC and JPEG XS, that can generate contribution-quality 1080p59.94 video streams at bit rates as low as 20 to 40 Mbps (HEVC), and with 10-Gbps cloud connection services available in many locations, one can see on the horizon the ability to contribute signal counts that cover the full range of production requirements for any show. Moreover, because techniques to better manage latency exist (see above), for many productions one can tweak the GOP structure of an H.264 or HEVC codec to increase quality while managing the latency increase this entails.
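The arithmetic behind that optimism is straightforward. Using the figures from the text (HEVC 1080p59.94 at 20 to 40 Mbps over a 10-Gbps connection), a simple calculation shows how many contribution streams such a link could carry. The 20% headroom reserved for protection overhead (retransmission or FEC) is a hypothetical allowance for illustration, not a figure from the text.

```python
# How many contribution streams fit on a cloud connection?
# Figures from the text: HEVC 1080p59.94 at 20-40 Mbps, 10 Gbps link.
# Assumption (illustrative): reserve 20% of the link for protection
# overhead such as retransmission or forward error correction.

def max_streams(link_gbps: float, stream_mbps: float, headroom: float = 0.2) -> int:
    usable_mbps = link_gbps * 1000 * (1 - headroom)
    return int(usable_mbps // stream_mbps)

print(max_streams(10, 40))  # 200 streams at the high end of the bit-rate range
print(max_streams(10, 20))  # 400 streams at the low end
```

Even at the conservative end, a single 10-Gbps service comfortably exceeds the camera counts of the largest events mentioned earlier.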
The ability to support sufficient processing in cloud-compute nodes is also a challenge. A production switcher must be able to process any of the signals on its inputs. Modern traditional switchers typically employ a large frame of dedicated FPGA hardware to create all of the effects needed for storytelling. A compute node in the cloud must not only do all of that processing in software; it typically must also receive and decode all of its input signals, scale video and create multiviewer tiles, and often encode its output signals for contribution to master control or a distribution chain. This is an area of active and dramatic improvement. A year ago, early cloud production systems supported eight contribution sources. By the end of 2020, systems with 24 (1080p) contribution sources were routine. Now that same technology stack can support up to 48 1080p59.94 sources, and this trajectory shows no sign of stopping.
Beyond the pure technical challenges associated with live production in the cloud, such as latency and I/O scalability, there is another, equally important challenge to overcome, one that represents the intersection of production crews with the technology. We call this challenge functional sufficiency. Functional sufficiency can be described this way: does the operational crew have at their disposal sufficient tools (user interfaces, monitoring, and control surfaces) to perform their job function as professionally, and with the same level of production value, as they do today? Can a TD, an A1, a replay operator, and a graphics operator contribute to a show using cloud-based production with the same level of professionalism as they can with their traditional tool set?
As with I/O scalability, this is an area of rapid progress. In early 2020, video switching in the cloud was very limited. Today, the same line of switcher panels used in the majority of North American OB trucks (Grass Valley Kayenne, Karrera, and Korona) can control not just traditional K-Frame hardware but a virtual K-Frame app running as software in the cloud. Throughout 2020, and as discussed in more than one SVG event, cloud-based audio production tools were nowhere close to their on-premises counterparts. Expect to see significant improvements in this functional area (and many others) in 2021. More importantly, expect to see tools emerge from more than one company as an ecosystem of interoperable tools begins to emerge.
The challenges associated with cloud-based live production are real and formidable. Nonetheless, they are addressable, and significant progress was made in 2020. The expectation (and the experience in the first few months of 2021) is that advancements will continue at a rapid pace — expanding both the achievable scale and production value of live production in the cloud. This is not to say that all production will go to the cloud. There are strong arguments for more traditional live production approaches in many cases. However, where the business case for elastic compute offers a strong ROI, cloud-based live production will become a viable alternative for a greater and greater cross section of events.