SVG Sit-Down: Akamai’s Will Law Continues the Fight Against Latency; And He’s Happy With the Progress
The tech leader is at the forefront of balancing latency, scale, cost in an OTT world
2018 is going to be a massive year for live streaming in sports. Besides the potential fracturing of traditional business models to make way for more direct-to-consumer products, this year’s calendar features some of the most streamed events in the country (and even the world): the Super Bowl, the Winter Olympics, March Madness, and the FIFA World Cup.
Akamai is a long-standing giant in backend live-streaming infrastructure for major events, and its engineering brainpower continues to fight one of sports media’s biggest enemies: latency. According to a recent survey conducted by Phenix and the research firm YouGov, nearly 72% of sports fans expect to encounter lag when they flip on the live stream of a game, 63% of sports watchers are reluctant to sign up for or resubscribe to a problematic live-streaming platform, and more than 33% would think about canceling their service altogether. The moral of the story: latency matters.
So where is latency today, and what steps are being taken to bring that time down? We catch up with Will Law, chief architect for media at Akamai, for some answers.
What were some of 2017’s highlights for Akamai, in your opinion?
The two main highlights for us — and I think you’ve seen them reflected at trade shows — were the stabilization of what we call our Media Services Live platform and its evolution as an origin service, acknowledging that origin services for live video sometimes can be decoupled from delivery. In the past, Akamai was a monolithic cloud: if you wanted delivery from Akamai, you had to push video to Akamai, it would come out, and that was it; no other CDN could play it. So, [acknowledging] that it’s a multi-CDN world and that, in many cases, our customers want to use multiple CDNs in their delivery, we’re able to separate our component offerings and offer our live-origin infrastructure independent of our delivery service and also our storage independent of our delivery service. That was a big change for us.
The second was a focus on the continued downward trend of overall latency for OTT video. There was an association in the past, especially among broadcasters, that OTT — especially on the sports side — is anywhere from 20 to 40 seconds behind the live event. We’ve seen those timelines shrink to where 10 seconds is more reasonable. At 8-10 seconds, you are reaching [comparability] with cable and satellite distribution. We’re seeing a continuing trend where OTT is [comparable] with cable and is, in some cases, exceeding it based on different delivery methodologies. That’s a turnaround that took place in 2017.
In regard to latency, would you say that you are happy or comfortable with where the latency range is now? What challenges remain, and in what ways do you think your company has room for growth still?
Latency, scale, and cost are three mortal enemies that fight each other. All three cannot survive. If you want super-low latency, it’s going to be more expensive and harder to scale. If you want super-low cost, the latency is going to go up, and the scale might suffer. If you want super-high scale, it’s also going to be expensive, and the latency is going to go up. They’ve always been antagonistic for OTT. Part of the flexibility with OTT is that we can turn those knobs. It’s possible to produce a live stream [in which] some players sit at the bleeding edge with perhaps 3-4 seconds of overall latency while other players on the same stream are 10-12 seconds back. The players further back can be more stable; others can choose to play closer to the live edge. We’re starting to see, and to experiment with, different player behaviors in order to reach different points [of] latency versus stability and robustness.
For us, 10 seconds is achievable at high scale. We can do multimillion concurrent users through our live-services solution with 2-second segments in either HLS or DASH, and we expect end-to-end latency of 10 seconds. That puts you very close to broadcast in the U.S.
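To see how 2-second segments can translate into roughly 10 seconds of glass-to-glass delay, here is a back-of-the-envelope latency budget for segmented streaming. The component values (encode time, ingest/CDN hop, a three-segment player buffer) are illustrative assumptions for the sketch, not Akamai’s published figures:

```python
# Rough OTT latency budget for segmented streaming (HLS/DASH).
# All component values below are illustrative assumptions, not Akamai figures.

def end_to_end_latency(segment_s: float,
                       player_buffer_segments: int = 3,
                       encode_s: float = 1.0,
                       ingest_cdn_s: float = 1.0,
                       decode_render_s: float = 0.5) -> float:
    """Sum the main contributors to glass-to-glass latency.

    Segmented protocols pay a penalty proportional to segment duration:
    the packager must finish a segment before advertising it, and players
    typically buffer several segments before starting playback.
    """
    segmentation_s = segment_s                       # wait for the segment to complete
    buffering_s = player_buffer_segments * segment_s # player's startup buffer
    return encode_s + segmentation_s + ingest_cdn_s + buffering_s + decode_render_s

# 2-second segments with a 3-segment buffer land near the ~10-second figure.
print(end_to_end_latency(2.0))   # 1 + 2 + 1 + 6 + 0.5 = 10.5
# Longer 6-second segments push the same pipeline into the 20-40-second range.
print(end_to_end_latency(6.0))   # 1 + 6 + 1 + 18 + 0.5 = 26.5
```

The same arithmetic explains the broadcaster comparison quoted earlier: with the 6-second segments common in earlier OTT deployments, the buffer term alone dominates the budget, which is why shrinking segment duration has been the main lever for closing the gap with cable.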
Is sports the most challenging medium for you to work in?
I would say yes. Sports [involves] a desire for low latency, and there’s an omnipresent broadcast comparison. There are [online] events that don’t have a broadcast counterpart. In that case, you can have all of the latency you like. But sports is widely available in alternative ways, so it’s much easier for people to judge when the latency is higher.
Sports content also [represents] very high-cost encoding. It takes longer to encode, so we try to force the encoder down to shorter encoding times; that places a lot more stress on the encoding solution.
Sports is the premier use case for low latency over live streaming. You also get flash crowds, which are another challenge in distribution: there will be nobody watching, and, when your event starts at noon, you can have 500,000 people suddenly come out of nowhere and sign on to a stream. These cliffs that build instantly are difficult [for] current CDNs to deal with. These are real problems, both when the traffic comes and when the game is done and 500,000 people stop watching content from your network. A CDN is juggling this live stream [while] we may have 40,000 other live streams on our network, plus all of the website delivery and security delivery and all of the other stuff we are moving around. We have to allocate that peak capacity and make sure that, when it goes away, we can move it back and not just have it sit there waiting for the next sporting event.