SVG@NAB Perspectives: Forscene’s Roberts on Closed Captions, Ingest Server, and Audio Innovation
Forscene is emphasizing three major offerings at the show, according to Neil Roberts, head of R&D and support: “closed captions, our newly launched virtual ingest server, and double-system audio.”
Noting that closed captions and subtitling are legal requirements, particularly in the U.S., and will soon be mandated on the internet, he says, “We now have the ability to extract caption information from live streams that have already been captioned, or we can import data that has been created elsewhere – or we can actually create the captions directly into a scene.
“Rather than kludgy text-based captions,” he continues, “[the captions] actually appear as media on the timeline, little blocks of data. You can trim them, slide them around, change the font, position, or color.
“That can either be burned into the media when you render directly from Forscene,” he adds, “or you can export a data file as an XML or various subtitling formats. Or, if we’re exporting to something like Avid in AAF or [Apple] Final Cut, we can put that into the XML so it then appears in the Final Cut or the Avid.”
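Forscene’s own export formats and APIs aren’t documented in this article, but the general shape of the workflow Roberts describes — timed caption “blocks” on a timeline serialized out as a subtitle data file — can be sketched generically. The sketch below (all names are illustrative, not Forscene’s) writes caption blocks out as SubRip (.srt), one of the common subtitle interchange formats:

```python
from dataclasses import dataclass

@dataclass
class CaptionBlock:
    """One caption 'block' on the timeline: text plus in/out times in seconds."""
    text: str
    start: float
    end: float

def to_srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def blocks_to_srt(blocks: list[CaptionBlock]) -> str:
    """Serialize caption blocks as a SubRip (.srt) subtitle file."""
    entries = []
    for i, b in enumerate(sorted(blocks, key=lambda b: b.start), start=1):
        entries.append(
            f"{i}\n{to_srt_timestamp(b.start)} --> {to_srt_timestamp(b.end)}\n{b.text}\n"
        )
    return "\n".join(entries)
```

Because the blocks carry their own in/out times, trimming or sliding a caption on the timeline is just a change to `start`/`end` before export — which is the point of treating captions as media rather than loose text.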
He notes that “the people doing the subtitling can be anywhere: once the media is in the cloud, you just give them a log-in.”
Virtual Ingest Server
Forscene is also introducing its virtual ingest server. “Until recently,” he explains, “if you wanted to work with Forscene, you had to have a physical server, which would tap into your storage and do the ingest and transcode to our proprietary proxy format to put it into the cloud.
“That meant, especially for smaller companies, a big cost of entry to use the system,” he continues. “Or if you’re a camera operator out in the field in the middle of nowhere, you didn’t want to be carting a computer around with you just so you could upload proxies.
“Now we can run our ingest software inside a virtual machine on the customer’s own hardware,” he continues. “That can be a server or a computer they already have, or it can be a laptop out in the field.
“If I’ve got it running on my MacBook, I can shoot some stuff on my camera, plug that card into my MacBook, run up the virtual server, and it will ingest and transcode and put the stuff into the cloud so that other people can then start rough-cutting. When I get back to base with the full-resolution media, I just plug that in, relink, and everything is done.”
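The card-to-cloud flow Roberts walks through — find the clips on the card, transcode each to a lightweight proxy, push the proxies up — can be sketched as a small script. Forscene’s proxy format is proprietary and its upload API isn’t public, so this sketch substitutes a generic ffmpeg H.264 proxy and stubs the upload step; every name here is illustrative:

```python
import subprocess
from pathlib import Path

def make_proxy_cmd(source: Path, proxy_dir: Path) -> list[str]:
    """Build an ffmpeg command that transcodes camera media into a small
    H.264 proxy (standing in for Forscene's proprietary proxy format)."""
    proxy = proxy_dir / (source.stem + "_proxy.mp4")
    return [
        "ffmpeg", "-i", str(source),
        "-vf", "scale=-2:540",           # reduced resolution keeps proxies light
        "-c:v", "libx264", "-crf", "28",
        "-c:a", "aac", "-b:a", "96k",
        str(proxy),
    ]

def ingest_card(card_root: Path, proxy_dir: Path, run=subprocess.run) -> int:
    """Walk a camera card, transcode each clip, and return how many were done.
    In a real system each proxy would then be uploaded to the cloud; that
    step is left as a stub here."""
    count = 0
    for clip in sorted(card_root.rglob("*.mov")):
        run(make_proxy_cmd(clip, proxy_dir), check=True)
        # upload_proxy(...)  # hypothetical cloud-upload step
        count += 1
    return count
```

Relinking back at base then only needs the proxies and the full-resolution files to share clip names and timecode, which the proxy transcode preserves.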
One benefit is cost-efficiency. “It kills a couple of birds,” Roberts points out. “Smaller companies don’t have a huge cost of entry to get started, so they can offer Forscene to their customers. They don’t have to buy a big piece of hardware.
“But also, for bigger post houses, rather than saying, ‘OK, I’m going to buy one server that can handle four streams or eight streams,’ they can say, ‘OK, I’ve got a couple of computers already in my machine room, and now I can run up the virtual server to handle those four streams.’
“If another job comes in and they need another four streams of media,” he continues, “they can run up another virtual server — as many as they want. Once they’re done, they can just shut [the streams] down again. It’s very easy to scale up and down. We can run virtual servers directly in Microsoft Azure or in Amazon. We can tap into an IP stream in the cloud and run our ingest software and deliver the proxy of that stream to the editors as a growing file, about 15 to 20 seconds behind live.”
Roberts offers an example. “Let’s say, in a sports application where you’re working with a truck, rather than sending a satellite feed or a line back to base, you can go to an IP stream over standard internet bandwidth and send that IP stream into the cloud — either to be delivered and recorded somewhere or to be recorded in the cloud in Amazon or Azure.
“We can run our transcoding server in the same cloud and tap into that stream and deliver proxies to the editors wherever they may be. They can cut highlights packages and then publish back to the cloud. We can do the whole thing virtually.”
Forscene’s third major announcement, double-system audio, is intended to address the “plethora of camera formats in use,” Roberts explains. “Especially for reality shows, it’s becoming more and more common to have multiple cameras of various quality levels shooting material.”
He notes that those cameras don’t all have the same capability for recording quality audio, nor can they all run with timecode.
“You have the scenario,” he says, “where you might have six cameras shooting something and you’re also going to have a sound guy with a couple of channels of radio mics and a boom mic as well. You’ve got all this media, all referring to one sequence in your show, but you need to be able to synchronize it and put it all together.
“What we can do now in Forscene is automatically synchronize media based on timecode — camera media and separate audio media. If there is media that doesn’t have timecode,” he adds, “we can find a reference point and set an auxiliary timecode track.
“We can literally drag-and-drop timecode from the media that does have timecode onto a piece that doesn’t have timecode, and we can say, ‘This is the timecode you should use,’ and synchronize the whole thing. It’s then presented to the editor as a split-screen view: they can see the different cameras and hear the master audio, and they can do their edit from that material.”
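The core of timecode-based auto-sync is arithmetic: convert each source’s start timecode to a frame count, and the differences between those counts are the offsets needed to line the sources up in one group. A minimal sketch of that calculation, assuming non-drop-frame timecode and a shared frame rate (function names are illustrative, not Forscene’s):

```python
def tc_to_frames(tc: str, fps: int = 25) -> int:
    """Convert a non-drop-frame timecode 'HH:MM:SS:FF' to a frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def sync_offsets(clip_starts: dict[str, str], fps: int = 25) -> dict[str, int]:
    """Given each source's start timecode, return its offset in frames from
    the earliest source -- the alignment an auto-sync pass needs before it
    can present cameras and double-system audio as one synced group."""
    frames = {name: tc_to_frames(tc, fps) for name, tc in clip_starts.items()}
    earliest = min(frames.values())
    return {name: start - earliest for name, start in frames.items()}
```

A clip without timecode would first get an auxiliary start value assigned (Roberts’s drag-and-drop step) and could then go through the same calculation as everything else.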