NAB Reflections: Masstech Innovations’ Mike Palmer Discusses SGL-Merger Process, the Power of AI
New metadata-enrichment and predictive-analytics capabilities help make MAM more efficient
Last week marked the NAB Show debut of Masstech Innovations, the entity that resulted from last summer’s merger of Masstech and SGL. The new company instantly became one of the broadcast industry’s largest media-asset–management players, with the two product portfolios brought under a single umbrella over the past 10 months.
SVG sat down with Masstech Innovations CTO Mike Palmer during NAB 2018 to discuss what the new company was highlighting at the show, how the merger process has gone, how the two companies’ product lines have been integrated, and how AI is enabling Masstech Innovations to offer metadata enrichment and predictive-analytics capabilities to its customers.
How has the merger of Masstech and SGL gone, and is the process now complete?
The two companies are fully integrated at this point, and we made a decision to maintain both product lines. This means no impact to existing customers and what they’re using, which has had very good response [from customers]. And, with an integrated development team [established], it makes us much more efficient in everything that we’re doing. But it also gives us an opportunity to look strategically at where we’re going.
What is Masstech highlighting at NAB 2018?
Our overall theme is, we know that our customers are in a dynamic environment right now. Their storage needs and their production needs are going to change dynamically in the coming year, so they need a frictionless platform on which both of those things can move. We are that platform.
We’re showing two big things at the show: one is metadata enrichment based on AI services, and the other is predictive analytics. And these overlay our storage management and make our [systems] much more efficient.
Tell us a bit more about metadata enrichment.
First off, we view ourselves as an extended object store, so you can access us with a single namespace. We store the metadata beyond what you normally get in, for example, an Amazon S3 storage bucket. We’ll take everything, including the unstructured metadata. And we become the repository of record.
We sit underneath the MAM and the PAM and above the physical storage. Keeping your metadata there gives you the flexibility to, at some point, replace your MAM and PAM without having to worry about your metadata being lost along with your assets. With the metadata enrichment, we have a layer that provides selective enrichment of content in the archive. You get to choose what you want: speech-to-text, object recognition, sentiment recognition, location recognition, all these things.
Categorization, which is really important for normalizing that unstructured metadata into a structured form, becomes available to the MAM and the PAM. The MAM and the PAM are going after the sophisticated search, and you can’t do sophisticated search without sophisticated metadata. And the MAM and the PAM do not have the bandwidth to go down into the archive and enrich all of this content while you’re also using them for production storage.
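The layering Palmer describes — an extended object store that keeps both structured and unstructured metadata under a single namespace, independent of whichever MAM or PAM sits above it — can be sketched roughly as below. All class and field names here are hypothetical illustrations, not Masstech APIs:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """One entry in the repository of record: the essence location plus
    all metadata, structured and unstructured, kept under one key."""
    uri: str                      # where the essence actually lives (cloud, LTO, disk)
    structured: dict = field(default_factory=dict)    # normalized fields a MAM/PAM can query
    unstructured: dict = field(default_factory=dict)  # everything else worth preserving

class ExtendedObjectStore:
    """Single-namespace lookup that outlives any particular MAM or PAM:
    the layer above can be replaced without losing the metadata kept here."""
    def __init__(self):
        self._assets = {}

    def put(self, key, asset):
        self._assets[key] = asset

    def get(self, key):
        return self._assets[key]

    def search(self, **fields):
        """Naive structured-field match, standing in for the store's query API."""
        return [k for k, a in self._assets.items()
                if all(a.structured.get(f) == v for f, v in fields.items())]
```

The point of the sketch is the separation of concerns: the essence can migrate between storage tiers (the `uri` changes) while the metadata, the durable part, stays put.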
How would this media-enrichment capability be deployed in a real-world scenario, especially in a sports-production setting?
In a sports environment, we can do the facial recognition, logo recognition, name recognition off of jerseys — without impacting how you’re normally using your asset-management and production system.
For instance, with our metadata enrichment, you can say something happened between July and September of 1998 and use speech-to-text AI on everything in that time period. Then, as a second step, you can run that through categorization, so you will be able to say whether it is news, sports, entertainment, or something else. As a third step, you can narrow it down even more by running it through facial recognition.
What you’ve done is multiple passes of metadata enrichment through AI services. Each one is successively narrower. But you’ve avoided applying the most expensive metadata enrichment to the entire library. You’ve done it in a smart way. Customers that don’t have sophisticated MAM requirements can use our interface for the search. For customers that have a medium-to-sophisticated need, the MAM is the appropriate level, and they have access to all of our metadata to perform searches on top of that.
Tell us about the new analytics capabilities you’re showcasing.
The second area that we’re working with heavily is analytics. We want the system to predictively move content between storage layers and also to different storage locations. We know that customers have different storage needs now and those will continue to evolve over time. The market offerings for storage will also continue to evolve. One customer may need cloud, and another customer may need on-prem, and this has all got to be very fluid up and down. [Organizations] may be moving their content from one cloud provider into another cloud provider into a hybrid environment and then into on-prem storage. Even if you’re still working with LTO, you’ve got migrations to cover in that time.
By gathering metadata on how that content has been used over time, we can begin to predict, for example, that, at a certain time of year for the last two years, you started pulling this content and moving it over here into your [active] production system. We can start moving content from the deep archive to a near-line storage [ahead of time].
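The seasonal-usage prediction Palmer describes — content pulled at the same time of year for the past two years gets staged from deep archive to near-line ahead of time — reduces, in its simplest form, to a recurrence check over a pull log. The data shape and function name below are illustrative assumptions:

```python
from collections import defaultdict

def prefetch_candidates(usage_log, upcoming_month, min_years=2):
    """usage_log: iterable of (asset_id, year, month) archive-pull events.
    Returns assets pulled in `upcoming_month` in at least `min_years` distinct
    past years — candidates to move from deep archive to near-line storage."""
    years_by_asset = defaultdict(set)
    for asset_id, year, month in usage_log:
        if month == upcoming_month:
            years_by_asset[asset_id].add(year)
    return sorted(a for a, yrs in years_by_asset.items() if len(yrs) >= min_years)
```

A production system would weigh retrieval cost and lead time as well, but the core signal is the same: repeated pulls in the same calendar window across years.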
We can give people deep insights into how they’re actually using their content, where it’s coming from, and where it’s going. Maybe I’ve got this much content on-prem, but now there’s a better price at another cloud provider over here, so maybe it makes sense to move things at that point. That’s the type of thing we’re working on.
Right now, we’ve gathered the metadata; we have the metrics, the analytics, the UIs, and everything that requires. The next step is to apply the AI and start automatically extracting conclusions from that data.
Here at [NAB 2018], we are showing the analytics portion for the first time. The AI portion on top is coming later this year — hopefully, at IBC.
This interview has been edited for length and clarity.