Amazon Develops New Media Analysis Features for Amazon Rekognition Video

Amazon Rekognition Video is a machine learning (ML) based service that can analyze videos to detect objects, people, faces, text, scenes, and activities, as well as detect any inappropriate content. Users can automate four common media analysis tasks – detection of black frames, end credits, shot changes, and color bars – using fully managed, ML-powered APIs from Amazon Rekognition Video.

These features enable users to execute workflows such as content preparation, ad insertion, and adding ‘binge-markers’ to content, at scale in the cloud. Videos often contain a short sequence of empty black frames with no audio that demarcates an ad insertion slot or the end of a scene. Using Amazon Rekognition Video, users can detect such sequences to automate ad insertion or to package content for Video-On-Demand (VOD) by removing unwanted segments. Next, to implement interactive viewer prompts such as ‘Next Episode’ in VOD applications, users can identify the exact frames where the closing credits start and end in a video. Further, Amazon Rekognition Video enables users to detect shot changes, where a scene cuts from one camera to another. Using this information, users can create promotional videos from selected shots, generate high-quality preview thumbnails by choosing key frames within shots, and insert ads without disrupting the viewer experience, for example, by avoiding the middle of a shot when someone is speaking. Lastly, users can detect sections of video that display SMPTE (Society of Motion Picture and Television Engineers) color bars, either to remove them from VOD content or to detect issues such as loss of broadcast signal in a recording, when color bars may be shown continuously as the default signal.
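As a minimal sketch of how such an analysis might be kicked off with the AWS SDK for Python (boto3), the snippet below starts an asynchronous segment detection job that looks for both technical cues (black frames, end credits, color bars) and shot changes. The bucket, object key, Region, and confidence thresholds are placeholder assumptions, not values from this announcement.

```python
import boto3

# Placeholder Region; use the Region where your video is stored.
rekognition = boto3.client("rekognition", region_name="us-east-1")

# Start an asynchronous job covering both technical cues (black frames,
# end credits, color bars) and shot changes for a video stored in S3.
response = rekognition.start_segment_detection(
    Video={"S3Object": {"Bucket": "my-media-bucket", "Name": "episodes/ep01.mp4"}},
    SegmentTypes=["TECHNICAL_CUE", "SHOT"],
    Filters={
        "TechnicalCueFilter": {"MinSegmentConfidence": 80.0},  # assumed threshold
        "ShotFilter": {"MinSegmentConfidence": 80.0},          # assumed threshold
    },
)

job_id = response["JobId"]
print("Started segment detection job:", job_id)
```

In practice, job completion is typically signaled through an Amazon SNS notification channel rather than polled, but polling keeps this sketch self-contained.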

With these APIs, users can easily analyze large volumes of videos stored in Amazon S3 and get SMPTE timecodes and timestamps for each detection – without requiring any machine learning experience. Returned SMPTE timecodes are frame accurate, which means that Amazon Rekognition Video provides the exact frame number when it detects a relevant segment of video, and it also handles various video frame rate formats, such as drop frame and fractional frame rates, under the hood. Using the frame-accurate metadata from Amazon Rekognition Video, users can either automate operational tasks completely or significantly reduce the review workload of trained human operators. This enables users to execute media analysis workflows at scale in the cloud. Users pay only for the minutes of video they analyze. There are no minimum fees, licenses, or upfront commitments.
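The sketch below shows one way the detection results, including the frame-accurate SMPTE timecodes and millisecond timestamps described above, might be retrieved and read once the job finishes. It assumes the `job_id` from the earlier sketch and polls for completion only to keep the example short; error handling and SNS-based notification are omitted.

```python
import time

import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")  # placeholder Region


def get_segments(job_id):
    # Wait for the asynchronous job to finish.
    while True:
        result = rekognition.get_segment_detection(JobId=job_id)
        if result["JobStatus"] != "IN_PROGRESS":
            break
        time.sleep(10)

    # Page through all detected segments.
    segments = result.get("Segments", [])
    next_token = result.get("NextToken")
    while next_token:
        result = rekognition.get_segment_detection(JobId=job_id, NextToken=next_token)
        segments.extend(result.get("Segments", []))
        next_token = result.get("NextToken")
    return segments


# job_id comes from the start_segment_detection sketch above.
for segment in get_segments(job_id):
    if segment["Type"] == "TECHNICAL_CUE":
        label = segment["TechnicalCueSegment"]["Type"]  # BlackFrames, EndCredits, or ColorBars
    else:
        label = "Shot {}".format(segment["ShotSegment"]["Index"])
    # Each detection carries frame-accurate SMPTE timecodes plus millisecond timestamps.
    print(
        label,
        segment["StartTimecodeSMPTE"], "->", segment["EndTimecodeSMPTE"],
        segment["StartTimestampMillis"], segment["EndTimestampMillis"],
    )
```

From here, the start and end markers for black frames, end credits, or shots could feed downstream steps such as ad-slot placement, ‘Next Episode’ prompts, or removal of color-bar sections from VOD content.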

Media analysis features for Amazon Rekognition Video are now available in all AWS Regions supported by Amazon Rekognition. To get started, please visit the product webpage, read our blog, refer to our documentation, and download the latest AWS SDK. To try these features with videos, users can use the Media Insights Engine.
