White Paper: Accelerated, Automated Workflows for Global Content Delivery

By Richard Heitmann, VP, Marketing, Aspera, an IBM company

Sports-video production has reached a pivotal moment. The digital-media ecosystem is undergoing rapid innovation and change, driven by the explosive growth in the size and volume of digital content, the proliferation of video formats, and audiences' insatiable appetite to consume more media, more quickly, on more devices. Viewer demand for coverage of international events continues to increase, and more and more viewers turn to the Internet for live match streaming. Consumers continue to embrace new technology and devices, which in turn shape their viewing habits. How do media companies keep up?

The digital supply chain needs to become rapid, fluid, predictable, efficient, and ubiquitous. Digital content, regardless of the size or distance it travels, should flow at a rate determined by business needs, across any available networks, and not constricted by legacy technology. The movement of content globally needs to flow less like an error-prone relay race and more like an interconnected, high-speed pipeline. The capture, ingest, processing, and delivery of content should be fast, automated, and reliable while leveraging elastic scale-out infrastructure, be it in the cloud or on premises.

Overarching Customer Goals

To create a content-delivery workflow integrating commercial software, open platforms, and high-speed transport, media companies should consider the following attributes or goals:

With Aspera On Demand’s “direct to object storage,” content can be securely moved directly into cloud-based storage at line speed.

Maximize efficiency: New solutions should increase efficiency by more fully utilizing deployed assets and infrastructure, eliminating the waste of inferior technology, and reducing management costs through simplicity or automation.

Build on what you own and know: Consider solutions that take advantage of and build on IP networks, computer hardware, and software infrastructure already deployed. Data-transfer technology and workflow automation should operate at maximum efficiency over commodity IP networks.

Eliminate artificial technology bottlenecks: Traditional approaches to reliable network data delivery using TCP-based protocols like FTP, secure copy, CIFS, and NFS are all artificially limited in efficiency and speed for transfers over today’s wide-area network paths. These bottlenecks are most pronounced when network bandwidths are large, distances are long, and network conditions are challenging — precisely the conditions under which the global media-supply chain operates. TCP bottlenecks can be entirely eliminated with new approaches to reliable, large-content movement, but typical acceleration and data-blasting solutions fail to solve the problem and, in some cases, introduce tremendous waste in bandwidth. Thus, new transport technology must accommodate a wide variety of business needs and should be chosen very carefully.
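As a rough illustration of why distance and loss throttle TCP (a commonly cited approximation from Mathis et al., not specific to any vendor's transport), steady-state throughput for a single TCP flow is bounded roughly by

Throughput ≲ MSS / (RTT × √p)

where MSS is the maximum segment size, RTT is the round-trip time, and p is the packet-loss rate. With a 1,500-byte segment, a 150 ms intercontinental RTT, and 1% loss, the bound works out to roughly 0.8 Mbps per flow, no matter how much link capacity is available — which is why simply buying bigger pipes does not fix the problem.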

Map out business processes carefully: Workflow form should follow function. Before orchestrating, authoring, integrating, or automating processes, it’s important to understand the business flow and target key processes for automation. Starting small and incorporating change gradually is a strategy many follow. Once processes have been mapped out, automation can benefit certain workflows and tasks; a key consideration is which processes to target first and which will benefit most from automation.

Increase predictability for your business: Reduce the costs of meeting or shortening lead times, and increase predictability in file-processing and distribution schedules. There are two key aspects to predictability:

  1. Meeting timeframes using global and regional networks of varying conditions, inbound and outbound.
  2. Creating consistent workflows through automation and other means, ensuring predictability.

Workflow integration using composition and process-orchestration frameworks, with automation and tracking designed to scale with very large numbers of files moving through the pipeline and for truly “highly available” usage, increases predictability. For example, a typical VOD-advertising ingest workflow may process several hundred content files and several thousand schedule files per hour, arriving through network file transfers, with virus checking, transcoding, archiving and reporting, and graceful degradation on failure. Many conventional automation and managed file-transfer tracking and reporting systems work well when file volumes and arrival rates are low, but they break down under load: reporting systems slow to a crawl, automation processes fall behind or fail altogether, and single points of failure, such as a failed or misconfigured storage system, can halt the entire workflow.
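To make the loose coupling and graceful degradation concrete, here is a minimal, hypothetical sketch in Python (not Aspera Orchestrator code); the stage names mirror the VOD-ingest example above, and each stage is a placeholder for a real integration:

# Illustrative sketch only: a minimal file-processing pipeline with loosely
# coupled stages and graceful degradation. Stage names are hypothetical.
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ingest")

@dataclass
class Job:
    path: str
    errors: list = field(default_factory=list)

def virus_scan(job):   # placeholder for a real scanner integration
    log.info("scanned %s", job.path)

def transcode(job):    # placeholder for a real transcoder
    log.info("transcoded %s", job.path)

def archive(job):      # placeholder for archive storage
    log.info("archived %s", job.path)

def report(job):       # reporting runs even if earlier stages failed
    status = "FAILED: " + "; ".join(job.errors) if job.errors else "OK"
    log.info("report for %s -> %s", job.path, status)

PIPELINE = [virus_scan, transcode, archive, report]

def process(job):
    """Run each stage; a failure is recorded and the job degrades gracefully
    instead of halting the whole pipeline."""
    for stage in PIPELINE:
        try:
            stage(job)
        except Exception as exc:   # isolate per-stage failures
            job.errors.append(f"{stage.__name__}: {exc}")
    return job

if __name__ == "__main__":
    for p in ["ad_0001.mxf", "schedule_0001.xml"]:
        process(Job(p))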

Elastically scale the pipeline as needed, globally: The delivery pipeline should scale from a network, server, and storage perspective. The underlying technology should be able to start small and incrementally add resources to scale up performance and load non-disruptively to workflows and users with linear gains in throughput or better.

The underlying transport technology and software implementation should support aggregate transfer speeds of gigabits per second on commodity hardware and scale linearly in throughput with each additional transfer: 100 concurrent transfers ought to achieve the same aggregate throughput as one transfer at 100 times the speed. Concurrent transfers also need to gracefully share limited network bandwidth and system resources. Here, true congestion control and bandwidth-sharing fairness are essential to prevent certain jobs from unintentionally denying service to other jobs by hogging limited bandwidth (or, worse, drowning out other critical network applications, such as e-mail, Web, and other TCP-based traffic).
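As a toy illustration of the bandwidth-sharing policy (not Aspera's FASP congestion control), the sketch below splits a capped aggregate rate across concurrent jobs by weight, so a high-priority delivery gets a larger share without starving the others:

# Hypothetical sketch of weighted fair sharing among concurrent transfers.
def allocate_rates(link_capacity_mbps, jobs):
    """jobs: dict of job name -> priority weight. Returns Mbps per job."""
    total_weight = sum(jobs.values()) or 1
    return {name: link_capacity_mbps * w / total_weight
            for name, w in jobs.items()}

if __name__ == "__main__":
    # A 1 Gbps link shared by three transfers; the live-match delivery gets
    # twice the weight of the archive jobs.
    print(allocate_rates(1000, {"live_match": 2, "archive_a": 1, "archive_b": 1}))
    # -> {'live_match': 500.0, 'archive_a': 250.0, 'archive_b': 250.0}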

Aspera Orchestrator helps build efficient, predictable file-processing pipelines and streamline complex workflows.

When the required scale exceeds the fixed capacity of the existing data center(s), workloads should be able to burst out to the cloud with high-speed transfer of large content, parallel processing into the required target formats and bitrates, and high-speed delivery to ensure that service-level agreements are met.
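The sketch below is a simplified, hypothetical example of that burst pattern: fanning one source file out to several target renditions in parallel on scale-out compute. It assumes ffmpeg is installed, and the rendition ladder and file names are illustrative:

# Illustrative sketch: parallel transcode of one source into several bitrates.
import subprocess
from concurrent.futures import ProcessPoolExecutor

RENDITIONS = [  # (label, height, video bitrate) -- hypothetical ladder
    ("1080p", 1080, "6M"),
    ("720p",  720,  "3M"),
    ("480p",  480,  "1.5M"),
]

def transcode(source, label, height, bitrate):
    out = f"{source.rsplit('.', 1)[0]}_{label}.mp4"
    cmd = ["ffmpeg", "-y", "-i", source,
           "-vf", f"scale=-2:{height}",
           "-c:v", "libx264", "-b:v", bitrate,
           "-c:a", "aac", out]
    subprocess.run(cmd, check=True)
    return out

if __name__ == "__main__":
    source = "match_master.mxf"
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(transcode, source, *r) for r in RENDITIONS]
        for f in futures:
            print("produced", f.result())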

For moving large content to the cloud, maximum transfer speed and scale-out of storage are necessary and are achieved through direct integration with the underlying object-storage interfaces. By contrast, staging files on the local storage attached to a cloud virtual machine caps file and data-set sizes at the size of the attached disk and delays processing, because the content must then be copied to object storage before it can be fully utilized. Deep integration with the underlying object-storage application-programming interfaces (APIs) ensures maximum speed end to end and adds numerous transfer-management features, such as pause, resume, and encryption at rest.
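As a hypothetical illustration of the "direct to object storage" pattern, the following sketch uses the AWS boto3 SDK as a stand-in (it is not Aspera's transfer stack): the file streams to the bucket in parallel multipart chunks, with server-side encryption at rest, instead of being staged on a VM's local disk first. Bucket and key names are placeholders.

# Hypothetical sketch: direct-to-object-storage upload via multipart transfer.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,   # switch to multipart above 64 MB
    multipart_chunksize=64 * 1024 * 1024,   # 64 MB parts
    max_concurrency=16,                     # parallel part uploads
)

s3.upload_file(
    "match_master.mxf",
    "example-media-ingest",
    "masters/match_master.mxf",
    ExtraArgs={"ServerSideEncryption": "AES256"},  # encryption at rest
    Config=config,
)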

Provide comprehensive security throughout: Security should be provided end to end and top to bottom. This includes secure user and endpoint authentication, authorization, encryption (at rest and in flight), and integration with antivirus and other security technologies (such as directory services or identity management). Federal standards for encryption (AES-128) and FIPS 140 compliance should be met.
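For readers who want to see what AES-based content protection looks like in practice, here is an illustrative Python sketch using the "cryptography" package (not Aspera's implementation) to encrypt a payload with an AES-128 key in GCM mode:

# Illustrative sketch only: AES-128 encryption of a content payload.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # AES-128 key
aesgcm = AESGCM(key)

payload = b"media segment bytes"
nonce = os.urandom(12)                      # unique nonce per encryption
ciphertext = aesgcm.encrypt(nonce, payload, None)

assert aesgcm.decrypt(nonce, ciphertext, None) == payload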

Maximize interoperability without compromising performance: Perhaps the single most common attribute of today’s media-file–transfer pipelines is the need to smoothly interoperate transfers and automation across all the operating systems and file systems in use today, cloud or on-premises, while maintaining performance and security.

For maximum transfer performance with the least system resources and highly precise rate control, it’s essential to write native code for every OS platform. Although this is far more difficult engineering, with significantly more complexity and a huge matrix of builds for the vendor to test and deploy, it enables true breakthrough performance and consistency in speed across desktop, browser, and server environments and across operating systems.

Achieving Predictability Throughout the Pipeline

For many companies, the costs of ensuring some level of predictability in lead and processing times are quite high. It’s safer to send a tape overnight than risk a schedule slip using IT technologies that may fail because content is especially large, the network distance is great, or a large number of concurrent jobs are kicked off simultaneously.

The reality is that predictability should be achievable irrespective of the move to IT systems, distance, location, load, or other timings. Properly designed software systems scale up to handle very large numbers of files and frequency of processing; have loosely coupled components, each of which can be made highly available; and can be extended for new functionality. They enable IT systems to perform with increased predictability over former physical-media/tape–processing pipelines. An end-to-end workflow-automation approach builds on these principles and offers a way to realize the benefits of file-based IT systems for dramatic productivity increases and cost savings in the global media-supply chain.

Editor’s note: This white paper was originally published in the Fall Edition of the SVG SportsTech Journal.
