
Scalable Distributed Job Processing for Media Workflows

  • Writer: primalmotion studio
  • Jan 1, 2019
  • 1 min read

Highlights of the technical requirements:

  • Execute synchronous and asynchronous jobs in single or multiple steps

  • Execute high-volume jobs with fault tolerance and high performance

  • Flexibly balance execution across service instances with a rate limiter

  • Define and register job types dynamically at runtime (see the sketch after this list)

  • Scale job executor nodes to suit the required throughput

  • Pause and resume jobs on any service instance, or globally

  • Minimal CPU footprint, allowing a job executor service instance to run thousands of jobs at a time

  • Self-healing capabilities and the ability to transparently delegate job execution to other job executor instances

  • All of the above integrated into a broader microservice architecture without strong dependencies, enabling independent development and production life cycles.
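To make the "register job types at runtime" requirement concrete, here is a minimal sketch in Java of what such a registry could look like. All names here (Job, JobRegistry, the "transcode" type) are illustrative assumptions for this post, not the actual JEF API.

    // Minimal sketch of runtime job-type registration (illustrative only;
    // Job, JobRegistry and the "transcode" type are not the real JEF API).
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    interface Job {
        void execute();
    }

    class JobRegistry {
        // Job types are plain factories keyed by name, so a new type can be
        // added, or an existing one replaced, while the service is running.
        private final Map<String, Function<Map<String, String>, Job>> types =
                new ConcurrentHashMap<>();

        void register(String typeName, Function<Map<String, String>, Job> factory) {
            types.put(typeName, factory);
        }

        Job create(String typeName, Map<String, String> config) {
            Function<Map<String, String>, Job> factory = types.get(typeName);
            if (factory == null) {
                throw new IllegalArgumentException("Unknown job type: " + typeName);
            }
            return factory.apply(config);
        }
    }

    public class RegistryExample {
        public static void main(String[] args) {
            JobRegistry registry = new JobRegistry();
            // A new "transcode" type becomes available without a redeploy.
            registry.register("transcode", cfg ->
                    () -> System.out.println("Transcoding " + cfg.get("input")));
            registry.create("transcode", Map.of("input", "clip.mov")).execute();
        }
    }

Keeping job types behind factories like this is also what makes the "minimal footprint" requirement plausible: the executor holds lightweight descriptors and only materializes job instances when they run.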


Dalet Flex enhances production capabilities, resource allocation and auto-scaling: Kubernetes support for media processing on cloud infrastructure


... and don't forget a key requirement: "when things go wrong". For example: cancelling a job in a distributed system, a distributed lock mechanism to orchestrate access to the objects being used by the jobs, and a microservice restarting while jobs are running, all while guaranteeing business continuity...
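As an illustration of the lock-guarded cancellation idea, here is a minimal Java sketch. The DistributedLock interface and the single-JVM LocalLock stand-in are assumptions made for this example; a real deployment would back the lock with a coordination store such as ZooKeeper, etcd or Redis.

    // Sketch of lock-guarded cancellation (illustrative assumptions only).
    // DistributedLock would be backed by a coordination store in production;
    // LocalLock is a single-JVM stand-in that keeps the example runnable.
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.locks.ReentrantLock;

    interface DistributedLock {
        boolean tryLock(String resourceId, long timeout, TimeUnit unit)
                throws InterruptedException;
        void unlock(String resourceId);
    }

    class LocalLock implements DistributedLock {
        private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

        public boolean tryLock(String resourceId, long timeout, TimeUnit unit)
                throws InterruptedException {
            return locks.computeIfAbsent(resourceId, k -> new ReentrantLock())
                    .tryLock(timeout, unit);
        }

        public void unlock(String resourceId) {
            ReentrantLock lock = locks.get(resourceId);
            if (lock != null && lock.isHeldByCurrentThread()) {
                lock.unlock();
            }
        }
    }

    public class CancelExample {
        public static void main(String[] args) throws InterruptedException {
            DistributedLock lock = new LocalLock();
            String resource = "asset-42"; // object the running job is mutating

            if (lock.tryLock(resource, 5, TimeUnit.SECONDS)) {
                try {
                    // No other executor instance holds the resource: safe to
                    // cancel the job and clean up its partial work here.
                    System.out.println("Cancelling job that uses " + resource);
                } finally {
                    lock.unlock(resource);
                }
            } else {
                // Another instance owns the resource; retry later or delegate
                // the cancellation to the instance that holds the lock.
                System.out.println("Resource busy, deferring cancellation");
            }
        }
    }

The same guard covers the restart scenario: an executor coming back up cannot touch a job's objects until the lock held by whichever instance took over has been released.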


JEF enables the media processing plugins: it provides an open system for customers and partners to expand the media platform with new integrations and custom developments.
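As a rough idea of what such an open system can look like, here is a sketch of a partner-facing plugin contract discovered with Java's standard ServiceLoader. The MediaPlugin interface and PluginHost names are assumptions for illustration, not the real JEF SPI.

    // Sketch of a partner-facing plugin contract (names are assumptions, not
    // the real JEF SPI). Plugins on the classpath that declare the matching
    // META-INF/services entry are discovered via the standard ServiceLoader.
    import java.util.ServiceLoader;

    interface MediaPlugin {
        String name();
        void process(String mediaUrl);
    }

    public class PluginHost {
        public static void main(String[] args) {
            for (MediaPlugin plugin : ServiceLoader.load(MediaPlugin.class)) {
                System.out.println("Loaded plugin: " + plugin.name());
                plugin.process("s3://bucket/asset.mxf");
            }
        }
    }

Because plugins only depend on the contract, partners can ship and version their integrations on their own release cycle, matching the "no strong dependencies" requirement above.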