2025 BEITC Proceedings

  • A Framework for Efficient Data Scripting in High-Volume Sports Graphics Workflows - $15

    Date: April 3, 2024

    Data is at the heart of storytelling for broadcasters across the world of sports. Through powerful data APIs, broadcasters can access and integrate massive volumes of real-time data into their on-air graphics, enabling stats-driven analysis and automated graphic updates. However, maintaining and integrating such high volumes of data can prove difficult, both from an IT perspective and in terms of manual design effort. This paper outlines how a scripting framework – leveraging open-source, high-performance scripting tools – can provide superior flexibility in secure IT environments and eliminate the need for repetitive design work when using the same data in different presentation scenarios.
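
    As a purely illustrative sketch of the idea (not the authors' framework), the Python snippet below polls a hypothetical sports-data endpoint and maps the response onto named graphic fields, so a single script can feed many presentation scenarios; the URL, field names and the send_to_graphics() helper are assumptions.

        # Purely illustrative: poll a hypothetical sports-data API and map the
        # response onto named graphic fields, so one script serves many
        # presentation scenarios. The endpoint, field names and the
        # send_to_graphics() helper are assumptions, not part of the paper.
        import time

        import requests

        API_URL = "https://example.com/v1/games/12345/boxscore"  # hypothetical endpoint

        FIELD_MAP = {                     # graphic field <- dotted path in the API payload
            "home_score": "homeTeam.score",
            "away_score": "awayTeam.score",
            "clock": "gameClock",
        }

        def pluck(data, dotted_path):
            """Follow a dotted path such as 'homeTeam.score' into a nested dict."""
            for key in dotted_path.split("."):
                data = data[key]
            return data

        def send_to_graphics(fields):
            """Placeholder for the graphics system's update call."""
            print("update graphics:", fields)

        while True:
            payload = requests.get(API_URL, timeout=5).json()
            send_to_graphics({name: pluck(payload, path) for name, path in FIELD_MAP.items()})
            time.sleep(1)  # re-poll once per second for near-real-time updates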

    Nikole McStanley | Chyron | Tampa, Fla., United States
    David Mayer | Chyron | Apex, N.C., United States
    Dan MacDonald | Chyron | Ottawa, Ontario, Canada



  • A Multi-Pronged Approach to Effectively Secure SMPTE ST 2059 Time Transfer - $15

    Date: April 26, 2020

    With the new version of the IEEE 1588 standard due to be published in the very near future, this paper summarizes its enhancements with respect to improved resilience and observability. It concludes by presenting ways the broadcasting industry can benefit from these new features efficiently and quickly.

    Nikolaus Kerö | Oregano Systems | Vienna, Austria
    Thomas Kernen | Mellanox Technologies | Russin, Switzerland



  • A Novel White-Balance System for Broadcast Cameras Using Machine Learning - $15

    Date: April 26, 2020

    Color constancy is the ability to perceive the colors of objects invariant to the color of the light source; the human visual system does this naturally. Color constancy algorithms first estimate the color of the illuminant and then correct the image so that it appears to have been taken under a canonical light source. In this project, we apply color constancy on a broadcast camera without processing the video itself, instead acting on the RGB gain controls much as a camera operator would. The reference color is provided by a neural network trained to infer the color of the illuminant from a single sample frame from the camera; from that reference, we compute the direction of the error from the white illuminant in color space and generate a new RGB gain triplet for the camera. With each new adjustment the cycle continues; eventually the camera outputs a corrected image, the neural network estimates the illuminant as white, the error goes to zero, and the system stabilizes.
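
    A minimal sketch of the closed loop described above (not the authors' implementation): a placeholder estimate_illuminant() stands in for the trained network, while grab_frame() and set_camera_gains() stand in for the camera's control interface.

        # Minimal sketch of the closed loop described above (not the authors' code).
        # estimate_illuminant() stands in for the trained neural network;
        # grab_frame() and set_camera_gains() stand in for the camera interface.
        import numpy as np

        def white_balance_loop(grab_frame, estimate_illuminant, set_camera_gains,
                               step=0.1, tolerance=0.01, max_iters=50):
            gains = np.array([1.0, 1.0, 1.0])        # current R, G, B gains
            white = np.array([1.0, 1.0, 1.0])        # canonical (white) illuminant
            for _ in range(max_iters):
                frame = grab_frame()                 # sample frame from the camera
                illum = np.asarray(estimate_illuminant(frame), dtype=float)
                error = illum / illum[1] - white     # error vs. white, green-normalized
                if np.linalg.norm(error) < tolerance:
                    break                            # illuminant looks white: stable
                gains = gains * (1.0 - step * error) # nudge gains against the error
                set_camera_gains(gains)              # apply the new RGB gain triplet
            return gains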

    Edmundo Hoyle | GRUPO GLOBO | Rio de Janeiro, Brazil
    Alvaro Antelo | GRUPO GLOBO | Rio de Janeiro, Brazil
    Leandro Pena | GRUPO GLOBO | Rio de Janeiro, Brazil



  • A Platform for the Development and Deployment of Software-Defined Media Processing Pipelines - $15

    Date: April 3, 2024

    We present the implementation of a platform, available today, for the easy development and deployment of media processing pipelines for video transcoding and Artificial Intelligence (AI) applications. This platform implements an open-source software architecture that combines broadcast industry standards (SMPTE ST 2110, AMWA NMOS, RIST, SRT and NDI) with proven IT industry methods (containers, Kubernetes, Helm) running on commercial off-the-shelf (COTS) hardware, preventing vendor lock-in on-prem, in the cloud or at the edge. Applications developed for this platform are easily deployed using an open-source automation layer and dynamically interconnected to build any desired pipeline or workflow.
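
    To illustrate the containers-plus-Kubernetes pattern the abstract names (this is not the platform's own automation layer), the sketch below deploys a hypothetical containerized transcoding stage with the Kubernetes Python client; the image name, labels and namespace are made up.

        # Illustrative only: deploying a containerized media-processing stage with
        # the Kubernetes Python client. The image, labels and namespace are made up;
        # the platform described in the paper uses its own Helm-based automation.
        from kubernetes import client, config

        def deploy_transcode_stage(namespace="media", replicas=1):
            config.load_kube_config()                   # or load_incluster_config()
            container = client.V1Container(
                name="transcoder",
                image="registry.example.com/media/transcoder:1.0",  # hypothetical image
                ports=[client.V1ContainerPort(container_port=5004)],
            )
            template = client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "transcoder"}),
                spec=client.V1PodSpec(containers=[container]),
            )
            deployment = client.V1Deployment(
                api_version="apps/v1",
                kind="Deployment",
                metadata=client.V1ObjectMeta(name="transcoder"),
                spec=client.V1DeploymentSpec(
                    replicas=replicas,
                    selector=client.V1LabelSelector(match_labels={"app": "transcoder"}),
                    template=template,
                ),
            )
            client.AppsV1Api().create_namespaced_deployment(namespace=namespace, body=deployment)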

    Gareth Sylvester-Bradley | NVIDIA Development UK Ltd | Reading, United Kingdom
    Richard Hastie | NVIDIA Development UK Ltd | Reading, United Kingdom
    Pravin Sethia | NVIDIA Graphics Pvt Ltd | Pune, Maharashtra, India
    Thomas True | NVIDIA Corporation | Santa Clara, Calif., United States



  • A Review of State-of-the-Art Automatic Speech Recognition Services for CART and CC Applications - $15

    Date: April 26, 2020

    In this paper, we analyze how ready and useful Automatic Speech Recognition (ASR, also known as speech-to-text) services are for companies aiming to provide cost-effective Communication Access Realtime Translation (CART) or Closed Captioning (CC) services for the corporate, education, and special events markets.

    Today's major cloud infrastructure vendors provide a broad spectrum of artificial intelligence (AI) and machine learning (ML) services. ASR is one of the most common. Several vendors offer multiple APIs/engines to suit a wide range of ASR projects. A number of open-source AI/ML ASR engines are also available (assuming you are ready to embed or deploy them yourself). How can you determine which of these many options is best for your project?

    We analyze existing ASR offerings based on several different criteria. We discuss ASR accuracy, how it can be defined and measured, and what datasets and tools are available for benchmarking. Accuracy is paramount, making it a natural starting point, but other parameters may also significantly influence your choice of ASR engine. Real-time applications (CC for live streaming and on-premises CART services) demand low-latency responses from the ASR system and a specially designed streaming API, neither of which is a requirement for offline (and thus more relaxed) transcription scenarios.

    ASR engine extensibility and flexibility are among the other key factors to consider. What languages does the ASR system support for online and offline transcription? Are there domain-specific vocabulary models/extensions? Can the ASR model be customized to recognize specialized vocabulary and specific terms (e.g., the names of a company's products)? We try to shed some light on these questions as well.
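
    For readers unfamiliar with how ASR accuracy is typically measured, the sketch below computes word error rate (WER), the standard metric, as word-level edit distance divided by the number of reference words; it is a generic illustration, not code from the paper.

        # Generic illustration: word error rate (WER), the usual ASR accuracy metric,
        # computed as word-level edit distance (substitutions + deletions + insertions)
        # divided by the number of reference words. Not code from the paper.
        def wer(reference: str, hypothesis: str) -> float:
            ref, hyp = reference.split(), hypothesis.split()
            # dp[i][j] = edit distance between ref[:i] and hyp[:j]
            dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
            for i in range(len(ref) + 1):
                dp[i][0] = i
            for j in range(len(hyp) + 1):
                dp[0][j] = j
            for i in range(1, len(ref) + 1):
                for j in range(1, len(hyp) + 1):
                    cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                    dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                                   dp[i][j - 1] + 1,         # insertion
                                   dp[i - 1][j - 1] + cost)  # substitution / match
            return dp[len(ref)][len(hyp)] / max(len(ref), 1)

        print(wer("the quick brown fox", "the quick brown box"))  # -> 0.25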

    Misha Jiline | Epiphan Video | Ottawa, Ontario, Canada
    David Kirk | Epiphan Video | Ottawa, Ontario, Canada
    Greg Quirk | Epiphan Video | Ottawa, Ontario, Canada
    Mike Sandler | Epiphan Video | Ottawa, Ontario, Canada
    Michael Monette | Epiphan Video | Ottawa, Ontario, Canada



  • A Unified Cloud Pipeline for Production, Post, Mastering and Localization using IMF and Open TimeLine IO - $15

    Date: April 26, 2020

    By leveraging the Interoperable Master Format (IMF), OpenTimelineIO (OTIO) and other open-source and standardized tools such as the C4ID cloud identifier, a centralized, cloud-based continuous master can be created. This continuous master can be constantly synchronized with work in progress in editing, visual effects, color grading and sound mixing systems to provide a single, centrally available point of collaboration for all parties in production, post, localisation and distribution. This method avoids the requirement for proprietary asset management or file types, allows creatives and engineers to use the tools they prefer, and avoids repeatedly moving assets in and out of the cloud.

    Furthermore, it enables a paradigm shift in the localisation and distribution mastering process, giving productions the ability to deliver a project almost instantly to any number of platforms, territories and versions once it is complete. This workflow also allows productions to share conformed work-in-progress versions at any moment during the production/post/versioning life cycle without interrupting progress in any of these processes. It also has side benefits such as total de-duplication of assets, no vendor or tool lock-in for access to assets, and the ability to link business systems to the content creation process, allowing order-to-delivery automation.

    Overall, this method offers benefits in terms of time, cost, operational effectiveness and, most importantly, creativity. It applies to all types of delivery, from movies to episodic, advertising, trailers or any produced content targeting multi-platform, domestic and international audiences, and it allows the latest content trends, such as branching storylines and audience-interactive content, to be stored and accessed in a non-proprietary way. The methodology described in this paper provides a step change in managing today's content pipelines and future-proofing for next-generation workflows.
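
    As a small illustration of the open-source tooling named above (not the authors' pipeline), the sketch below uses the OpenTimelineIO Python library to build a one-clip timeline and serialize it to .otio, the kind of interchange object a cloud-hosted continuous master could keep synchronized; the clip name, asset URL and timing are hypothetical.

        # Small illustration using the open-source OpenTimelineIO library named above:
        # build a one-clip timeline and serialize it to .otio. The clip name, asset URL
        # and timing are hypothetical; this is not the authors' pipeline code.
        import opentimelineio as otio

        timeline = otio.schema.Timeline(name="continuous_master_wip")
        track = otio.schema.Track(name="V1")
        timeline.tracks.append(track)

        clip = otio.schema.Clip(
            name="scene_001_take_03",
            media_reference=otio.schema.ExternalReference(
                target_url="s3://example-bucket/scene_001_take_03.mxf"),  # hypothetical asset
            source_range=otio.opentime.TimeRange(
                start_time=otio.opentime.RationalTime(0, 24),
                duration=otio.opentime.RationalTime(240, 24)),            # 10 s at 24 fps
        )
        track.append(clip)

        otio.adapters.write_to_file(timeline, "continuous_master_wip.otio")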

    Richard Welsh | Sundog Media Toolkit Ltd | Bristol, United Kingdom