2025 BEITC Proceedings

  • Orchestrating Systems to Get Data Where Users Need It - $15

    Date: April 26, 2020

    As the industry moves towards an IP-first world, creative teams require access to resources (local, remote, shared or distributed) at the press of a button, without concern for what is happening “under the bonnet.” Creating orchestrated, multi-service chains that preserve monitoring and resilience when two or more orchestration systems are combined presents a new challenge for technical teams.

    This paper shows one possible approach that leverages existing practices, spanning multiple organizations and resources, to create a distributed production and distribution fabric, orchestrated by multiple control planes with no single point of failure. (An illustrative failover sketch follows this entry.)

    Jemma Phillips | BBC | London, UK
    Ivan Hassan | BBC | London, UK
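
    A minimal, hypothetical sketch of the kind of failover behaviour the abstract alludes to: a resource request is dispatched to whichever of several redundant control planes responds, so no single orchestrator becomes a single point of failure. This is not the BBC system described in the paper; the endpoint URLs and the allocate() payload are illustrative assumptions only.

        # Illustrative only: try redundant control planes in turn so that no
        # single orchestrator is a single point of failure. Endpoints and
        # request payloads are hypothetical.
        import json
        import urllib.request

        CONTROL_PLANES = [  # hypothetical orchestrator endpoints
            "https://orchestrator-a.example/api/v1/allocate",
            "https://orchestrator-b.example/api/v1/allocate",
        ]

        def allocate(resource_request: dict, timeout: float = 2.0) -> dict:
            """Send the request to each control plane in turn; first healthy one wins."""
            body = json.dumps(resource_request).encode("utf-8")
            last_error = None
            for url in CONTROL_PLANES:
                req = urllib.request.Request(
                    url, data=body, headers={"Content-Type": "application/json"}
                )
                try:
                    with urllib.request.urlopen(req, timeout=timeout) as resp:
                        return json.load(resp)      # allocation succeeded
                except OSError as err:              # unreachable or timed out
                    last_error = err                # fail over to the next plane
            raise RuntimeError(f"no control plane reachable: {last_error}")

        # Example: ask for a remote edit suite without caring where it runs.
        # allocate({"resource": "edit-suite", "location": "any", "duration_min": 60})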



  • OTT: Local Ingest at the Edge - $15

    Date: October 9, 2021

    With OTT coming of age, packaging of local content for cloud and OTT delivery can be complex and costly. Capturing the opportunity requires overcoming legacy pitfalls and meeting modern requirements.

    Ronald Alterio | Blonder Tongue Laboratories, Inc. | Old Bridge, New Jersey, United States



  • Overcoming Obstacles to Design a Robust Single Frequency Network in San Francisco - $15

    Date: April 26, 2020

    Designing an ATSC 3.0 single frequency network (SFN) in the San Francisco market is challenging on many levels. The San Francisco market has anomalous terrain, limited options for transmitter sites due to hillside scenic regulations, and an onerous permitting process, all of which significantly impede system design. In addition, adjacent DMA markets have first-adjacent channel facilities with protected contours encroaching well within the San Francisco DMA and city boundary, and in-market adjacent channels pose a further challenge for ATSC 3.0 SFN designs. In a post-repack environment, there will be sixteen full-power stations: five VHF and eleven UHF.

    This paper will present the various obstacles encountered and the solutions we've created to overcome them and arrive at a robust, FCC-compliant SFN design. We will describe the process we've undertaken to design ATSC 3.0 SFNs on both UHF and VHF and how we arrived at the proposed SFN designs. Finally, we will discuss the futility of SFN design without proper propagation software deployment. (An illustrative delay-spread calculation follows this entry.)

    P. Eric Dausman | Public Media Group, PBC | Boulder, CO, USA
    Ryan C. Wilhour | Kessler and Gehman Associates, Inc. | Gainesville, FL, USA
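
    As a generic illustration of one constraint behind any SFN design, and not a method taken from the paper, the sketch below checks whether the differential path delay between two hypothetical transmitter sites fits within a chosen guard interval. The site distances and the guard-interval value are assumptions for illustration; an actual design would use the propagation software the authors discuss.

        # Illustrative only: in an SFN, echoes from different transmitters must
        # arrive within the receiver's guard interval (plus equalizer margin)
        # to avoid self-interference. All numbers below are assumptions.
        C_KM_PER_US = 0.299792458  # propagation speed, km per microsecond

        def differential_delay_us(d1_km: float, d2_km: float) -> float:
            """Delay spread at a receiver d1 and d2 km from two transmitters."""
            return abs(d1_km - d2_km) / C_KM_PER_US

        def fits_guard_interval(d1_km: float, d2_km: float, guard_us: float) -> bool:
            return differential_delay_us(d1_km, d2_km) <= guard_us

        # Hypothetical receiver 12 km from site A and 58 km from site B,
        # evaluated against an assumed 300-microsecond guard interval:
        spread = differential_delay_us(12.0, 58.0)   # about 153 microseconds
        print(f"delay spread: {spread:.1f} us,",
              "OK" if fits_guard_interval(12.0, 58.0, 300.0) else "outside GI")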



  • Perceptually Aware Live VBR Encoding Scheme for Adaptive AVC Streaming - $15

    Date: April 14, 2023

    Currently, a fixed set of bitrate-resolution pairs, termed a “bitrate ladder,” is used in live streaming applications. Similarly, two-pass variable bitrate (VBR) encoding schemes are not used in live streaming, to avoid the additional latency added by the first pass. Bitrate ladder optimization is necessary to (i) decrease storage or delivery costs and/or (ii) increase Quality of Experience. Two-pass VBR encoding improves compression efficiency, owing to better encoding decisions in the second pass based on the first-pass analysis. In this light, this paper introduces a perceptually aware constrained variable bitrate (cVBR) encoding scheme (Live VBR) for HTTP adaptive streaming applications, which jointly optimizes the perceptual redundancy between the representations of the bitrate ladder, the perceptual quality (in terms of VMAF), and the constant rate factor (CRF). Discrete Cosine Transform (DCT)-energy-based low-complexity spatial and temporal features, namely brightness, spatial texture information, and temporal activity, are extracted for every video segment to predict a perceptually aware bitrate ladder for encoding. Experimental results show that, on average, Live VBR yields bitrate savings of 7.21% and 13.03% to maintain the same PSNR and VMAF, respectively, compared to the reference HTTP Live Streaming (HLS) bitrate ladder with constant bitrate (CBR) encoding using the x264 AVC encoder, without any noticeable additional latency in streaming. Additionally, Live VBR results in a 52.59% cumulative decrease in storage space for the various representations and a 28.78% cumulative decrease in energy consumption, considering a perceptual difference of 6 VMAF points. (An illustrative feature-extraction sketch follows this entry.)

    Vignesh V. Menon | Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität | Klagenfurt, Austria
    Prajit T. Rajendran | Université Paris-Saclay, CEA, List | F-91120 Palaiseau, France
    Christian Feldmann | Bitmovin | Klagenfurt, Austria
    Martin Smole | Bitmovin | Klagenfurt, Austria
    Mohammad Ghanbari | Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität and School of Computer Science and Electronic Engineering | Klagenfurt, Austria and University of Essex, United Kingdom
    Christian Timmerer | Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität | Klagenfurt, Austria
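
    The sketch below is a rough, hypothetical illustration of DCT-energy-based spatial and temporal features of the kind the abstract describes. The block size, weighting, and normalization are assumptions for illustration, not the authors' implementation.

        # Illustrative only: per-frame DCT-energy features in the spirit of the
        # low-complexity spatial/temporal measures described above.
        import numpy as np
        from scipy.fft import dctn

        BLOCK = 32  # assumed analysis block size (pixels)

        def block_dct_energy(frame: np.ndarray) -> np.ndarray:
            """DCT energy of each BLOCK x BLOCK luma block (DC coefficient excluded)."""
            h, w = frame.shape
            h, w = h - h % BLOCK, w - w % BLOCK
            energies = []
            for y in range(0, h, BLOCK):
                for x in range(0, w, BLOCK):
                    coeffs = dctn(frame[y:y + BLOCK, x:x + BLOCK].astype(np.float64))
                    coeffs[0, 0] = 0.0          # drop the DC (brightness) term
                    energies.append(np.sum(np.abs(coeffs)))
            return np.array(energies)

        def spatial_texture(frame):
            """Average per-block DCT energy: a proxy for spatial complexity."""
            return float(np.mean(block_dct_energy(frame)))

        def temporal_activity(prev_frame, frame):
            """Mean absolute difference of co-located block energies between frames."""
            return float(np.mean(np.abs(block_dct_energy(frame) - block_dct_energy(prev_frame))))

        def brightness(frame):
            """Average luma value of the frame."""
            return float(np.mean(frame))

        # Example with random 'luma' frames standing in for decoded video:
        f0 = np.random.randint(0, 256, (1080, 1920))
        f1 = np.random.randint(0, 256, (1080, 1920))
        print(spatial_texture(f1), temporal_activity(f0, f1), brightness(f1))

    Features like these would then feed a model that predicts the bitrate-resolution-CRF points of the ladder for each segment; that prediction step is specific to the paper and is not sketched here.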



  • Performance Evaluation of Low-Latency DASH and Low-Latency HLS Streaming Systems - $15

    Date: October 9, 2021

    A detailed report on the results of an in-depth analysis of the capabilities and performance of the two major variants of today’s low-latency streaming systems: Low-Latency HLS (LL-HLS) and Low-Latency DASH (LL-DASH).

    Bo Zhang | Brightcove Inc. | Boston, Massachusetts, United States
    Thiago Teixeira | Brightcove Inc. | Boston, Massachusetts, United States
    Yuriy Reznik | Brightcove Inc. | Boston, Massachusetts, United States



  • Personalized and Immersive Sound Experience Based on an Interoperable NGA (Next Generation Audio) End-to-End Chain - $15

    Date: April 26, 2020

    NGA gives the best possible listening experience in varying situations (e.g., improved intelligibility and understanding, adaptation to the reproduction set-up and listening context, and audio content tailored to individual preferences and needs) while saving bandwidth and production effort. These scenarios are enabled by the so-called renderer, whose purpose is to convert a set of audio signals with associated metadata into a different configuration of audio signals (e.g., speaker feeds) based on that metadata and on control inputs from the playback environment and the user's preferences. The approach of defining a specific renderer with its own metadata is extremely viable in clearly defined vertical businesses such as cinema and packaged media, but broadcast is by its nature a transversal business, so standards that describe the metadata and the behaviour of the renderer become beneficial. The ADM (Audio Definition Model) standard defined in ITU-R BS.2076-1 is particularly relevant in this context to ensure interoperability and reproducibility along the chain. The aim of this paper is to describe ADM-based use cases and workflows and the ongoing efforts to promote wide adoption and integration of the ADM. (A toy rendering sketch follows this entry.)

    David Marston | British Broadcasting Corporation | London & Salford, United Kingdom
    Thomas Nixon | British Broadcasting Corporation | London & Salford, United Kingdom
    Chris Pike | British Broadcasting Corporation | London & Salford, United Kingdom
    Matthieu Parmentier | France Télévisions | Paris, France
    Paola Sunna | European Broadcasting Union | Geneva, Switzerland
    Michael Weitnauer | Institut für Rundfunktechnik GmbH | Munich, Germany
    Benjamin Weiss | Institut für Rundfunktechnik GmbH | Munich, Germany
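
    As a toy illustration of what a renderer does (converting object audio plus positional metadata into speaker feeds), the sketch below pans a single audio object to a stereo pair using constant-power panning. It is not the ADM/ITU renderer, and the simplified metadata fields and azimuth convention are assumptions for illustration.

        # Illustrative only: a toy "renderer" that turns one audio object plus
        # a simplified azimuth/gain metadata block into stereo speaker feeds.
        # This is not the ADM (ITU-R BS.2076) renderer.
        import math
        import numpy as np

        def render_object_to_stereo(samples: np.ndarray, azimuth_deg: float,
                                    gain: float = 1.0) -> np.ndarray:
            """Pan a mono object to stereo feeds. Assumed convention:
            +30 deg = left loudspeaker, -30 deg = right loudspeaker."""
            az = max(-30.0, min(30.0, azimuth_deg))
            x = (az + 30.0) / 60.0                 # 0 = fully right, 1 = fully left
            theta = x * math.pi / 2.0
            g_left, g_right = math.sin(theta), math.cos(theta)   # constant power
            return np.stack([gain * g_left * samples, gain * g_right * samples])

        # Example: simplified per-object metadata of the kind a renderer consumes.
        metadata = {"objectName": "commentary", "azimuth": 15.0, "gain": 0.8}
        mono = np.sin(2 * math.pi * 440 * np.arange(48000) / 48000)   # 1 s, 440 Hz
        stereo = render_object_to_stereo(mono, metadata["azimuth"], metadata["gain"])
        print(stereo.shape)   # (2, 48000): left and right speaker feeds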