2020 BEITC Proceedings

  • Introduction to the 5G Media Action Group (5G-MAG)

    Date: April 26, 2020

    The 5G-MAG has recently been launched as an association that will take the emerging 3GPP 5G standards and turn them into viable solutions for use by media companies around the world. The establishment of the group was facilitated by the EBU in Geneva, and its early membership includes many of Europe’s public service broadcasters along with some global manufacturers.

    The 5G-MAG aims to represent the interests of the worldwide media industry in the various 5G stakeholder groups, including 3GPP, DVB and ATSC for standardisation, the European Commission and the ITU for regulation, and a wide range of industry bodies.

    Its main aim is to ensure that these emerging standards meet the needs of media companies for both production and distribution, and to help facilitate and promote trials for these use cases. It will also investigate and help design networks that are best suited to the distribution of media, including hybrid 5G/broadcast networks.

    Gregory Bensberg | Digital 3&4/ITV | London, United Kingdom



  • Is the Technology Ready Yet for Live Standards Conversion in the Cloud?

    Date: April 26, 2020

    Broadcasters and media companies engaged in live international content distribution are familiar with the need for standards conversion. Multiple broadcast frame rates and formats are in use throughout the world, and with an ever-growing number of standards to support in mobile and streaming services, high-quality live standards conversion is an essential part of many businesses.

    The prevailing hardware model has been successively refined to the point where motion-compensated HD standards conversion is available in compact 1U and 2U form factors, or as single modular cards for commonly available infrastructure racks. A wide variety of workflow tools have been added to these converters, including audio shuffling and routing, high dynamic range and wide colour gamut mapping, picture enhancement tools, video/audio delay compensation, and metadata management.

    Connection and control of hardware standards converters is easy. Multiple SDI and/or SFP input/output connectors enable the unit to be connected to the facility router, and settings can easily be modified via front-panel buttons or a web interface. A video confidence monitor on the front panel enables the operator to make a quick check of continuity, and headphone sockets allow quick audio checks. Remote operation via a control interface or SNMP keeps the unit status visible at all times, including feeding into automated control and monitoring systems that can raise alarms if any problems arise.

    The workflow complexity of attempting to use an on-premises converter within a live cloud-based workflow is daunting enough, even before the costs are considered. Therefore, access to a software-based standards conversion service in a cloud environment is a tempting prospect for broadcasters. Capital expenditure on physical hardware can be replaced with operating expenditure that is only charged when conversion is needed. A cloud-based converter can be accessed from any location within the broadcaster’s network, as physical content delivery via BNCs is replaced by logical signal flow into software input/output processes.

    However, here we encounter our first challenge. The SDI signal is now streamed data, wherein audio, video and metadata are multiplexed within a transport stream, controlled by a streaming service protocol. The essence (audio and video) may be compressed, and therefore needs to be decoded before the standards conversion can be applied, and re-encoded on the output. This implies the availability of the appropriate codecs and sufficient processing resources for transcoding.
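
    As a concrete illustration of this decode–convert–re-encode chain, FFmpeg's `minterpolate` filter performs motion-compensated frame-rate conversion entirely in software. The sketch below merely assembles such a command line; the filenames and target rate are placeholders, not values from the paper:

```python
# Sketch: building a software standards-conversion command with FFmpeg.
# The minterpolate filter performs motion-compensated interpolation
# (mi_mode=mci). Filenames and target rate are illustrative placeholders.

def build_conversion_cmd(src, dst, target_fps):
    """Decode the input, convert frame rate with motion compensation,
    and re-encode the video with H.264, passing audio through."""
    return [
        "ffmpeg", "-i", src,
        "-vf", "minterpolate=fps=%s:mi_mode=mci" % target_fps,
        "-c:v", "libx264", "-c:a", "copy",
        dst,
    ]

cmd = build_conversion_cmd("input_50i.ts", "output_5994.ts", "60000/1001")
print(" ".join(cmd))
```

    In a cloud service, a process like this would sit behind an API, with transcoding resources allocated per job rather than per installed unit.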

    Our next challenge arises when monitoring the inputs and outputs of the converter. Since the operator will not be co-located with the equipment, proxy audio and video signals need to be delivered to a control room, with attendant issues of available bandwidth, security and latency.

    The biggest technical challenge is the decomposition of the complex motion-compensated processes into a new architecture that can take advantage of the parallel processing capabilities offered by cloud instances. Many years of refinement have led to highly efficient processing in hardware converters, but how do we replicate high-quality real-time performance in software-only solutions? It’s tempting to use GPUs, but cost and availability are an issue in rapidly deployed services.
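
    One way to picture the decomposition problem: motion-compensated conversion needs temporal context around every output frame, so a naive frame-by-frame split across cloud workers breaks at chunk boundaries. A common workaround, sketched below with invented sizes (this is not the method of any particular product), is to give each worker a chunk plus a few overlapping context frames:

```python
# Sketch: splitting a frame sequence into chunks for parallel workers,
# with overlap so each worker has the temporal context that
# motion-compensated interpolation needs at chunk boundaries.
# Chunk and overlap sizes are illustrative only.

def chunk_with_overlap(n_frames, chunk, overlap):
    """Yield (start, end) frame ranges; consecutive ranges share
    `overlap` frames so boundary frames can still be interpolated."""
    start = 0
    while start < n_frames:
        end = min(start + chunk, n_frames)
        yield (max(0, start - overlap), min(end + overlap, n_frames))
        start = end

ranges = list(chunk_with_overlap(n_frames=250, chunk=100, overlap=4))
print(ranges)
```

    Each worker converts its extended range but only emits the frames of its own chunk, discarding the overlap.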

    However, implementation in software can also bring advantages. As new standards and requirements emerge, modifications can be deployed in prototype (test) services, which enables easier switchover into the live transmission path. The hardware model required removing the converter from use while firmware upgrades were made, whereas a test software service can run in parallel with the main path.

    Rental models become much easier in software. Provision of rental hardware for live events could become problematic if the broadcaster had not planned far enough ahead or suffered an unexpected equipment breakdown close to a major event. Since software services can be spun up on cloud processing platforms at short notice, additional event provision becomes more feasible.

    Paola Hobson | InSync Technology Ltd | Petersfield, United Kingdom



  • JT-NM Tested August 2019, What Is In It For You? Results, Findings, and Methodologies

    Date: April 26, 2020

    The JT-NM Tested program continues to offer documented insights into how vendors’ equipment aligns with the SMPTE ST 2110, SMPTE ST 2022-7 and SMPTE ST 2059 standards. It has already shown that SMPTE ST 2110 is the go-to standard for media-over-IP transport. However, as described in the “EBU Pyramid,” having just a media transport without an open control plane is not enough. Therefore, it was decided that the next iteration of the program had to address that and greatly expand its scope.

    So the second iteration of the program included three types of tests, as follows:
    – Data plane: Basic SMPTE ST 2110 performance and behavior (this time including UHD formats)
    – Control plane: AMWA NMOS and JT-NM TR-1001-1 performance and behavior
    – Cybersecurity Vulnerability Assessment
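
    To give a flavour of the control-plane testing, AMWA IS-04 requires every node to expose a `self` resource describing itself. A minimal self-assessment check might verify that the required fields are present; the field names below follow the IS-04 Node API schema, while the sample payload is invented:

```python
# Sketch: validating an AMWA IS-04 Node API "self" resource.
# Required field names follow the IS-04 schema; the sample response
# is an invented example, not captured from real equipment.

REQUIRED_FIELDS = {"id", "version", "label", "href", "api"}

def check_node_self(resource):
    """Return a sorted list of missing required fields (empty = pass)."""
    return sorted(REQUIRED_FIELDS - set(resource.keys()))

sample = {
    "id": "3b8c1a8e-0000-0000-0000-000000000000",
    "version": "1604400000:000000000",
    "label": "Test Node",
    "href": "http://node.example/",
    "api": {"versions": ["v1.3"]},
}
print(check_node_self(sample))
```

    The actual JT-NM test plans go much further, exercising registration, discovery and connection management behaviour over time, but the basic shape is the same: fetch a resource, validate it against the published schema.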

    In this paper, the authors of the JT-NM Tested program and the editors of the test plans explain in great detail the new and improved test plans, testing infrastructure, testing procedures, methodologies, results, and overall findings.

    Readers of the paper will be able to apply the knowledge, offered resources and methodologies to various testing scenarios, including self-assessment of open-source or commercial media-over-IP software and hardware by vendors and R&D labs; equipment performance assessment and validation by users and solution architects; tendering and qualification of equipment by in-house broadcasters’ labs; and infrastructure architecture and troubleshooting by system integrators.

    Andrew Bonney | BBC R&D | Salford, Greater Manchester, UK
    Ievgen Kostiukevych | European Broadcasting Union | Geneva, Switzerland
    Pedro Ferreira | Bisect | Porto, Portugal
    Willem Vermost | Flemish Radio and Television Broadcasting Organisation | Brussels, Belgium



  • Machine Learning Based UHD Up-conversion Using Generative Neural Networks

    Date: April 26, 2020

    Ultra HD (UHD) is now part of mainstream TV production. Various consumer electronics companies offer a wide range of high-quality UHD TV sets, which account for a significant proportion of all new TVs sold globally. The popularity of UHD has raised viewers’ expectations for a stunning quality experience. In today’s media landscape, online providers have moved quickly to satisfy this UHD demand via ABR delivery.

    However, there remains a limited amount of UHD-native content available, and traditional content providers have a wealth of great content that could offer consumers even more value in UHD format. As consumer expectations grow, broadcasters will need to provide higher-quality experiences, moving from a few UHD events to full-time or pop-up channels, even if these are delivered only over ABR infrastructure. The question of where to get valuable UHD content remains open, since traditional up-conversion techniques can result in an end-user experience that is more “HD-like” than UHD.
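
    The “HD-like” result of traditional up-conversion is easy to see: interpolation can only redistribute existing pixels, never add detail. A toy nearest-neighbour 2x upscale (purely illustrative, not any broadcaster's actual algorithm) makes the point:

```python
# Sketch: nearest-neighbour 2x upscale of a greyscale image stored as a
# list of rows. Every output pixel repeats an input pixel, so no new
# high-frequency detail is created -- the picture is bigger, not sharper.

def upscale_2x(image):
    out = []
    for row in image:
        wide = [px for px in row for _ in range(2)]  # repeat horizontally
        out.append(wide)
        out.append(list(wide))                       # repeat vertically
    return out

small = [[10, 20],
         [30, 40]]
print(upscale_2x(small))
```

    A generative network, by contrast, synthesizes plausible new detail conditioned on the input, which is what the approach below exploits.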

    This presentation explores a different approach to up-conversion, using generative adversarial neural networks to synthesize detail in the upconverted image, leading to an experience that is much closer to native UHD and to more compelling, higher-quality experiences for consumers.

    Alex Okell | MediaKind | Hedge End, Hampshire, United Kingdom
    Tony Jones | MediaKind | Hedge End, Hampshire, United Kingdom



  • Machine Learning for Per-Title Encoding

    Date: April 26, 2020

    Video streaming content differs in complexity and requires title-specific encoding settings to achieve a given visual quality. Classic “one-fits-all” encoding ladders ignore video-specific characteristics and apply the same encoding settings to all video files. In the worst case, this approach can lead to quality impairments, encoding artifacts or unnecessarily large media files. A per-title encoding solution has the potential to significantly decrease the storage and delivery costs of video streams while improving the perceptual quality of the video. Traditional per-title encoding solutions typically require a large number of test encodes, resulting in high computation times and costs. In this paper, we illustrate a solution that implements the traditional per-title encoding approach and uses the resulting data for machine-learning-based improvements. By applying supervised multivariate regression algorithms such as Random Forest regression, multilayer perceptrons and support vector regression, we are able to predict mandatory video quality metrics (VMAF). That way, the test encodes are eliminated while preserving the benefits of per-title encoding.
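
    The shape of the prediction step can be caricatured in a few lines: extract per-title complexity features, then regress VMAF from (features, bitrate) pairs instead of running test encodes. In the sketch below, a 1-nearest-neighbour lookup stands in for the Random Forest / MLP / SVR models named above, and all data points are invented:

```python
# Sketch: predicting VMAF from title features without test encodes.
# A 1-nearest-neighbour lookup stands in for the regression models
# (Random Forest, MLP, SVR) used in the paper; data points are invented.

import math

# (spatial_complexity, temporal_complexity, bitrate_mbps) -> measured VMAF
TRAINING = [
    ((0.2, 0.1, 2.0), 92.0),   # easy content, low bitrate
    ((0.2, 0.1, 4.0), 96.0),
    ((0.8, 0.7, 2.0), 68.0),   # complex content, same low bitrate
    ((0.8, 0.7, 6.0), 88.0),
]

def predict_vmaf(features):
    """Return the VMAF of the closest known (features, bitrate) point."""
    return min(TRAINING, key=lambda t: math.dist(t[0], features))[1]

print(predict_vmaf((0.75, 0.65, 5.5)))
```

    With a trained model in place, an encoding ladder can be searched per title for the cheapest bitrate that meets a VMAF target, with no test encodes at all.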

    Daniel Silhavy | Fraunhofer FOKUS | Berlin, Germany
    Christopher Krauss | Fraunhofer FOKUS | Berlin, Germany
    Anita Chen | Fraunhofer FOKUS | Berlin, Germany
    Anh-Tu Nguyen | Fraunhofer FOKUS | Berlin, Germany
    Stefan Arbanowski | Fraunhofer FOKUS | Berlin, Germany
    Stephan Steglich | Fraunhofer FOKUS | Berlin, Germany
    Louay Bassbouss | Fraunhofer FOKUS | Berlin, Germany



  • NDI in the Master Control Room

    Date: April 26, 2020

    It’s no secret that NDI represents a revolution in signal transport. Capable of high-quality transfer, broadcast-quality video can be delivered and received in a frame-accurate format. This high-quality transmission makes it a suitable choice for switching in a live production environment within the Master Control room. NDI streams can carry both a full-frame, full-resolution version of the video and a lower-resolution proxy version that reduces the load on networks when streaming to multi-viewers, vision mixers, or preview screens. Its ability to multicast, and to switch to the full-resolution version of a transmission, makes it an ideal choice for the Master Control room.

    Nurul Natasha Bte Rahyiu | Etere Pte Ltd | Singapore
    Fabio Gattari | Etere Pte Ltd | Singapore



  • News Workflow 2020 - An Extreme Case Study for Cloud and AI Enablement and a Brave New Way of Planning

    Date: April 26, 2020

    With the candidate-packed 2020 elections on the horizon, the pressure to deliver even more comprehensive stories and visuals is amplified. Coupled with the logistics of news gathering from a multitude of social sources, distribution to numerous viewing platforms, and the public’s desire to have the very latest information on the stories they follow, many broadcasters must reinvent their workflows to thrive.

    This presentation will address why the rundown is no longer the heart of the newsroom and how broadcasters can leverage cloud, AI and story-centric planning production tools to re-architect their news planning and production workflow in a way that supports the breadth and depth of coverage requirements, taking audiences on a connected content journey via the viewing platform of their choice.

    Presentation touch points include:
    - How news producers can better manage communication and iterative story planning for their journalists in the newsroom and out in the field.
    - Content management best practices for handling the tsunami of social news sources and statistics feeds for graphics.
    - Utilizing cloud technologies to support video news story production in the field at a time when 5G is not fully mature.
    - How AI brings to light a new class of productivity tools that can, for example, index mass content properly and make editorial content recommendations.
    - How linear and nonlinear/social can live together, even during news shows.

    Raoul Cospen | Dalet | Paris, France



  • Next Steps of Broadcast Camera Integration into Full IP Infrastructures

    Date: April 26, 2020

    Since the early days of broadcast television there has been an ongoing evolution of the signal transmission between camera heads and camera base stations, or CCUs.

    Multicore cables were replaced by triax cables in the 1970s and, most recently with the introduction of the UHD standards, SMPTE hybrid fiber cables became the de facto standard for most applications. At the same time, the signals transmitted over the camera cables migrated from analogue to digital video, and for several years now the most advanced broadcast cameras have used IP signals over the camera cables.

    The use of IP signals over the camera cable offers new ways of transmitting camera signals over long distances, as required for remote or REMI production workflows. More or less at the same time, IP connectivity came into use for broadcast infrastructures, replacing the previously used baseband connections with a format-agnostic and more future-proof solution.

    One can ask: what’s next?

    In a full IP infrastructure, the camera base station provides little functionality beyond supplying power to the camera head. But since an IP connection over a LAN or WAN does not carry power from the base station to the camera, what then is the use of the camera base station? This paper explains the challenges, and the potential solutions, of directly integrating a system camera as used for live broadcast applications into a full IP infrastructure. This next step in the evolution of camera signal transmission and connectivity will allow new workflows to be established and offer much more flexible use of the cameras.

    Klaus Weber | Grass Valley Germany GmbH | Weiterstadt, Germany
    Ronny van Geel | Grass Valley Nederland BV | Breda, Netherlands



  • Orchestrating Systems to Get Data Where Users Need It

    Date: April 26, 2020

    As the industry moves towards an IP-first world, creative teams require access to resources (local, remote, shared or distributed) at the press of a button, without the concern of what is happening “under the bonnet.” The complexity of creating orchestrated, multi-service chains that ensure monitoring and resilience are preserved when two or more orchestration systems are combined provides a new challenge for technical teams.

    This paper shows one possible approach that leverages existing practices, spanning multiple organizations and resources, to create a distributed production and distribution fabric, orchestrated by multiple control planes with no single point of failure.

    Jemma Phillips | BBC | London, UK
    Ivan Hassan | BBC | London, UK



  • Overcoming Obstacles to Design a Robust Single Frequency Network in San Francisco

    Date: April 26, 2020

    Designing an ATSC 3.0 single frequency network (SFN) in the San Francisco market is challenging on many levels. The San Francisco market has anomalous terrain, limited options for transmitter sites due to hillside scenic regulations, and an onerous permitting process, all of which significantly impede system design. In addition, adjacent DMA markets have first-adjacent-channel facilities with protected contours encroaching well within the San Francisco DMA and city boundary, and in-market adjacent channels pose a further challenge for 3.0 SFN designs. In a post-repack environment, there will be sixteen full-power stations: five VHF and eleven UHF.
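
    A rough feel for the geometry constraint behind SFN design: echoes from two transmitters must arrive within the receiver's guard interval, which bounds the usable path-length difference and hence transmitter spacing. The guard intervals below are example values from the ATSC 3.0 option set, not the San Francisco design parameters:

```python
# Sketch: relating SFN path-length difference to guard interval.
# Echoes arriving within the guard interval add constructively; beyond it
# they act as interference. Guard intervals shown are example ATSC 3.0
# values, not the San Francisco design parameters.

C_KM_PER_US = 0.2998  # speed of light, km per microsecond

def max_path_difference_km(guard_interval_us):
    """Largest path-length difference (km) tolerated by a guard interval."""
    return guard_interval_us * C_KM_PER_US

for gi in (27.8, 111.1, 222.2):  # microseconds
    print("GI %6.1f us -> %6.1f km" % (gi, max_path_difference_km(gi)))
```

    Real designs must also weigh terrain shadowing and receive antenna patterns, which is where the propagation modelling discussed below comes in.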

    This paper will present the various obstacles encountered and the solutions we’ve created to overcome them in order to produce a robust, FCC-compliant SFN design. We will describe the process we’ve undertaken to design ATSC 3.0 SFNs on both UHF and VHF and how we arrived at the proposed SFN designs. Finally, we will discuss the futility of SFN design without proper propagation software deployment.

    P. Eric Dausman | Public Media Group, PBC | Boulder, CO, USA
    Ryan C. Wilhour | Kessler and Gehman Associates, Inc. | Gainesville, FL, USA