Watson Captioning Live: Leveraging AI for Smarter, More Accessible Closed Captioning
The requirements for closed captioning were established more than two decades ago, yet many broadcasters still struggle to deliver accurate, timely, and contextually relevant captions. Breaking news, weather, and entertainment programming often carry delayed or incorrect captions, showing there is still considerable room for improvement. These shortcomings create a confusing viewing experience for the nearly 48 million Americans with hearing loss and for any other viewers who rely on captioning to fully follow content.
Committed to transforming broadcasters' ability to provide all audiences with more impactful viewing experiences, IBM Watson Media launched Watson Captioning Live, a trainable, cloud-based solution that produces accurate captions in real time to ensure audiences have equal access to timely and vital information. By combining AI technologies such as machine learning models and speech recognition, Watson Captioning Live redefines industry captioning standards.
The solution uses the IBM Watson Speech to Text API to automatically ingest and transcribe the spoken words and audio within a video. Watson Captioning Live is trained to recognize and learn from data updates to ensure timely delivery of factually accurate captions, and it is designed to keep learning over time, increasing its long-term value for broadcast producers.
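As a rough illustration of the transcription step, the sketch below shows how a broadcast audio segment might be sent to the publicly documented Watson Speech to Text API using the ibm-watson Python SDK. It is not Watson Captioning Live's internal pipeline; the API key, service URL, audio file name, model choice, and the commented-out custom language model ID are placeholders chosen for the example.

    # Minimal sketch (assumptions noted above): transcribe a short audio clip with
    # the IBM Watson Speech to Text API via the ibm-watson Python SDK.
    from ibm_watson import SpeechToTextV1
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    authenticator = IAMAuthenticator('YOUR_API_KEY')   # placeholder credential
    speech_to_text = SpeechToTextV1(authenticator=authenticator)
    speech_to_text.set_service_url(
        'https://api.us-south.speech-to-text.watson.cloud.ibm.com')  # placeholder region URL

    with open('broadcast_segment.wav', 'rb') as audio_file:  # hypothetical audio clip
        response = speech_to_text.recognize(
            audio=audio_file,
            content_type='audio/wav',
            model='en-US_BroadbandModel',   # general-purpose English broadband model
            timestamps=True,                # per-word timings, useful for caption alignment
            smart_formatting=True           # formats dates, numbers, etc. for readability
            # language_customization_id='...'  # a custom language model could supply
            #                                  # names, places, and other domain vocabulary
        ).get_result()

    # Each result alternative carries a transcript plus word-level timestamps.
    for result in response['results']:
        alternative = result['alternatives'][0]
        print(alternative['transcript'], alternative.get('timestamps'))

For live programming, the service's WebSocket interface, which supports interim results as audio streams in, would be the closer fit; the simpler HTTP recognize call is shown here only for brevity.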
This paper will explore how IBM Watson Captioning Live leverages AI and machine learning technology to deliver accurate closed captions at scale and in real time, making programming more accessible for all.
Brandon Sullivan | The Weather Company Solutions | Austin, TX, USA