This is a guest post by IBM Watson Artificial Intelligence Consultant, Debi Stack. Debi will join the Artificial Intelligence and Bots session at the NAB Show Digital Futures Exchange. You can follow her on Twitter at @debrastack.
This post was originally published on LinkedIn.
The CEO of a global media & entertainment company recently visited IBM’s Watson headquarters for a briefing with our Watson consulting team. He had three key questions about IBM Watson:
- What is real and what is hype?
- How can Watson help monetize our content?
- How do we train Watson?
I’ll explore the answers to these questions and my recommendations for a cognitive media strategy in three blogs over the next three days:
CEO: “Tell me what Watson can really do for my business and what is marketing hype.”
Debi: OK. First, the hype: Watson will become our digital self, understanding each of us and our wants, needs and behaviors as well as we do ourselves. Whether I’m in my car, at home or at the office, Watson will know me, drawing on data from my IoT sensors, and will remember prior experiences and conversations. We will have a natural dialogue, just as if we were talking with a friend or even ourselves. We’re not there yet, but I believe the vision is attainable, and now is the time to set your company apart as we train Watson to create VIP experiences for your viewers.
Here is the value proposition: you told me that maintaining the direct-to-consumer relationship, along with your customers’ data and insights, is important to you. In fact, your access to personal data is becoming more significant to your market value. IBM’s approach helps companies establish a competitive advantage rather than contributing to the public knowledge graph. There is a growing demand for personal privacy, and consumers expect a value exchange, as I described in an earlier blog, “Trust Your Telco”.
Now, the reality: Watson’s key advantage is its ability to read text, hear speech and see images and video. The average person reads between 250 and 1,000 words per minute with 60% to 85% comprehension; Watson reads 80 million pages per second. That’s impressive, but it’s important to know that comprehension must be trained in a given domain.
For example, let’s talk about the show everyone is talking about. Watson will understand the words “13 Reasons Why”, and we can search all tweets to determine that sentiment about the show is mixed. But Watson needs to be trained to comprehend that “13 Reasons Why” is the title of a TV show on Netflix and that its topic is bullying and suicide. We can train Watson to separate the positive sentiment for the show from the negative sentiment about the topic. We can even train Watson to direct the viewer to the exact moment in an episode when Hannah is bullied. Maybe Watson will have a conversation with a viewer about the show. There are many possibilities.
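To make that show-vs.-topic distinction concrete, here is a toy Python sketch. The keyword lists and the matching approach are invented for illustration only; a trained Watson model would learn these distinctions from labeled examples, not hard-coded word lists.

```python
# Illustrative only -- NOT the Watson API. Separates sentiment about a show
# itself from sentiment about the show's topic, using hypothetical lexicons.

TOPIC_TERMS = {"bullying", "suicide", "harassment"}          # the show's subject
POSITIVE = {"great", "love", "amazing", "powerful"}
NEGATIVE = {"disturbing", "harmful", "upsetting", "bad"}

def classify_tweet(text):
    """Return (aspect, sentiment) for a tweet mentioning the show."""
    words = set(text.lower().replace(",", " ").split())
    # If the tweet mentions the subject matter, treat it as topic sentiment;
    # otherwise assume it is about the show itself.
    aspect = "topic" if words & TOPIC_TERMS else "show"
    if words & POSITIVE:
        sentiment = "positive"
    elif words & NEGATIVE:
        sentiment = "negative"
    else:
        sentiment = "neutral"
    return aspect, sentiment
```

A real system would also need entity resolution (knowing “13 Reasons Why” is a Netflix title) and far more robust sentiment models, which is exactly the domain training described above.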
CEO: “I’m very interested in a recommendation engine and conversation bots are top of mind. What is the best user experience for Watson?”
Debi: Speech will become the dominant user interface by 2020, because individuals are already consuming more content than is humanly possible to comprehend. According to Social Media Today, the average person spends two hours on social media every day, which adds up to more than five years over a lifetime, alongside more than seven years spent watching TV.
For Watson, we usually start with chat as the interface so that our clients can get to market fast, and we continue training Watson with user feedback. If Watson answers a question correctly, we get a thumbs-up from the user; if it can’t answer the question, we send it to our developers for additional training. We are also developing pre-trained conversation modules with speech-to-text for some use cases.
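The feedback loop above can be sketched in a few lines. The names here are hypothetical, not part of any Watson SDK; the point is simply the routing logic: thumbs-up exchanges are logged as confirmations, while unanswered or wrongly answered questions are queued for developers as new training material.

```python
# Hypothetical sketch of the chat feedback loop (not real Watson SDK code).

training_queue = []   # questions the assistant could not answer correctly
confirmations = []    # (question, answer) pairs confirmed with a thumbs-up

def handle_feedback(question, answer, thumbs_up):
    """Route one chat exchange based on the user's feedback."""
    if answer is None or not thumbs_up:
        # No answer, or the user rejected it: send to developers for training.
        training_queue.append(question)
    else:
        # The user confirmed the answer: keep it as a positive example.
        confirmations.append((question, answer))
```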
It’s easy to get started with Watson Virtual Agent: begin with 105 common intents for customer service, then customize the model for your customer experience. Notice the Watson logo in the lower-left corner: click it for a beta version of WVA and ask questions about the service.
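As an illustration of the “start from a prebuilt intent catalog, then customize” idea, here is a minimal keyword-overlap matcher. The intent names and keyword lists are invented for this example and are not the actual WVA intent catalog or API.

```python
# Illustrative intent matching -- invented names, not the WVA catalog.

intents = {
    # stand-ins for a couple of prebuilt customer-service intents
    "billing_question": {"bill", "invoice", "charge", "payment"},
    "cancel_service": {"cancel", "terminate", "close"},
}

def add_intent(catalog, name, keywords):
    """Extend the catalog with a client-specific intent."""
    catalog[name] = set(keywords)

def match_intent(catalog, utterance):
    """Return the intent whose keywords best overlap the utterance, or None."""
    words = set(utterance.lower().split())
    best, best_score = None, 0
    for name, keywords in catalog.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = name, score
    return best
```

A production assistant would use a trained classifier with example utterances per intent rather than keyword overlap, but the workflow is the same: start from a common catalog, then add intents specific to your customer experience.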
CEO: “Can Watson help us with metadata tagging to help us gain better insights about our content?”
Debi: Absolutely. We can train Watson with your taxonomy to add metadata tags based on text, speech, sound or images, frame by frame for video. We demonstrated “Cognitive Highlights” during the Masters earlier this month. In this use case, Watson sees, hears and learns to identify great shots at the Masters, based on crowd noise, player gestures and other indicators, to aid in the creation of highlight reels.
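A rough sketch of frame-by-frame tagging against a client taxonomy might look like the following. Here `classify_frame` is a stand-in for a trained visual model, the taxonomy terms are invented, and the one-frame-per-second timestamping is a simplifying assumption, not how a production pipeline would work.

```python
# Illustrative frame-by-frame tagging sketch (hypothetical names throughout).

TAXONOMY = {"crowd", "player", "green", "gallery"}  # client-supplied vocabulary

def tag_frames(frames, classify_frame):
    """Attach taxonomy-filtered tags to each frame, with a timestamp.

    Assumes one frame per second for simplicity; classify_frame is any
    callable that returns candidate tags for a frame.
    """
    tagged = []
    for t, frame in enumerate(frames):
        # Keep only tags that exist in the client's taxonomy.
        tags = {tag for tag in classify_frame(frame) if tag in TAXONOMY}
        tagged.append({"time_s": t, "tags": sorted(tags)})
    return tagged
```

With timestamps attached to tags like “crowd” or “player”, downstream logic can score moments (loud crowd plus a player gesture, say) and assemble candidate highlight clips.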
Let’s talk more about content monetization in tomorrow’s blog at LinkedIn/DebiStackIBM, and I’ll finish the discussion on Friday with the final question and recommendations.