Build an AI character that can have conversations with individual viewers, listeners or consumers. Broadcasters should be able to define and train the character’s personality.
Through all phases of the challenge, we’ve had the opportunity to get to know the winners, to better understand the thinking that drove their submissions, and to learn what broadcasters can expect from the prototype demonstrations at NAB Show in April.
Jukebot – University of Minnesota: Department of Computer Science and Engineering
“Our project will give broadcasters a glimpse into what it would be like to use conversational agents in their interactions with listeners,” explained Risako Owan, a computer science Ph.D. student working on Jukebot for the University of Minnesota.
Jukebot is a chatbot API that can answer simple questions from radio listeners and collect their feedback for stations. The team had developed a similar prototype for a different project and saw an opportunity to apply that expertise to this challenge. Their proposal focused on a single medium, radio, so they could devote adequate time to analyzing the data and designing the architectures.
“This also lessens the possibility of scope creep,” added Esha Singh, a computer science master’s student. “Music is a common domain with multiple structured datasets available which made it a great starting point for our project.”
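To make the idea concrete, here is a minimal sketch of what a Jukebot-style chatbot might look like: a tiny structured song dataset, keyword-based intent matching for simple questions, and a hook for collecting listener feedback. All names and the dataset are hypothetical; the team's actual API and models are not described in this article.

```python
# Toy sketch of a Jukebot-style chatbot (hypothetical names and data).
# Answers simple song questions and records listener feedback.

SONGS = {
    "Blue in Green": {"artist": "Miles Davis", "album": "Kind of Blue"},
    "So What": {"artist": "Miles Davis", "album": "Kind of Blue"},
}

feedback_log = []  # station-side store for listener feedback


def handle_message(text: str) -> str:
    """Route a listener message to a feedback or question handler."""
    lower = text.lower()
    if lower.startswith("feedback:"):
        feedback_log.append(text[len("feedback:"):].strip())
        return "Thanks! Your feedback has been passed to the station."
    if "who" in lower or "artist" in lower:
        for title, meta in SONGS.items():
            if title.lower() in lower:
                return f"{title} is by {meta['artist']}."
    if "album" in lower:
        for title, meta in SONGS.items():
            if title.lower() in lower:
                return f"{title} appears on {meta['album']}."
    return "Sorry, I can only answer simple song questions right now."
```

A structured music dataset makes this kind of keyword matching workable as a starting point, which is one reason the team cites music as a good first domain.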
The team leader, Maria Gini, is a professor of computer science and engineering with extensive research in artificial intelligence. She knows Jukebot shows only a small glimpse into what is possible.
“The AI technology utilized by our prototype shows only a segment of what is possible. Improved natural language processing and affective computing will further transform the way we currently interact with computers,” said Gini. “In the future, we will potentially be able to hold longer, more open-ended conversations that will encourage more interaction with listeners.”
DeepTalk: A Conversational Agent for Broadcasters – Michigan State University: NextGen Media Innovation Lab, College of Communication Arts and Sciences; i-PRoBe Lab, Department of Computer Science and Engineering; WKAR Public Media
The Michigan State University team’s prototype, DeepTalk, is a conversational agent that can be trained through deep learning to deliver information in the voice of a local broadcaster. While the conversational agent is its first step, the team also hopes to use the agent to study the impact of artificial intelligence on journalism and broadcasting by investigating the audience’s perception of machine learning and AI in broadcasting.
“AI is increasingly becoming an integral part of our lives and cannot be ignored,” said Prabu David, dean of the Michigan State University College of Communication Arts and Sciences. “Active participation in the co-creation of AI-based technology offers broadcasters a role in shaping the technology to conform to their ethical principles and moral values.”
The DeepTalk team consists of communication researchers and computer scientists who have been studying the problem of style transfer.
“Style transfer is a fascinating paradigm in deep learning where attributes of one object are transferred to a different object,” explained computer science and engineering professor Arun Ross. “When we saw the call for the Innovation Challenge, we were immediately intrigued by the idea of designing a conversational agent whose speaking attributes could be gleaned from a real human.”
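One simple illustration of the style-transfer idea Ross describes is the normalization trick used in some neural style-transfer methods (often called AdaIN): rescale the "content" features so they take on the "style" features' mean and standard deviation. The numeric sketch below is only an illustration of that principle, not the DeepTalk team's model, which is not detailed in this article.

```python
import statistics


def transfer_style(content, style):
    """Shift content values to match the style's mean and standard deviation,
    the adaptive instance normalization used in some style-transfer methods."""
    c_mean, c_std = statistics.mean(content), statistics.pstdev(content)
    s_mean, s_std = statistics.mean(style), statistics.pstdev(style)
    # Normalize the content, then rescale it to the style's statistics.
    return [(x - c_mean) / c_std * s_std + s_mean for x in content]
```

In a voice-cloning setting, the same principle operates on learned audio features rather than raw numbers: the content (what is said) is kept while the style (how a particular broadcaster sounds) is imposed.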
Ross is excited by the possibility of building a technology that can be customized by stations to create strong relationships with consumers. “This level of engagement can lead to a broader audience and a more engaged society,” he said.
The winning teams were each awarded $75,000 to advance their idea and build a working prototype to be demonstrated in October 2020.