Research announced at the AAAI-20 Conference in New York gives computer systems the ability to better comprehend and draw inferences from natural language.
Researchers from the MIT-IBM Watson AI Lab, Tulane University, and the University of Illinois this week unveiled research that allows a computer to more closely replicate human reading comprehension and inference.
The researchers have created what they termed “a breakthrough neuro-symbolic approach” for infusing knowledge into natural language processing. The approach was presented at the AAAI-20 Conference, taking place all week in New York City.
Reasoning and inference are central to both human and artificial intelligence, yet many enterprise AI systems still struggle to comprehend human language and to recognize textual entailment, which IBM defines as the relationship between two natural language sentences in which the truth of one follows from the other — for instance, “A dog is sleeping on the porch” entails “An animal is on the porch.”
Two schools of thought, or “camps,” have existed since the beginning of AI. One has focused on neural networks and deep learning, which have proven highly effective and successful over the past several years, said David Cox, director of the MIT-IBM Watson AI Lab.
Neural networks and deep learning need large amounts of data and substantial compute power to thrive. The widespread digitization of data has driven what Cox called “the neural networks/deep learning revolution.”