Alexa and Siri may one day learn language the way kids do!

Researchers at the Massachusetts Institute of Technology (MIT) have developed a “semantic parser” that learns through observation, more closely mimicking a child’s language-acquisition process.
Have you noticed how children learn language? They learn by observing their environment, listening to the people around them, and connecting the dots between what they see and hear. That is how children work out their language's word order, such as where subjects and verbs fall in a sentence.
In the world of computing, learning language is the task of syntactic and semantic parsers. Parsers are becoming increasingly important for web searches, natural-language database querying, and voice-recognition systems such as Alexa and Siri.
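To make the idea concrete, here is a minimal, hypothetical sketch (not MIT's system) of what a semantic parser does: it maps a natural-language request onto a structured representation a machine can execute. The pattern and schema below are invented purely for illustration.

```python
# Toy illustration (not MIT's system): a semantic parser maps a natural-language
# request onto a structured, machine-executable representation.
import re

def parse_query(sentence: str) -> dict:
    """Map a simple English request to a structured query (invented schema)."""
    # Handles only one pattern: "show me (the) <attribute> of <entity>"
    match = re.match(r"show me (?:the )?(\w+) of (\w+)", sentence.lower())
    if match:
        attribute, entity = match.groups()
        return {"action": "SELECT", "attribute": attribute, "entity": entity}
    return {"action": "UNKNOWN", "text": sentence}

print(parse_query("Show me the population of Boston"))
# {'action': 'SELECT', 'attribute': 'population', 'entity': 'boston'}
```

Real parsers for web search or voice assistants are far more flexible, but the goal is the same: turn free-form language into something a system can act on.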
MIT researchers have developed a parser that learns through observation to more closely mimic a child's language-acquisition process, which could greatly extend the parser's capabilities. To learn the structure of language, the parser observes captioned videos, with no other information, and associates the words with recorded objects and actions.
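The learning signal here is weak supervision: the parser is never told which word refers to which object or action, so it must infer that from co-occurrence across many captioned clips. The toy sketch below (again, not the actual MIT parser) illustrates the intuition by counting how often caption words appear alongside labels of what was seen in a clip; the clips and labels are invented for illustration.

```python
# Toy illustration of the weak-supervision idea (not the actual MIT parser):
# given captions paired with labels of what appears in each clip, count
# co-occurrences so that each word gravitates toward the object or action it names.
from collections import Counter, defaultdict

def learn_associations(captioned_clips):
    """Count word/label co-occurrences over (caption, labels) pairs (invented data)."""
    counts = defaultdict(Counter)
    for caption, labels in captioned_clips:
        for word in caption.lower().split():
            counts[word].update(labels)
    return counts

clips = [
    ("the woman picks up the cup", ["person", "cup", "pick_up"]),
    ("a man picks up the book", ["person", "book", "pick_up"]),
    ("the woman reads the book", ["person", "book", "read"]),
]

assoc = learn_associations(clips)
# 'picks' co-occurs most with 'person' and 'pick_up'; with many more clips,
# such counts (or a probabilistic model over them) would separate each word
# from labels that merely happen to appear alongside it.
print(assoc["picks"].most_common())
```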
"People talk to each other in partial sentences, run-on thoughts, and jumbled language. You want a robot in your home that will adapt to their particular way of speaking ... and still figure out what they mean," suggest the researchers.
In the future, the parser could be used to improve natural interaction between humans and personal robots.
Source: http://news.mit.edu/2018/machines-learn-language-human-interaction-1031