November 13, 2018 Tuesday 11:17:54 AM IST

Alexa and Siri may in future learn language as kids do!

Science Innovations

Researchers at the Massachusetts Institute of Technology have developed a “semantic parser” that learns through observation, more closely mimicking a child’s language-acquisition process.

Have you observed how children learn language? They learn by observing their environment, listening to the people around them, and connecting the dots between what they see and hear. This is how children establish their language's word order, such as where subjects and verbs fall in a sentence.

In the world of computing, learning language is the task of syntactic and semantic parsers. Parsers are becoming increasingly important for web searches, natural-language database querying, and voice-recognition systems such as Alexa and Siri.
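
To make the idea concrete, here is a minimal, illustrative sketch (not MIT's system) of what a semantic parser does: it maps a natural-language utterance to a structured meaning representation a machine can act on. The grammar, action names, and logical form below are invented for this example.

```python
# Toy semantic parser: map a spoken command to a structured (action, object) form.
# The vocabulary and output schema here are assumptions made for illustration only.

def parse(utterance: str) -> dict:
    """Map a simple command to a structured representation."""
    tokens = utterance.lower().strip("?.!").split()
    actions = {"turn": "SWITCH", "play": "PLAY", "find": "SEARCH"}
    objects = {"light": "LIGHT", "music": "MUSIC", "recipe": "RECIPE"}

    action = next((actions[t] for t in tokens if t in actions), None)
    target = next((objects[t] for t in tokens if t in objects), None)
    return {"action": action, "object": target}

print(parse("Turn on the light"))  # {'action': 'SWITCH', 'object': 'LIGHT'}
print(parse("Play some music"))    # {'action': 'PLAY', 'object': 'MUSIC'}
```

Real parsers learn these mappings from data rather than from hand-written word lists, which is exactly where the MIT work comes in.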

MIT researchers have developed a parser that learns through observation to more closely mimic a child's language-acquisition process, which could greatly extend the parser's capabilities. To learn the structure of language, the parser observes captioned videos, with no other information, and associates the words with recorded objects and actions.
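
The sketch below gives a rough flavour of that weakly supervised idea, using simple co-occurrence counting between caption words and the objects or actions observed in a clip. It is an assumption-laden toy, not the researchers' method, and the captions and concept labels are invented.

```python
# Toy illustration: associate caption words with observed objects/actions
# purely from co-occurrence across (caption, observation) pairs.
# The data and concept labels are invented for this sketch.
from collections import Counter, defaultdict

captions = [
    ("the woman picks up the cup", {"woman", "cup", "pick_up"}),
    ("a man pours water into a cup", {"man", "cup", "water", "pour"}),
    ("the woman pours the milk", {"woman", "milk", "pour"}),
]

counts = defaultdict(Counter)
for sentence, observed in captions:
    for word in sentence.split():
        for concept in observed:
            counts[word][concept] += 1

# With enough clips, each content word gravitates toward the concept
# it co-occurs with most often.
for word in ("cup", "pours", "woman"):
    print(word, "->", counts[word].most_common(1)[0][0])
# cup -> cup, pours -> pour, woman -> woman
```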


"People talk to each other in partial sentences, run-on thoughts, and jumbled language. You want a robot in your home that will adapt to their particular way of speaking ... and still figure out what they mean," suggest the researchers.

In the future, the parser could be used to improve natural interaction between humans and personal robots.

Source: http://news.mit.edu/2018/machines-learn-language-human-interaction-1031

