November 13, 2018 Tuesday 11:17:54 AM IST

Alexa and Siri may one day learn language the way kids do!

Science Innovations

Researchers at the Massachusetts Institute of Technology (MIT) have developed a “semantic parser” that learns through observation, more closely mimicking a child’s language-acquisition process.

Have you noticed how children learn language? They do so by observing their environment, listening to the people around them, and connecting what they see with what they hear. This is how children work out their language's word order, such as where subjects and verbs fall in a sentence.

In the world of computing, learning language is the task of syntactic and semantic parsers. Parsers are becoming increasingly important for web searches, natural-language database querying, and voice-recognition systems such as Alexa and Siri.
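To make the idea concrete, here is a minimal sketch of what a semantic parser does: it turns a natural-language utterance into a structured representation a machine can act on. The tiny rule-based parser below is invented purely for illustration; real parsers, including MIT's, are learned statistical models, not hand-written rules.

```python
def parse(utterance: str) -> dict:
    """Map a simple spoken command to a structured representation.

    A toy stand-in for semantic parsing: the action vocabulary and
    output format here are illustrative assumptions, not any real
    assistant's API.
    """
    tokens = utterance.lower().rstrip("?.!").split()
    actions = {"play", "find", "show"}
    action = next((t for t in tokens if t in actions), None)
    # Treat everything after the action word as its argument.
    argument = " ".join(tokens[tokens.index(action) + 1:]) if action else None
    return {"action": action, "argument": argument}

print(parse("Play some jazz music"))
# {'action': 'play', 'argument': 'some jazz music'}
```

A voice assistant would then dispatch on the structured output (e.g., route `"play"` requests to a music service) rather than on the raw text.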

MIT researchers have developed a parser that learns through observation to more closely mimic a child's language-acquisition process, which could greatly extend the parser's capabilities. To learn the structure of language, the parser observes captioned videos, with no other information, and associates the words with recorded objects and actions.
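The learning-by-observation idea can be sketched as cross-situational co-occurrence counting: assuming the captioned videos can be reduced to (caption, visible objects) pairs, a word's meaning emerges from which object it co-occurs with most often across scenes. The scenes and the counting scheme below are illustrative assumptions only; the actual MIT parser is a far more sophisticated learned model.

```python
from collections import Counter, defaultdict

# Hypothetical observations: each scene pairs a caption with the set
# of objects visible in the video frame (invented example data).
scenes = [
    ("the dog runs", {"dog", "grass"}),
    ("the dog barks", {"dog", "fence"}),
    ("the ball rolls", {"ball", "grass"}),
    ("the dog chases the ball", {"dog", "ball"}),
]

# Count how often each word co-occurs with each visible object.
counts = defaultdict(Counter)
for caption, objects in scenes:
    for word in caption.split():
        for obj in objects:
            counts[word][obj] += 1

def best_referent(word: str) -> str:
    """Guess a word's referent: the object it co-occurs with most."""
    return counts[word].most_common(1)[0][0]

print(best_referent("dog"))   # "dog" co-occurs with the dog object most
```

Across enough scenes, distractor objects (grass, fence) appear with a word only occasionally, while its true referent appears every time, so the simple count converges on the right association.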


"People talk to each other in partial sentences, run-on thoughts, and jumbled language. You want a robot in your home that will adapt to their particular way of speaking ... and still figure out what they mean," say the researchers.

In the future, the parser could be used to improve natural interaction between humans and personal robots.

Source: http://news.mit.edu/2018/machines-learn-language-human-interaction-1031
