
November 13, 2018 Tuesday 11:17:54 AM IST
Alexa and Siri may one day learn language the way kids do!

Researchers at the Massachusetts Institute of Technology (MIT) have developed a “semantic parser” that learns through observation, more closely mimicking a child’s language-acquisition process.

Have you noticed how children learn language? They learn by observing their environment, listening to the people around them, and connecting the dots between what they see and hear. This is how children work out their language's word order, such as where subjects and verbs fall in a sentence.

In the world of computing, learning language is the task of syntactic and semantic parsers. Parsers are becoming increasingly important for web searches, natural-language database querying, and voice-recognition systems such as Alexa and Siri.
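To make the job of a semantic parser concrete, here is a minimal, hypothetical sketch in Python: it maps a couple of hardcoded utterances to structured meanings a machine could act on. The utterances, fields, and the toy_semantic_parse function are invented for illustration and are not part of the MIT system or of Alexa or Siri.

```python
# Illustrative sketch only: a semantic parser maps a natural-language utterance
# to a structured meaning representation that software can act on.
# The rule table below is a hypothetical toy, not a real parser.

def toy_semantic_parse(utterance: str) -> dict:
    """Map a few hardcoded utterances to a structured 'meaning'."""
    rules = {
        "play some jazz": {"action": "play_music", "genre": "jazz"},
        "what is the weather in boston": {"action": "get_weather", "city": "boston"},
    }
    return rules.get(utterance.lower().strip(" ?.!"), {"action": "unknown"})

if __name__ == "__main__":
    print(toy_semantic_parse("What is the weather in Boston?"))
    # {'action': 'get_weather', 'city': 'boston'}
```

A real parser generalizes far beyond a lookup table, but the input-to-structured-output mapping is the same idea.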

The new parser learns through observation, which could greatly extend its capabilities. To learn the structure of language, it watches captioned videos, with no other information, and associates the words with the objects and actions it observes.
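The sketch below is a hedged illustration of one simple way word-to-scene association from captions could work, by counting co-occurrences between caption words and labels observed in video; it is not the researchers' method, and the example captions and labels are invented.

```python
# Hedged sketch (not the MIT parser): associate caption words with visual labels
# by counting how often they co-occur across many (caption, observed-labels) pairs.
from collections import defaultdict

# Each pair: caption text, plus labels a vision system might report for the clip.
examples = [
    ("the woman picks up the cup", {"woman", "cup", "pick_up"}),
    ("a man picks up the book", {"man", "book", "pick_up"}),
    ("the woman opens the book", {"woman", "book", "open"}),
]

counts = defaultdict(lambda: defaultdict(int))
for caption, labels in examples:
    for word in caption.split():
        for label in labels:
            counts[word][label] += 1

# Words that co-occur most often with a visual label become candidate groundings.
for word in ("picks", "woman", "book"):
    best = max(counts[word].items(), key=lambda kv: kv[1])
    print(word, "->", best)
```

Even this crude counting links "picks" to the pick-up action and "woman" and "book" to the right objects, which hints at how weak supervision from captioned video can ground words without annotated sentences.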

"People talk to each other in partial sentences, run-on thoughts, and jumbled language. You want a robot in your home that will adapt to their particular way of speaking ... and still figure out what they mean," suggest the researchers.

In the future, the parser could be used to improve natural interaction between humans and personal robots.

Source: http://news.mit.edu/2018/machines-learn-language-human-interaction-1031
