
November 13, 2018
Alexa and Siri may one day learn language as kids do!

Researchers at the Massachusetts Institute of Technology have developed a “semantic parser” that learns through observation, more closely mimicking a child’s language-acquisition process.

Have you noticed how children learn language? They learn by observing their environment, listening to the people around them, and connecting the dots between what they see and hear. Among other things, this is how children establish their language's word order, such as where subjects and verbs fall in a sentence.

In the world of computing, learning language is the job of syntactic and semantic parsers, which are becoming increasingly important for web search, natural-language database querying, and voice-recognition systems such as Alexa and Siri.
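To get a feel for what a semantic parser does, consider the toy sketch below, which maps a spoken-style request onto a structured, machine-readable meaning. The patterns, intents, and slot names are invented purely for illustration; real assistants such as Alexa and Siri, and the MIT system itself, are far more sophisticated.

# Toy illustration of the job a semantic parser performs: turning a
# natural-language request into a structured, machine-executable meaning.
# All patterns, intents, and slot names here are made up for this example.

import re

def parse(utterance: str) -> dict:
    """Map a simple English request to a structured 'meaning' dictionary."""
    text = utterance.lower().strip()

    # Pattern: "play <song> by <artist>"
    m = re.match(r"play (.+) by (.+)", text)
    if m:
        return {"intent": "play_music", "song": m.group(1), "artist": m.group(2)}

    # Pattern: "what is the weather in <city>"
    m = re.match(r"what is the weather in (.+)", text)
    if m:
        return {"intent": "get_weather", "city": m.group(1)}

    return {"intent": "unknown", "text": text}

if __name__ == "__main__":
    print(parse("Play Yesterday by The Beatles"))
    # {'intent': 'play_music', 'song': 'yesterday', 'artist': 'the beatles'}
    print(parse("What is the weather in Kochi"))
    # {'intent': 'get_weather', 'city': 'kochi'}

Hand-written patterns like these break down quickly on real speech, which is precisely the limitation the MIT work addresses by letting the parser learn from observation instead.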

The new parser learns the structure of language by watching captioned videos, with no other supervision, and associating the words it hears with the objects and actions it sees, an approach that could greatly extend what parsers can do.
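The sketch below gives a rough sense of how word meanings can emerge from such weak supervision: by counting how often each caption word co-occurs with what is visible in a clip. The toy data and the simple counting scheme are assumptions for illustration only; the actual MIT parser induces full syntactic and semantic structure, not just word-level associations.

# Minimal sketch of learning word-to-object associations from co-occurrence
# between captions and what is observed in each clip. The clips and labels
# below are invented; in a real system the observations would come from a
# vision model rather than being listed by hand.

from collections import Counter, defaultdict

# Each "video" is represented by its caption and a set of observed
# objects/actions.
clips = [
    ("the woman picks up the cup", {"woman", "cup", "pick_up"}),
    ("the man puts down the cup", {"man", "cup", "put_down"}),
    ("the woman puts down the plate", {"woman", "plate", "put_down"}),
]

word_obs_counts = defaultdict(Counter)  # word -> co-occurring observations
word_counts = Counter()                 # how often each word appears

for caption, observed in clips:
    for word in caption.split():
        word_counts[word] += 1
        for obs in observed:
            word_obs_counts[word][obs] += 1

def best_association(word: str):
    """Return the observation most strongly associated with a word."""
    if word not in word_obs_counts:
        return None
    obs, count = word_obs_counts[word].most_common(1)[0]
    return obs, count / word_counts[word]

print(best_association("cup"))    # ('cup', 1.0): 'cup' always co-occurs with the cup
print(best_association("woman"))  # ('woman', 1.0)

Even this crude tallying picks out which object a word refers to; the research system builds on the same underlying signal, pairing captions with video, to learn grammar and meaning together.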

"People talk to each other in partial sentences, run-on thoughts, and jumbled language. You want a robot in your home that will adapt to their particular way of speaking ... and still figure out what they mean," suggest the researchers.

In the future, the parser could be used to improve natural interaction between humans and personal robots.

Source: http://news.mit.edu/2018/machines-learn-language-human-interaction-1031
