
November 13, 2018

Alexa and Siri may one day learn language the way kids do!

Science Innovations

Researchers at the Massachusetts Institute of Technology (MIT) have developed a “semantic parser” that learns through observation, more closely mimicking a child’s language-acquisition process.

Have you observed how children learn language? They learn by observing their environment, listening to the people around them, and connecting the dots between what they see and hear. This is how children establish their language's word order, such as where subjects and verbs fall in a sentence.

In the world of computing, learning language is the task of syntactic and semantic parsers. Parsers are becoming increasingly important for web searches, natural-language database querying, and voice-recognition systems such as Alexa and Siri.

The MIT parser learns through observation to more closely mimic a child's language-acquisition process, which could greatly extend its capabilities. To learn the structure of language, the parser watches captioned videos, with no other supervision, and associates the words with the recorded objects and actions.
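The core idea of associating words with co-occurring observations can be illustrated with a toy sketch. This is not MIT's actual parser (which learns full syntactic and semantic structure); it is a minimal cross-situational learner, assuming each "video" is reduced to a hand-labeled list of observed things, that simply counts which observation each word appears with most often:

```python
from collections import defaultdict

def learn_associations(examples):
    """Toy cross-situational learner: map each word to the
    observation it most often co-occurs with across examples."""
    counts = defaultdict(lambda: defaultdict(int))
    for caption, observations in examples:
        for word in caption.split():
            for obs in observations:
                counts[word][obs] += 1
    # Best guess per word: the most frequent co-occurring observation
    return {word: max(obs_counts.items(), key=lambda kv: kv[1])[0]
            for word, obs_counts in counts.items()}

# Hypothetical captioned "videos": (caption, observed labels)
examples = [
    ("the dog runs",   ["dog", "running"]),
    ("the dog sleeps", ["dog", "sleeping"]),
    ("the cat sleeps", ["cat", "sleeping"]),
]
model = learn_associations(examples)
# "sleeps" co-occurs with "sleeping" in two clips but with "dog"
# and "cat" only once each, so the learner links it to "sleeping".
```

Ambiguity in any single clip ("dog" or "running"?) is resolved across clips, much as a child connects the dots over many situations.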


"People talk to each other in partial sentences, run-on thoughts, and jumbled language. You want a robot in your home that will adapt to their particular way of speaking ... and still figure out what they mean," the researchers say.

In the future, the parser could be used to improve natural interaction between humans and personal robots.

Source: http://news.mit.edu/2018/machines-learn-language-human-interaction-1031

