Can machines understand the world just as human beings do?
Suppose a robot is driving your car. Self-driving mode is enabled and the car is moving smoothly. Suddenly a pedestrian steps into the road, but your auto-driver does not stop the car even though it can see the person crossing the street. Why did this happen?
Though machines are automated, they cannot make sense of the images in front of them the way human beings perceive them. As a solution to this problem, MIT researchers have developed a new framework that helps machines see the world more like humans do. This artificial intelligence system is designed to analyze scenes: it learns to recognize real-world objects from just a few images, and then perceives scenes in terms of those learned objects. This common sense, incorporated into machines through relatively simple programming, enables a machine to detect and correct many of the errors that hinder the deep-learning approaches commonly used for computer vision.
In addition to improving the safety of self-driving cars, this work could enhance the performance of robots, that is, machines using AI that must interpret complicated arrangements of objects, such as a robot tasked with cleaning a disorganised kitchen precisely and without missing any details.