
New tool to check for data leakage from AI systems

Technology Inceptions

In recent years, security and privacy researchers have shown that AI models are vulnerable to inference attacks that enable hackers to extract sensitive information about the training data. In such an attack, hackers repeatedly query the AI service and analyse its outputs for patterns. Once they have identified a pattern, they can deduce whether a specific type of data was used to train the AI program. Using these attacks, hackers can even reconstruct the dataset that was most likely used to train the AI engine.
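To make this concrete, below is a minimal, hypothetical sketch of one common membership inference heuristic (not the NUS team's method): the attacker queries the model and guesses that records classified with unusually low loss were part of the training set. The dataset, model choice and threshold are illustrative assumptions only.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical set-up: train a target model on half of the data ("members");
# the other half is never seen by the model ("non-members").
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)
target = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_mem, y_mem)

def per_record_loss(model, X, y):
    # Cross-entropy loss of the model on each record; records from the training
    # set typically receive lower loss than unseen records.
    probs = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(probs, 1e-12, None))

# The attacker guesses "member" whenever the loss falls below a threshold.
loss_mem = per_record_loss(target, X_mem, y_mem)
loss_non = per_record_loss(target, X_non, y_non)
threshold = np.median(np.concatenate([loss_mem, loss_non]))
attack_accuracy = 0.5 * ((loss_mem < threshold).mean() + (loss_non >= threshold).mean())
print(f"Membership inference accuracy: {attack_accuracy:.2f} (0.50 would mean no leakage)")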

To address this problem, Assistant Professor Reza Shokri from NUS Computing, who is also NUS Presidential Young Professor, and his team have developed a full-fledged open-source tool that can help companies determine whether their AI services are vulnerable to such inference attacks. The analysis, based on what is known as Membership Inference Attacks, aims to determine whether a particular data record was part of the model’s training data. By simulating such attacks, the privacy analysis algorithm can quantify how much the model leaks about individual data records in its training set. This reflects the risk of different attacks that try to reconstruct the dataset completely or partially. The tool generates extensive reports that, in particular, highlight the vulnerable areas in the training data used.
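A per-record report of this kind could, under simple assumptions, look like the hypothetical sketch below: each training record receives an attack score, and records whose scores stand out from those of unseen data are flagged as vulnerable. The scoring heuristic and all names are illustrative assumptions, not the actual ML Privacy Meter implementation.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=15, random_state=1)
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X_mem, y_mem)

def membership_score(model, X, y):
    # Attack score per record: the model's confidence in the true label.
    # Records the model is unusually confident about are easier to single out.
    return model.predict_proba(X)[np.arange(len(y)), y]

scores_mem = membership_score(model, X_mem, y_mem)
scores_non = membership_score(model, X_non, y_non)

# Flag a training record as "vulnerable" if its score exceeds what is typical
# for records the model never saw (here: the 95th percentile of non-member scores).
cutoff = np.percentile(scores_non, 95)
vulnerable = np.where(scores_mem > cutoff)[0]
print(f"{len(vulnerable)} of {len(scores_mem)} training records look easy to identify")
print("Most exposed record indices:", vulnerable[np.argsort(-scores_mem[vulnerable])][:5])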

By analysing the results of the privacy analysis, the tool can provide a scorecard detailing how accurately attackers could identify the original datasets used for training. The scorecards help organisations identify weak spots in their datasets, and illustrate the effect of techniques they can adopt to pre-emptively mitigate a possible Membership Inference Attack.
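As an illustration of what such a scorecard might measure (again, an assumption rather than the tool's actual output), the sketch below reports the AUC of a loss-based attack before and after a simple mitigation, here stronger regularisation of the target model.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=2)
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=2)

def attack_auc(model):
    # AUC of a loss-based membership attack: 0.5 means no leakage, 1.0 full leakage.
    def loss(X, y):
        p = model.predict_proba(X)[np.arange(len(y)), y]
        return -np.log(np.clip(p, 1e-12, None))
    scores = np.concatenate([-loss(X_mem, y_mem), -loss(X_non, y_non)])
    labels = np.concatenate([np.ones(len(X_mem)), np.zeros(len(X_non))])
    return roc_auc_score(labels, scores)

# An unconstrained ensemble tends to memorise its training data;
# limiting tree depth acts as a simple mitigation.
overfit = RandomForestClassifier(n_estimators=100, random_state=2).fit(X_mem, y_mem)
mitigated = RandomForestClassifier(n_estimators=100, max_depth=4, random_state=2).fit(X_mem, y_mem)
print(f"Attack AUC, unmitigated model: {attack_auc(overfit):.2f}")
print(f"Attack AUC, regularised model: {attack_auc(mitigated):.2f}")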

The NUS team named this tool the “Machine Learning Privacy Meter” (ML Privacy Meter), and the innovative breakthrough is the development of a standardised, general attack formula. This general attack formula provides a framework for their algorithm to properly test and quantify various types of membership inference attacks. The tool is based on research led by the NUS team over the last three years. Before this method was developed, there was no standardised way to properly test and quantify the privacy risks of machine learning algorithms, which made it difficult to provide a tangible analysis.
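One way to picture such a standardised framework (purely an illustrative assumption, not the NUS team's actual formulation) is a common interface that every attack implements, so that a single evaluation routine can quantify the leakage measured by each attack type:

from abc import ABC, abstractmethod
import numpy as np
from sklearn.metrics import roc_auc_score

class MembershipAttack(ABC):
    @abstractmethod
    def score(self, model, X, y):
        # Return one membership score per record; higher means "more likely a member".
        ...

class LossAttack(MembershipAttack):
    def score(self, model, X, y):
        p = model.predict_proba(X)[np.arange(len(y)), y]
        return np.log(np.clip(p, 1e-12, None))       # low loss -> high score

class ConfidenceAttack(MembershipAttack):
    def score(self, model, X, y):
        return model.predict_proba(X).max(axis=1)     # high top confidence -> high score

def leakage(attack, model, X_mem, y_mem, X_non, y_non):
    # Common evaluation: how well the attack separates members from non-members.
    scores = np.concatenate([attack.score(model, X_mem, y_mem),
                             attack.score(model, X_non, y_non)])
    labels = np.concatenate([np.ones(len(y_mem)), np.zeros(len(y_non))])
    return roc_auc_score(labels, scores)

Given a trained model and member/non-member splits, leakage(LossAttack(), model, X_mem, y_mem, X_non, y_non) and leakage(ConfidenceAttack(), ...) would then be directly comparable under one metric.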

Moving forward, Asst Prof Shokri is leading a team that is working with industry partners to explore integrating the ML Privacy Meter into their AI services. His team is also developing algorithms for training AI models that are privacy-preserving by design.

(Content Courtesy: https://news.nus.edu.sg/new-tool-can-check-for-data-leakage-from-ai-systems/)

