February 27, 2020 Thursday 01:16:45 PM IST

New AI tools could help tackle online abuse

Technology Inceptions

New tools that could automatically detect and counter abuse on social media are being developed by researchers at the University of Sheffield. The research, led by Professor Kalina Bontcheva from the University's Department of Computer Science, is developing new artificial intelligence (AI) and natural language processing (NLP) methods that could be used to responsibly tackle abuse and hate speech online.
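To make the underlying idea concrete, NLP-based abuse detection is, at its core, a text-classification task. The sketch below is illustrative only: a minimal bag-of-words Naive Bayes classifier of the kind such research builds upon as a baseline. The Sheffield project's actual methods are far more sophisticated and context-aware; the training phrases and labels here are invented for demonstration.

```python
# Illustrative sketch: a tiny bag-of-words Naive Bayes text classifier,
# the kind of simple baseline that abuse-detection research improves upon.
# NOT the project's actual method; toy data and labels are assumptions.
from collections import Counter
import math


def tokenize(text):
    """Very crude tokenizer: lowercase and split on whitespace."""
    return text.lower().split()


class NaiveBayesClassifier:
    def __init__(self):
        self.word_counts = {}        # label -> Counter of word frequencies
        self.label_counts = Counter()  # label -> number of training samples

    def train(self, samples):
        """samples: iterable of (text, label) pairs."""
        for text, label in samples:
            self.label_counts[label] += 1
            counts = self.word_counts.setdefault(label, Counter())
            counts.update(tokenize(text))

    def predict(self, text):
        """Return the label maximizing log prior + log likelihood,
        with add-one (Laplace) smoothing for unseen words."""
        words = tokenize(text)
        vocab = set().union(*self.word_counts.values())
        total = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label, counts in self.word_counts.items():
            score = math.log(self.label_counts[label] / total)
            denom = sum(counts.values()) + len(vocab)
            for w in words:
                score += math.log((counts[w] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label
```

A baseline like this illustrates exactly the weakness the researchers aim to fix: it judges words in isolation, with no notion of context, community norms, or speaker identity, which is why more context-aware models are needed.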

Launched in collaboration with Wendy Hui Kyong Chun of Simon Fraser University in Canada, the project is looking at AI methods that are currently being used to detect online abuse and hate speech within two areas: the gaming industry and messages directed at politicians on social media.

The researchers intend to use their findings to develop new AI algorithms that are effective, fair and unbiased. The systems will be context-aware and respectful of language differences within communities based on race, ethnicity, gender, and sexuality.

Researchers will examine the biases embedded within current content moderation systems that often use rigid definitions or determinations of abusive language. These current systems often paradoxically create new forms of discrimination or bias based on identity, including sex, gender, ethnicity, culture, religion, political affiliation or others. The research team is aiming to address these effects by producing more context-aware, dynamic systems of detection.


Furthermore, the project is looking to empower users by making the new tools open source, so they can be embedded within new strategies to democratically tackle abuse and hate speech online. They could also be used as part of community-based care and response measures.

The project, Responsible AI for Inclusive, Democratic Societies: A cross-disciplinary approach to detecting and countering abusive language online, is led by the University of Sheffield in collaboration with Simon Fraser University. It is funded by UK Research and Innovation (UKRI) as one of 10 UK-Canada projects to support the responsible development of AI, including ensuring all members of society trust AI and benefit from it.


(Content Courtesy: https://www.sheffield.ac.uk/news/nr/how-to-tackle-deal-online-abuse-social-media-gaming-video-games-new-research-tech-1.882350)

