Study highlights impact of demographics on AI training

A collaborative study conducted by Prolific, Potato, and the University of Michigan has shed light on a critical aspect of AI model development: the profound influence of annotator demographics on the training of AI systems. The study examined how annotators' age, race, and education shape the data used to train AI models, revealing points where biases can seep into the very fabric of these systems.

In today’s world, AI models like ChatGPT have become integral to everyday tasks for many individuals. However, as Assistant Professor David Jurgens of the University of Michigan School of Information points out, we need to critically examine whose values are being embedded into these trained models. If we overlook differences and fail to consider diverse perspectives, we risk perpetuating marginalization of certain groups in the technologies we rely on.

The process of training machine learning and AI systems often involves human annotation to guide and refine their performance. In this "human-in-the-loop" approach, of which Reinforcement Learning from Human Feedback (RLHF) is a prominent example, individuals review and categorize the outputs of language models to improve their accuracy and appropriateness.
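To make this concrete, the short sketch below (illustrative only, not drawn from the study's own tooling or data schema) shows how a single human-in-the-loop judgment might be recorded alongside the annotator's demographic attributes, so that breakdowns like the ones described in this article remain possible. All field names and values are assumptions made for the example.

```python
from dataclasses import dataclass, asdict

@dataclass
class Annotation:
    """One human judgment of a model output, plus annotator metadata.

    Field names are hypothetical; they are not taken from the study's
    actual data schema.
    """
    model_output: str        # text produced by the language model
    task: str                # e.g. "offensiveness", "politeness", "qa"
    rating: int              # annotator's score, e.g. 1 (low) to 5 (high)
    annotator_age: int
    annotator_race: str
    annotator_education: str

# A single illustrative record that could later feed reward-model training
# or a demographic breakdown of ratings.
example = Annotation(
    model_output="Sure, here is a summary of the article...",
    task="politeness",
    rating=4,
    annotator_age=62,
    annotator_race="Black",
    annotator_education="Bachelor's degree",
)
print(asdict(example))
```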

One of the standout findings of this study revolves around the impact of demographics on assessing offensiveness. Intriguingly, the research uncovered that different racial groups held distinct perceptions of what constitutes offensive online comments. For instance, Black participants tended to rate comments as more offensive compared to individuals from other racial backgrounds. Age also played a role, with participants aged 60 and above being more inclined to label comments as offensive compared to their younger counterparts.

The study, which analyzed 45,000 annotations contributed by 1,484 annotators, spanned a diverse array of tasks, ranging from detecting offensiveness and answering questions to assessing politeness. The results showed that demographic factors influence even ostensibly objective tasks such as question answering: race and age affected the accuracy of question responses, reflecting disparities that stem from differences in educational opportunities.
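As a rough illustration of the kind of breakdown such a dataset makes possible (again, not the authors' actual analysis code), the snippet below groups hypothetical offensiveness annotations by annotator race and by age band and compares the mean ratings; the column names and values are invented for the example.

```python
import pandas as pd

# Hypothetical annotation table; columns and values are illustrative only.
df = pd.DataFrame({
    "annotator_race": ["Black", "White", "Asian", "Black", "White", "Asian"],
    "annotator_age":  [34, 67, 25, 61, 29, 70],
    "offensiveness":  [4, 3, 2, 5, 2, 3],   # e.g. 1 (not offensive) to 5 (very offensive)
})

# Bucket annotators into age bands so ratings can be compared across groups.
df["age_band"] = pd.cut(df["annotator_age"],
                        bins=[0, 39, 59, 120],
                        labels=["<40", "40-59", "60+"])

# Mean offensiveness rating per racial group and per age band.
by_race = df.groupby("annotator_race")["offensiveness"].mean()
by_age = df.groupby("age_band", observed=True)["offensiveness"].mean()

print(by_race)
print(by_age)
```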

Politeness, a pivotal aspect of interpersonal communication, was also found to be significantly influenced by annotator demographics. The study revealed that women tended to rate messages as less polite than men did, while older participants were more inclined to assign higher politeness ratings. Participants with higher levels of education were more likely to assign lower politeness ratings, and ratings also varied across racial groups, including among Asian participants.

The implications of this study are profound. As AI systems become more integrated into our lives, it is imperative to address and mitigate the biases that can arise from the data used to train them. Acknowledging the influence of annotator demographics underscores the importance of diverse and representative input during the development of AI models. By fostering inclusivity and considering a wide range of perspectives, we can pave the way for AI systems that are fair, accurate, and respectful of all users.
