Is AI exacerbating disparities in education?



While much has been made of artificial intelligence’s promise to improve educational opportunities and outcomes, a group of Stanford students is highlighting the importance of a different perspective on AI in education: It isn’t just about using AI to bridge gaps in educating our students. It is also about how we “educate” AI.

In a first-ever collaboration between the Stanford Center for Racial Justice (SCRJ) and Stanford Law School’s International Human Rights and Conflict Resolution Clinic, 10 students recently had the opportunity to research and write a report on AI and education for Ashwini K.P., the United Nations Special Rapporteur on Contemporary Forms of Racism, Racial Discrimination, Xenophobia and Related Intolerance.

“One of the areas I specifically wanted to dive into while in law school was the intersection of artificial intelligence and racial justice,” says report co-author Imani Nokuri, JD ’25, “so this was a dream project.”

The project found its way to Stanford Law School following discussions between the Special Rapporteur and Gulika Reddy, director of the International Human Rights and Conflict Resolution Clinic, about how the clinic can support the Special Rapporteur’s mandate.

“Supporting Ashwini K.P.’s mandate was in alignment with our clinic’s work on equality and non-discrimination and a great learning opportunity for our students,” Reddy says.

“The clinic consulted a range of experts working at the intersection of AI and human rights and decided to focus on AI and education. Given this focus, we thought of the Center for Racial Justice and the important work they have done in that area, and it seemed like the ideal time for students in the clinic and center to work together on a project.”

Six clinic students and four SCRJ students collaborated over four weeks during the Spring 2024 quarter, delving into the promises and pitfalls of artificial intelligence in the realm of global education programs, specifically AI’s potential to exacerbate existing inequalities and discrimination. Their final memo was folded into Ashwini K.P.’s broader thematic report on AI published in June 2024. Following their completion of the memo, the research team met with the special rapporteur over Zoom to present their findings and recommendations.

“It is not every day you wake up and say, ‘Today I get to present my research to a United Nations special rapporteur,'” says Maya King, JD ’25, a Human Rights Clinic student who focused her portion of the project on a comparative analysis of how different countries regulate AI.

“It was a fantastic experience to see our efforts move from the research phase, to a written memo, to global dissemination to members of the United Nations in less than three months.”

Can AI exacerbate past discrimination?

In the realm of education, artificial intelligence is used to enhance teaching, personalize learning experiences, and help educators make predictions about a student’s future—such as their risk of dropping out of high school, likelihood of college admission, career opportunities, and other variables. AI in schools has many benefits, including helping to better accommodate students with disabilities, explains Hoang Pham, director of Education and Opportunity at SCRJ, where he leads research and policy initiatives to address racial inequities in the U.S. education system.

But there’s a flip side.

AI algorithms can exacerbate racial disparities in education for a variety of reasons, including because the historical data that developers input into the technology to “train” it often replicates pre-existing biases, says Pham, who oversaw the UN project with Reddy and Shaw Drake, clinical supervising attorney.

“Predictive analytical tools, for example, play a role in determining the likelihood of future student success, and while these tools are intended to assist educators in improving outcomes for students, predictive analytics often rate racial minorities as less likely to succeed academically,” Pham says.

“This is because factors with historically racially disparate outcomes, such as attendance, behavior, grades, income, and sometimes race, are used in the algorithms to generate predictions, which then reflects the racial disparities in the data.”
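To make the mechanism Pham describes concrete, below is a minimal, purely illustrative sketch in Python. It does not come from the students' report; the data, feature names, and model choice are all hypothetical. It shows how a predictive model trained only on proxy features (attendance, prior grades) that carry the imprint of historical inequities can produce group-level disparities in its predictions even when group membership is never used as an input.

```python
# Illustrative sketch only: synthetic, hypothetical data, not from the report.
# Shows how proxy features shaped by past inequities can yield disparate predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical group label (0 = majority, 1 = historically marginalized group).
group = rng.integers(0, 2, size=n)

# Features such as attendance and prior grades reflect historical disparities:
# the marginalized group scores lower on average for reasons external to ability.
attendance = rng.normal(loc=0.9 - 0.1 * group, scale=0.05, size=n).clip(0, 1)
prior_grades = rng.normal(loc=75 - 8 * group, scale=10, size=n)

# The historical "success" label was itself shaped by those same inequities.
logit = 10 * (attendance - 0.8) + 0.08 * (prior_grades - 70)
success = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train on the proxy features only; group membership is never a model input.
X = np.column_stack([attendance, prior_grades])
model = LogisticRegression().fit(X, success)

# Predicted success rates still differ by group, because the proxies encode
# the racial disparities already present in the historical data.
pred = model.predict_proba(X)[:, 1]
for g in (0, 1):
    print(f"group {g}: mean predicted success = {pred[group == g].mean():.2f}")
```

The point of the sketch is that omitting race from the inputs does not neutralize the model: any feature correlated with past disparate treatment carries that history into the predictions.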

In their report, the students stress the complexity of the issue and the need for nuance and a mindful approach to incorporating AI into education: “No one-size-fits-all answer will resolve the dilemmas that arise in this area,” they observe, “making it critically important for stakeholders to consider the complexities of AI in education and to further explore frameworks that address its rapidly evolving nature—where the racial discrimination problems of today may not be those of tomorrow.”

According to Nokuri, one of the student co-authors from SCRJ, the report strongly urged AI developers and governments “to consider the human factor, and the holistic point of view.” Among the cited research was a paper focusing on data transparency in education led by Hariharan Subramonyam, an assistant professor at the Stanford Graduate School of Education and faculty fellow at Stanford’s Institute for Human-Centered Artificial Intelligence.

“When all you do is rely on data, and don’t seek the input and insight of the teachers and students—the very people the technology is supposed to aid—that’s when you run the risk of perpetuating past mistakes and essentially encoding them into the future,” Nokuri says. “A lot of our recommendations came down to including the input of community stakeholders as we develop this powerful new technology.”

Recommendations for AI in education

“The students really drove this work with exceptional skill and speed,” Drake says. “Every student dove deep into their research and produced an exceptionally useful document, all while working together seamlessly and on a short time frame. It is a joy to see their hard work have such an impact.”

In addition to recommending that teachers, students, and marginalized racial and ethnic groups be consulted when developing AI technologies, the students’ report also called for governments to develop public education programs focused on the responsible use of AI and to promote open-source tools in education to allow communities equal access to AI technology. Additionally, the report recommends that countries continue to research how existing laws and regulations might apply to racially discriminatory outcomes from the use of AI.

The UN report will feed into future projects at the clinic and SCRJ. “We are going to build from this, do more research, and continue to advance the conversation around what AI in education needs to be to ensure it helps mitigate and not exacerbate racial disparities that we know have existed in education for a very long time,” Pham says.

The Clinic will also continue its work in the area of equality and non-discrimination, which has included work conducted in partnership with impacted communities and civil society in Jamaica, Uganda, and the United States, as well as with the UN Special Rapporteur on extrajudicial, summary or arbitrary executions.

More information:
Contemporary forms of racism, racial discrimination, xenophobia and related intolerance: Report of the Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance, Ashwini K.P.: documents.un.org/doc/undoc/gen … /20/pdf/g2408420.pdf

Provided by
Stanford University

