Gain insights on navigating AI ethics and data protection from the Texas Tribune Festival.
Articulated Insight – “News, Race and Culture in the Information Age”
Illustration: TIME; reference image: Dan Komoda
AUSTIN, Texas (Sept. 17, 2024) — As artificial intelligence (AI) becomes an increasingly powerful tool in shaping everything from hiring practices to healthcare decisions, experts are raising urgent concerns about its potential to perpetuate bias and harm marginalized communities. During a panel at the Texas Tribune Festival, leaders like Alondra Nelson and Arati Prabhakar made clear that the AI industry’s lack of diversity is not just a technical oversight—it’s a societal crisis that demands immediate action.
“AI systems are being used to make decisions that affect people’s lives in profound ways, from who gets a job to who gets approved for a loan,” said Alondra Nelson, former advisor in the Biden-Harris administration. “The problem is, these systems are often developed without the input of the very communities they impact the most.”
Nelson, a leading voice in AI ethics, addressed how AI systems, while inherently mathematical and technical, have significant social implications. “AI is effectively math and statistics, but it’s not something that we can’t get a handle on,” Nelson noted. She emphasized that understanding AI’s impacts doesn’t require a technical background but a commitment to ethical oversight and community engagement.
Nelson’s involvement with the AI Democracy Project was driven by the need to address concerns over the biases of AI. The project, co-founded with Julia Angwin, a journalist and New York Times Opinion writer, evaluated AI systems, including chatbots, for misinformation that could affect democratic processes. This initiative brings together bipartisan election officials to scrutinize the accuracy of AI outputs, demonstrating the crucial role of domain experts in identifying and correcting errors that could mislead the public.
Nelson highlighted the case of Robert Williams, a Black man from Detroit who was wrongfully arrested due to a faulty facial recognition system. The error led to Williams losing his job and suffering severe personal consequences. “This isn’t an isolated incident. It’s a clear example of how AI, when designed and implemented without diverse perspectives, can amplify discrimination,” Nelson said.
During her one-on-one session with Andrew Desiderio, senior congressional reporter with Punchbowl News, Arati Prabhakar, director of the White House Office of Science and Technology Policy, touched on the broader implications of biased AI in sectors like healthcare and criminal justice. “AI can scale discrimination at an unprecedented level,” Prabhakar said. “It’s already illegal to discriminate, but AI makes it easier to embed and expand these biases, often without transparency or accountability.”
Despite efforts by regulatory bodies like the Equal Employment Opportunity Commission and the Federal Trade Commission, Prabhakar stressed that current measures are insufficient. “We need a comprehensive legal framework to ensure AI is developed and used responsibly,” she added. Prabhakar emphasized the critical need for legislation that specifically addresses AI-driven discrimination and privacy violations, especially in protecting vulnerable communities from harm.
The lack of diversity among AI developers is a significant part of the problem. According to Nelson, the tech industry remains overwhelmingly white and male, which skews the perspectives embedded in AI systems. “We need more people of color and women in AI to make these systems fair,” Nelson said. “Without diverse voices, we risk creating technologies that only serve a narrow segment of society.”
One immediate step individuals can take to protect their data from AI misuse is opting out of data sharing, as explained by Ravit Dotan, an AI ethics expert at TechBetter. Dotan outlined a straightforward process to opt out of data training by OpenAI’s ChatGPT:
- Navigate to OpenAI’s Privacy Portal: Search for it online to access the page.
- Initiate Your Request: Click on “Make a Privacy Request” in the top right corner.
- Specify Your Needs: Select what action you want to take regarding your data.
- Submit Your Request: Finalize your selection and submit.
- Confirm Your Request: Check your email for a verification link from OpenAI.
- Complete the Process: Follow the email instructions to complete your request.
Because marginalized communities are frequently targeted with misinformation that leads to real-world repercussions, it is important to stay informed. A lack of robust privacy protections has already led to widespread exploitation, including phishing scams and identity theft, disproportionately affecting Black and Brown communities.
“Protecting your data is not just about privacy; it’s about safeguarding yourself from targeted harm,” Dotan said. “AI companies often train on data by default, and opting out is a way to take back control.”
Nelson’s and Prabhakar’s remarks bring to light that without inclusive governance and stringent protections, AI will continue to mirror and magnify the biases that pervade society. The path forward requires not only technical solutions but a fundamental shift in who has the power to shape AI’s future.
“AI is not an inevitable force; it’s something we can influence,” Nelson said. “We need to act now to ensure that it’s used in ways that are fair and just for everyone.”
As AI continues to evolve, experts argue that ethical oversight and diverse input are not optional—they are essential to prevent the perpetuation of systemic inequalities. The time for change is now, and the stakes could not be higher.
Follow New York Edge News for exclusive reporting, tips, and insights on the ever-evolving conversation around AI, and on how we as people of color can stay informed to safeguard our liberties and remain seen and active in the advancement of society in the 21st-century digital and machine age.
#AIethics, #dataprotection, #technologyethics