The Side of AI They’re Not Telling You

Artificial intelligence (AI) is now ubiquitous, from virtual assistants to algorithms driving autonomous vehicles to content creation and personalized marketing. However, beneath its technological marvels lie significant social and environmental costs. And if humanity waits too long to address them, it might be too late.

Social Impact and the Widening Racial Wealth Gap

The social ramifications of AI, particularly for people of color, are deeply concerning. As companies have come to rely on AI to automate tasks, these systems have repeatedly been shown to perpetuate biases and exacerbate existing inequalities, especially in employment. Now, AI-driven automation also threatens jobs traditionally held by people of color.

Nearly every industry is adopting AI-driven automation to reduce costs and increase efficiency, particularly manufacturing, retail and transportation, all of which employ large numbers of minority workers. This shift often leads to job displacement, as machines and algorithms replace human labor.

Minority communities already face higher unemployment rates and economic instability, and the rise of AI threatens to exacerbate these issues. As AI-driven automation continues to permeate various sectors, the gap between those who can adapt to the changing job markets and those who cannot is likely to widen, leading to greater economic inequality.

Experts conservatively estimate that by 2030, AI will have eliminated 400 million to 800 million jobs worldwide, as much as a staggering 23% of the global workforce. Jobs at every level will be affected: customer service representatives, manufacturing workers, even diagnosticians are at risk of being replaced by AI. And if economies cannot create new positions as quickly as AI erases old ones, as much as a quarter of the world's workers could be left unemployed.

In the U.S. alone, studies show that generative AI could automate 50% of existing high-mobility jobs for American adults without a degree. That number increases greatly when broken down by race.

Currently, the median Black household holds just 15 percent of the wealth of the median white household: $44,900 in total assets compared to $285,000.

AI is estimated to add $7 trillion to global wealth annually, with $2 trillion benefiting the United States. This influx could translate to an average increase of $3,400 per US household by 2045.

However, experts at the McKinsey Institute for Black Economic Mobility predict that this wealth is unlikely to be evenly distributed.

Black Americans currently capture only 38 cents of every new dollar of household wealth. If trends persist, this distribution could widen the racial wealth gap by $43 billion annually by 2045.

Black workers, who already face an unemployment rate double that of white workers, are disproportionately represented in jobs at high risk of automation. Nearly a quarter of Black workers are in roles with over 75% automation potential, compared to 20% of white workers. This precarious situation is compounded by concerns about job displacement due to AI, with 53 percent of Black respondents fearing AI will replace their jobs in the next five years, compared to 39 percent of white respondents.

AI’s automation capabilities extend beyond low-wage jobs, threatening high-mobility positions that have historically offered Black workers a pathway to better earnings without requiring a four-year degree. These jobs, categorized by the McKinsey Institute as “gateway” and “target” positions, are critical for upward mobility. Seventy-four percent of Black workers do not have college degrees, but in the past five years, one in every eight has moved to a “gateway or target job.”

However, many tasks in these roles are susceptible to automation by generative AI, potentially closing off vital career advancement opportunities for workers of color.

“Gen AI may significantly affect those occupations, as many of the tasks associated with them are precisely what gen AI can do well,” says author Jan Shelly Brown. “Coding bootcamps and trainings have risen in popularity and have unlocked access to high-paying jobs for many workers without college degrees. But such pathways are also at risk of disruption, as gen AI–enabled programming has the potential to automate many entry-level coding positions.”

Bias in AI Systems

AI systems are only as good as the data they are trained on, and their biases can become more pronounced as they evolve. IBM defines AI bias as algorithms that “produce biased results that reflect and perpetuate human biases within a society, including historical and current social inequality.”

In the past year, numerous studies have documented apparent bias in various AI systems. One Stanford University study highlighted the negative impact of AI bias on non-native English speakers, whose work is often mistakenly flagged as AI-generated, potentially leading to accusations of cheating. Another study found that young Black girls encountered racial stereotypes in AI-generated imagery.

“Facial recognition and technology may not resemble them, or there could be biased language models that can perpetuate harmful stereotypes,” said Misty Freeman, an educator and unconscious bias coach.

Scientists from MIT found that language models tend to classify certain occupations along gender lines, designating “flight attendant,” “secretary” and “physician’s assistant” as feminine, while “fisherman,” “lawyer” and “judge” are seen as masculine roles. This gender bias in AI models reflects and reinforces societal stereotypes.

Furthermore, Dartmouth researchers discovered that language models often have biases, such as stereotypes, embedded within them. Their study revealed that these models could unfairly attribute skills or occupations to individuals based on their gender, reinforcing prejudiced assumptions about who is suited for certain jobs.

“Surfacing and responding to algorithmic bias upfront can potentially avert harmful impacts to users and heavy liabilities against the operators and creators of algorithms, including computer programmers, government and industry leaders,” said Dr. Nicole Turner Lee, the director of the Center for Technology Innovation at Brookings Institution.

Facial recognition systems have higher error rates for people with darker skin tones. A National Institute of Standards and Technology study revealed that many commercial facial recognition systems regularly misidentify people of color at higher rates than white individuals, leading to an increase in wrongful arrests, surveillance and discrimination.

In the employment sector, AI-driven hiring algorithms have been shown to disadvantage minority candidates. These systems often rely on historical hiring data, which can reflect past discriminatory practices. As a result, AI algorithms may inadvertently favor candidates who fit the profile of previously hired employees, perpetuating a cycle of exclusion for candidates from different backgrounds.

The Environmental Toll of AI

AI systems, especially those involving deep learning and large-scale data processing, demand immense power. The International Energy Agency estimates that electricity usage by data centers will increase by 50% between 2022 and 2026, primarily due to AI processing demands. By 2026, data centers processing AI will consume as much power annually as Germany.

A study by the University of Massachusetts Amherst found that training a single AI model can emit as much carbon dioxide as five cars over their lifetimes.

“If you look at the history of computational advances, I think we’re in the ‘amazed by what we can do, this is great, let’s do it phase,’” said Clifford Stein, interim director of Columbia University’s Data Science Institute. “But we should be coming to a phase where we’re aware of the energy usage and taking that into our calculations of whether we should or shouldn’t be doing it, or how big the model should be. We should be developing the tools to think about if it’s worth using these large language models given how much energy they’re consuming, and at least be aware of their energy and environmental costs.”

As AI technologies advance and become more complex, their computational requirements grow exponentially, placing additional strain on energy grids that often rely on fossil fuels. This not only increases energy consumption but also contributes to greenhouse gas emissions, exacerbating climate change.

E-Waste and Resource Depletion

The production and disposal of AI hardware also contribute to environmental degradation. Manufacturing servers, GPUs and other AI components require rare earth metals and finite resources. Extracting and processing these materials often lead to habitat destruction, soil erosion and water contamination.

“When you look especially at new data campuses that are being built in areas where there haven’t been many data centers before, you’re plopping down major energy- and water-using resources in moderate-size cities,” explained Adam Wierman, professor of computing and mathematical sciences at Caltech. Wierman has led several campaigns to make data centers more sustainable by improving the standards of measurement and reporting the carbon costs of computation.

“These centers have an impact on energy and water rates for people who live there,” he said. “There’s the pollution associated with the backup generators at the data centers, which have to be run in order to do regular maintenance. And so there’s major local environmental and economic impacts from them in addition to just the global usage of carbon.

“They don’t create many jobs for the neighborhoods where they go because there’s not a lot of human needs in terms of running them once they’re built,” he said. “So, there are huge challenges around their construction. And yet they’re essential for advancing AI and all of the improvements that come with that.”

Addressing the Challenges

These significant challenges posed by AI require comprehensive and coordinated efforts.

Sustainable AI Development

To reduce AI’s environmental impact, we must develop more energy-efficient algorithms and hardware. Researchers and developers should prioritize sustainability, focusing on reducing AI systems’ carbon footprint. This could involve optimizing algorithms, utilizing renewable energy sources for data centers and improving hardware efficiency.

“We need a paradigm shift in how we develop AI technologies,” argues Dr. Kate Saenko, a computer scientist at Boston University. “Sustainability must be a core consideration in AI research and development to mitigate its environmental impact.”

The technology industry should also adopt circular economy principles, emphasizing the reuse and recycling of AI hardware. By extending equipment lifecycles and ensuring proper disposal, companies can minimize e-waste and reduce the demand for new resources.

Diversity and Inclusion

Addressing AI’s social impact on people of color requires a commitment to diversity and inclusion in AI development. Technology companies must actively seek to diversify their workforces, ensuring that teams developing AI systems are representative of the broader population. A diverse team, made up of actual humans rather than AI-generated algorithms, is more likely to identify and mitigate biases in AI systems, leading to fairer and more equitable outcomes.

“It is still surprisingly difficult to define and measure fairness when it comes to technology,” Lee said. “While it will not always be possible to satisfy all notions of fairness at the same time, companies and other operators of algorithms must be aware that there is no simple metric to measure fairness that a software engineer can apply, especially in the design of algorithms and the determination of the appropriate trade-offs between accuracy and fairness.

“Fairness is a human, not a mathematical, determination, grounded in shared ethical beliefs,” she continued. “Thus, algorithmic decisions that may have a serious consequence for people will require human involvement.”

There should also be greater transparency and accountability in AI systems. Companies must be willing to audit their algorithms and make adjustments to address biases.

Lee theorized that this will likely involve multiple changes, such as regular evaluations of AI systems for discriminatory outcomes and the implementation of corrective measures when biases are identified.

Policy and Regulation

Governments also have a crucial role to play in regulating AI to ensure its responsible use. Policymakers should establish guidelines and standards for AI development, emphasizing sustainability and equity. This could include setting benchmarks for energy efficiency, mandating bias audits for AI systems or providing incentives for companies that prioritize ethical AI practices.

“In the decision to create and bring algorithms to market, the ethics of likely outcomes must be considered—especially in areas where governments, civil society, or policymakers see potential for harm, and where there is a risk of perpetuating existing biases or making protected groups more vulnerable to existing societal inequalities,” Lee said. “That is why it’s important for algorithm operators and developers to always be asking themselves: Will we leave some groups of people worse off as a result of the algorithm’s design or its unintended consequences?”

Social safety nets also must be strengthened to support workers displaced by AI-driven automation. This could involve investing in retraining programs, promoting lifelong learning and ensuring access to new job opportunities. By providing support for those affected by technological shifts, policymakers can help mitigate the economic impact on vulnerable communities.

For better or worse, AI will have a significant impact on our future. For every opportunity it brings one person, another person loses a job. For every problem AI solves, another arises.

In order to protect our future, a holistic approach is needed to mitigate these challenges, encompassing sustainable AI development, diversity and inclusion efforts, and robust policy and regulatory frameworks. By addressing these issues proactively, we can step into a more equitable and sustainable future for all.


URL: https://relevantmagazine.com/magazine/the-side-of-ai-theyre-not-telling-you/
Source: RELEVANT
Date: July 17, 2024 at 09:55PM