Removing gender data can worsen AI bias

In 2019, Apple Card was accused of discrimination against women. The company declined a woman’s application for a credit line increase even though her credit records were better than her husband’s, while granting her husband a credit line 20 times higher than hers. The New York State Department of Financial Services found no fair-lending violations, since Apple had not used gender data in developing its algorithms. If Apple fully adhered to anti-discrimination laws, what led to this paradoxical outcome?

A recent research paper explains the paradox. The researchers found that anti-discrimination measures and laws, specifically those restricting the collection and use of sensitive data for ML models, can have the opposite of their intended effect.

The researchers examined an example dataset from a global financial company. They found that, all else being equal, women are better borrowers than men, and individuals with more work experience are better borrowers than those with less. Thus, a woman with three years of work experience could be as creditworthy as a man with five years of experience. The dataset also showed that women tend to have less work experience than men on average. In addition, the dataset used to train the algorithms, comprising information on past borrowers, was on average about 80 percent men and 20 percent women globally.

In the absence of gender data, the model treated individuals with the same number of years of experience equally. Since women represent a minority of past borrowers, it is unsurprising that the algorithm would predict the average person to behave like a man rather than a woman. Applicants with five years of experience would be granted credit, while those with three years or less would be denied, regardless of gender. This not only increased discrimination but also hurt profitability: women with three years of work experience would have been creditworthy enough, and should have been issued loans, had the algorithm used gender data to differentiate between women and men.

The researchers compared outcomes in jurisdictions like Singapore, where gender data can be included, and the EU, where the collection of gender data is allowed but not its use in the final model. They also examined a methodology that builds a secondary model to predict an applicant’s gender. This approach increased accuracy to 91% and reduced gender discrimination by almost 70 percent (while also increasing profitability by 0.15 percent).

This research again shows how important it is for companies to understand the deeper workings of their ML algorithms and the linkage to the underlying training data.

Source: https://lnkd.in/daZkrC_x
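The mechanism is easy to reproduce. Below is a minimal sketch in Python, using synthetic data and a scikit-learn logistic regression; the population sizes, repayment rule, and numbers are illustrative assumptions, not the paper’s dataset or model. It shows how a gender-blind model scores a creditworthy woman with three years of experience well below an equally creditworthy man with five, while a gender-aware model scores them roughly equally.

```python
# Minimal sketch (synthetic data, illustrative numbers): how dropping the
# gender feature makes a credit model score everyone like the majority group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# ~80% men (0), ~20% women (1), mirroring the skew described above.
gender = (rng.random(n) < 0.20).astype(int)

# Women average fewer years of experience in this toy population.
experience = np.clip(rng.normal(6.0 - 2.0 * gender, 2.0), 0.0, None)

# Assumed ground truth: a woman repays like a man with two extra years,
# i.e. gender stays informative after controlling for experience.
p_repay = 1.0 / (1.0 + np.exp(-(experience + 2.0 * gender - 5.0)))
repaid = (rng.random(n) < p_repay).astype(int)

# "Blind" model sees experience only; "aware" model sees experience + gender.
blind = LogisticRegression().fit(experience.reshape(-1, 1), repaid)
aware = LogisticRegression().fit(np.column_stack([experience, gender]), repaid)

# A woman with 3 years vs. a man with 5 years: under the ground truth above,
# both have a true repayment probability of exactly 0.5.
print("blind  woman(3y):", blind.predict_proba([[3.0]])[0, 1])
print("blind  man(5y):  ", blind.predict_proba([[5.0]])[0, 1])
print("aware  woman(3y):", aware.predict_proba([[3.0, 1.0]])[0, 1])
print("aware  man(5y):  ", aware.predict_proba([[5.0, 0.0]])[0, 1])
```

Under these assumptions, the blind model gives the three-year woman a markedly lower score than the five-year man even though their true repayment probabilities are identical, which is exactly the pattern the research describes; the aware model scores the two roughly equally.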
Female investors and AI bias concerns
Summary
The phrase “female investors and AI bias concerns” covers ongoing issues where women are underrepresented in AI development and investment, and where AI tools can unintentionally reinforce gender bias through skewed data and design choices. It also explores how women’s perspectives can surface unfair outcomes, and why their involvement in AI and investment decisions helps create more equitable technology.
- Increase representation: Make sure women have a seat at the table in AI development and funding decisions to reduce bias and broaden perspectives.
- Question data sources: Encourage teams to examine the data used for training AI models and recognize the ways it can reinforce stereotypes or exclude women (a starter audit is sketched after this list).
- Value critical voices: Create space for careful evaluation and questioning of new technologies, as these insights often reveal hidden risks and ethical concerns.
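As a starting point for the data review suggested above, a sketch like the following surfaces the two basics: how each group is represented in the training set, and whether the historical labels already differ by group. The file and column names (`training_data.csv`, `gender`, `approved`) are hypothetical placeholders.

```python
# Minimal sketch (hypothetical file and column names): first-pass audit
# of a training set for group skew and label gaps.
import pandas as pd

df = pd.read_csv("training_data.csv")

# How is each group represented? Heavy skew means the model will be
# tuned to the majority group's patterns.
print(df["gender"].value_counts(normalize=True))

# Do historical labels already differ by group? If so, the model will
# learn and reproduce that gap unless it is addressed.
print(df.groupby("gender")["approved"].mean())
```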
How Women’s Unique Evaluation Of AI Tools Influences Corporate Culture:

“When it comes to adopting AI tools at work, studies have shown that men are more likely to experiment with these tools, while women tend to hesitate. That doesn’t mean women are less tech-savvy or less open to innovation. It often means they’re asking different questions. And those questions reveal something important about how corporate culture is being shaped in the AI era.

Women in the workplace are not saying AI is bad. They’re not rejecting it outright. What they’re doing is pausing. They’re questioning how it works, who created it, what data it was trained on, and whether it could be misused. In many cases, they’re also concerned about how others will perceive their use of it. Will they look like they’re cutting corners? Will the tool reinforce bias? Will their job become obsolete?

That kind of hesitation is discernment: the careful weighing of trade-offs. And it reflects a kind of emotional intelligence and long-term thinking that often gets undervalued in tech conversations. Companies that ignore these perspectives risk designing workflows, cultures, and even ethics policies that leave people behind. If you have a team where the loudest voices are the ones who embrace new tools quickly, and the quieter voices are the ones raising concerns, you need to ask yourself: are you hearing the full story?

Women may not be the early adopters of every AI tool, but they’re often the first to see unintended consequences. They may be the first to notice that the chatbot is reinforcing stereotypes, or that an AI-powered hiring tool is filtering out qualified candidates based on biased data; these are culture-shaping concerns.

I’ve interviewed hundreds of executives, and the best ones aren’t the people who jump on every new technology as soon as it hits the market. They’re the ones who ask, ‘Does this make sense for our people? Does it help us do better work? Does it reflect the values we say we care about?’ And more often than not, it’s women who are asking those kinds of questions.

Think about what that means in a practical sense. When a company is rolling out a new AI writing tool, a male leader might focus on efficiency. A female leader might ask if the tool risks replacing human insight or undermines original thinking. Neither approach is wrong. But they lead to different outcomes.”

Read more 👉 https://lnkd.in/enqz6jNy
✍️ Article by Dr. Diane Hamilton

#WomenInSTEM #GirlsInSTEM #STEMGems #GiveGirlsRoleModels
We called it progress. Turns out, it’s a wedge.

When it comes to AI, women are underrepresented, disproportionately impacted, use it less, and trust it less. It’s why the World Economic Forum predicts it will take 134 years to close the AI gender gap.

How did we create yet another gap 🙄 before AI even got off the ground? Because we haven’t closed the previous ones. Women make up less than 22% of AI professionals globally; in technical roles, that number drops even lower. The gap shows up in models, machines, and money.

#️⃣ Data bias: AI models trained on biased data reinforce gender stereotypes, like women linked to nurses and men to CEOs. In an early UNESCO study, Llama 2 and ChatGPT were asked to make up stories about women and men. In stories about men, words like “treasure,” “woods,” “sea,” and “adventurous” dominated, while women were more often described with “garden,” “love,” “gentle,” and “husband.” Oh, and women were described in domestic roles 4X more often than men.

⚙️ Product design: Virtual assistants are often female by default: submissive, helpful, and programmable. We’ve seen design flaws like this before, such as facial recognition systems that perform worst on Black women compared to white men.

💲 Funding: Women-led AI startups receive a fraction of the VC funding that male-led ones do. In fact, only 4% of AI startups are led by women.

Then there’s disproportionate impact. 80% of jobs will be affected in some way by AI, and 57% of the jobs susceptible to disruption are held by women, compared to 43% held by men. If women are anxious, it’s because we should be: women are 1.5X more likely than men to need to move into new occupations because of AI.

But we’re not anxious about AI just because of its impact on work and jobs; we also don’t TRUST it. We know AI algorithms perpetuate bias, and we know we’re more subject to online harms like deepfakes, cyber threats, and fraud. Then there are bigger questions around psychological safety, an altered sense of reality, and social isolation in an increasingly digital world.

Sounds like AI is sexist. A literal threat to women: our livelihood, our social being, our online safety and privacy, our kids. But I don’t want to throw it away for all that. The deeper issue is that the most powerful technology claiming to shape our future is being built and deployed by a homogeneous few. This isn’t about responsible AI; this is about representation, impact, and responsible humans deciding what to DO with AI.

Listen to my conversation with Adriana O’Kain on Mercer’s AI-volution podcast, “Closing the AI Gender Gap”:
🎙️ Spotify: https://lnkd.in/geyp2Scn
🎙️ Apple: https://lnkd.in/g5FamDEJ

#FutureOfWork #DigitalDivide #EthicalTech #InclusiveDesign #AI #EquityInTech #HRTech #WomenInTech
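To make the “data bias” point above concrete: gendered occupation associations can be probed directly in pretrained word embeddings. The sketch below uses Python with gensim’s downloadable GloVe vectors; this is a common diagnostic for association bias, not the UNESCO study’s methodology, and the word list is an illustrative assumption.

```python
# Sketch: probing gendered occupation associations in pretrained GloVe
# embeddings (a common diagnostic, not the UNESCO study's method).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # small pretrained GloVe vectors

for occupation in ["nurse", "teacher", "engineer", "ceo"]:
    # Positive = the word sits closer to 'she' than to 'he' in the corpus.
    lean = vectors.similarity(occupation, "she") - vectors.similarity(occupation, "he")
    print(f"{occupation:>8}  she-vs-he lean: {lean:+.3f}")
```

In vectors trained on web and news text, occupation words tend to show exactly the stereotyped leanings described above, with caregiving roles pulled toward “she” and executive or technical roles toward “he.”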