Issues with gender inference in tech research


Summary

Issues with gender inference in tech research refer to the challenges and biases that arise when artificial intelligence and related technologies attempt to determine or predict gender, often leading to unequal outcomes and misrepresentation for women and other marginalized groups. These problems can perpetuate stereotypes, introduce unfair treatment in processes like hiring or promotions, and limit diversity in technology fields.

  • Review your data: Make sure you collect and analyze information about gender in a way that recognizes multiple identities and avoids reinforcing stereotypes.
  • Audit your algorithms: Regularly check AI and machine learning tools for bias, especially in areas like recruitment, salary recommendations, or performance evaluation; a minimal audit sketch follows after this summary.
  • Challenge assumptions: Speak up when you notice systems or policies that seem to disadvantage women or underrepresented groups, and encourage transparency in tech development.
Summarized by AI based on LinkedIn member posts
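
As a concrete illustration of the "audit your algorithms" point above, here is a minimal sketch of one common first check: comparing a screening model's selection rates across gender groups. The data, column names, and threshold are hypothetical placeholders, not any particular vendor's API.

```python
# Minimal sketch of a fairness audit on a screening model's logged decisions.
# All data and column names here are hypothetical placeholders.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Selection rate per group, divided by the highest group's rate.

    A ratio well below 1.0 for any group (a common rule of thumb is < 0.8)
    is a signal to dig into the model and the data behind it.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Hypothetical decisions logged from a resume-screening tool.
decisions = pd.DataFrame({
    "gender":   ["woman", "man", "woman", "man", "woman", "man", "man", "woman"],
    "advanced": [0, 1, 1, 1, 0, 1, 1, 0],
})

print(disparate_impact(decisions, group_col="gender", outcome_col="advanced"))
```
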
  • View profile for Peter Slattery, PhD
    Peter Slattery, PhD is an Influencer

    MIT AI Risk Initiative | MIT FutureTech

    64,855 followers

    "This report developed by UNESCO and in collaboration with the Women for Ethical AI (W4EAI) platform, is based on and inspired by the gender chapter of UNESCO’s Recommendation on the Ethics of Artificial Intelligence. This concrete commitment, adopted by 194 Member States, is the first and only recommendation to incorporate provisions to advance gender equality within the AI ecosystem. The primary motivation for this study lies in the realization that, despite progress in technology and AI, women remain significantly underrepresented in its development and leadership, particularly in the field of AI. For instance, currently, women reportedly make up only 29% of researchers in the field of science and development (R&D),1 while this drops to 12% in specific AI research positions.2 Additionally, only 16% of the faculty in universities conducting AI research are women, reflecting a significant lack of diversity in academic and research spaces.3 Moreover, only 30% of professionals in the AI sector are women,4 and the gender gap increases further in leadership roles, with only 18% of in C-Suite positions at AI startups being held by women.5 Another crucial finding of the study is the lack of inclusion of gender perspectives in regulatory frameworks and AI-related policies. Of the 138 countries assessed by the Global Index for Responsible AI, only 24 have frameworks that mention gender aspects, and of these, only 18 make any significant reference to gender issues in relation to AI. Even in these cases, mentions of gender equality are often superficial and do not include concrete plans or resources to address existing inequalities. The study also reveals a concerning lack of genderdisaggregated data in the fields of technology and AI, which hinders accurate measurement of progress and persistent inequalities. It highlights that in many countries, statistics on female participation are based on general STEM or ICT data, which may mask broader disparities in specific fields like AI. For example, there is a reported 44% gender gap in software development roles,6 in contrast to a 15% gap in general ICT professions.7 Furthermore, the report identifies significant risks for women due to bias in, and misuse of, AI systems. Recruitment algorithms, for instance, have shown a tendency to favor male candidates. Additionally, voice and facial recognition systems perform poorly when dealing with female voices and faces, increasing the risk of exclusion and discrimination in accessing services and technologies. Women are also disproportionately likely to be the victims of AI-enabled online harassment. The document also highlights the intersectionality of these issues, pointing out that women with additional marginalized identities (such as race, sexual orientation, socioeconomic status, or disability) face even greater barriers to accessing and participating in the AI field."

  • View profile for Patricia Gestoso-Souto ◆ Inclusive AI Innovation

    Director Scientific Services and Operations SaaS | Ethical and Inclusive Digital Transformation | Award-winning Inclusion Strategist | Trustee | International Keynote Speaker | Certified WorkLife Coach | Cultural Broker

    6,541 followers

    Techno-Patriarchy: How AI is Misogyny’s New Clothes. Gender discrimination is baked into artificial intelligence by design, and it’s in the interests of tech bros. In my day job, I support our clients using AI to accelerate the discovery of new drugs and materials. I can see the benefits of this technology to people and the planet. But there is a dark side too. That’s the reason tech:
    - Disregards women’s needs and experiences when developing AI solutions
    - Deflects its accountability in automating and increasing online harassment
    - Purposely reinforces gender stereotypes
    - Operationalises menstrual surveillance
    - Sabotages women’s businesses and activism
    I substantiate each of the points above with real examples and the impact on the lives of women. Fortunately, not all is doom and gloom. Because insanity is to do the same thing and expect a different outcome, I also share what we need to start doing differently to develop AI that works for women too. #EthicalAI #InclusiveAI #MisogynisticAI #BiasedAI #Patriarchy #InclusiveTech #WomenInTech #WomenInBusiness

  • View profile for Ashique KhudaBukhsh

    Assistant Professor at Rochester Institute of Technology

    4,264 followers

    Excited to present our paper titled "Disentangling Societal Inequality from Model Biases: Gender Inequality in Divorce Court Proceedings" (joint work with Sujan D., Parth Srivastava, Vaishnavi Solunke, and Swaprava Nath) at the IJCAI International Joint Conferences on Artificial Intelligence Organization session today. Divorce in India is a historically taboo topic, and surveys have sparse participation due to the social stigma. Our paper is the first AI-powered analysis of 17K public divorce court proceedings, offering valuable insights into a largely invisible and vulnerable community in India. Our study shows that dowry and domestic violence co-appear frequently in court proceedings. Our paper asks a critical methodological question: "What if the AI/NLP tools we use for social inference tasks have inherent biases that impact the findings of the study?" We show that existing natural language inference methods are not good at handling counterfactuals. We present a novel inconsistency sampling method for active learning, leveraging logical inconsistency and counterfactuals. We also conduct an extensive audit of LLMs. The audit shows that prominent LLMs attach more probability to a "man guiding a woman" than "a woman guiding a man." And this is not an isolated instance; on 1,222 verbs and several well-known LLMs, we find that LLMs tend to give men more agency than women. Paper link: https://lnkd.in/eTfgimkP #responsibleai #aiforgood #IJCAI2023 #nlproc
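
The kind of probability audit described here can be approximated with a short script. The sketch below is not the paper's code: it uses GPT-2 via Hugging Face transformers and a three-verb list purely as stand-ins, comparing the log-likelihood a causal language model assigns to each gendered variant of a sentence.

```python
# Sketch of a likelihood-based gender-agency probe. Model choice (gpt2) and
# the tiny verb list are illustrative assumptions, not the paper's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_log_likelihood(text: str) -> float:
    """Total log-probability the model assigns to the token sequence."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean negative log-likelihood per predicted token,
    # so multiply by the number of predicted tokens to get the total.
    return -out.loss.item() * (ids.shape[1] - 1)

for verb in ["guides", "teaches", "leads"]:
    male_agent = f"A man {verb} a woman."
    female_agent = f"A woman {verb} a man."
    diff = sentence_log_likelihood(male_agent) - sentence_log_likelihood(female_agent)
    print(f"{verb}: male-agent minus female-agent log-likelihood = {diff:+.3f}")
```
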

  • View profile for Karen Catlin

    Author of Better Allies | Speaker | Influencing how workplaces become better, one ally at a time

    12,076 followers

    This week, I learned about a new kind of bias—one that can impact underrepresented people when they use AI tools. 🤯 New research by Oguz A. Acar, PhD et al. found that when members of stereotyped groups—such as women in tech or older workers in youth-dominated fields—use AI, it can backfire. Instead of being seen as strategic and efficient, their AI use is framed as “proof” that they can’t do the work on their own. (https://lnkd.in/gEFu2a9b) In the study, participants reviewed identical code snippets. The only difference? Some were told the engineer wrote it with AI assistance. When they thought AI was involved, they rated the engineer’s competence 9% lower on average. And here’s the kicker: that _competence penalty_ was twice as high for women engineers. AI-assisted code from a man got a 6% drop in perceived competence. The same code from a woman? A 13% drop. Follow-up surveys revealed that many engineers anticipated this penalty and avoided using AI to protect their reputations. The people most likely to fear competence penalties? Disproportionately women and older engineers. And they were also the least likely to adopt AI tools. And I’m concerned this bias extends beyond engineering roles. If your organization is encouraging AI adoption, consider the hidden costs to marginalized and underestimated colleagues. Could they face extra scrutiny? Harsher performance reviews? Fewer opportunities? In this week's 5 Ally Actions newsletter, I'll explore ideas for combatting this bias and creating more meritocratic and inclusive workplaces in this new world of AI. Subscribe and read the full edition on Friday at https://lnkd.in/gQiRseCb #BetterAllies #Allyship #InclusionMatters #Inclusion #Belonging #Allies #AI 🙏
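
The effect described here is straightforward to quantify once ratings are collected. The sketch below uses made-up numbers only to illustrate the calculation: the percent drop in mean competence rating when AI use is disclosed, computed separately per gender.

```python
# Illustration of computing a "competence penalty" from reviewer ratings.
# The ratings are fabricated for the example and are not the study's data.
import pandas as pd

ratings = pd.DataFrame({
    "engineer_gender": ["man"] * 4 + ["woman"] * 4,
    "ai_disclosed":    [False, False, True, True] * 2,
    "competence":      [8.0, 7.8, 7.5, 7.3, 8.0, 7.9, 7.0, 6.8],
})

means = ratings.groupby(["engineer_gender", "ai_disclosed"])["competence"].mean().unstack()
penalty_pct = (means[False] - means[True]) / means[False] * 100
print(penalty_pct.round(1))  # percent drop in perceived competence per gender
```
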

  • View profile for Shreya Singh Hernández

    Responsible Technology @ The Aspen Institute | Trust and Safety | Tech Policy | Product Inclusion

    4,214 followers

    As AI integration sweeps across industries and functions, this new study reveals a stark reality: models like ChatGPT are advising women to ask for significantly lower salaries than men, even with identical qualifications. This isn't just an isolated glitch; it's another critical red flag highlighting the systemic biases embedded within AI training data. For any company leveraging AI in decision-making or analysis, this report is a wake-up call. Relying on these models without robust human oversight and ethical frameworks risks amplifying existing inequalities, not eradicating them. We must prioritize transparency, independent review, and rigorous validation to ensure AI becomes a tool for progress, not prejudice. https://lnkd.in/ekTheFEM
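
The "rigorous validation" called for here can start with a paired-prompt audit: send prompts that differ only in the stated gender and compare the salary figures in the replies. The sketch below is a hypothetical harness; ask_model is a placeholder for whichever chat API is under review, not a real client.

```python
# Hypothetical paired-prompt audit for gendered salary advice.
import re
import statistics

def ask_model(prompt: str) -> str:
    """Placeholder: in a real audit this would call the chat model under review."""
    return "A reasonable starting point would be $85,000 per year."

def first_dollar_figure(reply: str) -> float | None:
    match = re.search(r"\$([\d,]+)", reply)
    return float(match.group(1).replace(",", "")) if match else None

TEMPLATE = (
    "I am a {gender} software engineer with five years of experience applying "
    "for a senior role. What starting salary should I ask for?"
)

def audit(n_trials: int = 20) -> dict[str, float]:
    """Mean suggested salary per gender wording; a persistent gap flags bias."""
    figures: dict[str, list[float]] = {"male": [], "female": []}
    for gender in figures:
        for _ in range(n_trials):
            figure = first_dollar_figure(ask_model(TEMPLATE.format(gender=gender)))
            if figure is not None:
                figures[gender].append(figure)
    return {gender: statistics.mean(values) for gender, values in figures.items() if values}

print(audit())
```
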

  • View profile for Kalyani Pawar

    AppSec@Zipline - Cohost, Application Security Weekly - RSA/DEF CON Speaker - Red Team Fan Girl - Opinions are my own

    7,537 followers

    “𝗜 𝗮𝘀𝗸𝗲𝗱 𝗖𝗵𝗮𝘁𝗚𝗣𝗧 𝘁𝗼 𝗺𝗮𝗸𝗲 𝗮 𝗰𝗮𝗿𝘁𝗼𝗼𝗻 𝘃𝗲𝗿𝘀𝗶𝗼𝗻 𝗼𝗳 𝗺𝗲... 𝗮𝗻𝗱 𝗶𝘁 𝗺𝗮𝗱𝗲 𝗮 𝗴𝘂𝘆.” I’ve had countless conversations with ChatGPT — mostly about AI, coding, AppSec, or security in general. It’s been a helpful tool — even fun. But this time, something weird happened. I asked it to “freestyle a cartoon illustration of me.” The result? A smiling man with a beard, sitting at a laptop. When I asked why, the explanation was honest — and 𝗲𝘆𝗲-𝗼𝗽𝗲𝗻𝗶𝗻𝗴: “I went with a default character template and missed the mark with gender and vibe. Many image models were trained on internet data that skews toward certain demographics (e.g., young men in tech)... so unless told otherwise, I have to guess.” It was a tiny moment — but it reflected a bigger issue: 𝗧𝗵𝗲 “𝗱𝗲𝗳𝗮𝘂𝗹𝘁” 𝗶𝗻 𝘁𝗲𝗰𝗵 𝗶𝘀 𝘀𝘁𝗶𝗹𝗹 𝗺𝗮𝗹𝗲. As a woman in cybersecurity, I’m used to being the minority in the room. But I didn’t expect AI to carry those same assumptions. This isn’t just about gender. It’s about how we train our models, who we center as the “default,” and the work still needed to make tech (and AI) more inclusive and representative. AI isn’t biased on purpose — 𝗯𝘂𝘁 𝗶𝘁 𝗶𝗻𝗵𝗲𝗿𝗶𝘁𝘀 𝗼𝘂𝗿 𝗯𝗹𝗶𝗻𝗱 𝘀𝗽𝗼𝘁𝘀. And moments like this remind us why representation matters, even in the small stuff. Curious to hear from others — especially folks in tech, AI, or UX. Have you noticed bias baked into your tools of choice? #WomenInTech #AppSec #AIbias #Cybersecurity #RepresentationMatters #GenderBias #InclusiveAI #TechLife

  • View profile for Sharon Peake, CPsychol
    Sharon Peake, CPsychol is an Influencer

    IOD Director of the Year - EDI ‘24 | Management Today Women in Leadership Power List ‘24 | Global Diversity List ‘23 (Snr Execs) | D&I Consultancy of the Year | UN Women CSW67-69 participant | Accelerating gender equity

    29,639 followers

    𝗔𝗜 𝗶𝘀 𝗼𝗻𝗹𝘆 𝗮𝘀 𝗳𝗮𝗶𝗿 𝗮𝘀 𝘁𝗵𝗲 𝘄𝗼𝗿𝗹𝗱 𝗶𝘁 𝗹𝗲𝗮𝗿𝗻𝘀 𝗳𝗿𝗼𝗺. Artificial Intelligence isn’t created in a vacuum - it’s trained on data that reflects the world we’ve built. And that world carries deep, historic inequities. If the training data includes patterns of exclusion, such as who gets promoted, who gets paid more, whose CVs are ‘successful’, then AI systems learn those patterns and replicate them. At scale and at pace. We’re already seeing the consequences:
    🔹 Hiring tools that favour men over women
    🔹 Voice assistants that misunderstand female voices
    🔹 Algorithms that promote sexist content more widely and more often
    This isn’t about a rogue line of code. It’s about systems that reflect the values and blind spots of the people who build them. Yet women make up just 35% of the US tech workforce. And only 28% of people even know AI can be gender biased. That gap in awareness is dangerous. Because what gets built, and how it behaves, depends on who’s in the room. So what are some practical actions we can take?
    Tech leaders:
    🔹 Build systems that are in tune with women’s real needs
    🔹 Invest in diverse design and development teams
    🔹 Audit your tools and data for bias
    🔹 Put ethics and gender equality at the core of AI development, not as an afterthought
    Everyone else:
    🔹 Don’t scroll past the problem
    🔹 Call out gender bias when you see it
    🔹 Report misogynistic and sexist content
    🔹 Demand tech that works for all women and girls
    This isn’t just about better tech. It is fundamentally about fairer futures. #GenderEquality #InclusiveTech #EthicalAI Attached in the comments is a helpful UN article.

  • View profile for Wennie (Wenjian) Allen

    Product Management Executive | AI infused innovation | IT Infrastructure | Data Science, ML

    2,430 followers

    As a working mom in tech, I'm constantly juggling deadlines, childcare schedules, and the ever-present question: am I doing enough 🥺? But lately, a new concern has joined the mix: bias in AI. Reading Caroline Criado Perez's "Invisible Women" opened my eyes to the different ways societal biases can infiltrate even the most cutting-edge technology. The book highlights how data, often collected from a male-centric perspective, can perpetuate gender inequality. This resonates deeply as we develop AI like Generative AI and ChatGPT – are we unknowingly building a future where these biases are baked in? Here's why this matters:
    📈 Flawed data leads to flawed results. AI trained on imbalanced datasets can amplify existing biases, potentially impacting everything from loan approvals to healthcare diagnoses. Imagine a world where an AI assistant prioritizes male candidates for leadership roles, simply because the data reflects a historical norm.
    🦹‍♀️ The "invisibility" of women's needs. Just like the book describes, AI systems might not be programmed to consider women's specific needs. Think car safety features optimized for male body types, or voice assistants that struggle to understand female voices.
    🦾 A missed opportunity for innovation. By excluding women's perspectives, we're limiting the potential of AI. A diverse set of voices leads to more robust solutions that benefit everyone.
    So, what can we do?
    ✅ Demand transparency in AI development. Understanding how data is collected and analyzed is crucial for identifying and mitigating bias.
    ✅ Challenge the status quo. Question assumptions and actively seek diverse perspectives when developing and using AI tools.
    ✅ Support initiatives promoting fairness in AI. Organizations like Women in AI are paving the way for a more inclusive future.
    The fight for gender equality extends beyond boardrooms and political offices. It's a fight for the future we're building with AI. Let's work together to ensure this technology empowers everyone, not just the privileged few. #GenderBias #AI #WomenInTech #GenerativeAI #ChatGPT #Equality https://lnkd.in/gH99cuPQ
