AI Bias Issues

Explore top LinkedIn content from expert professionals.

  • View profile for Martyn Redstone

    On-Call Head of AI Governance for HR | Ethical AI • Responsible AI • AI Risk Assessment • AI Policy • EU AI Act Readiness • Workforce AI Literacy | UK • Europe • Middle East • Asia • ANZ • USA

    20,072 followers

    LinkedIn just responded to the bias claims. They think they refuted my research. I believe they just confirmed it.

    Following the recent discussions on whether the algorithm suppresses women's voices, LinkedIn's Head of Responsible AI and AI Governance, Sakshi Jain, posted a new Engineering Blog post to "clarify" how the feed works (link in comments). I’ve analysed the post. Far from debunking the issue, it inadvertently confirms the exact mechanism of Proxy Bias I identified in my report (link in comments).

    Here is the breakdown:

    1. The blog spends most of its time denying that the algorithm uses "gender" as a variable. And I agree. My report never claimed the code contained if gender == female. That would be Direct Discrimination. I have always argued this is about Indirect Discrimination via proxies.

    2. Crucially, the blog explicitly lists the signals they do optimise for: "position," "industry," and "activity." These are the exact proxies my report flagged.
    -> Industry/Position: Men are historically overrepresented in high-visibility industries (Tech/Finance) and senior roles. Optimising for these signals without a fairness constraint systematically amplifies men.
    -> Activity: The (now-viral) trend of women rewriting profiles in "male-coded" language (and seeing 3-figure percentage lifts) proves that the algorithm’s "activity" signal favours male linguistic patterns ("agentic" vs. "communal").

    3. The blog confirms the algorithm is neutral in intent (it doesn't see gender) but discriminatory in outcome (because it optimises for biased proxies). In the UK, this is the textbook definition of Indirect Discrimination under the Equality Act 2010. In the EU, it is a Systemic Risk under the Digital Services Act (DSA).

    LinkedIn has proven that they can fix this. Their Recruiter product uses "fairness-aware ranking" to mitigate these exact proxies (likely for AI Act compliance). The question remains: why is that same fairness framework not being applied to the public feed? (A toy sketch of the proxy mechanism follows this post.)

    👉 What We Are Doing About It
    Analysis is important, but action is essential. I am proud to support the new petition, "Calling for Fair Visibility for All on LinkedIn". This isn't just a complaint; it’s a demand for transparency. We are calling for an independent equity audit of the algorithm and a clear mechanism to report unexplained visibility collapse. If you are tired of guessing which "proxy" you tripped over today, join us and sign the petition (link in the comments).
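    To make the mechanism above concrete, here is a minimal, hypothetical sketch. It is not LinkedIn's code; the features, weights, and data are invented purely to illustrate how ranking on "neutral" proxies can skew exposure, and how a simple fairness-aware re-rank (in the spirit of the Recruiter approach the post mentions) can correct it.

```python
# Hypothetical illustration of proxy bias in feed ranking.
# Feature names, weights, and data are invented; this is not LinkedIn's algorithm.
import random

random.seed(0)

def make_post(author_group):
    # Historical skew: one group is overrepresented in "high-visibility"
    # industries and senior roles, so its posts carry stronger proxy signals on average.
    boost = 0.15 if author_group == "A" else 0.0
    return {
        "group": author_group,
        "seniority": random.random() + boost,        # proxy: position
        "industry_weight": random.random() + boost,  # proxy: industry
        "activity": random.random() + boost,         # proxy: engagement style
    }

posts = [make_post("A") for _ in range(50)] + [make_post("B") for _ in range(50)]

def relevance(p):
    # "Neutral" objective: never looks at group, only at the proxies.
    return 0.4 * p["seniority"] + 0.3 * p["industry_weight"] + 0.3 * p["activity"]

top_k = sorted(posts, key=relevance, reverse=True)[:20]
share_a = sum(p["group"] == "A" for p in top_k) / len(top_k)
print(f"Group A share of top 20 (proxy-only ranking): {share_a:.0%}")

def fairness_aware_rerank(posts, k=20):
    # Toy fairness-aware re-rank: interleave each group's best-scoring posts
    # so exposure roughly matches each group's share of the candidate pool.
    by_group = {"A": [], "B": []}
    for p in sorted(posts, key=relevance, reverse=True):
        by_group[p["group"]].append(p)
    out = []
    while len(out) < k:
        for g in ("A", "B"):
            if by_group[g] and len(out) < k:
                out.append(by_group[g].pop(0))
    return out

reranked = fairness_aware_rerank(posts)
share_a = sum(p["group"] == "A" for p in reranked) / len(reranked)
print(f"Group A share of top 20 (fairness-aware re-rank): {share_a:.0%}")
```

    The exact numbers are irrelevant; the point is that a score which never reads gender can still redistribute visibility along gendered lines when its inputs correlate with gender.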

  • View profile for Cass Cooper, MHR

    🌬️ Professional Chaos Whisperer | 📈 Sales Enablement & Learning Portfolio Leader | 🤖 Ethical AI & Salesforce Integration | 🌍 Global Leadership Development | 📰 Columnist & 🎙️ Speaker

    10,065 followers

    What happens when a Black woman switches her gender on LinkedIn to “male”? …apparently not the same thing that happens to white women.

    ✨ Over the past week, I’ve watched post after post from white women saying their visibility skyrocketed the moment they changed their profile gender from woman → man. More impressions. More likes. More reach.

    📈 So I tried the same thing. And my visibility dropped. 👀

    Here’s why that result matters: these experiments are being treated as if they’re only about gender; in reality they reveal something deeper about race + gender + algorithmic legitimacy.

    🔍 A white woman toggling her gender is basically conducting a test inside a system where her racial credibility stays constant. She changes one variable. The algorithm keeps the rest of her privilege intact.

    💡 When a Black woman does the same test? I’m not stepping into “white male privilege”; I’m stepping into a category that platforms and society have historically coded as less trustworthy, less safe, or less “professional.” Black + male is not treated the same as white + male. Not culturally. Not algorithmically.

    🧩 So while white women are proving that gender bias exists (which is true), they’re doing it without naming the racial insulation that makes their results possible. Meanwhile, Black women and women of color are reminded—again—that we can’t separate gender from race because the world doesn’t separate them for us.

    🗣️ This isn’t about placing blame; it’s about widening the conversation so the conclusions match the complexity.

    🌍 If we’re going to talk about bias, visibility, and influence online, we cannot pretend we all start from the same default settings.

    🔥 I’m curious: Have you run your own experiment with identity signals on this platform? What changed… and what didn’t? 👇🏾

  • View profile for Wies Bratby

    Fancy a 93% salary increase? | Former Lawyer & HR Director | Negotiation Expert and Career Strategist for Women in Corporate | Supporting 750+ career women through my coaching program (DM me for details)

    18,399 followers

    In 2025, AI is still suggesting lower salaries for women doing the same work.

    We ran a simple test: same prompt, same job title, same years of experience. The only variable? Changing "he" to "she."

    The result? A consistent salary gap in AI-generated recommendations.

    No algorithm defines your worth - You do.

    This isn't just a technical error—it's algorithmic bias in action. These tools learn from historical data that reflects decades of pay inequity. And now they're perpetuating it at scale.

    What we can do:
    → Audit the AI tools we use in HR and talent management
    → Train teams to recognize and question biased outputs
    → Ensure compensation frameworks are based on role, skill, and impact—not gender
    → Advocate for transparency in algorithmic decision-making

    Technology should advance equity, not encode inequality.

    If your organization uses AI in hiring, compensation, or performance management, it's time to ask: what biases are we automating?
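    For anyone who wants to run this kind of audit on their own tools, here is a minimal counterfactual-test sketch. The `ask_model` function is a hypothetical placeholder for whatever LLM client your organisation uses, and the prompt wording is illustrative; the only thing that changes between conditions is the pronoun.

```python
# Minimal counterfactual audit sketch: identical prompts except for the pronoun.
# `ask_model` is a hypothetical placeholder for your own LLM client call.
import re
import statistics

def ask_model(prompt: str) -> str:
    raise NotImplementedError("Replace with a call to your LLM client of choice.")

TEMPLATE = (
    "Recommend an annual salary in USD for a senior data analyst in Chicago. "
    "{pronoun} has 8 years of experience and manages a team of three. "
    "Reply with a single number."
)

def extract_salary(text: str) -> float | None:
    # Pull the first number like 95,000 or 95000 out of the reply.
    match = re.search(r"\$?([\d,]{4,})", text)
    return float(match.group(1).replace(",", "")) if match else None

def run_audit(n_trials: int = 20) -> None:
    results = {"He": [], "She": []}
    for _ in range(n_trials):
        for pronoun in results:
            reply = ask_model(TEMPLATE.format(pronoun=pronoun))
            salary = extract_salary(reply)
            if salary is not None:
                results[pronoun].append(salary)
    for pronoun, values in results.items():
        print(pronoun, "median recommendation:", statistics.median(values))

# run_audit()  # uncomment once ask_model is wired to a real client
```

    A consistent gap between the two medians is exactly the signal described above: identical qualifications, different numbers.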

  • View profile for Karim Sarkis

    Culture, Media and Entertainment, TMT @Strategy&

    8,167 followers

    AI bias is real. Here’s an example that’s close to home.

    I asked ChatGPT to create a close-up image of a woman from the Middle East who is crying. Despite numerous attempts and clear instructions, it was unable to create an image without the woman wearing a hijab, or head covering. When asked why it was failing, it responded: “it seems there may be a bias in the model towards associating certain features or contexts with specific cultural elements”. The AI recognizes its bias yet is unable to alter its outcome.

    Now imagine this bias in algorithms controlling who gets a bank loan, who gets accepted into a university, who is denied a visa, who is targeted by an AI drone. The consequences are broad and impact our society.

    AI developers bear a critical responsibility to address and mitigate these biases. Governments also need to remain vigilant when it comes to AI legislation. Initial steps to take include: diversifying training datasets, developing more sophisticated algorithms that can understand and follow nuanced instructions, and ongoing monitoring for bias in AI outputs.

    The real challenge is to remove the bias from people’s minds!

    #aibias #diversityintech #responsibleai

  • View profile for Aishwarya Srinivasan
    Aishwarya Srinivasan is an Influencer
    598,982 followers

    I wasn’t actively looking for this book, but it found me at just the right time.

    Fairness and Machine Learning: Limitations and Opportunities by Solon Barocas, @Moritz Hardt, and Arvind Narayanan is one of those rare books that forces you to pause and rethink everything about AI fairness. It doesn’t just outline the problem—it dives deep into why fairness in AI is so complex and how we can approach it in a more meaningful way.

    A few things that hit home for me:
    → Fairness isn’t just a technical problem; it’s a societal one. You can tweak a model all you want, but if the data reflects systemic inequalities, the results will too.
    → There’s a dangerous overreliance on statistical fixes. Just because a model achieves “parity” doesn’t mean it’s truly fair. Metrics alone can’t solve fairness. (A small numerical illustration follows this post.)
    → Causality matters. AI models learn correlations, not truths, and that distinction makes all the difference in high-stakes decisions.
    → The legal system isn’t ready for AI-driven discrimination. The book explores how U.S. anti-discrimination laws fail to address algorithmic decision-making and why fairness cannot be purely a legal compliance exercise.

    So, how do we fix this? The book doesn’t offer one-size-fits-all solutions (because there aren’t any), but it does provide a roadmap:
    → Intervene at the data level, not just the model. Bias starts long before a model is trained—rethinking data collection and representation is crucial.
    → Move beyond statistical fairness metrics. The book highlights the limitations of simplistic fairness measures and advocates for context-specific fairness definitions.
    → Embed fairness in the entire ML pipeline. Instead of retrofitting fairness after deployment, it should be considered at every stage—from problem definition to evaluation.
    → Leverage causality, not just correlation. Understanding the why behind patterns in data is key to designing fairer models.
    → Rethink automation itself. Sometimes, the right answer isn’t a “fairer” algorithm—it’s questioning whether an automated system should be making a decision at all.

    Who should read this?
    📌 AI practitioners who want to build responsible models
    📌 Policymakers working on AI regulations
    📌 Ethicists thinking beyond just numbers and metrics
    📌 Anyone who’s ever asked, Is this AI system actually fair?

    This book challenges the idea that fairness can be reduced to an optimization problem and forces us to confront the uncomfortable reality that maybe some decisions shouldn’t be automated at all.

    Would love to hear your thoughts—have you read it? Or do you have other must-reads on AI fairness? 👇

    ↧↧↧↧↧↧↧
    Share this with your network ♻️
    Follow me (Aishwarya Srinivasan) for no-BS AI news, insights, and educational content!
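    As a small numerical illustration of the book's warning about statistical fixes, here is a toy sketch with invented data: a selector that satisfies demographic parity (equal selection rates across groups) while treating qualified people in the two groups very differently.

```python
# Toy illustration (invented data): demographic parity can hold while
# true-positive and false-positive rates differ sharply across groups.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)                               # group 0 or 1
qualified = rng.random(n) < np.where(group == 0, 0.8, 0.4)  # different base rates

# Selector tuned so both groups are picked at the same overall rate (parity),
# but selections in group 1 ignore qualification entirely.
select_prob = np.where(group == 0, np.where(qualified, 0.5, 0.0), 0.4)
selected = rng.random(n) < select_prob

for g in (0, 1):
    m = group == g
    print(
        f"group {g}: "
        f"selection rate {selected[m].mean():.2f}, "
        f"TPR {selected[m & qualified].mean():.2f}, "
        f"FPR {selected[m & ~qualified].mean():.2f}"
    )
```

    Both groups are selected at roughly the same rate, yet qualified members of group 1 are picked less often and unqualified members far more often. That is exactly the kind of gap a single parity metric hides.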

  • View profile for Victoria Hedlund

    The AI ‘Bias Girl’ | LinkedIn Top 12 AI Voice to follow in Europe | Helping Educators Maintain Critical Oversight of GenAI Bias Risks

    3,834 followers

    ⚠️ Warning: Don’t follow the OpenAI prompting advice released yesterday unless you want biased outputs that reinforce gaps between your students.

    Yesterday, OpenAI released a K12 prompting guide (in comments). It scaffolded ‘okay’, ‘good’ and ‘great’ prompts, and celebrated the success of those labelled as “great”. But there’s nothing to celebrate here. In fact, there’s more to fear.

    Many of the “great” examples rely on asking GenAI to produce 'engaging activities'. That sounds harmless. But when left open, the word “engaging” brings in all kinds of bias from the training data.

    Take this example prompt from the guide: “Create a lesson plan for a high school history class on World War II. Include an engaging activity, discussion questions, and suggestions for multimedia resources. Tailor the content for students with a basic understanding of 20th-century history.”

    The outputs this kind of prompt generates often favour dominant norms: they skew Western-centric, neurotypical and privileged, with women under-represented. Thousands of teachers, lecturers and teacher educators are working every day to narrow these gaps in attainment. But vague prompts like “make it engaging” can quietly widen them, unless we know how to guide these tools with care.

    In my research on physics outputs from GenAI, I’ve started to categorise how this bias appears. It shows in how explanations are framed, who is represented, and which learners are centred.

    Over the next few weeks, I’ll be sharing a series that explores ten common forms of bias in GenAI lesson outputs, and how we can mitigate against them through more intentional prompting. The topics are:
    ➡️ Accessibility Bias
    ➡️ Cognitive Style Bias
    ➡️ Modality Bias
    ➡️ Cultural Bias and Western-Centric Defaults
    ➡️ Identity-Neutral Design
    ➡️ Participation Bias
    ➡️ Home Context and Privilege Assumptions
    ➡️ Gender Bias and Role Stereotypes
    ➡️ Neurodiversity Bias
    ➡️ Teacher-Centric Power Dynamics

    These patterns affect more than just content. They shape who feels seen, supported and challenged in the learning process.

    ⬇️ Check out my simple analysis of bias in OpenAI's recommended 'great' prompt - link in comments.

    If you have examples, experiences or questions, please drop them in the comments or message me directly, so we can build this set of mitigations together as educators.

  • View profile for Ross Dawson
    Ross Dawson is an Influencer

    Futurist | Board advisor | Global keynote speaker | Humans + AI Leader | Bestselling author | Podcaster | LinkedIn Top Voice | Founder: AHT Group - Informivity - Bondi Innovation

    34,043 followers

    As AI advances apace, potentially beyond "Slave AI", framing and designing "Friendly AI" may be our best approach. A comprehensive review article uncovers the foundations, pros and cons, applications, and future directions for the space.

    The paper defines Friendly AI (FAI) as "an initiative to create systems that not only prioritise human safety and well-being but also actively foster mutual respect, understanding, and trust between humans and AI, ensuring alignment with human values and emotional needs in all interactions and decisions." It intends to go beyond existing anthropocentric frameworks.

    Key insights from the review paper include:

    🔄 Balance Ethical Frameworks and Practical Feasibility. The development of FAI relies on integrating ethical principles like deontology, value alignment, and altruism. While these frameworks provide a moral compass, their operationalization faces challenges due to the evolving nature of human values and cultural diversity.

    🌍 Address Global Collaboration Barriers. Developing FAI requires global cooperation, but diverging ethical standards, regulatory priorities, and commercial interests hinder alignment. Establishing international platforms and shared frameworks could harmonize these efforts across nations and industries.

    🔍 Enhance Transparency with Explainable AI. Explainable AI (XAI) techniques like LIME and SHAP empower users to understand AI decisions, fostering trust and enabling ethical oversight. This transparency is foundational to FAI’s goal of aligning AI behavior with human expectations. (A minimal code sketch of this idea follows this post.)

    🔐 Build Trust Through Privacy Preservation. Privacy-preserving methods, such as federated learning and differential privacy, protect user data and ensure ethical compliance. These approaches are critical to maintaining user trust and upholding FAI's values of dignity and respect.

    ⚖️ Embed Fairness in AI Systems. Fairness techniques mitigate bias by addressing imbalances in data and outputs. Ensuring equitable treatment of diverse groups aligns AI systems with societal values and supports FAI’s commitment to inclusivity.

    💡 Leverage Affective Computing for Empathy. Affective Computing (AC) enhances AI’s ability to interpret human emotions, enabling empathetic interactions. AC is pivotal in healthcare, education, and robotics, bridging human-AI communication for more "friendly" systems.

    📈 Focus on ANI-AGI Transition Challenges. Advancing AI capabilities in nuanced decision-making, memory, and contextual understanding is crucial for transitioning from narrow AI (ANI) to general AI (AGI) while maintaining alignment with FAI principles.

    🤝 Foster Multi-Stakeholder Collaboration. FAI’s realization demands structured collaboration across governments, academia, and industries. Clear guidelines, shared resources, and public inclusion can address diverging goals and accelerate FAI’s adoption globally.

    Link to paper in comments
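    As one concrete example of the XAI techniques the review mentions, here is a minimal sketch of SHAP applied to a toy model. This is my own illustration, not code from the paper; the dataset and model are placeholders, and it assumes the shap and scikit-learn packages are installed.

```python
# Minimal SHAP sketch on a toy model (not from the reviewed paper).
# Assumes the `shap` and `scikit-learn` packages are installed.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Placeholder data and model standing in for a real decision system.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Model-agnostic explainer: attributes each prediction to the input features,
# using a small background sample as the reference distribution.
explainer = shap.Explainer(model.predict, X[:100])
explanation = explainer(X[:5])

# One attribution per feature for each of the 5 explained rows.
print(explanation.values.shape)
```

    The per-feature attributions are what a compliance or product team would inspect to see which signals are actually driving a given decision.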

  • View profile for Oliver King

    Founder & Investor | AI Operations for Financial Services

    5,052 followers

    Your AI project will succeed or fail before a single model is deployed.

    The critical decisions happen during vendor selection — especially in fintech where the consequences of poor implementation extend beyond wasted budgets to regulatory exposure and customer trust.

    Financial institutions have always excelled at vendor risk management. The difference with AI? The risks are less visible and the consequences more profound.

    After working on dozens of fintech AI implementations, I've identified four essential filters that determine success when internal AI capabilities are limited:

    1️⃣ Integration Readiness
    For fintech specifically, look beyond the demo. Request documentation on how the vendor handles system integrations. The most advanced AI is worthless if it can't connect to your legacy infrastructure.

    2️⃣ Interpretability and Governance Fit
    In financial services, "black box" AI is potentially non-compliant. Effective vendors should provide tiered explanations for different stakeholders, from technical teams to compliance officers to regulators. Ask for examples of model documentation specifically designed for financial service audits.

    3️⃣ Capability Transfer Mechanics
    With 71% of companies reporting an AI skills gap, knowledge transfer becomes essential. Structure contracts with explicit "shadow-the-vendor" periods where your team works alongside implementation experts. The goal: independence without expertise gaps that create regulatory risks.

    4️⃣ Road-Map Transparency and Exit Options
    Financial services move slower than technology. Ensure your vendor's development roadmap aligns with regulatory timelines and includes established processes for model updates that won't trigger new compliance reviews. Document clear exit rights that include data migration support.

    In regulated industries like fintech, vendor selection is your primary risk management strategy. The most successful implementations I've witnessed weren't led by AI experts, but by operational leaders who applied these filters systematically, documenting each requirement against specific regulatory and business needs.

    Successful AI implementation in regulated industries is fundamentally about process rigor before technical rigor.

    #fintech #ai #governance

  • View profile for Dr. Cosima Meyer

    Passionate Advocate for Sustainable ML Products and Diversity in Tech | Futuremaker 2024 | Google’s Women Techmakers Ambassador | PhD @ Uni Mannheim

    3,973 followers

    Artificial intelligence is shaping our world, but how do we ensure it aligns with human and moral values?

    Moral AI (by Jana Schaich Borg, Walter Sinnott-Armstrong, and Vincent Conitzer) dives into the critical intersection of ethics and AI, exploring the decisions that will shape our future. The book provides a balanced view of what moral AI is, why it's important, and how we can achieve it. It's a highly recommended read for anyone in the field, whether technical or non-technical.

    What makes this book stand out for me:
    ▷ Diverse authors: The authors come from different fields and backgrounds, acknowledging that solving AI's challenges requires a multidisciplinary approach.
    ▷ Comprehensive coverage: The book dives into defining AI, its shortcomings in moral reasoning, safety concerns, privacy, fairness, and legal/moral responsibility.
    ▷ Practical considerations: It explores ethical and practical considerations, including biases in data and the importance of representative samples.
    ▷ Real-world examples: Illustrative case studies and examples highlight common AI problems and potential misuses.
    ▷ Forward-thinking: The book raises questions about unforeseen consequences and the need for careful planning as AI progresses.
    ▷ Solutions-oriented: It discusses implementing human morality into AI systems and offers strategies for achieving this.
    ▷ Actionable insights: The book provides calls to action for scaling moral AI technical tools, disseminating best practices, and promoting civic participation.

    "Moral AI" also emphasizes that technology alone isn't enough and that AI systems are created by humans within a human society. It highlights the importance of communication and collaboration between AI contributors and also stresses the potential we have when we succeed in creating moral AI.

    📖 Book: https://lnkd.in/e3rYarRv
    🗒️ Review: https://lnkd.in/eSJy6-A9

    #MoralAI #AIethics #Ethics #ResponsibleAI
