As I delved into BCG's latest Women on AI usage report (https://lnkd.in/efvZd7MP), I was struck by a powerful insight: 68% of women in our industry use GenAI tools at work more than once a week, slightly higher than the 66% of men. Yet my experience with AI image-generation tools reveals a glaring bias that made me pause and reflect.

Playground 2.5
- When I prompted for an abstract illustration of a person climbing stairs with milestones, it defaulted to male images.
- A vertical timeline of skill development and accomplishments returned male images.

Google Imagen
- Describing a basketball player embracing values of integrity, compassion, and authenticity, again only male images appeared.
- A collage representing diverse experiences and intellectual pursuits: no women in sight.

OpenAI DALL-E
- Prompted for a basketball player in a classical music hall, performing physics experiments, integrating integrity, compassion, and authenticity, and received an image of a male player.

These biases in AI-generated images made me think twice about the technology we rely on.
1. Is my #prompting technique wrong? Should I be more specific to get images of #women?
2. It seems that the representation of #cismen is hardwired into these tools. This raises a crucial question: should AI models like ChatGPT ask clarifying questions about the kind of person we are envisioning, to ensure accurate and #inclusive outputs?
3. It could be my own #bias interpreting these figures as cis men, but it's also a reflection of how we've been wired and, importantly, how AI is being trained.

Despite women leading in AI tool usage, the tools often fail to represent us accurately because they mirror the biases ingrained in their training data. As I prepare the next edition of the GenAI in HR – My Playbook newsletter, this realization underscores the critical need to address inherent biases in AI models. We need AI that is truly inclusive, reflecting the diversity and richness of everyone's experiences.
This is a call to action. Let's challenge these biases and work toward AI that sees and represents us all. Stay tuned for more insights in the upcoming newsletter. Subscribe on LinkedIn: https://lnkd.in/eG8BmKhr #AI #DiversityInTech #WomenInAI #GenderBias #InclusiveAI
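The post's first question, whether being more specific in the prompt would surface images of women, can be made concrete. The sketch below is a hypothetical helper of my own (the function name and descriptor list are assumptions, not part of any image-generation API): it swaps the generic phrase "a person" for an explicit, varied descriptor before the prompt is sent to the model, so the generator is not left to fall back on its defaults.

```python
import random

# Hypothetical descriptor pool; in practice this would be broader and
# tailored to the people the image is meant to represent.
DESCRIPTORS = [
    "a woman",
    "a man",
    "a non-binary person",
    "an older woman",
    "a young Black woman",
]

def debias_prompt(prompt, rng=None):
    """Replace the generic 'a person' with an explicit, randomly chosen
    descriptor so the image model cannot silently default to one demographic."""
    rng = rng or random.Random()
    descriptor = rng.choice(DESCRIPTORS)
    return prompt.replace("a person", descriptor)

# Example: the stair-climbing prompt from the post above.
base = "an abstract illustration of a person climbing stairs with milestones"
print(debias_prompt(base, random.Random(0)))
```

This only addresses the symptom at the prompt layer; as the post argues, the deeper fix is more representative training data, but explicit prompting is one lever users have today.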
Understanding Diverse Viewpoints on AI Technology
Explore top LinkedIn content from expert professionals.
Summary
Understanding diverse viewpoints on AI technology requires considering the various perspectives, biases, and societal impacts surrounding artificial intelligence. By engaging with diverse communities and highlighting inclusivity, we can guide AI development to reflect the vast experiences and values of all humans.
- Address inherent biases: Recognize and challenge the biases in AI systems by working toward more inclusive and diverse training data and development teams.
- Encourage dialogue and education: Create spaces for open conversations about the societal implications of AI and encourage education to ensure all communities can participate and benefit from technological advancements.
- Advocate for transparency: Demand openness from developers about how AI systems are built and trained to ensure accountability and ethical use of the technology.
-
I've been doing a lot of reading and thinking about artificial intelligence for a while, and I feel it's really important that we all educate ourselves on this rapidly evolving technology. AI is already being used in so many areas, for good and for bad, from language models that can write amazing poetry and code to facial recognition systems misused for surveillance.

As a Black woman, I'm especially concerned about making sure our communities don't get left behind or exploited as AI continues advancing at a remarkable pace. We've seen time and again how new technologies can be misused in ways that harm marginalized groups. Just look at how data-driven algorithms have baked in real-world biases around race, gender, and more.

I truly believe that having more Black people, and especially Black women, knowledgeable about AI will be crucial. We need to be at the table shaping these tools and the laws around them, not just being acted upon by external forces we don't understand. AI isn't going anywhere; in fact, it will only become more widespread and powerful. So we have to get ahead of the curve!

I know there is also a lot of skepticism and concern around AI, even for technologies we already use like Google Maps, Alexa, and recommendation algorithms. And that's fair! We should be questioning the systems that impact our lives, especially given the historical precedent of new technologies being weaponized against our communities. But burying our heads in the sand isn't the answer either.

Whether it's studying computer science and data ethics, or just learning the basics of machine learning techniques, I encourage everyone to make time to understand this transformative shift happening in technology. We can't let AI be another example of our communities being left in the dark about world-altering changes until it's too late. Let's get informed, get involved, and make sure AI develops in a way that empowers and uplifts us. Our future selves will thank us.
Ayana Elon Founder, Black Girl Ai www.blackgirlai.com #BlackGirlAi #AI #Tech #Learning #Community #Empowerment #Technology
-
As AI becomes more ubiquitous and robust, ensuring it is aligned with the goals of diverse communities is crucial. AI systems are the product of many decisions made by the people who develop and deploy them. Working with diverse communities is therefore necessary to build responsible AI that benefits everyone and warrants people's trust. By engaging with diverse communities, we can learn from their perspectives, experiences, and challenges, and co-create AI solutions that are fair, inclusive, and beneficial for all. Moreover, we can foster trust, collaboration, and innovation among different stakeholders and empower communities to participate in the AI ecosystem. I spent the last week at the United Nations diving into this topic. The teams at UN Women & Unstereotype Alliance allowed me to share how teams use Microsoft's Inclusive Design Toolkit to partner with diverse communities to understand their goals, guiding AI product development toward more equitable outcomes by keeping people and their goals at the center of system design decisions. The toolkit and more can be found at https://lnkd.in/eTdpKhGY
-
While AI models are emblematic of the data they're trained on, they also inadvertently mirror the perspectives and biases of their creators. This challenges the myth of AI neutrality and underscores the broader complexity of bias, which isn't merely a data issue but permeates every stage of model development. It's not enough to acknowledge these biases; we need to understand their intricacies, especially in a society grappling with polarizing views. Instead of an impossible standard of neutrality, maybe our focus should shift to honesty and customization. By offering transparency about a model's inherent biases and allowing users to personalize their AI interactions, we might strike a balance that serves diverse perspectives without amplifying misinformation. Yet with every stride in AI customization, we tread on a double-edged sword: the same tools that can weed out unpleasantness can also be weaponized to reinforce misinformation, underscoring the ethical complexities we must navigate. As AI's role in society becomes more pronounced, so does the imperative for ethical diligence and transparency. #Transparency #Personalization #EthicalAI #AIandPrivacy
-
I learned a lot last week at NeurIPS; I'll try to share some excellent talks and posters here over the next couple of days. The first is "The Many Faces of Responsible AI" by Lora Aroyo, where, in her words, she presented a number of data-centric use cases that illustrate the inherent ambiguity of content and the natural diversity of human perspectives, which cause unavoidable disagreement that needs to be treated as signal, not noise. This leads to a call to action to establish culturally aware and society-centered research on the impacts of data quality and data diversity for training and evaluating ML models and for fostering responsible AI deployment in diverse sociocultural contexts.

Abstract: Conventional machine learning paradigms often rely on binary distinctions between positive and negative examples, disregarding the nuanced subjectivity that permeates real-world tasks and content. This simplistic dichotomy has served us well so far, but because it obscures the inherent diversity in human perspectives and opinions, as well as the inherent ambiguity of content and tasks, it limits model performance relative to real-world expectations. This becomes even more critical when we study the impact and potential multifaceted risks associated with the adoption of emerging generative AI capabilities across different cultures and geographies. To address this, we argue that to achieve robust and responsible AI systems we need to shift our focus away from a single point of truth and weave a diversity of perspectives into the data used by AI systems to ensure the trust, safety, and reliability of model outputs... Available at:
-
Artificial Intelligence is based on a limited view of what "intelligence" actually is. AI automates the kind of intelligence that has to do with logic, grammar, and math: the kind of intelligence that gets you good grades in school, the kind that engineers excel at. AI admittedly can do wonders analyzing vast amounts of data, so we are seeing its usefulness in market intelligence, scenario planning, and formulaic styles of writing. (LinkedIn is offering a rewrite of this post, which I'm sure will be better designed to attract your attention based on known advertising principles.)

But, to quote the Bard, "There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy." Psychologists have identified multiple other kinds of intelligence, e.g. emotional intelligence, social intelligence, spatial intelligence, somatic intelligence. Strategic leaders show a wide range of these intelligences. Strategy is the creation of unique, differentiating, inspiring choices that challenge our limits. This is where the boundaries of artificial so-called intelligence are most obvious. Hatching a great strategy is a creative act that brings the whole being, and the social organism of a team, into play. It might begin with an intuitive spark, an out-of-the-box observation, and get refined through conversations for possibility that pose provocative questions like "How might we..." "What if..." "Is it possible that..." These conversations build commitment to a new direction and align the creativity of a team.

For more skeptical thoughts on AI, see my article at https://lnkd.in/gi2cCWra
-
AI Reflections Over Thanksgiving

Thanksgiving was full of family fun and downtime. In those quiet moments, I thought about how AI is showing up in our daily lives. It's these human moments, sharing stories, playing games, and enjoying movies and sports together, that show the beauty of what it means to be human, a beauty AI doesn't yet capture.

A personal story that stands out is my husband's. He lost his right leg to cancer at age 10 and has used various prosthetics over the years. His most recent advanced bionic legs adapt to his gait and the terrain, highlighting the progress and potential of AI to improve lives. As a published author of young adult sci-fi novels, he is both excited and worried as AI makes inroads into the publishing world. Can AI augment creativity, or will it overshadow the human touch in storytelling?

The power of human connection, like 72K people at Wembley Stadium singing "We Are the Champions" or NFL fans in Frankfurt belting out the chorus of "Sweet Caroline" at a Dolphins vs. Chiefs game, brings this home. Music, sports, storytelling, and art bring us together in ways AI can't copy. We need to keep these moments of connection alive, especially in a world that often feels divided. Navigating the AI era requires us to appreciate and preserve these uniquely human and cultural experiences, embracing AI's benefits while understanding its limitations. Here are some things we can do:
1) Understand AI - Understanding and staying abreast of AI innovation will help us guide its development, deployment, and use thoughtfully.
2) Advocate for transparency - Pushing for clear, open information about how AI works and is used is crucial. This helps ensure that AI advancements are ethical and beneficial for everyone.
3) Consider different perspectives - We need to consider varied viewpoints and tradeoffs to influence AI's role in society (https://lnkd.in/gdZntSb3).
4) Empathize with others - It's important to respect individual situations and unique journeys. Someone's choices about their use of AI can be deeply personal.

How has AI changed your life or job? Let's share stories and find that middle ground in a sometimes polarized conversation about AI, embracing it responsibly. #ResponsibleAI #Ethics #Humanity GrowthPath Partners
-
The intersection of #education, #technology, and #societal issues presents a nuanced landscape, one vividly illustrated by the contrasting stories of Mary Wood's reprimand for her progressive curriculum and the global phenomenon of AI nationalism. These scenarios, while seemingly disparate, converge on a critical point: the transformative role of technology in education and its potential to either bridge or exacerbate societal divides.

Her students reported her for a lesson on race. Can she trust them again? Mary Wood faced reprimand (links below) for teaching Ta-Nehisi Coates's exploration of race, a decision that sparked debate over educational content and the role of educators in addressing complex societal issues. Her story underscores the delicate balance between fostering critical thinking and navigating the sensitivities of a diverse student body in an era where discussions on race and identity are increasingly polarized.

AI Nationalism: A New Digital Divide in the Making
Parallel to this is the challenge posed by AI nationalism, a trend in which the race to dominate AI technology leads nations to closely guard their innovations. This phenomenon threatens to create a new kind of divide in the digital realm, one shaped by local data-access rules that could limit the exchange of ideas and innovations.

The Bridge
Bridging these narratives is a common theme: the struggle to balance progress with inclusivity. In Wood's case, it's integrating race discussions to foster understanding, not alienation. With AI nationalism, it's ensuring AI advances don't widen global inequities or limit educational collaboration. To navigate these challenges, a multifaceted approach is needed:
➊ Open Dialogues: Foster classroom discussions on societal issues.
➋ Global AI Collaboration: Encourage worldwide cooperation in AI development.
➌ Inclusive AI: Ensure AI development includes diverse perspectives.
➍ Digital Literacy: Teach students to critically assess content and technology.
➎ Bridge Divides: Use AI to enhance understanding across cultural and societal lines.

How can we, as educators, technologists, and policymakers, collaborate to ensure technology not only advances education but does so in a way that fosters inclusivity and understanding? Please share your thoughts and experiences. #EducationReform #AIinEducation #inclusiveai #equitableai #bridgingdivides
-
The White House release of the Executive Order on the Safe, Secure, and Trustworthy Development and Use of #ArtificialIntelligence (AI) (https://lnkd.in/e6BkKU9V) continues a discussion on the issues surrounding this enabling #technology, ranging from "existential threat to humanity" to "it must be embraced." Dr. James Marrone and Dr. Marek P. (Marek N. Posard) wrote a commentary for the RAND Corporation examining the yin-and-yang debate over AI within society, noting that "objective study can help policymakers and the public understand how to harness the benefits of AI while also mitigating its risks." As they look at AI across many contexts and frameworks, including policy, this concluding quote was key: "esoteric philosophical debate about the supposed true nature of AI will sideline the search for evidence to solve the very real problems afflicting society today." This kind of intelligent, fact-based thinking is the #motivation needed on a global scale to better understand and mitigate known or perceived risks of AI. The recent Office of the Director of National Intelligence Annual Threat Assessment of the U.S. Intelligence Community (https://lnkd.in/g7naE-8F) warns that AI-based capabilities are "being developed and are proliferating faster than companies and governments can shape norms, protect privacy, and prevent dangerous outcomes," and this perception should drive our #culture to continue to openly discuss the yin and yang of AI. As we continue to embrace new technology like AI, Swalé® published an article in Forbes (https://lnkd.in/eyRhUMHK) this year that offers a fitting perspective on AI and fear to close this post: "fear mongering that has always been a part of technology is today causing highly skilled individuals to question their ability to have value in the future, as if one day they'll wake up and AI will be more human than they are."