✳ Bridging Ethics and Operations in AI Systems ✳

Governance for AI systems needs to balance operational goals with ethical considerations. #ISO5339 and #ISO24368 provide practical tools for embedding ethics into the development and management of AI systems.

➡ Connecting ISO5339 to Ethical Operations

ISO5339 offers detailed guidance for integrating ethical principles into AI workflows. It focuses on creating systems that are responsive to the people and communities they affect.

1. Engaging Stakeholders
Stakeholders impacted by AI systems often bring perspectives that developers overlook. ISO5339 emphasizes working with users, affected communities, and industry partners to uncover potential risks and ensure systems are designed with real-world impact in mind.

2. Ensuring Transparency
AI systems must be explainable to maintain trust. ISO5339 recommends designing systems that can communicate how decisions are made in a way that non-technical users can understand. This is especially critical where decisions directly affect lives, such as in healthcare or hiring.

3. Evaluating Bias
Bias in AI systems often arises from incomplete data or unintended algorithmic behavior. ISO5339 supports ongoing evaluations to identify and address these issues during development and deployment, reducing the likelihood of harm.

➡ Expanding on Ethics with ISO24368

ISO24368 takes a broader view of the societal and ethical challenges of AI, offering additional guidance on long-term accountability and fairness.

✅ Fairness: AI systems can unintentionally reinforce existing inequalities. ISO24368 emphasizes assessing decisions to prevent discriminatory impacts and to align outcomes with social expectations.
✅ Transparency: Systems that operate without clarity risk losing user trust. ISO24368 highlights the importance of processes in which decision-making paths are fully traceable and understandable.
✅ Human Accountability: Decisions made by AI should remain subject to human review. ISO24368 stresses the need for mechanisms that allow organizations to take responsibility for outcomes and override decisions when necessary.

➡ Applying These Standards in Practice

Ethical considerations cannot be separated from operational processes. ISO24368 encourages organizations to incorporate ethical reviews and risk assessments at each stage of the AI lifecycle. ISO5339 focuses on embedding these principles during system design, ensuring that ethics is part of both the foundation and the long-term management of AI systems.

➡ Lessons from #EthicalMachines

In "Ethical Machines", Reid Blackman, Ph.D. highlights the importance of making ethics practical. He argues for actionable frameworks that ensure AI systems are designed to meet societal expectations as well as business goals. Blackman's focus on stakeholder input, decision transparency, and accountability aligns closely with the goals of ISO5339 and ISO24368, giving organizations a clear way forward.
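Neither standard prescribes a specific fairness metric, but the ongoing bias evaluations described above often start with something as simple as comparing selection rates across groups. The sketch below is purely illustrative (the metric choice, group labels, and data are assumptions, not drawn from ISO5339 or ISO24368):

```python
# Illustrative bias check: demographic parity difference.
# A large gap in positive-decision rates across groups is a signal
# to investigate further with affected stakeholders, not a verdict.

def selection_rate(outcomes):
    """Fraction of positive (e.g., 'approved') decisions in a list of 0/1s."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(outcomes_by_group):
    """Largest gap in selection rates across groups.

    outcomes_by_group: dict mapping group label -> list of 0/1 decisions.
    A value near 0 suggests similar treatment across groups.
    """
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical decision logs for two groups.
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5/8 approved
    "group_b": [0, 1, 0, 0, 1, 0, 0, 0],  # 2/8 approved
}
gap = demographic_parity_difference(decisions)
print(f"Demographic parity difference: {gap:.3f}")  # 0.625 - 0.250 = 0.375
```

In practice this kind of check would run repeatedly, during development and after deployment, which is what makes it an "ongoing evaluation" rather than a one-time audit.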
Engaging Stakeholders in AI Ethics Discussions
Summary
Engaging stakeholders in AI ethics discussions ensures diverse perspectives are included when designing and implementing AI systems, fostering fairness, transparency, and accountability. This collaborative approach helps address bias, uphold societal values, and build trust in AI technologies.
- Involve diverse voices: Invite individuals from underrepresented groups, affected communities, and various industries to provide input on potential impacts of AI systems.
- Address power imbalances: Recognize and mitigate power dynamics to ensure all stakeholders can meaningfully contribute to ethical AI development and oversight.
- Emphasize transparency: Promote clear, accessible communication about how AI systems function and make decisions, particularly when outcomes impact people's lives.
AI Governance discussions that only include regulators and tech companies miss a critical voice: the individuals and communities most affected by the vulnerabilities AI could create. The decision to listen, learn, and invite new leaders to the table could shape an AI-driven future of equity, compassion, human creativity, and opportunity – rather than one of exclusion and exploitation. Sandeep Ravindran's recent article in Science Magazine (https://lnkd.in/ebsCKNh9) artfully examines how underrepresented communities are taking matters into their own hands. Initiatives like Masakhane (https://www.masakhane.io/) – a pan-African effort led by volunteer researchers and coders to help the African community better navigate the internet while managing its dangers – remind us of the human face of AI governance. But too often, the communities whose data is exploited are the same communities excluded from the benefits of AI. A long-time partner of The Patrick J. McGovern Foundation and founder of Indigenous in AI/ML (https://lnkd.in/eVmUfk53), Michael Running Wolf asserts: “Data is the new oil…And so there’s sort of this very colonial perspective of, this is a land grab.” Transforming opportunity through AI and creating a more abundant future for everyone will require us to boldly challenge fundamental assumptions about proprietary research, lack of representation, and the absence of mechanisms for shared ownership. As leaders like Michael pave the way, we are proud to support their efforts, and we eagerly invite the ethical AI community to continue finding ways to learn from civil society as we all make space for new champions and perspectives. Check out the full article and share your thoughts below. #aiforgood #aigovernance #equity #representation #datasovereignty #decolonizedata #justice #civilsociety
The Decision Tree for Responsible AI is a guide developed by AAAS (American Association for the Advancement of Science) to help put ethical principles into practice when creating and using AI, and to aid users and their organizations in making informed choices about developing or deploying AI solutions. The decision tree is meant to be versatile, but it may not cover every unique situation and will not always yield clear yes/no answers. It is advised to consult the chart continually throughout the AI solution's development and deployment, given the changing nature of projects.

Engaging stakeholders inclusively is vital to this framework. Before using the tree, determine who is best suited to answer the questions based on their expertise. For this, the decision tree refers to Partnership on AI's white paper “Making AI Inclusive” (see: https://lnkd.in/gEeDhe4q) on stakeholder engagement, to make sure that the right people are included and get a seat at the table:
1. All participation is a form of labor that should be recognized.
2. Stakeholder engagement must address inherent power asymmetries.
3. Inclusion and participation can be integrated across all stages of the development lifecycle.
4. Inclusion and participation must be integrated into the application of other responsible AI principles.

The decision tree was developed against the backdrop of the NIST AI Risk Management Framework (AI RMF 1.0) and its seven characteristics of trustworthy AI: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful biases managed. See: https://lnkd.in/gHp5iE7x

Apart from the decision tree itself, it is worth looking at the additional resources at the end of the paper:
- Four overall guiding principles for evaluating AI in the context of human rights (Informed Consent, Beneficence, Nonmaleficence, Justice).
- Examples of groups that are commonly subject to disproportionate impacts.
- Common ways AI can lead to harm: over-reliance on safety features, inadequate fail-safes, over-reliance on automation, distortion of reality or gaslighting, reduced self-esteem/reputation damage, addiction/attention hijacking, identity theft, misattribution, economic exploitation, devaluation of expertise, dehumanization, public shaming, loss of liberty, loss of privacy, environmental impact, and erosion of social and democratic structures. See more from Microsoft: https://lnkd.in/gCVK9kNe
- Examples of guidance for regular post-deployment monitoring and auditing of AI systems.
#decisiontree #RAI
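The post-deployment monitoring and auditing guidance above presumes that each AI decision leaves a traceable record a human reviewer can inspect and, if needed, override. One minimal way to sketch such a record (all field names here are hypothetical, not taken from the AAAS paper or any standard):

```python
# Hypothetical audit-log entry for a single AI decision, written as one
# JSON line so reviewers and auditors can trace how an outcome was reached.
import io
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_version: str    # which model produced the decision
    inputs: dict          # the features the model actually saw
    output: str           # the decision made
    explanation: str      # human-readable rationale for reviewers
    human_override: bool  # whether a reviewer overrode the decision
    timestamp: str        # when the decision was made (UTC)

def log_decision(record: DecisionRecord, sink) -> None:
    """Append one decision record as a JSON line for later auditing."""
    sink.write(json.dumps(asdict(record)) + "\n")

record = DecisionRecord(
    model_version="credit-scorer-1.4",
    inputs={"income_band": "B", "history_len": 7},
    output="declined",
    explanation="Credit history length below policy threshold",
    human_override=False,
    timestamp=datetime.now(timezone.utc).isoformat(),
)

sink = io.StringIO()  # stands in for a file or audit database
log_decision(record, sink)
```

Keeping the explanation and the override flag alongside the raw inputs is what turns a log into an audit trail: it records not only what the system decided but whether a human accepted that decision.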