This new white paper, "Introduction to AI assurance", published by the UK Department for Science, Innovation and Technology on 12 February 2024, provides an EXCELLENT overview of assurance methods and international technical standards that can be used to build and deploy ethical AI systems.

The guidance builds on the UK AI governance framework laid out in the 2023 white paper "A pro-innovation approach to AI regulation", which defined five cross-sectoral principles to guide and shape the responsible development and use of AI technologies throughout the economy:
- Safety, Security, and Robustness
- Appropriate Transparency and Explainability
- Fairness
- Accountability and Governance
- Contestability and Redress

The 2023 white paper also introduced a suite of tools designed to help organizations understand "how" these outcomes can be achieved in practice, emphasizing tools for trustworthy AI, including assurance mechanisms and global technical standards. See: https://lnkd.in/gydvi9Tt

The new publication, "Introduction to AI assurance," is a deep dive into these assurance mechanisms and standards. AI assurance encompasses a spectrum of techniques for evaluating AI systems throughout their lifecycle, ranging from qualitative assessments of potential risks and societal impacts to quantitative assessments of performance and legal compliance. Key techniques include:
- Risk Assessment: Identifies potential risks such as bias, privacy breaches, misuse of technology, and reputational damage.
- Impact Assessment: Anticipates broader effects on the environment, human rights, and data protection.
- Bias Audit: Examines data and outcomes for unfair biases (a minimal illustration appears after this post).
- Compliance Audit: Reviews adherence to policies, regulations, and legal requirements.
- Conformity Assessment: Verifies whether a system meets required standards, often through performance testing.
- Formal Verification: Uses mathematical methods to confirm whether a system satisfies specific criteria.

The white paper also explains how organizations in the UK can ensure their AI systems are responsibly governed, risk-assessed, and compliant with regulations:
1. To demonstrate good internal governance processes around AI, a conformity assessment against standards such as ISO/IEC 42001 (AI Management System) is recommended.
2. To understand the potential risks of AI systems being acquired, an algorithmic impact assessment by an accredited conformity assessment body is advised. This involves (self-)assessment against a proprietary framework or responsible AI toolkit.
3. To ensure AI systems adhere to existing data protection regulations, a compliance audit by a third-party assurance provider is recommended.

This white paper also has exceptional infographics! Please check it out, and thank you Victoria Beckman for posting and providing us with great updates as always!
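To make the bias-audit technique above a bit more concrete, here is a minimal, hypothetical Python sketch of one narrow slice of such an audit: comparing positive-outcome rates across demographic groups. The record fields, sample data, and the 0.8 "four-fifths" threshold are illustrative assumptions, not prescriptions from the white paper.

```python
# Illustrative only: one narrow slice of a bias audit, checking whether an AI
# system's positive-outcome rate differs across demographic groups.
# Field names ("group", "outcome") and the 0.8 threshold (the common
# "four-fifths rule" heuristic) are assumptions, not part of the white paper.
from collections import defaultdict

def selection_rates(records):
    """Return the positive-outcome rate per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += r["outcome"]  # outcome is 0 or 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(records):
    """Ratio of the lowest to the highest group selection rate."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values()), rates

if __name__ == "__main__":
    sample = [
        {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
        {"group": "A", "outcome": 0}, {"group": "B", "outcome": 1},
        {"group": "B", "outcome": 0}, {"group": "B", "outcome": 0},
    ]
    ratio, rates = disparate_impact_ratio(sample)
    print(rates, ratio)
    if ratio < 0.8:  # heuristic flag for human review, not a legal test
        print("Flag for review: selection rates diverge across groups.")
```

A real bias audit would go well beyond a single metric, covering data provenance, multiple fairness definitions, and qualitative review; this sketch only shows the shape of the quantitative step.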
Global Standards for AI Safety
Explore top LinkedIn content from expert professionals.
Summary
Global standards for AI safety are universal guidelines and frameworks established to ensure that artificial intelligence technologies are developed and used ethically, transparently, and responsibly across the globe. By addressing concerns such as fairness, security, and risk mitigation, these standards aim to protect individuals, organizations, and societies from potential harms while promoting innovation.
- Focus on transparency: Establish clear and accessible guidelines for the development, deployment, and monitoring of AI systems to ensure trust and accountability.
- Develop effective risk assessments: Implement rigorous frameworks to identify, evaluate, and address potential harms from advanced AI, including biases, security risks, and societal impacts (a minimal scoring sketch follows this list).
- Embrace international cooperation: Collaborate across countries and sectors to align AI safety practices and create consistent global regulations for responsible AI use.
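To make the risk-assessment point above more tangible, here is a minimal, hypothetical sketch of a risk register that scores each risk by likelihood and severity and flags high scorers for mitigation. The categories, 1-5 scales, and threshold are assumptions for illustration only, not drawn from any specific standard cited here.

```python
# Illustrative sketch of a minimal AI risk register: each risk is scored by
# likelihood x severity and high scores are flagged for mitigation.
# The categories, 1-5 scales, and the flag threshold are assumptions.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    category: str      # e.g. "bias", "security", "societal impact"
    likelihood: int    # 1 (rare) to 5 (almost certain)
    severity: int      # 1 (negligible) to 5 (critical)

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

def flag_high_risks(risks, threshold=15):
    """Return risks whose score meets or exceeds the threshold."""
    return [r for r in risks if r.score >= threshold]

if __name__ == "__main__":
    register = [
        Risk("Discriminatory outputs", "bias", likelihood=4, severity=4),
        Risk("Prompt-injection data leak", "security", likelihood=3, severity=5),
        Risk("Minor UI confusion", "usability", likelihood=2, severity=2),
    ]
    for r in flag_high_risks(register):
        print(f"HIGH: {r.name} ({r.category}) score={r.score}")
```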
-
The Council of the European Union officially approved the Artificial Intelligence (AI) Act on Tuesday, 21 May 2024, a landmark piece of legislation designed to harmonise rules on AI within the EU. This pioneering law, which follows a "risk-based" approach, aims to set a global standard for AI regulation. Today's approval by the Council marks the final step in the legislative process; in March, the European Parliament overwhelmingly endorsed the AI Act. The Act will next be published in the Official Journal and begins to go into force across the EU 20 days afterward.

Mathieu Michel, Belgian Secretary of State for Digitisation, said: "With the AI act, Europe emphasizes the importance of trust, transparency and accountability when dealing with new technologies."

Before a high-risk AI system is deployed for public services, a fundamental rights impact assessment will be required. The regulation also provides for increased transparency regarding the development and use of high-risk AI systems: high-risk AI systems will need to be registered in the EU database for high-risk AI, and users of an emotion recognition system will have to inform people when they are being exposed to such a system.

The new law categorises different types of artificial intelligence according to risk. AI systems presenting only limited risk are subject to very light transparency obligations, while high-risk AI systems are authorised but subject to a set of requirements and obligations to gain access to the EU market. AI systems used, for example, for cognitive behavioural manipulation and social scoring will be banned from the EU because their risk is deemed unacceptable. The law also prohibits the use of AI for predictive policing based on profiling and systems that use biometric data to categorise people according to specific categories such as race, religion, or sexual orientation.

To ensure proper enforcement, the Act establishes:
➡ An AI Office within the Commission to enforce the rules across the EU
➡ A scientific panel of independent experts to support enforcement
➡ An AI Board to promote consistent and effective application of the AI Act
➡ An advisory forum to provide expertise to the AI Board and the Commission

Corporate boards must be prepared to govern their company for compliance, as well as risk and innovation, in relation to the implementation of AI and other technologies. Optima Board Services Group advises boards on governing a broad range of tech and emerging technologies as part of both the 'technology regulatory complexity multiplier'™ and the 'board digital portfolio'™.

#aigovernance #artificialintelligencegovernance #aiact #compliance #artificialintelligence #responsibleai #corporategovernance https://lnkd.in/gNQu32zU
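Purely as an illustration of the risk-based tiers the post describes (and emphatically not legal guidance), the toy sketch below maps a few example use cases to the treatment the post mentions for each tier. The use-case names and the mapping are simplifying assumptions; real classification under the Act depends on its detailed annexes and case-by-case legal analysis.

```python
# Illustrative only: a toy mapping of example AI use cases to the treatment
# described in the post. The mapping is a simplifying assumption, not a
# legal classification under the EU AI Act.
RISK_TIERS = {
    "social scoring": "unacceptable risk: prohibited",
    "cognitive behavioural manipulation": "unacceptable risk: prohibited",
    "predictive policing based on profiling": "unacceptable risk: prohibited",
    "biometric categorisation by race, religion, or orientation": "unacceptable risk: prohibited",
    "high-risk AI in public services": "high risk: fundamental rights impact assessment, EU database registration",
    "emotion recognition system": "transparency duty: exposed people must be informed",
    "limited-risk system (e.g. basic chatbot)": "limited risk: light transparency obligations",
}

def classify(use_case: str) -> str:
    """Return the illustrative treatment for a use case, or a default for unknown cases."""
    return RISK_TIERS.get(use_case, "unclassified: requires case-by-case assessment")

if __name__ == "__main__":
    for uc in ("social scoring", "limited-risk system (e.g. basic chatbot)", "medical triage assistant"):
        print(f"{uc} -> {classify(uc)}")
```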
-
In recent months, I have had the pleasure of contributing to the International Scientific Report on the Safety of Advanced AI, a project of the UK government's Department for Science, Innovation and Technology (DSIT) and AI Safety Institute. This report sets out an up-to-date, science-based understanding of the safety of advanced AI systems.

The independent, international, and inclusive report is a landmark moment of international collaboration. It marks the first time the international community has come together to support efforts to build a shared scientific and evidence-based understanding of frontier AI risks. The intention to create such a report was announced at the AI Safety Summit in November 2023. This interim report is published ahead of the AI Seoul Summit next week; the final report will be published before the AI Action Summit in France.

The interim report restricts its focus to a summary of the evidence on general-purpose AI systems, which have advanced rapidly in recent years. It synthesizes the evidence base on the capabilities of, and risks from, general-purpose AI and evaluates technical methods for assessing and mitigating them.

Key report takeaways include:
1️⃣ General-purpose AI can be used to advance the public interest, leading to enhanced wellbeing, prosperity, and scientific discoveries.
2️⃣ According to many metrics, the capabilities of general-purpose AI are advancing rapidly. Whether there has been significant progress on fundamental challenges such as causal reasoning is debated among researchers.
3️⃣ Experts disagree on the expected pace of future progress of general-purpose AI capabilities, variously supporting the possibility of slow, rapid, or extremely rapid progress.
4️⃣ There is limited understanding of the capabilities and inner workings of general-purpose AI systems. Improving our understanding should be a priority.
5️⃣ Like all powerful technologies, current and future general-purpose AI can be used to cause harm. For example, malicious actors can use AI for large-scale disinformation and influence operations, fraud, and scams.
6️⃣ Malfunctioning general-purpose AI can also cause harm, for instance through biased decisions with respect to protected characteristics like race, gender, culture, age, and disability.
7️⃣ Future advances in general-purpose AI could pose systemic risks, including labour market disruption and economic power inequalities. Experts have different views on the risk of humanity losing control over AI in a way that could result in catastrophic outcomes.
8️⃣ Several technical methods (including benchmarking, red-teaming, and auditing training data) can help to mitigate risks, though all current methods have limitations, and improvements are required (a minimal benchmarking sketch follows this post).
9️⃣ The future of AI is uncertain, with a wide range of scenarios appearing possible. The decisions of societies and governments will significantly impact its future.

#ResponsibleAI #GenerativeAI #ArtificialIntelligence #AI #AISafety
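Point 8 lists benchmarking among the mitigation methods. As a minimal, hypothetical sketch of what a benchmark harness looks like in practice, the snippet below runs a model callable over a fixed set of prompt/reference pairs and reports exact-match accuracy. The toy model, test cases, and metric are assumptions, and real evaluations are considerably more elaborate.

```python
# Minimal illustrative benchmark harness: run a model over fixed test cases
# and report exact-match accuracy. The model stub, cases, and metric are
# placeholders; real benchmarks use larger suites and richer scoring.
from typing import Callable, List, Tuple

def run_benchmark(model: Callable[[str], str],
                  cases: List[Tuple[str, str]]) -> float:
    """Return the fraction of cases where the model output matches the reference."""
    correct = 0
    for prompt, reference in cases:
        if model(prompt).strip().lower() == reference.strip().lower():
            correct += 1
    return correct / len(cases)

if __name__ == "__main__":
    # Placeholder "model" so the sketch runs end to end.
    def toy_model(prompt: str) -> str:
        return {"2+2=": "4", "Capital of France?": "Paris"}.get(prompt, "unknown")

    cases = [("2+2=", "4"), ("Capital of France?", "Paris"), ("Largest ocean?", "Pacific")]
    print(f"Exact-match accuracy: {run_benchmark(toy_model, cases):.2f}")
```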
-
🚨 Breaking News: Just Released! 🚨

The U.S. Artificial Intelligence Safety Institute (AISI) has unveiled its groundbreaking vision, mission, and strategic goals, released yesterday. This pivotal document sets the stage for the future of AI safety and innovation, presenting a comprehensive roadmap designed to ensure AI technologies benefit society while minimizing risks.

Key Highlights:
🔹 Vision: AISI envisions a future where safe AI innovation enables a thriving world. The institute aims to harness AI's potential to accelerate scientific discovery, technological innovation, and economic growth, while addressing significant risks associated with powerful AI systems.
🔹 Mission: The mission is clear: beneficial AI depends on AI safety, and AI safety depends on science. AISI is dedicated to defining and advancing the science of AI safety, promoting trust and accelerating innovation through rigorous scientific research and standards.
🔹 Strategic Goals:
1. Advancing AI Safety Science: AISI will focus on empirical research, testing, and evaluation to develop practical safety solutions for advanced AI models, systems, and agents.
2. Developing and Disseminating AI Safety Practices: The institute plans to build and publish specific metrics, evaluation tools, and guidelines to assess and mitigate AI risks.
3. Supporting AI Safety Ecosystems: AISI aims to promote the adoption of safety guidelines and foster international cooperation to ensure global AI safety standards.

🔥 Hot Takes and Precedents:
- "Safety breeds trust, and trust accelerates innovation." AISI's approach mirrors historical successes in other technologies, emphasizing safety as the cornerstone for unlocking AI's full potential.
- Collaboration is Key: AISI will work with diverse stakeholders, including government agencies, international partners, and the private sector, to build a connected and resilient AI safety ecosystem.
- Global Reach: By leading an inclusive, international network on AI safety, AISI underscores the necessity for globally adopted safety practices.

This document is a must-read for anyone involved in the AI landscape. Stay informed and engaged as AISI leads the way towards a safer, more innovative future in AI. 🌍🔍

For more details, dive into the full document attached below. Follow WhitegloveAI for updates!

#AISafety #Innovation #AIResearch #Technology #CLevelExecs #ArtificialIntelligence #AISI #BreakingNews #NIST

Feel free to share your thoughts and join the conversation!
-
I'm thrilled to announce the release of my latest article published by The Brookings Institution, co-authored with Sabrina Küspert, titled "Regulating General-Purpose AI: Areas of Convergence and Divergence across the EU and the US."

🔍 Key Highlights:

EU's Proactive Approach to AI Regulation:
- The EU AI Act introduces binding rules specifically for general-purpose AI models.
- The creation of the European AI Office ensures centralized oversight and enforcement, aiming for transparency and systemic risk management across AI applications.
- This comprehensive framework underscores the EU's commitment to fostering innovation while safeguarding public interests.

US Executive Order 14110: A Paradigm Shift in AI Policy:
- The Executive Order marks the most extensive AI governance strategy in the US, focusing on the safe, secure, and trustworthy development and use of AI.
- By leveraging the Defense Production Act, it mandates reporting and adherence to strict guidelines for dual-use foundation models, addressing potential economic and security risks.
- The establishment of the White House AI Council and NIST's AI Safety Institute represents a coordinated effort to unify AI governance across federal agencies.

Towards Harmonized International AI Governance:
- Our analysis reveals both convergence and divergence in the regulatory approaches of the EU and the US, highlighting areas of potential collaboration.
- The G7 Code of Conduct on AI, a voluntary international framework, is viewed as a crucial step towards aligning AI policies globally, promoting shared standards and best practices.
- Even when domestic regulatory approaches diverge, this collaborative effort underscores the importance of international cooperation in managing the rapid advancements in AI technology.

🔗 Read the Full Article Here: https://lnkd.in/g-jeGXvm

#AI #AIGovernance #EUAIAct #USExecutiveOrder #AIRegulation
-
Are we sacrificing safety for speed?

I just got off a call with a podcast host, talking about how humanity has become the #AI test bed!

Today, powerful AI tools are being released to millions at a breakneck pace. In the past 2 weeks alone, four more #LLMs were released in an already crowded space, each with a larger context window, better performance, etc. And yet all of them still carry the same flaws: lack of accuracy (hallucinations) and bias, which make them difficult to trust for most applications besides a few.

In the rush to roll out the latest generative AI and large language models, it seems the companies developing them have forgotten some important lessons from the past. I remember a time, not too long ago, when software was meticulously crafted, rigorously tested, and slowly rolled out to users. When products were built to last. My Sony Music System from 1993 still works. Please don't judge!

In the mad dash to be first to market with the hottest new #LLM, those practices have fallen by the wayside. The companies themselves admit they can't fully anticipate how the models will behave in the wild. We are all becoming unwitting test subjects. Biases from the training data used by the generative models continue to rear their ugly head in their outputs, along with hallucinations. A recent study showed that over 80% of the population believe everything that is written. Every lapse in truthfulness undermines trust and spreads misinformation at an unprecedented scale. History shows that a lack of testing and human oversight can have grievous consequences.

While the potential of LLMs is immense, the risks of undertested AI infiltrating every aspect of our lives and society are greater. Move fast and break things is a dangerous tactic.

What can we do?
➡️ Lobby for proactive regulation, stronger industry standards, and major investment in AI ethics and safety research.
➡️ Researchers and practitioners need to build evaluation frameworks into every step of the process, not bolted on after the fact.
➡️ Business leaders need to step back, slow down, and embrace the responsible practices and wisdom of other safety-critical industries like aerospace, medicine, and nuclear energy.

As users:
➡️ Demand evidence of rigorous testing, risk assessment, and bias analysis before using or promoting an AI system.
➡️ Prioritize AI solutions that incorporate responsible development practices, oversight, and safety considerations.
➡️ Embrace safety-critical engineering.

The plane is being built as we're flying it with #generativeAI. Let's make sure we're heading to a destination we actually want to arrive at. It's still not too late to put in trustworthy practices and safeguards. Our collective future may depend on it.

#AI #ResponsibleAI #ethicalAI #genAI

Center for Equitable AI and Machine Learning Systems Latimer Oslo for AI Michael Anton Dila Kem-Laurin Lubin, PhD-C