Concern about children's use of conversational AI is mounting. In our new guest blog post, Kathleen Anderson looks at how policy should take account of the specific ways and contexts in which children learn to trust, in order to set up frameworks that anticipate and respond to emerging harm. ‘The interactions between children and conversational AI tools constitute a new kind of relationship, which will also contribute to the development of trust.’ https://lnkd.in/eWntJenW
Ada Lovelace Institute
Research Services
London, England 22,028 followers
Independent research institute with a mission to ensure data and AI work for people and society.
About us
The Ada Lovelace Institute is an independent research institute with a mission to ensure data and AI work for people and society. Ada will promote informed public understanding of the impact of AI and data-driven technologies on different groups in society.
- Website
- https://www.adalovelaceinstitute.org/
- Industry
- Research Services
- Company size
- 11-50 employees
- Headquarters
- London, England
- Type
- Nonprofit
- Founded
- 2018
- Specialties
- AI, Data, Research, Public Deliberation, Public Engagement, and Technology
Locations
- Primary
100 St John Street
London, England EC1M 4EH, GB
Updates
-
Ada Lovelace Institute reposted this
AI chatbots are already causing real harm, yet UK law offers almost no meaningful protection. New analysis from Ada Lovelace Institute shows existing regulations don’t cover the risks posed by Advanced AI Assistants, writes Julia Smakman. https://lnkd.in/eVNfgpM4
-
💡Spotlight on liability: our new research finds that current liability rules are not sufficient to ensure AI risk is managed effectively and distributed fairly. In the context of wide-scale adoption of general-purpose AI systems across public services and the economy, this means that unmanaged legal and financial risk is loaded onto downstream deployers of AI, such as local authorities and small businesses. These deployers have few effective means of addressing these risks or seeking redress, and the resulting harms can affect their products, services and the people who use them. Read ‘Risky Business’: https://lnkd.in/e_EfVgjX
-
ICYMI: our latest policy briefing shows that nearly 9 in 10 people in the UK want independent AI regulation, yet the current oversight of AI falls far behind that of other sectors, with no clear plans for improvement. The table below contrasts the safeguards in place for AI systems with those in place across four other critical sectors. For AI to positively transform society, its development, deployment and governance must reflect public attitudes. Without this alignment, the government risks investing in technologies that deepen inequalities, erode trust and ultimately fail to serve the public interest. Read more: https://lnkd.in/eUEWRnFb
-
🎄All we want for Christmas is for AI regulation to work for people and society. Our new public attitudes polling finds:
🔴 The UK public are concerned that tech companies’ needs are being prioritised over their own when it comes to AI regulation.
🔴 People want AI technologies to be regulated independently, and for the government and regulators to have the authority to halt the sale of systems causing harm.
🔴 An overwhelming majority think it is important that AI systems are developed and used in ways that treat people fairly.
🔴 A similar number believe AI products and services should not be rolled out until they are proven safe, even if this slows things down.
Read ‘Great (public) expectations’ ⬇️ https://lnkd.in/eJkp69gT
-
Ada Lovelace Institute reposted this
Today, Ada Lovelace Institute launched their report, The Regulation of Delegation. We at AWO (Alex Lawrence-Archer and Lucie Audibert) carried out extensive legal analysis in support of the report, assessing whether current law provides effective protection from harms which could be caused by these tools, which are increasingly embedded in everyday life. We found a range of issues and real concerns about gaps in protection as the scale and scope of the public's use of AI assistants grows and the law struggles to keep up. Ada are planning more work on the issue, including recommendations for reform. An event hosted by Ada on 10 December is one opportunity to be part of that conversation. Otherwise, please do get in touch if these issues are of interest to you. https://lnkd.in/eDJtA5pn https://lnkd.in/eV2x-fzF Thanks to Julia Smakman and Harry Farmer for commissioning this work.
-
Our new policy briefing, authored by Julia Smakman and based on a legal analysis from AWO Agency, finds that the law in England and Wales provides no meaningful protection against the harms of Advanced AI Assistants. As they have become more widely used by the general public, Assistants have made headlines for encouraging dangerous behaviour, enabling delusional thought patterns and swaying people’s opinions. We need urgent action from policymakers and regulators to address these risks and close these significant legal gaps. Read the briefing: https://lnkd.in/e5DJaWPM
-
Ada's Octavia Field Reid is speaking at techUK's Digital Ethics Summit next week! Register to join the conversation here ➡️ https://lnkd.in/eqG2U_vr
-
Ada Lovelace Institute reposted this
Publication day 🙌 🎉 As part of the Digital Lives strand of Nuffield Foundation's Grown Up? programme, today we have published an expert feature: 'The Monitored Generation: Navigating Autonomy and Independence in the Digital Age'. Written by Natalie Foos, Director, VoiceBox, and Cliff Manning, Director of Research and Development, Parent Zone, the think piece explores the role and impact that family tracking apps can have on the family dynamic, in particular the development of trust, agency, independence and resilience between parents and children. It's been an absolute joy working with Natalie and Cliff. Thank you both 😊 Read the piece, or listen to it read by the authors: https://lnkd.in/eRsxgcE4
-
Ada Lovelace Institute reposted this
The European Commission unveiled its plan to overhaul how the EU enforces key tech regulations as part of a ‘Digital Omnibus.’ What is it trying to achieve? Tech Policy Press associate editor Ramsha Jahangir spoke to AI Now Institute's Leevi Saari and Ada Lovelace Institute's Julia Smakman. https://lnkd.in/eF7UqsBF