Otter.ai Enterprise Data Privacy Risks


Summary

Otter.ai enterprise data privacy risks are the concerns businesses face about the privacy and consent of data collected and used by Otter.ai’s AI-powered meeting transcription service. They include whether all meeting participants know they are being recorded, how their data is used (including for AI training), and who is responsible for ensuring legal compliance and obtaining participant consent.

  • Check consent flows: Always verify that every meeting participant is aware of and agrees to the recording and transcription process before using AI-powered notetakers (a minimal consent-gate sketch follows this summary).
  • Review vendor policies: Read privacy policies and terms of service carefully to understand how your data, and that of your guests, may be collected, stored, or used for AI training.
  • Negotiate safeguards: Ask vendors about options to restrict data sharing or training, such as data processing agreements or private-cloud deployments, to better protect confidential business and personal information.
Summarized by AI based on LinkedIn member posts
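
As a concrete illustration of the first point, here is a minimal consent-gate sketch in Python. Everything in it (Participant, start_notetaker, the consent flags) is hypothetical and is not any vendor's actual API; the point is simply that recording should be blocked until every participant has explicitly opted in.

    from dataclasses import dataclass

    @dataclass
    class Participant:
        name: str
        consented: bool = False  # opt-in: no consent until explicitly granted

    def start_notetaker(participants: list[Participant]) -> None:
        """Start recording only with documented consent from every participant.

        In all-party consent states such as California, recording without
        everyone's consent can violate wiretap law, so the gate is unanimous.
        """
        missing = [p.name for p in participants if not p.consented]
        if missing:
            raise PermissionError(f"Recording blocked; no consent from: {missing}")
        print("Recording started with consent documented for all participants.")

    # Usage: the host consented, a guest did not -- recording must not start.
    attendees = [Participant("host", consented=True), Participant("guest")]
    try:
        start_notetaker(attendees)
    except PermissionError as err:
        print(err)
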
  • Cecilia Ziniti

    CEO & Co-Founder, GC AI | General Counsel and CLO | Host of CZ & Friends Podcast

    20,028 followers

    👀 So, is opt-out, plus relying on your users to get others' consent, enough? Big AI + privacy case to watch: Otter.ai faces a proposed class action in California federal court.

    Important read for product counsel, because the allegations go deep on the user experience of the Otter software: the complaint includes screenshots of the look and wording of the consent screens and even compares Otter to other providers' consent mechanics.

    The complaint alleges that Otter Notetaker, the AI-powered meeting assistant, records and accesses private conversations *without* consent from all meeting participants… and then uses those recordings to train its models. The allegation is that in two-party consent states like California, taking and using recordings without consent is a wiretap.

    That raises the questions: who is accountable for consent and data use, the vendor or the business user? Can terms of service be meaningful notice?

    What could go wrong? See an example below: the meeting recorder stayed in the meeting _after_ its owner left the call. Spoiler: that conversation killed their deal.

    Some key quotes from the complaint:

    💬 "When default account settings are used, Otter does not send a pre-meeting invitation or notification to obtain consent from meeting participants. Instead, Otter accountholders must toggle this setting 'On' for it to apply to pre-scheduled meetings."

    💬 "In effect, Otter tries to shift responsibility, outsourcing its legal obligations to its accountholders, rather than seeking permission and consent from the individuals Otter records, as required by law."

    What do you think? Are vendors doing enough on consent, or will the burden keep falling on legal teams?

    PS - interesting from a product counsel perspective.

    PPS - we designed GC AI with opt-in and confidentiality from the start. We have zero data retention with our model providers, and information inputted remains privileged.

    #legalAI #GCAI #otterai #otterlawsuit #productcounsel
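
    The default-settings quote above is the crux of the opt-out allegation. A minimal sketch of the difference, with hypothetical names (NotetakerSettings, may_record) rather than Otter's actual configuration API:

        from dataclasses import dataclass

        @dataclass
        class NotetakerSettings:
            # As alleged in the complaint, the pre-meeting consent notification
            # ships Off, and the accountholder must toggle it On for
            # pre-scheduled meetings.
            send_premeeting_consent_notice: bool = False

        def may_record(settings: NotetakerSettings, all_acknowledged: bool) -> bool:
            """Opt-in rule: record only if notice went out and all acknowledged."""
            return settings.send_premeeting_consent_notice and all_acknowledged

        # Under the alleged opt-out default no notice is ever sent, so an
        # opt-in rule refuses to record; an opt-out product records regardless.
        assert may_record(NotetakerSettings(), all_acknowledged=False) is False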

  • James Kavanagh

    Founder and CEO @ AI Career Pro and Hooman AI | AI Safety Engineering & Governance | Writer @ blog.aicareer.pro

    8,361 followers

    Are you risking your company's IP and customer personal data for the convenience of meeting transcription?

    AI-powered meeting transcription services are becoming increasingly popular - they offer so much convenience, sometimes even for free. I spent a few days combing through the actual Privacy Policies and Terms of Service for four popular AI notetakers - Otter.ai, Read.ai, Fireflies.ai, and tl;dv - to see whether they train their models on your conversations. I have no association with any of them, but what I found is worrying.

    Here's the short version:

    🔹 Otter.ai - On by default. Otter trains its speech-recognition models on 'de-identified' audio and text of your conversations. They claim that personal identifiers are stripped, but your confidential data still fuels their AI unless you negotiate a restriction.
    🔹 Read.ai - Your choice. By default your data is not used. If you opt in to its Customer Experience Program, your transcripts can help improve the product.
    🔹 Fireflies.ai - Aggregated-only. They forbid training on identifiable content, limiting themselves to anonymised usage statistics. No individual transcript feeds their AI.
    🔹 tl;dv - Never. They explicitly prohibit using customer recordings for model training. Transcript snippets sent to their AI engine are anonymised, sharded, and not retained.

    Why it matters: Even "de-identified" data can leak competitive IP or sensitive customer information if models are ever breached or repurposed. Business recordings can contain personal data, meaning you're still on the hook for consent, minimisation, and transfer safeguards. Your management, board and clients may assume you've locked this down; finding out later is awkward at best, non-compliant at worst.

    By the way - true anonymisation of data is exceptionally difficult, especially in complex data like speech. Claims that only 'de-identified' data is used for training need to be scrutinised. Not one of the products reviewed provided any meaningful technical information about how they achieve this.

    What to do next:
    1. Read the legal docs - marketing pages are full of assurances, but they don't tell the full story. Read the privacy policies and terms of service.
    2. Decide your red line: zero training, aggregated-only, or opt-in? (A sketch of this check follows the post.)
    3. Configure or negotiate: most vendors offer enterprise DPAs or private-cloud options if you ask.
    4. Review the consent flows: it's not just your rights - your guests' data is in play too. Have you asked the meeting participants if they are happy to hand their personal data and IP to a third party?

    Convenience is great, but not at the cost of accidentally donating your crown-jewel knowledge to someone else's AI lab.

    I write about Doing AI Governance for real at ethos-ai.org. Subscribe for free analysis and guidance: https://ethos-ai.org

    #AIGovernance
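
    The "decide your red line" step can be made mechanical. Below is an illustrative sketch; the enum and function names are mine, and the policy summaries are transcribed from the post above, so verify them against each vendor's current Privacy Policy and Terms of Service before relying on them.

        from enum import Enum

        class TrainingPolicy(Enum):
            ON_BY_DEFAULT = "trains on 'de-identified' data unless restricted"
            OPT_IN = "trains only if the customer opts in"
            AGGREGATED_ONLY = "anonymised usage statistics only"
            NEVER = "no training on customer recordings"

        # Policy summaries as described above -- verify before relying on them.
        VENDORS = {
            "Otter.ai": TrainingPolicy.ON_BY_DEFAULT,
            "Read.ai": TrainingPolicy.OPT_IN,
            "Fireflies.ai": TrainingPolicy.AGGREGATED_ONLY,
            "tl;dv": TrainingPolicy.NEVER,
        }

        # Example red line: zero training, or aggregated statistics at most.
        acceptable = {TrainingPolicy.NEVER, TrainingPolicy.AGGREGATED_ONLY}

        for vendor, policy in VENDORS.items():
            verdict = "OK" if policy in acceptable else "needs a DPA or negotiation"
            print(f"{vendor}: {policy.value} -> {verdict}")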

  • Luiza Jarovsky, PhD

    Co-founder of the AI, Tech & Privacy Academy (1,300+ participants), Author of Luiza’s Newsletter (87,000+ subscribers), Mother of 3

    121,374 followers

    🚨 BREAKING: An extremely important lawsuit at the intersection of PRIVACY and AI was filed against Otter over its AI meeting assistant's lack of CONSENT from meeting participants. If you use meeting assistants, read this:

    Otter, the AI company being sued, offers an AI-powered service that, like many in this business niche, can transcribe and record the content of private conversations between its users and meeting participants (who are often NOT users and do not know that they are being recorded). Various privacy laws in the U.S. and beyond require that, in such cases, consent from meeting participants be obtained.

    The lawsuit specifically mentions:
    - The Electronic Communications Privacy Act;
    - The Computer Fraud and Abuse Act;
    - The California Invasion of Privacy Act;
    - California's Comprehensive Computer Data Access and Fraud Act;
    - The California common law torts of intrusion upon seclusion and conversion;
    - The California Unfair Competition Law.

    As more and more people use AI agents, AI meeting assistants, and all sorts of AI-powered tools to "improve productivity," privacy aspects are often forgotten (in yet another manifestation of AI exceptionalism).

    In this case, according to the lawsuit, the company has explicitly stated that it trains its AI models on recordings and transcriptions made using its meeting assistant.

    The main allegation is that Otter obtains consent only from its account holders, not from other meeting participants. It asks users to make sure other participants consent, shifting the privacy responsibility. As many of you know, this practice is common: various AI companies shift the privacy responsibility to users, who often ignore (or don't know) what national and state laws actually require.

    So if you use meeting assistants, you should know that it's UNETHICAL and in many places also ILLEGAL to record or transcribe meeting participants without obtaining their consent. Additionally, keep in mind that AI companies might use this data (which often contains personal information) to train AI, and there could be leaks and other privacy risks involved.

    👉 Link to the lawsuit below.
    👉 Never miss my curations and analyses on AI's legal and ethical challenges: join my newsletter's 74,000+ subscribers.
    👉 To learn more about the intersection of privacy and AI (and many other topics), join the 24th cohort of my AI Governance Training in October.
