Data Privacy Issues With AI

Explore top LinkedIn content from expert professionals.

  • Luiza Jarovsky, PhD
    Co-founder of the AI, Tech & Privacy Academy (1,300+ participants), Author of Luiza’s Newsletter (87,000+ subscribers), Mother of 3

    🚨 BREAKING: An extremely important lawsuit at the intersection of PRIVACY and AI was filed against Otter over its AI meeting assistant's lack of CONSENT from meeting participants. If you use meeting assistants, read this:

    Otter, the AI company being sued, offers an AI-powered service that, like many in this business niche, can transcribe and record the content of private conversations between its users and meeting participants (who are often NOT users and do not know that they are being recorded). Various privacy laws in the U.S. and beyond require that, in such cases, consent from meeting participants be obtained.

    The lawsuit specifically mentions:
    - The Electronic Communications Privacy Act;
    - The Computer Fraud and Abuse Act;
    - The California Invasion of Privacy Act;
    - California’s Comprehensive Computer Data Access and Fraud Act;
    - The California common law torts of intrusion upon seclusion and conversion;
    - The California Unfair Competition Law.

    As more and more people use AI agents, AI meeting assistants, and all sorts of AI-powered tools to "improve productivity," privacy aspects are often forgotten (in yet another manifestation of AI exceptionalism). In this case, according to the lawsuit, the company has explicitly stated that it trains its AI models on recordings and transcriptions made using its meeting assistant.

    The main allegation is that Otter obtains consent only from its account holders but not from other meeting participants. It asks users to make sure other participants consent, shifting the privacy responsibility. As many of you know, this practice is common, and various AI companies shift the privacy responsibility to users, who often ignore (or don't know) what national and state laws actually require.

    So if you use meeting assistants, you should know that it's UNETHICAL and in many places also ILLEGAL to record or transcribe meeting participants without obtaining their consent. Additionally, it's important to keep in mind that AI companies might use this data (which often contains personal information) to train AI, and there could be leaks and other privacy risks involved.

    👉 Link to the lawsuit below.
    👉 Never miss my curations and analyses on AI's legal and ethical challenges: join my newsletter's 74,000+ subscribers.
    👉 To learn more about the intersection of privacy and AI (and many other topics), join the 24th cohort of my AI Governance Training in October.

  • Katharina Koerner
    AI Governance & Security | Trace3: All Possibilities Live in Technology: Innovating with risk-managed AI: Strategies to Advance Business Goals through AI Governance, Privacy & Security

    This new white paper by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), titled "Rethinking Privacy in the AI Era," addresses the intersection of data privacy and AI development, highlighting the challenges and proposing solutions for mitigating privacy risks. It outlines the current data protection landscape, including the Fair Information Practice Principles (FIPs), GDPR, and U.S. state privacy laws, and discusses the distinction and regulatory implications between predictive and generative AI.

    The paper argues that AI's reliance on extensive data collection presents unique privacy risks at both the individual and societal levels. It notes that existing laws are inadequate for the emerging challenges posed by AI systems, because they neither fully tackle the shortcomings of the FIPs framework nor concentrate adequately on the comprehensive data governance measures necessary for regulating data used in AI development.

    According to the paper, FIPs are outdated and not well-suited for modern data and AI complexities, because they:
    - Do not address the power imbalance between data collectors and individuals.
    - Fail to enforce data minimization and purpose limitation effectively.
    - Place too much responsibility on individuals for privacy management.
    - Allow for data collection by default, putting the onus on individuals to opt out.
    - Focus on procedural rather than substantive protections.
    - Struggle with the concepts of consent and legitimate interest, complicating privacy management.

    It emphasizes the need for new regulatory approaches that go beyond current privacy legislation to effectively manage the risks associated with AI-driven data acquisition and processing. The paper suggests three key strategies to mitigate the privacy harms of AI:

    1. Denormalize Data Collection by Default: Shift from opt-out to opt-in data collection models to facilitate true data minimization. This approach emphasizes "privacy by default" and the need for technical standards and infrastructure that enable meaningful consent mechanisms.
    2. Focus on the AI Data Supply Chain: Enhance privacy and data protection by ensuring dataset transparency and accountability throughout the entire lifecycle of data. This includes a call for regulatory frameworks that address data privacy comprehensively across the data supply chain.
    3. Flip the Script on Personal Data Management: Encourage the development of new governance mechanisms and technical infrastructures, such as data intermediaries and data permissioning systems, to automate and support the exercise of individual data rights and preferences. This strategy aims to empower individuals by facilitating easier management and control of their personal data in the context of AI.

    By Dr. Jennifer King and Caroline Meinhardt. Link: https://lnkd.in/dniktn3V
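    To make strategy 3 a little more concrete, here is a minimal sketch of what a data-permissioning gate could look like in code: a hypothetical ConsentRegistry consulted before any record is released for a given purpose (such as model training). The class and method names are illustrative assumptions, not anything described in the paper.

```python
# Minimal sketch of a data-permissioning gate (illustrative only; the paper
# describes the concept, not this API). ConsentRegistry is a hypothetical name.
from dataclasses import dataclass, field


@dataclass
class ConsentRegistry:
    # Maps user_id -> set of purposes the user has opted into (opt-in: empty by default).
    grants: dict = field(default_factory=dict)

    def grant(self, user_id: str, purpose: str) -> None:
        self.grants.setdefault(user_id, set()).add(purpose)

    def allows(self, user_id: str, purpose: str) -> bool:
        return purpose in self.grants.get(user_id, set())


def release_for_purpose(records: list[dict], purpose: str, registry: ConsentRegistry) -> list[dict]:
    """Return only records whose owners have opted in for this purpose."""
    return [r for r in records if registry.allows(r["user_id"], purpose)]


if __name__ == "__main__":
    registry = ConsentRegistry()
    registry.grant("alice", "model_training")  # Alice opts in; Bob does not.

    records = [{"user_id": "alice", "text": "..."}, {"user_id": "bob", "text": "..."}]
    print(release_for_purpose(records, "model_training", registry))
    # Only Alice's record is released for training.
```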

  • Beth Kanter
    Trainer, Consultant & Nonprofit Innovator in digital transformation & workplace wellbeing, recognized by Fast Company & NTEN Lifetime Achievement Award.

    This Stanford study examined how six major AI companies (Anthropic, OpenAI, Google, Meta, Microsoft, and Amazon) handle user data from chatbot conversations. Here are the main privacy concerns:
    👀 All six companies use chat data for training by default, though some allow opt-out
    👀 Data retention is often indefinite, with personal information stored long-term
    👀 Cross-platform data merging occurs at multi-product companies (Google, Meta, Microsoft, Amazon)
    👀 Children's data is handled inconsistently, with most companies not adequately protecting minors
    👀 Limited transparency in privacy policies, which are complex, hard to understand, and often lack crucial details about actual practices

    Practical takeaways for acceptable use policies and training for nonprofits using generative AI:
    ✅ Assume anything you share will be used for training - sensitive information, uploaded files, health details, biometric data, etc.
    ✅ Opt out when possible - proactively disable data collection for training (Meta is the one where you cannot)
    ✅ Information cascades through ecosystems - your inputs can lead to inferences that affect ads, recommendations, and potentially insurance or other third parties
    ✅ Special concern for children's data - age verification and consent protections are inconsistent

    Some questions to consider in acceptable use policies and to incorporate into any training:
    ❓ What types of sensitive information might your nonprofit staff share with generative AI?
    ❓ Does your nonprofit specifically identify what is considered "sensitive information" (beyond PII) that should not be shared with generative AI? Is this incorporated into training?
    ❓ Are you working with children, people with health conditions, or others whose data could be particularly harmful if leaked or misused?
    ❓ What would be the consequences if sensitive information or strategic organizational data ended up being used to train AI models? How might this affect trust, compliance, or your mission? How is this communicated in training and policy?

    Across the board, the Stanford research points out that developers' privacy policies lack essential information about their practices. The researchers recommend that policymakers and developers address the data privacy challenges posed by LLM-powered chatbots through comprehensive federal privacy regulation, affirmative opt-in for model training, and filtering personal information from chat inputs by default. "We need to promote innovation in privacy-preserving AI, so that user privacy isn't an afterthought."

    How are you advocating for privacy-preserving AI? How are you educating your staff to navigate this challenge? https://lnkd.in/g3RmbEwD
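    One of those recommendations, filtering personal information from chat inputs by default, can be prototyped as a simple redaction pass applied before a prompt ever leaves your organization. The sketch below is an assumption-laden illustration that covers only emails and phone numbers via regex; a real deployment would need much broader PII detection, ideally via a dedicated library and human review.

```python
import re

# Illustrative-only patterns; real PII filtering needs far broader coverage
# (names, addresses, IDs, health details) and ideally a dedicated PII library.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}


def redact_pii(prompt: str) -> str:
    """Replace detected PII with placeholder tokens before sending to a chatbot."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt


if __name__ == "__main__":
    raw = "Contact our donor Jane at jane.doe@example.org or +1 415 555 0100 about her gift."
    print(redact_pii(raw))
    # -> "Contact our donor Jane at [EMAIL REDACTED] or [PHONE REDACTED] about her gift."
```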

  • AI is not failing because of bad ideas; it’s "failing" at enterprise scale because of two big gaps:
    👉 Workforce Preparation
    👉 Data Security for AI

    While I speak globally on both topics in depth, today I want to educate us on what it takes to secure data for AI—because 70–82% of AI projects pause or get cancelled at the POC/MVP stage (source: #Gartner, #MIT). Why? One of the biggest reasons is a lack of readiness at the data layer. So let’s make it simple: there are 7 phases to securing data for AI, and each phase carries direct business risk if ignored.

    🔹 Phase 1: Data Sourcing Security - Validating the origin, ownership, and licensing rights of all ingested data. Why It Matters: You can’t build scalable AI with data you don’t own or can’t trace.
    🔹 Phase 2: Data Infrastructure Security - Ensuring the data warehouses, lakes, and pipelines that support your AI models are hardened and access-controlled. Why It Matters: Unsecured data environments are easy targets for bad actors, exposing you to data breaches, IP theft, and model poisoning.
    🔹 Phase 3: Data In-Transit Security - Protecting data as it moves across internal or external systems, especially between cloud, APIs, and vendors. Why It Matters: Intercepted training data = compromised models. Think of it as shipping cash across town in an armored truck—or on a bicycle—your choice.
    🔹 Phase 4: API Security for Foundational Models - Safeguarding the APIs you use to connect with LLMs and third-party GenAI platforms (OpenAI, Anthropic, etc.). Why It Matters: Unmonitored API calls can leak sensitive data into public models or expose internal IP. This isn’t just tech debt. It’s reputational and regulatory risk.
    🔹 Phase 5: Foundational Model Protection - Defending your proprietary models and fine-tunes from external inference, theft, or malicious querying. Why It Matters: Prompt injection attacks are real. And your enterprise-trained model? It’s a business asset. You lock your office at night—do the same with your models.
    🔹 Phase 6: Incident Response for AI Data Breaches - Having predefined protocols for breaches, hallucinations, or AI-generated harm—who’s notified, who investigates, how damage is mitigated. Why It Matters: AI-related incidents are happening. Legal needs response plans. Cyber needs escalation tiers.
    🔹 Phase 7: CI/CD for Models (with Security Hooks) - Continuous integration and delivery pipelines for models, embedded with testing, governance, and version-control protocols. Why It Matters: Shipping models like software means risk comes faster—and so must detection. Governance must be baked into every deployment sprint.

    Want your AI strategy to succeed past MVP? Focus on and lock down the data.

    #AI #DataSecurity #AILeadership #Cybersecurity #FutureOfWork #ResponsibleAI #SolRashidi #Data #Leadership
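    As one way to picture Phase 1 in practice, here is a minimal sketch of a provenance gate that refuses to ingest a dataset unless its manifest declares an owner, a source, and an approved license. The manifest fields and the license allow-list are assumptions made for this example, not a standard or anything prescribed in the post.

```python
# Illustrative Phase 1 check: validate origin, ownership, and licensing metadata
# before a dataset enters the training pipeline. Field names and the license
# allow-list are assumptions for this sketch, not an established schema.
APPROVED_LICENSES = {"CC-BY-4.0", "internal-proprietary", "vendor-contract-2025"}

REQUIRED_FIELDS = ("dataset_id", "source_url", "owner", "license")


def validate_manifest(manifest: dict) -> list[str]:
    """Return a list of problems; an empty list means the dataset may be ingested."""
    problems = [f"missing field: {name}" for name in REQUIRED_FIELDS if not manifest.get(name)]
    if manifest.get("license") and manifest["license"] not in APPROVED_LICENSES:
        problems.append(f"license not approved: {manifest['license']}")
    return problems


if __name__ == "__main__":
    manifest = {
        "dataset_id": "crm-notes-2024",
        "source_url": "s3://internal-bucket/crm-notes/",
        "owner": "sales-ops",
        "license": "scraped-unknown",
    }
    issues = validate_manifest(manifest)
    if issues:
        print("Blocked at Phase 1:", "; ".join(issues))
    else:
        print("Manifest OK; dataset cleared for ingestion.")
```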

  • Dr. Barry Scannell
    AI Law & Policy | Partner in Leading Irish Law Firm William Fry | Member of Irish Government’s Artificial Intelligence Advisory Council | PhD in AI & Copyright | LinkedIn Top Voice in AI | Global Top 200 AI Leaders 2025

    HUGE NEWS EVERYONE: OpenAI just launched ChatGPT Enterprise. This is a significant milestone in the intersection of AI and the corporate world. Marketed as an enterprise-grade solution with advanced security, data protection, and unlimited access to GPT-4 functionalities, it is projected to fundamentally reshape work processes within organisations. However, this technological leap raises nuanced legal issues, particularly in the realms of data protection, intellectual property (IP), and the forthcoming AI Act’s foundation model regulatory obligations.

    ChatGPT Enterprise assures users of robust data protection, stipulating that the model is not trained on business-specific data and that all conversations are encrypted both in transit and at rest. OpenAI claims the platform's SOC 2 compliance adds an additional layer of trust in its security protocols. However, from a legal perspective, questions arise around data ownership and control. OpenAI promises not to train the model on user-specific data, but what about when a company fine-tunes the model on its own data - what are the data protection considerations then? GDPR imposes stringent requirements on data usage, sharing, and deletion, which businesses employing ChatGPT Enterprise must consider.

    ChatGPT Enterprise's capability to assist in creative work, coding, and data analysis poses tricky questions in relation to ownership. For example, if the AI generates a piece of written content or code, who owns the copyright? The current legal framework, which traditionally recognises human authorship, may not be fully equipped to navigate the nuances of AI-generated IP. The US District Court last week ruled that AI-generated work cannot be copyrighted. What if you as a company are engaging third parties to develop code and other work output - if they are using ChatGPT Enterprise to generate the outputs, there may be nothing protected by copyright, and no IP rights to assign to you. How will you address that?

    Then there’s Article 28b of the forthcoming AI Act, which imposes strict regulatory obligations on providers of certain foundation models (like GPT-4). If you fine-tune the model enough, that could potentially make YOU the provider, with all the regulatory obligations that could bring. And if it doesn’t, you may still have user obligations.

    Mass adoption of AI across various sectors could also draw scrutiny from competition regulators. Could OpenAI’s ubiquity in over 80% of Fortune 500 companies potentially raise concerns about market competition and behaviour?

    The debut of ChatGPT Enterprise marks an inflection point in the deployment of AI in enterprise environments. While its promise of improved productivity and robust data protection is enticing, businesses and legal experts must pay heed to the complex legal landscape it interacts with. Comprehensive regulation and judicious legal practice are critical in balancing technological advancement with the protection of individual and corporate rights.

  • Durgesh Pandey
    Chartered Accountant || Professor, Speaker, Trainer & Researcher || Specialisation in the areas of Forensic Accounting and Financial Crime Investigations.

    𝑾𝒉𝒆𝒏 𝑨𝑰 𝑲𝒏𝒐𝒘𝒔 𝒀𝒐𝒖 𝑩𝒆𝒕𝒕𝒆𝒓 𝑻𝒉𝒂𝒏 𝒀𝒐𝒖 𝑲𝒏𝒐𝒘 𝒀𝒐𝒖𝒓𝒔𝒆𝒍𝒇 – 𝒕𝒉𝒊𝒔 𝒊𝒔 𝒏𝒐𝒕 𝒔𝒐𝒎𝒆 𝒓𝒉𝒆𝒕𝒐𝒓𝒊𝒄𝒂𝒍 𝒒𝒖𝒆𝒔𝒕𝒊𝒐𝒏 𝒃𝒖𝒕 𝒊𝒕’𝒔 𝒂 𝒓𝒆𝒂𝒍 𝒄𝒉𝒂𝒍𝒍𝒆𝒏𝒈𝒆 𝒐𝒇 𝒕𝒐𝒅𝒂𝒚

    Yesterday, my good friend Narasimhan Elangovan raised an important point about privacy amid the trending, GPU-melting Ghibli images, so I thought I'd discuss some real concerns with examples that came to mind. The problem lies not just in data leaks or breaches, but more so in how AI quietly infers, profiles, and nudges us in ways we barely notice. Some under-discussed scenarios:

    1. 𝗜𝗻𝗳𝗲𝗿𝗲𝗻𝘁𝗶𝗮𝗹 𝗣𝗿𝗶𝘃𝗮𝗰𝘆 𝗕𝗿𝗲𝗮𝗰𝗵
    You never disclosed your religion, health status, or financial worries. But the AI inferred it—based on the questions you asked, the times you searched, and the tone of your inputs.
    𝗥𝗶𝘀𝗸: This silent profiling is invisible to you but available to platforms. In the wrong hands, it enables discrimination, targeted influence, or surveillance—with no transparency.

    2. 𝗦𝗵𝗮𝗱𝗼𝘄 𝗣𝗿𝗼𝗳𝗶𝗹𝗶𝗻𝗴
    Even if you have never used a particular AI tool, it can still build a profile on you. Maybe a colleague uploaded a file with your comments. Or your name appears in several related chats.
    𝗥𝗶𝘀𝗸: You are being digitally reconstructed—without consent. And this profile might be incomplete, outdated, or wrong, yet used in risk scoring, decisions, or content filtering.

    3. 𝗕𝗲𝗵𝗮𝘃𝗶𝗼𝘂𝗿𝗮𝗹 𝗠𝗮𝗻𝗶𝗽𝘂𝗹𝗮𝘁𝗶𝗼𝗻 𝘃𝗶𝗮 𝗙𝗲𝗲𝗱𝗯𝗮𝗰𝗸 𝗟𝗼𝗼𝗽𝘀
    Imagine an AI financial assistant slowly nudging CFOs toward certain frameworks or partners—not based on merit, but on algorithmic incentives.
    𝗥𝗶𝘀𝗸: This is not advice. It’s behavioural steering. Over time, professional decisions are shaped not by judgment, but by what the system wants you to believe or do.

    These aren’t edge cases of tomorrow—they are quietly unfolding in the background of our workflows and conversations.

    𝗜𝘁'𝘀 𝗵𝗶𝗴𝗵 𝘁𝗶𝗺𝗲 𝘄𝗲 𝘀𝘁𝗼𝗽 𝘀𝗲𝗲𝗶𝗻𝗴 "𝗽𝗿𝗶𝘃𝗮𝗰𝘆" 𝗮𝘀 𝗮 𝗰𝗵𝗲𝗰𝗸𝗯𝗼𝘅 𝗮𝗻𝗱 𝘀𝘁𝗮𝗿𝘁 𝘀𝗲𝗲𝗶𝗻𝗴 𝗶𝘁 𝗳𝗼𝗿 𝘄𝗵𝗮𝘁 𝗶𝘁 𝗶𝘀.

    Would love to hear how others are approaching this, and how we future-proof it.

    #AIPrivacy #DigitalEthics #AlgorithmicTransparency #FutureOfAI

  • Jon Nordmark
    Co-founder, CEO @ Iterate.ai - private AI || prior co-founder, CEO @ eBags - $1.65B of products sold before acquired

    𝗣𝘂𝗯𝗹𝗶𝗰 𝗔𝗜 may create the 𝗯𝗶𝗴𝗴𝗲𝘀𝘁 𝗮𝗰𝗰𝗶𝗱𝗲𝗻𝘁𝗮𝗹 𝗱𝗮𝘁𝗮 𝗹𝗲𝗮𝗸 𝗶𝗻 𝗯𝘂𝘀𝗶𝗻𝗲𝘀𝘀 𝗵𝗶𝘀𝘁𝗼𝗿𝘆. Public AI includes ChatGPT, Gemini, Grok, Anthropic, DeepSeek, Perplexity — any chat system that runs on giant 𝘀𝗵𝗮𝗿𝗲𝗱 𝗚𝗣𝗨 𝗳𝗮𝗿𝗺𝘀. And while those public models do amazing things, we need to talk about something most leaders underestimate.

    — 𝗣𝘂𝗯𝗹𝗶𝗰 𝗔𝗜 𝗶𝘀𝗻’𝘁 𝗮 𝘃𝗮𝘂𝗹𝘁.
    — 𝗜𝘁’𝘀 𝗮 𝘃𝗮𝗰𝘂𝘂𝗺.

    People are pasting their identities, internal documents, and even corporate secrets into systems designed for scale — 𝗻𝗼𝘁 𝗽𝗿𝗶𝘃𝗮𝗰𝘆. It feels private. But it isn’t. Analogy: using Public AI is like whispering confidential strategy into a megaphone because you thought it was turned off.

    Now pause for a moment and watch the video. The colored lines represent 𝗺𝗶𝗹𝗹𝗶𝗼𝗻𝘀 𝗼𝗳 𝗽𝗿𝗼𝗺𝗽𝘁𝘀 from 𝗺𝗶𝗹𝗹𝗶𝗼𝗻𝘀 𝗼𝗳 𝗽𝗲𝗼𝗽𝗹𝗲 working at 𝘁𝗵𝗼𝘂𝘀𝗮𝗻𝗱𝘀 𝗼𝗳 𝗰𝗼𝗺𝗽𝗮𝗻𝗶𝗲𝘀 — running across:
    — 𝘀𝗵𝗮𝗿𝗲𝗱 𝗵𝗮𝗿𝗱𝘄𝗮𝗿𝗲 (1,000s of NVIDIA GPUs powering Public AI)
    — 𝘀𝗵𝗮𝗿𝗲𝗱 𝗺𝗼𝗱𝗲𝗹𝘀 (like ChatGPT’s GPT-5)
    — 𝘀𝗵𝗮𝗿𝗲𝗱 𝗶𝗻𝗳𝗿𝗮𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲

    "𝑺𝒉𝒂𝒓𝒆𝒅" is the operative word... the word to pay attention to. T.H.A.T. is Public AI: powerful, massive, centralized… but fundamentally risky. Especially for CISOs, Boards, and CEOs responsible for safeguarding PII, HIPAA-sensitive, and financial data.

    When your data enters a Public LLM, it moves across the world:
    — It gets logged.
    — It gets cached.
    — It gets stored.
    — And sometimes, it gets trained on.

    Even when vendors like OpenAI (ChatGPT), Google (Gemini), DeepSeek, Anthropic, and Perplexity say they don’t train on your data, 𝗼𝘁𝗵𝗲𝗿 𝗿𝗶𝘀𝗸𝘀 𝗿𝗲𝗺𝗮𝗶𝗻:
    — logging
    — retention
    — global routing
    — caching
    — prompt injection
    — model leakage
    — subpoena exposure

    Training isn’t the only danger. It’s just one of many. You may think you “deleted” something… but think again.

    That’s why this series exists: to break down the overlooked risks of Public AI — and highlight the safer path. That safer path is Private AI: models that run behind your firewall, on your hardware, with your controls. Tomorrow we begin the list.

    𝗣𝗿𝗶𝘃𝗮𝘁𝗲 𝗔𝗜 🔒 𝗸𝗲𝗲𝗽𝘀 𝘆𝗼𝘂𝗿 𝗱𝗮𝘁𝗮 𝗯𝗲𝗵𝗶𝗻𝗱 𝘆𝗼𝘂𝗿 𝗳𝗶𝗿𝗲𝘄𝗮𝗹𝗹 — 𝗻𝗼𝘁 𝗶𝗻 𝘀𝗼𝗺𝗲𝗼𝗻𝗲 𝗲𝗹𝘀𝗲’𝘀 𝗹𝗼𝗴𝘀. I’ve added a simple explainer in the comments. (This post is part of my 27-part Public AI Risk Series.)

    #PrivateAI #EnterpriseAI #CyberSecurity #BoardDirectors #AICompliance
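    As a minimal sketch of the "behind your firewall" pattern, the example below sends a prompt to a self-hosted, OpenAI-compatible endpoint on the local network instead of a public API. The http://localhost:8000/v1 URL and the model name are placeholders for whatever internal serving stack an organization actually runs (vLLM, Ollama, or similar); this is an assumption for illustration, not a description of any vendor's product.

```python
import requests

# Placeholder endpoint for a self-hosted, OpenAI-compatible model server
# running on your own hardware. The URL and model name are assumptions for
# this sketch; substitute your internal deployment.
PRIVATE_ENDPOINT = "http://localhost:8000/v1/chat/completions"
MODEL_NAME = "internal-llm"


def ask_private_model(prompt: str) -> str:
    """Send a prompt to the in-house model; the request stays on the local network."""
    payload = {
        "model": MODEL_NAME,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    response = requests.post(PRIVATE_ENDPOINT, json=payload, timeout=60)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask_private_model("Summarize our incident-response runbook in three bullets."))
```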

  • Patrick Sullivan
    VP of Strategy and Innovation at A-LIGN | TEDx Speaker | Forbes Technology Council | AI Ethicist | ISO/IEC JTC1/SC42 Member

    ✴ AI Governance Blueprint via ISO Standards – The 4-Legged Stool ✴

    ➡ ISO42001: The Foundation for Responsible AI
    #ISO42001 is dedicated to AI governance, guiding organizations in managing AI-specific risks like bias, transparency, and accountability. Focus areas include:
    ✅ Risk Management: Defines processes for identifying and mitigating AI risks, ensuring systems are fair, robust, and ethically aligned.
    ✅ Ethics and Transparency: Promotes policies that encourage transparency in AI operations, data usage, and decision-making.
    ✅ Continuous Monitoring: Emphasizes ongoing improvement, adapting AI practices to address new risks and regulatory updates.

    ➡ #ISO27001: Securing the Data Backbone
    AI relies heavily on data, making ISO27001’s information security framework essential. It protects data integrity through:
    ✅ Data Confidentiality and Integrity: Ensures data protection, crucial for trustworthy AI operations.
    ✅ Security Risk Management: Provides a systematic approach to managing security risks and preparing for potential breaches.
    ✅ Business Continuity: Offers guidelines for incident response, ensuring AI systems remain reliable.

    ➡ ISO27701: Privacy Assurance in AI
    #ISO27701 builds on ISO27001, adding a layer of privacy controls to protect personally identifiable information (PII) that AI systems may process. Key areas include:
    ✅ Privacy Governance: Ensures AI systems handle PII responsibly, in compliance with privacy laws like GDPR.
    ✅ Data Minimization and Protection: Establishes guidelines for minimizing PII exposure and enhancing privacy through data protection measures.
    ✅ Transparency in Data Processing: Promotes clear communication about data collection, use, and consent, building trust in AI-driven services.

    ➡ ISO37301: Building a Culture of Compliance
    #ISO37301 cultivates a compliance-focused culture, supporting AI’s ethical and legal responsibilities. Contributions include:
    ✅ Compliance Obligations: Helps organizations meet current and future regulatory standards for AI.
    ✅ Transparency and Accountability: Reinforces transparent reporting and adherence to ethical standards, building stakeholder trust.
    ✅ Compliance Risk Assessment: Identifies legal or reputational risks AI systems might pose, enabling proactive mitigation.

    ➡ Why This Quartet? Combining these standards establishes a comprehensive compliance framework:
    🥇 1. Unified Risk and Privacy Management: Integrates AI-specific risk (ISO42001), data security (ISO27001), and privacy (ISO27701) with compliance (ISO37301), creating a holistic approach to risk mitigation.
    🥈 2. Cross-Functional Alignment: Encourages collaboration across AI, IT, and compliance teams, fostering a unified response to AI risks and privacy concerns.
    🥉 3. Continuous Improvement: ISO42001’s ongoing improvement cycle, supported by ISO27001’s security measures, ISO27701’s privacy protocols, and ISO37301’s compliance adaptability, ensures the framework remains resilient and adaptable to emerging challenges.

  • Kathy Reid MBA
    R&D Lead - Data, AI and ML | Creating fairer data futures at Mozilla Data Collective

    From March 28th, Amazon will send everything you say - all audio utterances - to the cloud, removing support for "processing on device". Billed as necessary to support Voice ID and Alexa+ features, this change has significant implications for #privacy. It means everything you say in your home - your domestic environment - is sent to a corporation whose goal is to generate revenue from that #speech #data.

    This follows moves by Ford to patent in-car advertising technology based upon what's spoken inside a vehicle, and failed attempts by Rabbit with the R1 to create a universal voice agent. We're seeing here another inflection point in the development of #VoiceAssistants, where, in trying to find product-market fit, users are expected to give up their #privacy.

    Will this mean people will pay for #privacy with options like Home Assistant? Time will tell. Nuanced and informed reporting by Scharon Harding for Ars Technica. https://lnkd.in/gSsVxb9h
