AI training on customer data is a sensitive subject. Here are 3 win/win approaches:

1️⃣ Transparency

Clearly stating how you train AI models on confidential information is non-negotiable. Sliding changes into your terms and conditions won't cut it. Companies that are transparent win trust, and their customers have less reason to be suspicious.

2️⃣ Favor opt-in

Practices vary when it comes to customer consent:
- Mandatory: spam-filtering training on Gmail
- Opt-out: ChatGPT Plus and below
- Opt-in: Claude (mostly)

There is always going to be reputation risk in forcing - or nudging - people into AI training. So the "cleanest" approach is to incentivize opt-ins with cash or free product.

3️⃣ Let customers approve de-identification outputs (h/t to Jonathan Todd)

Sometimes you don't need to de-identify data before AI training, as with temperature readings. In other cases, de-identification will be relatively easy. For example, you could simply remove all columns linked to a person in a SQL database of supermarket sales records (see the sketch after this post).

But sometimes de-identification will be both necessary and difficult. Generative AI training on contracts is one such example. You'll need to remove:
- Financial projections for a merger or acquisition
- Descriptions of trade secrets sold
- Personal data in images

Automating legal document drafting can create a lot of value, though, so this is a high-risk, high-reward situation.

The best way forward? Let customers confirm the results of de-identification before AI training on the redacted material. And incentivize them to do so.

If the company is using AI for de-identification, that algorithm gets better every time. This will form a virtuous cycle over time, where the:
- de-identification process improves
- customers get compensated
- company grows more confident

🔳 Bottom line - you can improve trust in AI products by:
1. Transparency
2. Favoring opt-in
3. Letting customers approve de-identification

What do you think of this approach?
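As a concrete illustration of the "easy" de-identification case above, here is a minimal sketch of dropping person-linked columns from a sales table. The column names and the pandas-based approach are assumptions for illustration, not a real supermarket schema:

```python
# Minimal sketch: remove person-linked columns from supermarket sales
# records before AI training. Column names are hypothetical.
import pandas as pd

# Columns assumed to identify a person, directly or indirectly.
PERSON_LINKED_COLUMNS = ["customer_id", "loyalty_card_no", "email", "zip_code"]

def deidentify_sales(df: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of the sales table with person-linked columns removed."""
    present = [c for c in PERSON_LINKED_COLUMNS if c in df.columns]
    return df.drop(columns=present)

sales = pd.DataFrame({
    "customer_id": [101, 102],
    "email": ["a@example.com", "b@example.com"],
    "item": ["milk", "bread"],
    "price": [1.99, 2.49],
})
print(deidentify_sales(sales))  # only 'item' and 'price' remain
```

The hard case (contracts, images) is exactly where a fixed column list like this breaks down - which is why the post argues for customer confirmation of the redacted output.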
How to Obtain Consumer Consent for Data Collection
Explore top LinkedIn content from expert professionals.
Summary
Obtaining consumer consent for data collection is essential to protect privacy and build trust, especially as businesses increasingly rely on AI technologies. It involves clear communication, informed choices, and adherence to legal and ethical standards when handling personal information.
- Be transparent: Clearly explain how consumer data will be collected, used, and stored, ensuring privacy notices are accessible and easy to understand.
- Offer clear consent options: Provide consumers with opt-in mechanisms for data collection, and allow them to easily revoke their consent if they change their minds.
- Minimize and anonymize data: Collect only the data necessary for your goals, apply anonymization techniques, and regularly review datasets to remove outdated or unnecessary information.
-
The Oregon Department of Justice released new guidance on legal requirements when using AI. Here are the key privacy considerations, and four steps for companies to stay in line with Oregon privacy law. ⤵️

The guidance details the AG's views on how uses of personal data in connection with AI, or to train AI models, trigger obligations under the Oregon Consumer Privacy Act, including:

🔸 Privacy Notices. Companies must disclose in their privacy notices when personal data is used to train AI systems.
🔸 Consent. Updated privacy policies disclosing uses of personal data for AI training cannot justify the use of previously collected personal data for AI training; affirmative consent must be obtained.
🔸 Revoking Consent. Where consent is provided to use personal data for AI training, there must be a way to withdraw consent, and processing of that personal data must end within 15 days (a minimal tracking sketch follows this post).
🔸 Sensitive Data. Explicit consent must be obtained before sensitive personal data is used to develop or train AI systems.
🔸 Training Datasets. Developers purchasing or using third-party personal data sets for model training may be personal data controllers, with all the obligations that data controllers have under the law.
🔸 Opt-Out Rights. Consumers have the right to opt out of AI uses for certain decisions like housing, education, or lending.
🔸 Deletion. Consumer #PersonalData deletion rights need to be respected when using AI models.
🔸 Assessments. Using personal data in connection with AI models, or processing it in connection with AI models that involve profiling or other activities with a heightened risk of harm, triggers data protection assessment requirements.

The guidance also highlights a number of scenarios where sales practices using AI, or misrepresentations due to AI use, can violate the Unlawful Trade Practices Act.

Here are a few steps to help stay on top of #privacy requirements under Oregon law and this guidance:
1️⃣ Confirm whether your organization or its vendors train #ArtificialIntelligence solutions on personal data.
2️⃣ Validate that your organization's privacy notice discloses AI training practices.
3️⃣ Make sure organizational individual-rights processes are scoped for personal data used in AI training.
4️⃣ Set assessment protocols where required to conduct and document data protection assessments that address the requirements under Oregon and other states' laws, and that are maintained in a format that can be provided to regulators.
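To make the revocation requirement concrete, here is a hedged sketch of a consent record that flags the 15-day deadline. The 15-day window comes from the guidance summarized above; the class, field names, and storage approach are hypothetical illustration, not a compliance tool:

```python
# Sketch of a consent ledger enforcing the Oregon guidance's rule that
# processing must end within 15 days of consent withdrawal. All names
# are hypothetical; only the 15-day window is taken from the guidance.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

REVOCATION_WINDOW = timedelta(days=15)  # per the Oregon guidance

@dataclass
class ConsentRecord:
    consumer_id: str
    granted_on: date
    revoked_on: Optional[date] = None

    def may_process_for_training(self, today: date) -> bool:
        """Processing must stop no later than 15 days after revocation."""
        if self.revoked_on is None:
            return True
        return today < self.revoked_on + REVOCATION_WINDOW

record = ConsentRecord("c-001", granted_on=date(2025, 1, 2))
record.revoked_on = date(2025, 3, 1)
print(record.may_process_for_training(date(2025, 3, 10)))  # True: within window
print(record.may_process_for_training(date(2025, 3, 20)))  # False: past deadline
```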
-
Over the past 2.5 years of building Zocks, I've talked to many Chief Compliance Officers at large financial firms about how to ensure compliance when using AI. Here are 4 areas I always recommend they cover:

1) Consent
Since AI analyzes a lot of data and conversations, I tell them to make sure FAs get consent from their clients. They can get consent in multiple ways:
- Pre-meeting email
- Have the advisor specifically ask during the meeting (Zocks detects and reports on this automatically)
- Include it in the paperwork
The key is notifying clients and getting clear consent that the firm will use AI systems. (A minimal consent-gate sketch follows this post.)

2) Output review by FAs
AI systems in financial planning are designed to aid advisors - not automate everything. FAs are still responsible for reviewing AI outputs, ensuring that the system only captures necessary data, and checking it before entering it into books and records. That's why I always emphasize the workflow we developed for Zocks: it ensures advisors review outputs before they're finalized.

3) Supervising & archiving policy
Frankly, FINRA and SEC regulations around AI are a bit vague and open to interpretation. We expect many changes ahead, especially around supervision, archiving, and privacy. What do you consider books and records, and is that clear? Firms need a clear, documented policy on supervising and archiving. Their AI system must be flexible enough to adapt as the policy changes, or they'll need to overhaul it. Spot checks or supervision through the system itself should be part of this policy to ensure compliance.

4) Recommendations
Some AI systems offer recommendations. Zocks doesn't. In fact, I tell Chief Compliance Officers to be cautious around recommendations. Why? They need to understand the data points driving the recommendation, ensure FAs agree with it, and not assume it's always correct. Zocks factually reports instead of recommending, which I think is safer from a compliance perspective.

Final thoughts: If you:
- Get consent
- Ensure FAs review outputs
- Establish a supervising and archiving, or books and records, policy
- Watch out for recommendations
...it will help you a lot with compliance. And when disputes arise, you'll have the data to defend yourself, your firm, and your advisors.

Any thoughts?
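Here is a minimal sketch of the consent gate described in point 1 - a check that some accepted consent source is on file before an AI system touches a client meeting. This is an illustration of the idea, not Zocks' actual implementation; the function names, consent sources, and log structure are all assumptions:

```python
# Sketch of a consent gate run before AI processing of a client
# meeting. Hypothetical names; not a real product's workflow.

CONSENT_SOURCES = {"pre_meeting_email", "verbal_in_meeting", "signed_paperwork"}

def consent_on_file(client_id: str, consent_log: dict[str, set[str]]) -> bool:
    """True if at least one accepted consent source is recorded for the client."""
    return bool(consent_log.get(client_id, set()) & CONSENT_SOURCES)

def process_meeting(client_id: str, transcript: str,
                    consent_log: dict[str, set[str]]) -> str:
    if not consent_on_file(client_id, consent_log):
        raise PermissionError(f"No AI-use consent on file for client {client_id}")
    # ... downstream AI analysis would run here ...
    return f"Processed {len(transcript.split())} words for {client_id}"

log = {"client-42": {"verbal_in_meeting"}}
print(process_meeting("client-42", "We discussed retirement goals.", log))
```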
-
We're kicking off our deep dive on AI risks and internal controls by diving into the first privacy concern: unauthorized data collection and usage.

❌ The Risk: AI systems can collect personal or sensitive data without individuals' knowledge or consent. This includes scraping publicly available information, repurposing data for unintended uses, and failing to inform users about how their data will be processed or stored.

✅ The Controls: To mitigate this risk, organizations should implement controls across the entire data lifecycle - from collection to processing to secure deletion - using a four-pronged approach:

🧾 Policies & Governance
- Establish and enforce clear data collection, usage, and retention policies
- Require Data Protection Impact Assessments before deploying AI tools
- Mandate transparency documentation for all AI models that use personal data

✒️ Consent Management
- Obtain informed, explicit consent for data use
- Provide clear, accessible privacy notices at the point of data collection
- Allow users to opt out or revoke consent easily

📊 Data Minimization & Anonymization
- Collect only data that is strictly necessary for the AI model's purpose
- Apply de-identification or anonymization techniques
- Regularly review data sets to purge unnecessary or outdated information

🔎 Oversight & Monitoring
- Conduct regular audits of data collection practices
- Monitor third-party data sources and vendors for compliance
- Implement data usage logs and alerts to detect misuse (a minimal logging sketch follows this post)

By putting the right controls in place - across policies, consent, data handling, and monitoring - you can reduce the risk of unauthorized data collection and build more trustworthy AI systems. Remember, it's not just about what your AI can do - it's about what it "should" do with people's data.

🦦 Before you dive back into your day, ask yourself:
- Do we know exactly what data our AI systems are collecting - and why?
- Are users fully informed and empowered to control their own data?
- Have we reviewed whether the data we store is still necessary - or should it be purged?
- What safeguards do we have if a third-party vendor mishandles data?

Thoughtful questions today help prevent privacy headlines tomorrow. Stay tuned - next week, we'll explore the murky waters of data storage and security.

#internalaudit #audit #auditforward #swimwithaudie #auditsmarter #AI #ArtificialIntelligence #AuditingAI #AuditTheFuture
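As a hedged sketch of the "data usage logs and alerts" control above, the snippet below logs every dataset access and warns on unapproved purposes. The purpose names, allow-list, and logging setup are assumptions for illustration, not a prescribed monitoring design:

```python
# Sketch of a data-usage log with a simple misuse alert, illustrating
# the "logs and alerts" control. All names here are hypothetical.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("data_usage")

# Purposes the organization has approved for each dataset (assumed).
APPROVED_PURPOSES = {"sales_records": {"demand_forecasting"}}

def record_usage(dataset: str, purpose: str, actor: str) -> None:
    """Log every access; warn when a dataset is used for an unapproved purpose."""
    stamp = datetime.now(timezone.utc).isoformat()
    log.info("%s dataset=%s purpose=%s actor=%s", stamp, dataset, purpose, actor)
    if purpose not in APPROVED_PURPOSES.get(dataset, set()):
        log.warning("ALERT: unapproved use of %s for %s by %s", dataset, purpose, actor)

record_usage("sales_records", "demand_forecasting", "etl-job-7")  # normal access
record_usage("sales_records", "ad_targeting", "etl-job-7")        # triggers alert
```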