AI training on customer data is a sensitive subject. Here are 3 win/win approaches:

1️⃣ Transparency

Clearly stating how you train AI models on confidential information is non-negotiable. Sliding changes into your terms and conditions won’t cut it. Companies that are transparent win trust, and their customers get less suspicious.

2️⃣ Favor opt-in

Practices vary when it comes to customer consent:
- Mandatory: spam-filtering training on Gmail
- Opt-out: ChatGPT Plus and below
- Opt-in: Claude (mostly)

There is always going to be reputation risk in forcing - or nudging - people into AI training. So the “cleanest” approach is to incentivize opt-ins with cash or free product.

3️⃣ Let customers approve de-identification outputs (h/t to Jonathan Todd)

Sometimes you don't need to de-identify data before AI training, as with temperature readings. In other cases, de-identification is relatively easy. For example, you could simply remove all columns linked to a person in a SQL database of supermarket sales records (a minimal sketch of this follows the post).

But sometimes de-identification will be both necessary and difficult. Generative AI training on contracts is one such example. You'll need to remove:
- Financial projections for mergers or acquisitions
- Descriptions of trade secrets sold
- Personal data in images

Automating legal document drafting can create a lot of value, though, so this is a high-risk, high-reward situation.

The best way forward? Let customers confirm the results of de-identification before AI training on the redacted material. And incentivize them to do so.

If the company is using AI for de-identification, that algorithm gets better every time. This forms a virtuous cycle over time, where the:
- de-identification process improves
- customers get compensated
- company is more confident

🔳 Bottom line - you can improve trust in AI products by:
1. Transparency
2. Favoring opt-in
3. Letting customers approve de-identification

What do you think of this approach?
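As referenced above, here is a minimal sketch of the "easy" de-identification case, assuming pandas and a sales table whose person-linked columns are known by name (the column names are hypothetical):

```python
import pandas as pd

# Hypothetical person-linked columns in a supermarket sales table.
PERSON_LINKED_COLUMNS = ["customer_id", "name", "email", "loyalty_card_no"]

def deidentify_sales(df: pd.DataFrame) -> pd.DataFrame:
    """Drop every column that links a row to an individual.

    errors="ignore" keeps this safe if a column is absent
    from a given extract.
    """
    return df.drop(columns=PERSON_LINKED_COLUMNS, errors="ignore")

sales = pd.DataFrame({
    "customer_id": [101, 102],
    "name": ["Ada", "Grace"],
    "email": ["ada@example.com", "grace@example.com"],
    "loyalty_card_no": ["LC-1", "LC-2"],
    "item": ["milk", "bread"],
    "price": [1.20, 0.90],
})

training_ready = deidentify_sales(sales)
print(training_ready.columns.tolist())  # ['item', 'price']
```

Dropping direct identifiers is only the easy half: combinations of the remaining columns can still re-identify someone, which is exactly why letting customers confirm the de-identified output matters.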
Tips for Building Transparent AI Models
Explore top LinkedIn content from expert professionals.
Summary
Transparent AI models are systems designed to be accessible and understandable, so that their design, decision-making, and purposes are clear to users and stakeholders. Promoting transparency in AI is essential for building trust, ensuring accountability, and managing risk effectively.
- Communicate openly: Clearly explain how AI systems work, their purpose, and their limitations using non-technical language to help users understand their benefits and risks.
- Incorporate user consent: Prioritize methods that allow users to opt in to data processing and provide clear options for reviewing and managing their information (see the consent sketch after this list).
- Focus on explainability: Design AI systems that can provide easy-to-understand explanations of how decisions are made to ensure user confidence and accountability.
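To make the consent point concrete, here is a minimal sketch of an opt-in gate for training data, assuming each customer record carries explicit consent flags; all names and fields here are hypothetical, not a specific product's API:

```python
# Hypothetical consent gate: only records with an explicit opt-in
# (and no later withdrawal) are eligible for model training.
from dataclasses import dataclass

@dataclass
class CustomerRecord:
    customer_id: str
    text: str
    opted_in: bool = False       # default is *excluded* from training
    consent_withdrawn: bool = False

def training_eligible(records: list[CustomerRecord]) -> list[CustomerRecord]:
    return [r for r in records if r.opted_in and not r.consent_withdrawn]

records = [
    CustomerRecord("c1", "support chat ...", opted_in=True),
    CustomerRecord("c2", "support chat ..."),
    CustomerRecord("c3", "support chat ...", opted_in=True, consent_withdrawn=True),
]
print([r.customer_id for r in training_eligible(records)])  # ['c1']
```

Defaulting the flag to excluded is what makes this opt-in rather than opt-out: absence of a decision never enrolls anyone.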
If you are a financial institution and you are using AI, your customers need to know, says a discussion paper by the Autorité des marchés financiers (AMF), Québec's financial markets regulator.

Key points related to transparency:

🔹 Consumers should have access to the information they need to assess the benefits and risks associated with the use of AI in the context of procuring a financial product or service, especially when making a product or service decision.
🔹 The information should cover, in particular, the objectives, limitations and functioning of the AIS and the measures in place to mitigate the associated risks.
🔹 Consumers should also have access to all relevant information on the rights and remedies available to them should they incur harm from interacting with the AIS.
🔹 You should use plain, non-technical and concise language.
🔹 Design the disclosure interface to encourage consumers to read the information closely rather than respond quickly.
🔹 Consumers who find the disclosed information insufficient should be able to request and receive assistance from a technical expert.
🔹 Consumers should also be informed, by appropriate means (e.g., digital watermarking), that content published by a financial player has been wholly or partly created by a generative AI tool.
🔹 Whenever an AIS could have a high impact on a consumer, the consumer should have the opportunity to request a clear, reliable explanation of the process and main factors that led to the outcomes or decision provided by the AI system.
🔹 The consumer should be able to obtain a list of any personal information about them that is used by the AIS and to correct or update such information if it is inaccurate.
🔹 When consumers interact with an AIS, they should be able to get help, at any stage of the process, through an interaction with a competent person. They should also have the option of requesting to have the outcomes or decision of the AIS reviewed by a person.

#dataprivacy #dataprotection #privacyFOMO #AIprivacy
🗺 Navigating AI Impact Assessments with ISO 42005: Essential Areas for Compliance Leaders 🗺

In speaking with compliance, cybersecurity, and AI leaders around the world, one of the most common questions I have been getting of late is, “As we prepare for ISO 42001 certification, what blind spots should we be working to address?” Without hesitation, my response has been, and will continue to be: conducting and documenting a meaningful AI impact assessment.

Fortunately, though still in DRAFT status, ISO 42005 provides a structured framework for organizations to navigate that very concern effectively. As compliance executives, understanding and integrating the key components of this standard into your AI impact assessments is critical; below are the areas I feel are most essential for you to begin your journey (a sketch of how they might be recorded follows this post).

1. Ethical Considerations and Bias Management: Address potential biases and ensure fairness across AI functionalities. Evaluate the design and operational parameters to mitigate unintended discriminatory outcomes.

2. Data Privacy and Security: Incorporate robust measures to protect sensitive data processed by AI systems. Assess the risks related to data breaches and establish protocols to secure personal and proprietary information.

3. Transparency and Explainability: Ensure that the workings of AI systems are understandable and transparent to stakeholders. This involves documenting the AI's decision-making processes and maintaining clear records that explain the logic and reasoning behind AI-driven decisions.

4. Operational Risks and Safeguards: Identify operational vulnerabilities that could affect the AI system’s performance. Implement necessary safeguards to ensure stability and reliability throughout the AI system's lifecycle.

5. Legal and Regulatory Compliance: Regularly update the impact assessments to reflect changing legal landscapes, especially concerning data protection laws and AI-specific regulations.

6. Stakeholder Impact: Consider the broader implications of AI implementation on all stakeholders, including customers, employees, and partners. Evaluate both potential benefits and harms to align AI strategies with organizational values and societal norms.

By starting with these critical areas in your AI impact assessments as recommended by ISO 42005, you can steer your organization towards responsible AI use in a way that upholds ethical standards and complies with regulatory and market expectations. If you need help getting started, as always, please don't hesitate to let us know! A-LIGN

#AICompliance #ISO42005 #EthicalAI #DataProtection #AItransparency #iso42001 #TheBusinessofCompliance #ComplianceAlignedtoYou
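As referenced above, here is a minimal sketch of one way to keep the six areas consistently documented and reviewable; ISO 42005 does not prescribe this schema, and the field names and severity scale are hypothetical:

```python
# Hypothetical record structure for documenting an AI impact assessment
# across the six areas above. Not an ISO-mandated format.
from dataclasses import dataclass, field
from datetime import date

AREAS = [
    "ethics_and_bias",
    "data_privacy_and_security",
    "transparency_and_explainability",
    "operational_risks",
    "legal_and_regulatory",
    "stakeholder_impact",
]

@dataclass
class AreaFinding:
    summary: str                 # what was assessed and observed
    severity: int                # 1 (negligible) to 5 (critical), hypothetical scale
    mitigations: list[str] = field(default_factory=list)

@dataclass
class AIImpactAssessment:
    system_name: str
    assessed_on: date
    findings: dict[str, AreaFinding] = field(default_factory=dict)

    def open_gaps(self) -> list[str]:
        # Areas not yet assessed are the "blind spots" to close first.
        return [a for a in AREAS if a not in self.findings]

assessment = AIImpactAssessment("claims-triage-model", date.today())
assessment.findings["data_privacy_and_security"] = AreaFinding(
    summary="PII minimized before inference; encryption at rest verified.",
    severity=2,
    mitigations=["quarterly access review"],
)
print(assessment.open_gaps())  # the five areas still to document
```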
In earlier posts, I've discussed the immense promise and major risks associated with the new wave of text-prompted AI analytical tools, e.g., ADA, Open Interpreter, etc. Here are some best practices to avoid these pitfalls...

🔸 Prepare Written Analysis Plans - many Data Analysts are unfamiliar with this approach and even fewer regularly implement it (< 20% by my estimates). But preparing and sharing a written plan detailing your key questions and hypotheses (including their underlying theoretical basis), data collection strategy, inclusion/exclusion criteria, and methods to be used prior to performing your analyses can protect you from HARKing (hypothesizing after results are known) and generally increase the integrity, transparency and effectiveness of your analyses. Here's a prior post with additional detail: https://lnkd.in/g6VyqCsc

🔸 Split Your Dataset Before EDA - Exploratory Data Analysis is a very valuable tool, but if you perform EDA and confirmatory analyses on the same dataset, you risk overfitting and expose your analysis to risks of HARKing and p-hacking. Separating your dataset into exploratory and confirmatory partitions allows you to explore freely without compromising the integrity of subsequent analyses, and helps ensure the rigor and reliability of your findings.

🔸 Correct for the Problem of Multiple Comparisons - also known as controlling the "familywise error rate," this means countering the inflated probability of a Type I error when performing multiple hypothesis tests within the same analysis. There are a number of different methods for performing this correction, but care should be taken in the selection, since they trade off the likelihoods of Type I (i.e., "false positive") and Type II (i.e., "false negative") errors. (A minimal sketch of these last two practices follows this post.)

🔸 Be Transparent - fully document the decisions you make during all of your analyses. This includes exclusion of any outliers, performance of any tests, and any deviations from your analysis plan. Make your raw and transformed data, and analysis code, available to the relevant people, subject to data sensitivity considerations.

🔸 Seek Methodological and Analysis Review - have your analysis plan and final draft analyses reviewed by qualified Data Analysts/Data Scientists. This will help ensure that your analyses are well-suited to the key questions you are seeking to answer, and that you have performed and interpreted them correctly.

None of these pitfalls are new or unique to AI analytic tools. However, the power of these tools to run dozens or even hundreds of analyses at a time with a single text prompt substantially increases the risks of running afoul of sound analytical practices. Adhering to the principles and approaches detailed above will help ensure the reliability, validity and integrity of your analyses.

#dataanalysis #statisticalanalysis #ai #powerbi
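As referenced above, a minimal sketch of the split-before-EDA and multiple-comparisons practices, assuming pandas, scikit-learn, and statsmodels; the dataset and p-values are illustrative, not from a real analysis:

```python
# Illustrative sketch: exploratory/confirmatory split, then a Holm
# correction for multiple comparisons. Data and p-values are made up.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(42)
df = pd.DataFrame({"metric": rng.normal(size=1_000),
                   "group": rng.choice(["A", "B"], size=1_000)})

# 1. Partition once, up front: explore on one half, confirm on the other.
explore, confirm = train_test_split(df, test_size=0.5, random_state=42)
# ...generate hypotheses freely on `explore`, then test only on `confirm`.

# 2. Suppose the confirmatory stage produced p-values for five tests.
p_values = [0.01, 0.04, 0.03, 0.20, 0.001]
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
for p, p_adj, r in zip(p_values, p_adjusted, reject):
    print(f"raw p={p:.3f}  adjusted p={p_adj:.3f}  reject H0: {r}")
```

Holm is one reasonable default among the available corrections; Bonferroni is more conservative (more Type II errors), while FDR-based methods like Benjamini-Hochberg tolerate more Type I errors in exchange for power.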
“The technology problem (around AI implementation) has largely been solved,” because we have available tools at reasonable costs that require attainable skillsets. “It’s a matter of will and vision more than tech at this point, so let’s point our tools to solving business problems.” - Justin Croft

Here are four key pieces of advice from Croft to get started on your AI journey. Full article: https://lnkd.in/dkeSwNu5

1. DON’T START WITH AI. It sounds counter-intuitive, but instead, center the business and customers first, and then ask how the technology can support strategy and operations. Croft’s key insight, quoted above, is that the technology problem has largely been solved: the tools are available at reasonable costs and require attainable skillsets, so it’s a matter of will and vision more than tech.

2. CHOOSE YOUR USE CASE TO GET A WIN THAT MATTERS. After aligning on a business-first approach, it is time to narrow further and choose the right initial project. It’s important to choose something manageable enough to succeed and get a win under your belt, in order to generate momentum and learning behind your AI program. At the same time, it needs to be something that’s going to move the needle and that people will care about. Croft advises focusing on use cases with quantifiable outcomes that align with important metrics and have enough impact to matter to the organization.

3. BUILD YOUR TEAM. Here is the paradox of most AI projects: the people who build them are often not the people who use them. Success therefore requires bringing the end-users into the project, so they understand what you’re trying to accomplish, how it works, and why it matters to them. It is critical to get them on board early and engage them often. Remember the metrics from rule 2? They shape your team as well. You should be able to answer the following questions: How well is AI solving your use case? How will success be measured? How are we defining efficiency and effectiveness? Be transparent about how the AI works and how it gets to its results. Once people buy in to the use case and technology, they'll start changing their behaviors, which will drive your metrics.

4. THE ABILITY TO EXPLAIN IS MORE IMPORTANT THAN ACCURACY. It's worth giving up some accuracy to have a more explainable model. The era of the black box is over. You need to be able to stand in front of your CFO and answer the question, “How did the AI come to this conclusion?” You also want your team to critically evaluate the AI output rather than blindly trust the answers; otherwise, people just will not trust or engage with it over the long term. They should understand the business goal you’re trying to achieve and have some understanding of how these models work. Not everyone needs to understand the inner workings beyond a high level, but everyone does need to understand how accuracy is measured and reported. (A minimal sketch of an explainable-by-design model follows this post.)
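In that spirit, here is a minimal sketch of trading a black box for an explainable-by-design model, assuming scikit-learn; the loan-style features and data are hypothetical:

```python
# A linear model whose per-feature contributions give an auditable
# answer to "how did the AI come to this conclusion?"
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

features = ["income_k", "debt_ratio", "years_at_job"]
X = np.array([[50.0, 0.40, 2], [80.0, 0.10, 8],
              [30.0, 0.65, 1], [95.0, 0.20, 12],
              [45.0, 0.50, 3], [70.0, 0.15, 6]])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = approved

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

def explain(applicant: list[float]) -> list[tuple[str, float]]:
    """Rank each feature's contribution (scaled value x coefficient)
    to this specific decision, largest influence first."""
    scaled = model.named_steps["standardscaler"].transform([applicant])[0]
    coefs = model.named_steps["logisticregression"].coef_[0]
    contrib = scaled * coefs
    order = np.argsort(-np.abs(contrib))
    return [(features[i], round(float(contrib[i]), 2)) for i in order]

applicant = [40.0, 0.55, 2]
print("approved" if model.predict([applicant])[0] else "declined")
print(explain(applicant))
```

A linear model's contributions won't match a deep model's accuracy on every problem, but they produce exactly the CFO-ready explanation the post calls for; that is the trade being advocated.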