Updates on State Privacy Laws


Summary

With the rise of artificial intelligence (AI), states like Colorado, Oregon, and California are introducing new privacy laws to ensure ethical and transparent use of technology. These updates aim to address issues like algorithmic discrimination, data protection, and consumer rights, impacting both AI developers and users.

  • Understand new requirements: Stay updated on state-specific laws, such as Colorado’s high-risk AI system regulations and Oregon’s consent requirements for using personal data in AI training.
  • Prioritize consumer rights: Ensure compliance with emerging rules, including consumer notification, opt-out rights, and transparency about AI-generated content or decisions.
  • Conduct assessments: Regularly perform risk and impact assessments for AI systems to identify biases, mitigate risks, and remain compliant with state and federal guidelines.
Summarized by AI based on LinkedIn member posts
  • Victoria Beckman

    Associate General Counsel - Cybersecurity & Privacy


    Yesterday, Colorado’s Consumer Protections for #ArtificialIntelligence (SB24-205) was sent to the Governor for signature. If enacted, the law will take effect on Feb. 1, 2026, and Colorado would become the first U.S. state to pass broad restrictions on private companies’ use of #AI.

    The bill requires both the developer and the deployer of a high-risk #AI system to use reasonable care to avoid algorithmic discrimination. A high-risk AI system is defined as “any AI system that when deployed, makes, or is a substantial factor in making, a consequential decision.” Some computer software is exempted, such as AI-enabled video games, #cybersecurity software, and #chatbots that have a user policy prohibiting discrimination.

    There is a rebuttable presumption that a developer and a deployer used reasonable care if they each comply with certain requirements related to the high-risk system, including:

    Developer:
    - Disclose and provide documentation to deployers regarding the high-risk system’s intended use, known or foreseeable #risks, a summary of the data used to train it, possible biases, risk mitigation measures, and other information necessary for the deployer to complete an #impactassessment.
    - Make a publicly available statement summarizing the types of high-risk systems it develops and makes available to deployers.
    - Disclose to the attorney general and known deployers, within 90 days, when algorithmic discrimination is discovered, whether through self-testing or deployer notice.

    Deployer:
    - Implement a #riskmanagement policy that governs high-risk AI use and specifies the processes and personnel used to identify and mitigate algorithmic discrimination.
    - Complete an impact assessment to mitigate potential abuses before customers use its products.
    - Notify a consumer of specified items if the high-risk #AIsystem makes a consequential decision concerning that consumer.
    - If the deployer is a controller under the Colorado Privacy Act (#CPA), inform the consumer of the right to #optout of profiling in furtherance of solely #automateddecisions.
    - Provide a consumer with an opportunity to correct incorrect personal data that the system processed in making a consequential decision.
    - Provide a consumer with an opportunity to appeal, via human review if technically feasible, an adverse consequential decision concerning the consumer arising from the deployment of the system.
    - Ensure that users can detect any generated synthetic content and disclose to consumers that they are engaging with an AI system.

    The law contains a #safeharbor providing an affirmative defense (under Colorado law in a Colorado court) to a developer or deployer that: 1) discovers and cures a violation through internal testing or red-teaming, and 2) otherwise complies with the National Institute of Standards and Technology (NIST) AI Risk Management Framework or another nationally or internationally recognized risk management #framework.

  • Sam Castic

    Privacy Leader and Lawyer; Partner @ Hintze Law


    The Oregon Department of Justice released new guidance on legal requirements when using AI. Here are the key privacy considerations, and four steps companies can take to stay in line with Oregon privacy law. ⤵️

    The guidance details the AG's views on how uses of personal data in connection with AI, or to train AI models, trigger obligations under the Oregon Consumer Privacy Act, including:

    🔸 Privacy Notices. Companies must disclose in their privacy notices when personal data is used to train AI systems.
    🔸 Consent. Updated privacy policies disclosing uses of personal data for AI training cannot justify the use of previously collected personal data for AI training; affirmative consent must be obtained.
    🔸 Revoking Consent. Where consent is provided to use personal data for AI training, there must be a way to withdraw consent, and processing of that personal data must end within 15 days.
    🔸 Sensitive Data. Explicit consent must be obtained before sensitive personal data is used to develop or train AI systems.
    🔸 Training Datasets. Developers purchasing or using third-party personal data sets for model training may be personal data controllers, with all the obligations that data controllers have under the law.
    🔸 Opt-Out Rights. Consumers have the right to opt out of AI uses for certain decisions, such as housing, education, or lending.
    🔸 Deletion. Consumer #PersonalData deletion rights must be respected when using AI models.
    🔸 Assessments. Using personal data in connection with AI models, or processing it in connection with AI models that involve profiling or other activities with a heightened risk of harm, triggers data protection assessment requirements.

    The guidance also highlights a number of scenarios where sales practices using AI, or misrepresentations due to AI use, can violate the Unlawful Trade Practices Act.

    Here are a few steps to help stay on top of #privacy requirements under Oregon law and this guidance:
    1️⃣ Confirm whether your organization or its vendors train #ArtificialIntelligence solutions on personal data.
    2️⃣ Validate that your organization's privacy notice discloses AI training practices.
    3️⃣ Make sure organizational individual-rights processes are scoped for personal data used in AI training.
    4️⃣ Set assessment protocols, where required, to conduct and document data protection assessments that address the requirements under Oregon and other states' laws, and that are maintained in a format that can be provided to regulators.

  • Katharina Koerner

    AI Governance & Security | Trace3: All Possibilities Live in Technology | Innovating with risk-managed AI: Strategies to Advance Business Goals through AI Governance, Privacy & Security


    Yesterday, the California Department of Justice, Attorney General’s Office (AGO), issued an advisory to provide guidance to consumers and entities that develop, sell, and use AI about their rights and obligations under California law. The "Legal Advisory on the Application of Existing California Laws to Artificial Intelligence" outlines:

    1) Unfair Competition Law (Bus. & Prof. Code, § 17200 et seq.): Requires AI systems to avoid deceptive practices such as false advertising of capabilities and unauthorized use of personal likeness, making violations of related state, federal, or local laws actionable under this statute.
    2) False Advertising Law (Bus. & Prof. Code, § 17500 et seq.): Prohibits misleading advertisements about AI products' capabilities, emphasizing the need for truthfulness in the promotion of AI tools and services.
    3) Competition Laws (Bus. & Prof. Code, §§ 16720, 17000 et seq.): Guard against anti-competitive practices facilitated by AI, ensuring that AI does not harm market competition or consumer choice.
    4) Civil Rights Laws (Civ. Code, § 51; Gov. Code, § 12900 et seq.): Protect individuals from discrimination by AI in various sectors, including employment and housing.
    5) Election Misinformation Prevention Laws (Bus. & Prof. Code, § 17941; Elec. Code, §§ 18320, 20010): Regulate the use of AI in elections, specifically prohibiting the use of AI to mislead voters or impersonate candidates.
    6) California's data protection laws, which ensure oversight of personal and sensitive information: The California Consumer Privacy Act (CCPA) and the California Invasion of Privacy Act (CIPA) set strict guidelines for transparency and the secure handling of data. These regulations extend to educational and healthcare settings through the Student Online Personal Information Protection Act (SOPIPA) and the Confidentiality of Medical Information Act (CMIA).

    In addition, California has enacted several new AI regulations, effective January 1, 2025:

    Disclosure Requirements for Businesses:
    - AB 2013: Requires AI developers to disclose training data information on their websites by January 1, 2026.
    - AB 2905: Mandates disclosure of AI use in telemarketing.
    - SB 942: Obligates AI developers to provide tools to identify AI-generated content.

    Unauthorized Use of Likeness:
    - AB 2602: Ensures contracts for digital replicas include detailed use descriptions and legal representation.
    - AB 1836: Bans use of deceased personalities’ digital replicas without consent, with hefty fines.

    AI in Elections:
    - AB 2355: Requires disclosure for AI-altered campaign ads.
    - AB 2655: Directs platforms to identify and remove deceptive election content.

    Prohibitions on Exploitative AI Uses:
    - AB 1831 & SB 1381: Expand prohibitions on AI-generated child sexual abuse material.
    - SB 926: Extends criminal penalties for creating nonconsensual pornography using deepfake technology.

    AI in Healthcare:
    - SB 1120: Requires licensed physician oversight of AI healthcare decisions.

  • Zinet Kemal, M.Sc.

    I help families & educators keep kids safe online | Senior Cloud Security Engineer | Multi-award winning cybersecurity practitioner | TEDx Speaker | Author | LinkedIn Instructor | Mom of 4


    US AI state & city laws

    As artificial intelligence continues to integrate into various sectors, several U.S. states & cities have enacted laws to ensure its ethical & transparent use. Here's an overview of notable current AI regulations:

    📍 California
    1. Generative AI: Training Data Transparency (AB 2013) requires developers to disclose the data used to train AI models, promoting transparency in AI development.
    2. California AI Transparency Act (SB 942) targets providers of generative AI systems with over 1 million monthly users. It requires clear labeling of AI-generated content and provision of free AI detection tools to the public.
    3. California BOT Act (SB 1001) requires disclosure when bots are used in commercial or political interactions, ensuring users are aware they're interacting with an automated system.

    📍 Colorado
    AI Act (SB 205) aims to prevent algorithmic discrimination by requiring developers & deployers of high-risk AI systems to exercise reasonable care & maintain transparency. This is the first comprehensive US AI legislation 👏🏽

    📍 Utah
    AI Policy Act (SB 149) establishes liability for the misuse of AI that violates consumer protection laws, emphasizing responsible AI development & deployment.

    📍 New York City
    Local Law 144 regulates the use of Automated Employment Decision Tools (AEDTs) by:
    + mandating bias audits before deployment in the hiring process.
    + requiring public availability of audit results.
    + ensuring notifications are provided to employees or job candidates regarding the use of such tools.

    Staying informed about such laws is essential for both developers and users to navigate the evolving AI landscape responsibly. Alright, the study on AI Governance continues …

    P.S. What else came out since my last reading?

    #artificialintelligence #AI #AIgovernance
