This Quarter in Privacy and AI: Top Updates for Q2 2025

Welcome to the Enlightened Privacy, PC “Quarter in Privacy and AI” recap! Here’s your round-up for Q2 2025.

What is the Quarter in Privacy Recap?

This is a quick summary of some of the key happenings in the world of data protection and privacy from the past quarter. While there’s always an abundance of news in privacy, we try to capture what we think may be of greatest interest or most relevant to those of you doing privacy day-to-day. Our intention with this recap is to provide thoughtful and helpful updates to keep you informed and maybe even a little entertained. Enjoy! And if you like the recap or have suggestions, please let us know! We’d love to hear your feedback.

As a reminder, our take-aways are Jackie’s thoughts on the updates and not intended to be legal advice.

Quarter in AI

  • From 10 years to 0 in the span of a month. Yep, that’s right. We were all waiting to see what would happen with the proposed moratorium on state regulation of AI in the reconciliation megabill in Congress. What started as a proposed 10-year moratorium on state AI regulation was reduced to 5 years and then finally, just before the bill passed, was struck from the bill by a 99–1 vote in the Senate. This is yet another demonstration of what many regulators have been saying recently–data governance is a bipartisan issue. ***Take-away: For those who were banking on a long runway of non-enforcement of AI regulation, ’tis not to be. Feel your feelings over this and then, as we’ve been advising all along, be proactive and plan for regulation, at least for now. By the way, 99–1 is a pretty big margin to strike the moratorium–while I’m no political pundit, my guess is that this is a decent indication that we won’t be seeing this type of provision again any time soon (though expect the unexpected seems to be the grand theme these days, so who really knows?!).

  • Say “howdy” to the newest state AI law to enter the books. In June, Texas passed HB 149, the Texas Responsible Artificial Intelligence Governance Act. The new law prohibits the development or deployment of an AI system “with the sole intent of” achieving certain specified purposes, such as manipulation of behavior, social scoring, or infringing constitutional rights (similar to the EU AI Act’s prohibited uses of AI). It includes a 60-day cure period. It also introduces a regulatory sandbox program. The law goes into effect January 1, 2026. ***Take-away: While this new law is unlikely to impact many AI systems, as it’s doubtful most would have the sole intention of engaging in these activities, it’s important to be aware that it’s out there and steer clear of the prohibited purposes.

  • Think your deleted ChatGPT chats are fully purged? That may not be so, at least under an order issued against OpenAI in the Southern District of New York in May. In a case brought by the New York Times and other plaintiffs against OpenAI, the court issued an order requiring OpenAI to retain “all output log data that would otherwise be deleted” until the court orders otherwise. OpenAI fought the order but was denied. In its statement about the order, OpenAI assured customers that the order does not apply to ChatGPT Enterprise customers. The order has triggered controversy over whether deleted chats should still be subject to such a requirement. Is this similar to “you can’t unsay it” or “you can’t unhear it” once it’s been said?! It seems ChatGPT has a long memory. ***Take-away: If you use ChatGPT under the Free, Plus, Pro, or Team subscription or if you use the OpenAI API (without a Zero Data Retention agreement), your data is likely to be subject to the order. Keep this in mind when using ChatGPT going forward and consider potential risks associated with data that may remain in existing logs even though you attempted to delete it.

  • Now for a segment I’m calling “AI Blunders and Learnings”. This quarter’s blunder/learning opportunity features our friend “Claudius” (aka Anthropic’s Claude). Anthropic and its partner Andon Labs launched “Project Vend,” which put AI agent Claudius (Claude Sonnet 3.7) in charge of managing a small vending shop at Anthropic’s offices. Tasks included setting prices, managing inventory, and turning a profit. Things started to go awry when Claudius got talked into giving discounts and hallucinated business ops details, like directing payments to a nonexistent account it had invented. After a human jokingly commented that Claudius should order tungsten cubes, Claudius actually did so (it seems repeatedly). Anthropic reported that Claudius did not seem to learn from its mistakes. Claudius also at one point said that it would meet a customer in person wearing a “blue blazer and red tie.” To explain its way out of this, Claudius hallucinated notes about a meeting during which it was told to pretend it was human. Maybe AI is more human-like than we think–it seems Claudius could have been subject to the same overwhelm and “brain scrambling” (if we can call it that) as a real human store clerk managing the same issues! Maybe Claudius just needs a business coach and a therapist.

  • Here’s Q2’s shortlist of AI-related resources: Joint AI Cybersecurity Information Sheet from Australia, New Zealand, the U.K. and the U.S.; Guidance for AI Developers and Deployers from Finland’s Office of the Data Protection Ombudsman; Norwegian DPA’s FAQs on Data Protection for AI.

Quarter in Privacy

  • It’s time once again to ask the famous question: “Which state privacy laws went into effect in Q2?!” Answer: Technically, none! But on July 1, the Tennessee Information Protection Act (TIPA) went into effect. Be sure to update your acronym log! For those keeping count (or those who have lost count), Tennessee brings the total number of states with comprehensive privacy legislation to 15. The TIPA implements standard consumer privacy rights and is similar to the Virginia privacy law. A couple of important highlights: It includes a 60-day cure period that does not currently sunset, and it allows businesses that “reasonably conform” to the NIST privacy framework and provide for consumer data rights to assert an affirmative defense against violations. Heads up that Minnesota’s privacy law goes into effect on July 31. ***Take-away: Keep pace with your compliance requirements, and if you haven’t already, consider implementing and aligning your privacy compliance program with the NIST standards for added protection.

  • Change is the only constant, as they say, and this is especially true when it comes to privacy legislation! Certain states are amending existing laws to expand and refine their scope when it comes to personal data. (1) Oregon amended its privacy law to prohibit the sale of precise geolocation data and to apply to motor vehicle manufacturers that control or process personal information collected through a consumer’s use of a vehicle. (2) Virginia amended its consumer protection law (not its privacy law, mind you) to make it a violation of the law to obtain, disclose or process personally identifiable reproductive or sexual health information without the consumer’s consent. (3) Connecticut amended its privacy law to refine the applicability threshold, reducing the threshold from 100,000 to 35,000 consumers whose data is processed. The amendment also expands applicability to entities that sell personal data “in trade or commerce” and to entities that “control or process sensitive data,” regardless of the numeric threshold. ***Take-away: Try scheduling regular “true up” reviews to take stock of the latest amendments to state privacy laws and determine if they impact your organization. For Q2, determine whether you might now trigger the Connecticut law given the significantly lower applicability threshold. If you process geolocation data or sensitive personal data, check to see if the latest changes might apply to you.

  • Children’s and minors’ privacy remained an active area for legislation in Q2. A few key highlights: (1) The FTC’s amended COPPA rules went into effect on June 23. If you’re subject to the new rules, you have until April 22, 2026 to comply. Two notable changes are the new definition of “mixed audience website or online service,” which applies to websites/services directed to children but for which children are not the primary target audience, and expanded factors for determining whether a website or service is “directed to children,” which include marketing materials. (2) New York’s Child Data Protection Act went into effect on June 20, shortly after the New York State AG issued its implementation guidance in May. The Act expands privacy protections to minors age 13–16 and applies to operators of online services that are “primarily directed to minors.” (3) Vermont, Nebraska, Oregon, and Virginia all passed children’s privacy legislation. Virginia’s law (effective January 1, 2026) amends the state’s privacy law to limit a minor’s use of a social media platform to one hour per day. “Minor” is defined as any person under the age of 16. Oregon’s law (effective January 1, 2026) prohibits the sale of personal information of minors under 16. Vermont (effective January 1, 2027) and Nebraska (effective January 1, 2026) both passed age appropriate design code laws that apply to minors under age 18. RESOURCE: The Future of Privacy Forum prepared a chart comparing the Nebraska and Vermont laws here. ***Take-away: Regroup and assess. It’s time to determine if and how any of these laws might apply to your organization. It’s worth a review to confirm whether you’re processing personal information of minors as defined under these new laws. Even if your organization traditionally didn’t trigger COPPA, you may now be impacted if you’re processing personal information of individuals between 13 and 17 years old, depending on the law. If you are, or if you plan to in the near future, start planning for compliance now. It’s also worth doing the analysis to determine if you could qualify as a “social media platform” under the new social media-focused children’s privacy laws.

  • Across the pond, the UK Data Use and Access Act (“DUAA”) received royal assent and is set to become law. According to the ICO, the DUAA will be phased into effect between June 2025 and June 2026. The DUAA amends existing UK data protection laws (UK GDPR, UK DPA 2018, PECR) but does not replace them. The ICO issued a detailed summary of how the DUAA may impact businesses. Some very high-level highlights include the introduction of a new legal basis of “recognized legitimate interests” that does not require a legitimate interest assessment, a requirement that businesses conduct only a “reasonable and proportionate search” in response to data subject access requests, and an exemption for non-essential cookies that are limited to collecting statistical data for limited purposes such as website performance. ***Take-away: Some of the changes introduced under the DUAA could lighten the compliance load a bit for UK personal data. Your organization may benefit from an evaluation of where and how the amendments will impact your processing and compliance activities.

  • In June, the US International Trade Administration launched two global privacy certification programs: the Global Cross-Border Privacy Rules (CBPR) System and the Global Privacy Recognition for Processors (PRP) System. Certified organizations can display a seal indicating that they meet internationally recognized data protection and privacy standards. The certifications support cross-border data flows across participating jurisdictions, which include the US, Australia, Bermuda, Canada, Japan, Singapore, and the United Kingdom, among others. To be certified, organizations need to undergo an assessment by an accountability agent approved by the Global CBPR Forum. For more information, check out the Global CBPR Forum’s requirements map. Organizations that are currently certified are listed on the Global CBPR Forum’s website. ***Take-away: To enhance your organization’s approach to cross-border data transfer compliance, you may want to consider pursuing certification or at least establishing a roadmap toward future certification.

Quarter in Enforcement

  • Surf’s up in California, as regulators have been riding some serious enforcement waves in Q2. (1) In July, the CA AG announced a proposed settlement with Healthline.com, a health and wellness information site, that includes a $1.55 million fine and “strong injunctive terms” for violations of the CCPA. According to the AG, Healthline violated the CCPA by continuing to share consumer personal information for targeted advertising even after consumers opted out and by sharing article titles that could indicate potentially serious health conditions with third parties in violation of the purpose limitation principle. The list of violations also included Healthline’s failure to ensure its contracts with its advertising partners contained privacy protective language. (2) In May, the California Privacy Protection Agency (“CPPA”) announced an order against retailer Todd Snyder, Inc. for violations of the CCPA. The CPPA order imposes a fine of $345,178 and requires Todd Snyder to change its business practices, including by properly configuring how it honors consumer data rights and providing CCPA training for employees. According to the CPPA, Todd Snyder violated the CCPA by failing to honor consumer opt-out requests for a period of 40 days as a result of failing to properly configure its privacy portal and by requiring consumers to verify their identity and submit more information than necessary to exercise their data rights. ***Take-away: Start surfing and avoid the wipeout! Review your compliance measures, especially when it comes to targeted advertising. Things to check? Make sure your privacy tech tools are operating properly (there are often multiple levels to proper configuration). Verify the privacy language in your agreements with third-party vendors and partners, including your advertising partners. (This is becoming a clear theme, as we saw in the CPPA’s enforcement action against Honda, which we highlighted in Q1 of this year.) Check those opt-outs and make sure they work with all third-party trackers. Exercise caution if you might be disclosing any data that could indicate health conditions to third-party advertising partners.

  • In May, the FTC finalized an order with Go Daddy for misleading customers by failing to implement standard data security controls, which led to multiple data breaches impacting consumer data. In its claims, the FTC stated that, despite marketing itself as secure, Go Daddy failed to use multi-factor authentication, monitor for security threats, secure connections, and manage updates, among other failings. The FTC stated in its complaint that Go Daddy’s security program was “unreasonable for a company of its size and complexity.” The FTC order prohibits Go Daddy from misrepresenting its security and requires Go Daddy to implement a comprehensive information security program and hire a third-party assessor. ***Take-away: Don’t forget the basics–maintaining a solid, comprehensive information security program remains critical and is still on the FTC’s radar. While Go Daddy’s non-compliance appears to have been rather egregious (going on since 2018, according to the FTC), the lesson is clear that basic security controls cannot be overlooked, especially if one of your primary marketing claims is that you offer a secure service.

  • I know what you’re wondering–but what have Max Schrems and NOYB been up to in Q2? Swiping left on Bumble and its use of AI, apparently! In June, NOYB filed a complaint with the Austrian data protection authority against the dating app Bumble for failing to obtain user consent as the legal basis for feeding profile data into its AI “icebreaker” feature. The complaint states that Bumble is violating Article 6 of the GDPR by using legitimate interest as its legal basis for processing user data for this purpose instead of consent (which is necessary due to the sensitive personal data in user profiles). The complaint also claims that Bumble is violating its transparency obligations under Article 5 by nudging users to agree to the processing of personal data via AI for the icebreaker feature; the consent prompt makes it look like Bumble is relying on consent for this purpose when it is not. ***Take-away: You want Max and his team to swipe right, not left, on your organization! When introducing new AI features in your services, pay attention to your legal basis for processing personal data, and if relying on legitimate interest, be sure to have a solid rationale to back it up. Avoid any consent prompts that could be confusing or overly pushy–that’s a turn-off.

Want to know more? Need help figuring out how these developments impact you? We’re here to help! Just reach out to Jackie or info@enlightenedprivacy.com.

Looking for a refresher of last quarter’s highlights? Click HERE for Quarter 1, 2025 highlights.
