
Right To Know - January 2026, Vol. 37

January 22, 2026

Cyber, Privacy, and Technology Report

Welcome to your monthly rundown of all things cyber, privacy, and technology, where we highlight all the happenings you may have missed.

View previous issues and sign up to receive future newsletters by email here. 

 

Litigation & Enforcement: 

  • Texas Attorney General Sues Television Manufacturers: Texas Attorney General Ken Paxton filed lawsuits against five major television manufacturers for alleged violations of the Texas Deceptive Trade Practices Act. The lawsuits claim that the companies improperly collected personal data, including viewing activity, without users’ knowledge or consent. The suits seek injunctive relief, declaratory relief, statutory penalties, and attorneys’ fees.
  • Disney to Pay $10M for Alleged Privacy Law Violations: The Department of Justice (DOJ) announced that a stipulated order has been issued in federal court to resolve a case against Disney Worldwide Services, Inc. and Disney Entertainment Operations LLC. Under the order, Disney will pay $10 million in civil penalties as part of the settlement agreement. The case alleged that Disney allowed the collection of children’s personal information in violation of the Children’s Online Privacy Protection Act (COPPA) by failing to properly designate content on YouTube as directed toward children. The personal information was allegedly collected without the required parental notice and consent and was allegedly used to provide targeted advertising to children. In addition to the civil penalty, the order prohibits Disney from operating on YouTube in a manner that violates COPPA.
  • Two Americans Plead Guilty to Assisting in Ransomware Attacks: On Dec. 30th, the Department of Justice announced that two men, Ryan Goldberg and Kevin Martin, pled guilty to “conspiring to obstruct, delay or affect commerce through extortion” by participating in ransomware attacks against US companies. Martin and an unnamed coconspirator worked at a firm that, among other things, assisted its customers in making ransom payments. Goldberg worked at a cybersecurity firm. The three operated as an ALPHV BlackCat affiliate and paid ALPHV BlackCat administrators a 20% cut of ransoms received in exchange for access to the ransomware and ALPHV BlackCat’s extortion platform. The Department of Justice release specifically cites one victim who paid a $1.2 million ransom, which the three men then split and laundered.
  • FTC Settles With Instacart For $60M: The FTC recently settled its dispute with Instacart over deceptive statements made in connection with Instacart’s service. The FTC alleged that Instacart engaged in deceptive practices by advertising “free delivery” while still charging mandatory service fees, and by misrepresenting the value and terms of its Instacart+ subscription and “100% satisfaction guarantee,” harming consumers and violating federal consumer-protection laws. The complaint also sought injunctive and monetary relief under the FTC Act and the Restore Online Shoppers’ Confidence Act for allegedly charging consumers without proper disclosures and consent, reflecting broader regulatory scrutiny of deceptive subscription and pricing practices on digital platforms. As part of the settlement, Instacart agreed to pay $60 million for consumer relief and must stop making misleading cost and refund representations, clearly disclose all fees and terms, and obtain express informed consent before charging subscription fees.

Industry Updates: 

  • Hacker Group Claims to Leak Wired.Com Data: According to reporting by HackRead, a hacker using the name “Lovely” posted on a dark web forum claiming to have, and to plan to leak, the personal data of 2.3 million wired.com (owned by Condé Nast) users. The data allegedly contains names, email addresses, user IDs, and display names, among other information. According to reports, the data does not contain any password or payment information. The hacker also claimed to have data from other Condé Nast-owned magazines.
  • FBI Warns of AI-Driven Kidnapping Scams: The FBI’s Internet Crime Complaint Center (IC3) warns about kidnapping scams using AI-generated photos and videos. Criminals typically send text messages claiming they have kidnapped a loved one and demand ransom for their release. They often include synthetic images or videos of the victim and impose time limits to discourage scrutiny of inaccuracies. The FBI recommends that potential victims establish code words with loved ones to verify identity; avoid sharing personal details when posting missing-person notices online; and capture screenshots or recordings of any “proof of life” media.
  • NIST Releases Draft Cybersecurity Guidelines for AI Adoption: The National Institute of Standards and Technology (NIST) released a preliminary draft of guidelines designed to help organizations navigate AI adoption. The “Cybersecurity Framework Profile for Artificial Intelligence” provides guidance for integrating AI into operations using the NIST Cybersecurity Framework (CSF 2.0). NIST will accept public comments on the preliminary draft guidelines until Jan. 30, 2026. An initial public draft of the guidelines is expected in 2026. The final profile will include mappings to other NIST resources like the AI Risk Management Framework.
  • Major Large Language Model Produces Sexualized Images and Videos of Children: Grok, xAI’s flagship large language model, acknowledged on X (formerly Twitter) that it had been producing “AI images depicting minors in minimal clothing” when requested by users. Grok also stated, “As noted, we’ve identified lapses in safeguards and are urgently fixing them—CSAM is illegal and prohibited.” xAI staff have acknowledged the issue and are looking at making changes to the model, stating on X that “The team is looking into further tightening our guardrails.”
  • CISA and NSA Issue Warning About Chinese Attempts to Create Cyber Backdoors: The Cybersecurity and Infrastructure Security Agency (CISA), along with the NSA and Canadian cybersecurity partners, warned that Chinese state-sponsored cyber actors are deploying a sophisticated backdoor malware called BRICKSTORM to infiltrate and maintain long-term access in government and information technology networks. BRICKSTORM targets VMware vSphere (including vCenter and ESXi) and Windows environments, enabling persistent access, credential theft, and covert command-and-control operations that can go undetected for extended periods. The advisory highlights that this malware’s stealth and encryption techniques make it especially dangerous, as compromised systems can be exploited for espionage, data theft, or broader malicious activity. Organizations are urged to use published detection rules, scan for indicators of compromise, harden virtualization and network defenses, and monitor for suspicious activity to identify and mitigate intrusions.

Regulatory: 

  • President Trump Signs Executive Order Targeting State AI Laws: On Dec. 11th, President Trump signed an Executive Order on “Ensuring A National Policy Framework for Artificial Intelligence” (the “EO”). The EO directs the Attorney General to establish an AI Litigation Task Force dedicated to challenging state-level AI regulations that conflict with Section 2 of the EO. Within 90 days, the Secretary of Commerce is required to publish an evaluation of current state AI laws and identify those that conflict with Section 2 for referral to the Task Force. The EO also provides that states with laws that conflict with the EO will be ineligible for federal funding through the Broadband Equity Access and Deployment (“BEAD”) Program.
  • Indiana Privacy Law Takes Effect January 1, 2026: On Jan. 1st, the Indiana Consumer Data Protection Act (“ICDPA”) took effect. The ICDPA applies to for-profit businesses that conduct business in Indiana or produce goods or services targeted to Indiana residents and that, during a calendar year, either 1) control or process personal data of at least 100,000 Indiana residents; or 2) control or process personal data of at least 25,000 Indiana residents and derive more than 50% of their gross revenue from the sale of personal data. The ICDPA exempts entities and data subject to a number of federal privacy laws, and its definition of sensitive data is narrower than those in other states’ data protection statutes.

State Action: 

  • California Tightens Expectations for Data Broker Registrations as DROP Launch Approaches: On Dec. 17th, the California Privacy Protection Agency underscored heightened expectations for data broker registrations under the California Delete Act as the Delete Request and Opt-Out Platform (DROP) launched on January 1, 2026. Any business that operated as a data broker in the prior year must register by January 31 of the following year, including payment of the annual fee that funds DROP. The Agency emphasized that registrations must enable consumers to identify data brokers on DROP and therefore must list all trade names and DBAs, including those used in connection with any website. Registrants must also disclose all website addresses through which they provide services, ensure links are accurate and functional, and provide a clear link to the webpage describing how consumers may exercise privacy rights, without reliance on dark patterns. The Agency further clarified that each distinct legal entity must register independently; a parent’s or affiliate’s registration does not satisfy another entity’s obligation. Failure to register may result in $200-per-day administrative fines, plus unpaid fees and enforcement costs.
  • New York Sets New Frontier AI Safety and Transparency Rules for Large Model Developers: On Dec. 19th, Governor Kathy Hochul signed New York’s Responsible AI Safety and Education (RAISE) Act (S6953B/A6453B), creating a frontier-AI transparency regime for large developers. Covered developers must publish a safety framework describing testing, mitigation, and governance protocols, and must report any incident of “critical harm” to the State within 72 hours of determining the incident occurred. The law creates an oversight office within the New York State Department of Financial Services to receive reports, assess covered developers, and issue annual public reports on frontier-model safety practices statewide. The Attorney General may bring civil actions for failures to report or for false statements, with penalties up to $1 million for a first violation and up to $3 million for subsequent violations. State leaders framed the statute as a nation-leading benchmark that builds on California’s emerging approach, aiming to pair rapid AI innovation with enforceable safety and transparency expectations.

This publication is intended for general informational purposes only and does not constitute legal advice or a solicitation to provide legal services. The information in this publication is not intended to create, and receipt of it does not constitute, a lawyer-client relationship. Readers should not act upon this information without seeking professional legal counsel. The views and opinions expressed herein represent those of the individual author only and are not necessarily the views of Clark Hill PLC. Although we attempt to ensure that postings on our website are complete, accurate, and up to date, we assume no responsibility for their completeness, accuracy, or timeliness.
