
Right To Know - March 2026, Vol. 39

March 25, 2026

Cyber, Privacy, and Technology Report

Welcome to your monthly rundown of all things cyber, privacy, and technology, where we highlight all the happenings you may have missed.

View previous issues and sign up to receive future newsletters by email here. 


Litigation & Enforcement: 

  • Marquis Sues SonicWall Over Data Breach: Marquis Software Solutions sued SonicWall, Inc., claiming SonicWall was at fault for Marquis’ August 2025 cybersecurity incident, which impacted the information of more than four hundred thousand (400,000) individuals. Marquis claims that the threat actor used credentials stolen in a previous SonicWall incident to bypass Marquis’ firewall, resulting in Marquis’ own incident. Marquis asserts claims for negligence, unjust enrichment, and negligent misrepresentation, and seeks contribution and indemnity.
  • SDNY Rules Chats with Public AI Platforms Are Not Protected by Attorney-Client Privilege: In a first-of-its-kind ruling, the U.S. District Court for the Southern District of New York held that a criminal defendant’s chats with a public generative AI platform are not protected by attorney-client privilege or the work product doctrine. In a decision by Judge Jed S. Rakoff, the court found that communications with Anthropic’s AI tool Claude were not confidential, were not made for the purpose of obtaining legal advice, and were not prepared at the direction of counsel. The court emphasized that public AI platforms’ data-sharing and training practices undermine any reasonable expectation of privacy, and that materials created independently by a client do not qualify as protected work product. The ruling serves as a caution to companies and individuals: sharing sensitive or litigation-related information with public AI tools may waive privilege, highlighting the need for clear AI use policies and careful legal oversight.
  • NetChoice Challenges South Carolina Age-Appropriate Design Code on Constitutional Grounds: On Feb. 9th, NetChoice filed suit challenging South Carolina’s Age-Appropriate Design Code (“SC AADC”), signed into law by South Carolina’s Governor on Feb. 5th, alleging violations of the First and Fourteenth Amendments. The SC AADC, which took immediate effect, imposes broad design, data minimization, default-setting, parental control, and audit requirements on online services “reasonably likely to be accessed by a minor,” and authorizes enforcement without a cure period, including treble damages and potential personal liability. This action follows similar challenges that the group has filed against age-appropriate design laws in other states, including California (where enforcement has been enjoined).

Industry Updates: 

  • Ransomware Trends: Why Security Hygiene Failures Continue to Drive Rising Threats: Infosecurity Magazine reports that ransomware remains one of the most persistent and costly cybersecurity threats, with average ransom demands rising to $1.3 million in 2025—vastly higher than a decade ago. According to the article, attackers continue to succeed because organizations often fail at basic security hygiene. Most breaches stem from familiar issues, such as unpatched vulnerabilities, weak or reused passwords, lack of multi-factor authentication (MFA), and excessive user permissions that allow attackers to move laterally once inside a network. The article further notes that, at the same time, modern IT environments have become far more complex, expanding the attack surface through cloud platforms, AI systems, and remote work tools. Criminals increasingly rely on social engineering tactics like “ClickFix,” tricking users into running malicious code on their own machines, thereby bypassing security controls. AI is also helping threat actors customize lures, write code, and scale attacks more easily. Experts emphasize that ransomware persists largely because victims continue to pay ransoms, fueling the ecosystem. Strong preventive measures (e.g., patching, MFA, proper configuration, and well-resourced security teams) remain the most effective way to reduce the risk and impact of these attacks.
  • President Trump Orders Federal Agencies to Cease Use of Anthropic AI Models in Truth Social Post: Following a rift between the Department of War and Anthropic over Anthropic’s ethical safeguards regarding use of its artificial intelligence models, President Trump took to Truth Social to declare that he was “… directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology.” Despite this Feb. 27th declaration, it has been widely reported that Anthropic’s Claude was utilized by the Department of War in the strikes on Iran on Feb. 28th. Anthropic has filed a lawsuit challenging its designation as a supply chain risk by the Department of Defense, also known as the Department of War.
  • FTC Policy Clarification Regarding Age-Verification Technologies and COPPA: The FTC has issued a policy statement clarifying how it will treat age-verification technologies under the Children’s Online Privacy Protection Act (COPPA) to encourage their use in protecting children online. Under COPPA, commercial websites and online services directed to children under 13, or those that know they are collecting personal information from a child under 13, must give notice to parents and obtain verifiable parental consent before collecting, using, or disclosing that information. However, some age-verification technologies require collecting personal information to determine a user’s age, which could trigger those COPPA obligations. To address this, the FTC said it will not bring enforcement actions under the COPPA Rule against operators that collect, use, or disclose personal information solely for the purpose of determining a user’s age without first obtaining parental consent, so long as they meet specific conditions. These conditions include restricting use of the data to age verification only, not retaining it longer than necessary, disclosing it only to trusted third parties with adequate protections, providing clear notice about the information collected, employing reasonable security measures, and ensuring the age-verification methods are reasonably accurate.
  • Academic Study Finds Serious Weaknesses In Cloud-Based Password Managers: Researchers at ETH Zurich recently revealed that several popular cloud-based password managers are less secure than their marketing suggests, finding serious vulnerabilities in their security architecture. In tests on platforms like Bitwarden, LastPass, and Dashlane, the team was able to view and even modify stored passwords by simulating a compromised server environment, highlighting weaknesses in how these services handle encrypted data. The study challenges the common “zero-knowledge encryption” promise — that even the provider can’t access your vault — showing that certain features (like recovery or sharing) can open attack vectors that undermine that guarantee. While vendors have been notified and are working on fixes, the findings underscore the need for improved cryptographic practices and clearer communication about actual security guarantees.
  • FTC Releases Second Congressional Report on Cyberattacks: On Feb. 6th, the Federal Trade Commission (FTC) unanimously approved and issued its second report to Congress under the RANSOMWARE Act, detailing its efforts to combat ransomware and other cyberattacks. The report updates the FTC’s 2023 overview of activities related to threats from China, Russia, North Korea, and Iran, and highlights its data security enforcement program, which requires companies to implement reasonable safeguards for personal information. The FTC reports bringing more than 90 enforcement actions and describes additional efforts to combat tech support scams and educate consumers and businesses through guidance and alerts on malware and cybersecurity best practices.
  • NIST is Seeking Public Comment On AI Agent Security: The National Institute of Standards and Technology’s Center for AI Standards and Innovation has issued a Request for Information seeking input on the security risks and safeguards associated with autonomous AI agent systems. Unlike traditional AI tools, AI agents can independently plan, make decisions, and interact with external systems such as APIs, identity platforms, databases, and enterprise infrastructure, creating new attack surfaces and governance challenges. NIST develops technical standards, frameworks, and guidance, including the Cybersecurity Framework and AI Risk Management Framework. Its initiatives are important to monitor because NIST guidance frequently becomes the baseline for federal procurement requirements, regulatory expectations, and industry best practices, effectively shaping how organizations design and govern emerging technologies.

Regulatory: 

  • OCR’s Latest HIPAA Security Rule Settlement Underscores Ongoing Risk Analysis Enforcement: HHS OCR’s Feb. 19th settlement with Top of the World Ranch Treatment Center (“TWRTC”) is the 11th enforcement action under its Risk Analysis Initiative and another clear reminder that OCR continues to treat HIPAA Security Rule risk analysis compliance as a core enforcement priority. The matter arose from a March 2023 breach report involving a phishing attack that resulted in unauthorized access to a workforce member’s email account and the compromise of ePHI affecting 1,980 patients. Following its investigation, OCR found evidence that TWRTC failed to conduct an accurate and thorough risk analysis addressing risks and vulnerabilities to the confidentiality, integrity, and availability of its ePHI. OCR emphasized that regulated entities cannot adequately protect ePHI without first identifying where their risks and vulnerabilities are. As part of the resolution, TWRTC agreed to a corrective action plan monitored by OCR for two years, and a $103,000 payment. The corrective action plan requires a compliant risk analysis, a risk management plan, updated HIPAA policies and procedures, and annual workforce training.
  • HHS OCR Launches Civil Enforcement for Part 2 Substance Use Disorder Records, Raising the Stakes for Privacy and Breach Compliance: HHS OCR’s Feb. 13th announcement is a major development for providers and other entities handling substance use disorder (SUD) patient records subject to 42 C.F.R. Part 2. Effective February 16, 2026, OCR began civil enforcement of Part 2 confidentiality requirements, including accepting complaints alleging violations and breach notifications involving SUD patient records. This is the first time OCR has made HIPAA-style civil enforcement mechanisms available in this context, including resolution agreements, monetary settlements, corrective action commitments, and civil money penalties. The announcement operationalizes the CARES Act’s Part 2 reforms and continues HHS’s broader effort to align Part 2 more closely with HIPAA and HITECH while preserving stronger protections for SUD records in key respects. OCR also released a model patient notice and updated model HIPAA Notices of Privacy Practices to help regulated entities implement the new requirements.
  • FTC Issues Letter Putting Data Brokers on Notice of Possible PADFAA Violations: The Federal Trade Commission (“FTC”) sent a letter to thirteen data brokers putting them on notice of their obligations under the Protecting Americans’ Data from Foreign Adversaries Act of 2024 (“PADFAA”). The letter flagged that several of the recipients appeared to be marketing data products tied to individuals’ military status that would fall under the scope of PADFAA. The law protects a wide range of data, such as health, financial, biometric, genetic, geolocation, and behavioral information, from disclosure to foreign adversary countries or foreign adversary controlled entities.
  • Treasury Sanctions Foreign Exploit Broker Network Over Theft and Sale of U.S. Government Cyber Tools: On Feb. 24th, the U.S. Department of the Treasury’s Office of Foreign Assets Control announced sanctions against Sergey Sergeyevich Zelenyuk, his firm Matrix LLC (operating as Operation Zero), and multiple associated individuals and entities for trafficking in stolen cyber tools and software exploits that pose threats to U.S. national security. The action marks the first enforcement under the Protecting American Intellectual Property Act, which authorizes sanctions on parties that knowingly engage in or benefit from significant theft of U.S. trade secrets with implications for national security, foreign policy, or economic stability. Operation Zero acted as a broker in acquiring and selling “exploits” and proprietary U.S. government cyber tools, including by paying bounties for vulnerabilities and offering these capabilities to unauthorized users. Designations also extend to affiliated companies and individuals, including those linked to other malicious cyber activities. The Department of the Treasury’s action blocks property interests of designated parties and generally prohibits transactions with U.S. persons unless authorized.

State Action: 

  • Virginia AG to Enforce Provisions Restricting Minors’ Use of Social Media: On Feb. 18th, 2026, Virginia Attorney General Jay Jones announced that his office will fully enforce new provisions of the Virginia Consumer Data Protection Act (VCDPA) restricting minors’ social media use, following a motion to dismiss a lawsuit by NetChoice seeking to block the rules. Effective January 1, 2026, the law requires social media platforms to use “commercially reasonable” age verification methods to identify users under 16 and to limit minors’ use to one hour per day unless verifiable parental consent is obtained for additional time. The Attorney General’s office will issue notices of non-compliance with a 30-day cure period, after which it may seek civil penalties of up to $7,500 per violation, emphasizing the state’s commitment to protecting children and giving parents greater control over their children’s social media use.
  • Connecticut’s AG Outlines His Approach for Addressing AI Risks Using Existing Laws: On Feb. 25th, Connecticut Attorney General William Tong issued a memorandum detailing how existing state laws apply to the rapidly expanding use of artificial intelligence (AI). While acknowledging AI’s benefits, the memorandum highlights significant risks, including discrimination, privacy violations, misuse of personal data, deceptive business practices, and algorithmic collusion. The Attorney General emphasizes that Connecticut’s longstanding civil rights, consumer protection, privacy, data security, and antitrust laws fully apply to AI systems used in areas such as housing, employment, credit, insurance, healthcare, and advertising. Connecticut residents are reminded of their rights, including data access, deletion, and opt‑out options under the Connecticut Data Privacy Act, and are encouraged to report AI‑related harms. The memorandum also outlines the responsibilities of businesses developing or using AI, including limits on data collection, disclosure requirements, protection of sensitive data, and heightened safeguards for minors. It reinforces that companies deploying AI must avoid unfair practices, deceptive marketing, price manipulation, and anticompetitive behavior. The Attorney General pledged continued vigilance and legislative advocacy to ensure that AI evolves responsibly and does not harm residents of Connecticut: “In addition to creating additional protections for families through legislative action, the Office of the Attorney General has at its disposal the most effective tools available to ensure the safety of Connecticut families and accountability for wrongdoers.”

International Updates: 

  • UK Fines Reddit £14.5 Million Over Children’s Data and Age-Check Failures: Britain’s data protection regulator, the Information Commissioner’s Office (ICO), has fined Reddit £14.47 million (~$20 million USD) for unlawfully processing children’s data and failing to properly verify users’ ages. The ICO said the platform did not implement effective age checks until July 2025, despite banning children under 13 in its terms of service, leaving a significant number of children potentially exposed to harmful content. UK Information Commissioner John Edwards said children’s personal information was collected in ways they could not understand or control. Reddit plans to appeal, arguing that stronger age verification would require collecting more private data and undermine user privacy. The ruling comes as Prime Minister Keir Starmer’s government weighs stricter social media rules for young people, amid a broader global push for tighter age limits and online safety protections.
  • Office of the Australian Information Commissioner Elevates Enforcement Posture and Tightens Screening of Individual Privacy Complaints: The Office of the Australian Information Commissioner (OAIC) announced its shift to a greater focus on enforcement, prioritizing systemic investigations and civil penalty litigation over routine complaint resolution. Citing recent high-profile outcomes, including a $5.8 million penalty against Australian Clinical Labs, pending civil penalty proceedings against Optus and Medibank, and a $50 million settlement with Meta, the OAIC has underscored its intent to leverage new statutory tools, expand Commissioner-initiated investigations, and develop a Children’s Online Privacy Code. The OAIC will also apply more robust threshold assessments to individual complaints, may decline to investigate lower-severity matters, and will pause complaints that overlap with notifiable data breach investigations or active litigation. With a current backlog resulting in estimated delays of six to twelve months for new complaints, entities should expect fewer individualized determinations and greater regulatory focus on market-wide practices, emerging technologies, and Commissioner-initiated investigations in 2026.
  • Dutch Data Watchdog Fines 10 Municipalities €250,000 for Data Gathering on Muslim Communities: The NL Times reported that the data supervisory authority for the Netherlands (the AP) has fined 10 municipalities €25,000 each for privacy violations arising from secret investigations into the Muslim community. The probes involved sensitive information, including religious and political views, that was unlawfully processed without the data subjects’ knowledge. The data was also shared with the authorities and the national government. Because government overreach and surveillance of citizens lie at the historical core of EU data privacy law, the case highlights a strong focus not just of the AP but of supervisory authorities throughout Europe, aligning with the European Data Protection Board’s focus on transparency for 2026.
  • European Data Supervisors Raise Concerns on Proposed “Simplification” of Privacy Laws: The European Data Protection Board (EDPB), composed of the EU bloc’s data privacy watchdogs, has met proposals for simplification of the EU’s digital regulatory framework, including the GDPR (the “Digital Omnibus regulation”), with a lukewarm response. While raising concerns about a fundamental watering down of the key definition of “personal data,” the EDPB was supportive of raising the threshold for notification and reducing the administrative burden. Other measures, including derogations for biometric authentication, were also supported.

This publication is intended for general informational purposes only and does not constitute legal advice or a solicitation to provide legal services. The information in this publication is not intended to create, and receipt of it does not constitute, a lawyer-client relationship. Readers should not act upon this information without seeking professional legal counsel. The views and opinions expressed herein represent those of the individual author only and are not necessarily the views of Clark Hill PLC. Although we attempt to ensure that postings on our website are complete, accurate, and up to date, we assume no responsibility for their completeness, accuracy, or timeliness.
