From Algorithms to Deepfakes: AI Risks Every Employer Must Confront

December 1, 2025

Employers increasingly use AI in recruiting and hiring—through resume screening, video interviews, and predictive analytics. These tools promise speed and efficiency, but they can also replicate bias, create discrimination risk, trigger new compliance obligations, and even expose employers to exploitation by criminals. Employers are best served by vetting and monitoring AI systems before deploying them and by keeping human oversight in place before final decisions are made.

Even as the law lags behind technology, existing employment discrimination principles still apply. Employers remain responsible for bias, regardless of whether it stems from an algorithm or a third‑party vendor. Liability cannot be outsourced.

Employers also face cybersecurity threats. AI enables threat actors to improve their methods, identify targets, and infiltrate hiring systems through fake resumes, deepfakes, and manipulated interviews. Companies need to be prepared to detect and respond to these threats.

How Can AI Discriminate?

AI learns from data—and biased data produces biased outcomes, creating unacceptable legal risk. Some common examples include:

  • Resume filters that prioritize candidates who resemble prior hires. If those prior hires are predominantly one gender or race, or skew toward candidates under the age of 40, the filter may “learn” to discriminate on the basis of age, gender, or race—even without those demographics being provided.
  • Video-interview platforms that evaluate tone, cadence, or facial movement can disadvantage individuals with disabilities, accents, or cultural communication styles.
  • Predictive analytics that assess “stability” or “fit” using proxies such as zip code, commute time, educational institution, or social media behavior, which can indirectly correlate with race, age, or socioeconomic status.

The Equal Employment Opportunity Commission (EEOC), in published guidance as well as in its litigation, has consistently argued that existing civil rights laws—Title VII, the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA)—apply equally to AI tools. In EEOC v. iTutorGroup (E.D.N.Y. 2023), the commission alleged that the employer’s algorithm automatically rejected female applicants age 55 or older and male applicants age 60 or older. The EEOC claimed that the algorithm was intentionally programmed to screen out older applicants, violating the ADEA. The case was settled through a consent decree requiring the employer to pay $365,000, reform its hiring systems, and undergo EEOC monitoring. The case is a clear warning that the EEOC views algorithmic tools as falling squarely within existing discrimination laws.

Employer Liability: Intent Is Not Required

Discrimination need not be purposeful. A facially neutral practice with a disparate impact still violates Title VII and similar state statutes unless justified by business necessity. The EEOC rejects ignorance as a defense. Using third‑party AI tools does not insulate employers from liability.

The EEOC’s Strategic Enforcement Plan (2023–2027) highlights AI and automated decision-making as a priority area, and several states have joined in addressing the potential risks and bias posed by the use of AI.

The Emerging Regulatory Landscape

Beyond federal law, states and municipalities are beginning to impose specific compliance duties related to AI in hiring:

  • New York City: Local Law 144 (effective 2023) – Requires annual bias audits of automated employment decision tools (AEDTs), advance notice to applicants, and public posting of audit results on the employer’s website in a “clear and conspicuous manner.” The audit must assess, at a minimum, impact by race, ethnicity, and sex/gender (the law’s primary protected-group benchmarks).
  • Illinois: AI Video Interview Act (820 ILCS 42) – Mandates notice, explanation, and consent before using AI to evaluate video interviews; demographic data for non-selected applicants must be reported to the state. Illinois was the first state to regulate employer use of AI in video interview screening. At least one court has found that this Act does not preempt claims under Illinois’ Biometric Information Privacy Act (BIPA), making employers potentially liable under both statutes.
  • Maryland: Lab. & Empl. Section 3-717 – Prohibits use of facial-recognition technology in interviews without written consent. It does not cover all AI tools, only those that use facial recognition or extract facial templates during the interview.
  • California: Civil Rights Council Draft Regulations (2024) – Clarifies that employers and vendors are jointly responsible under FEHA for discriminatory effects of AI tools. These regulations will establish obligations for employers using automated-decision systems (ADS): bias testing, record-keeping, oversight of third-party vendors, and transparency around use of ADS.
  • Colorado: SB 24-205 (effective February 2026) – Imposes a duty of reasonable care to avoid algorithmic discrimination in “high-risk” AI systems, requiring documentation, transparency, and impact assessments. High-risk systems are those that make, or are a substantial factor in making, a “consequential decision” about a person. A “consequential decision” includes decisions with a material legal or similar effect on employment or an employment opportunity; education; financial or lending services; housing; insurance; essential government services; or legal services. The statute targets not only developers of these systems but also the companies using them. Thus, an employer that uses an AI system for hiring, screening, performance evaluation, promotion, or similar purposes that makes or influences decisions about employees or applicants must comply.
  • New Jersey (Pending): A. 3911 – If enacted, an employer using an AI-enabled video interview tool for screening would have to, before the interview, provide applicants with a clear disclosure and a plain-language explanation of how the AI evaluates candidates, and obtain their consent. Additionally, the employer would need to build processes for data retention and deletion and ensure that any service providers comply. Notably, the employer would have to collect race/ethnicity data for applicants who pass through AI-screened video interviews and for hires/offers and submit that data annually to the Department of Labor.

These laws reflect a growing nationwide trend toward transparency and accountability in AI-assisted hiring, and similar laws may be passed in other states. For multi‑state employers, the patchwork of laws creates complexity, but adopting uniform, high‑standard policies reduces risk. Additionally, employers must ensure AI tools do not disadvantage applicants with disabilities and that reasonable accommodations are provided.

Practical Steps to Reduce Risk

Employers can act now to manage risk and prepare for upcoming regulation:

  • Inventory all AI tools used in recruiting, screening, and performance evaluation.
  • Determine how the AI tool is used and whether the tool is covered by applicable statutes.
  • Review vendor contracts, applicant consent forms, data-sharing arrangements, and video-interview workflows to align processes with statutory obligations.
  • Request documentation from vendors on data sources, training models, and bias-testing results. Ensure contracts include audit rights and indemnification provisions.
  • Conduct and document internal bias audits or validation studies to detect disparate impact (a minimal illustration of such a check follows this list).
  • Ensure human oversight of AI. Algorithms may assist, but human decision-makers must make the final call.
  • Protect applicant data. Many tools capture biometric, audio, or demographic data; employers must comply with applicable privacy and cybersecurity requirements.
  • Maintain documentation—policies, training, audit results, and vendor communications—to demonstrate good-faith compliance if challenged.
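
To make the bias-audit step concrete, below is a minimal, hypothetical sketch in Python of what an internal disparate-impact check can look like: it computes selection rates by demographic group and flags impact ratios that fall below the EEOC’s traditional four-fifths rule of thumb. The file name, column names, and threshold are assumptions for illustration only; this is a starting point for internal monitoring, not a statutorily compliant audit (NYC Local Law 144, for example, requires independent auditors, intersectional categories, and scoring-rate calculations for tools that rank rather than select).

```python
# Minimal sketch of an internal disparate-impact check on hiring data.
# Assumes a hypothetical CSV with one row per applicant and columns
# "group" (demographic category) and "selected" (1 = advanced/hired, 0 = not).
import csv
from collections import defaultdict

FOUR_FIFTHS = 0.8  # EEOC's traditional four-fifths rule of thumb


def impact_ratios(path: str) -> dict:
    selected = defaultdict(int)
    totals = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["group"]] += 1
            selected[row["group"]] += int(row["selected"])

    # Selection rate for each group, compared against the highest-rate group.
    rates = {g: selected[g] / totals[g] for g in totals}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}


if __name__ == "__main__":
    for group, ratio in impact_ratios("applicants.csv").items():
        flag = "REVIEW" if ratio < FOUR_FIFTHS else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

A ratio well below 0.8 for any group does not by itself establish a violation, but it is the kind of result that should prompt review of the tool and documentation of the follow-up.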

The Cybersecurity and Privacy Connection

Threat actors have begun using AI to improve their methodologies, including refining the language in phishing emails, identifying targets, and improving malware code. Further, if an AI tool stores personal information, that repository becomes a prime target for threat actors to breach, and a successful breach can create compliance and litigation risk. It is important that HR, IT, and legal stakeholders coordinate to:

  • Improve training of employees to spot and prevent cyber attacks;
  • Verify that vendors use secure data-storage practices;
  • Limit access to applicant data through role-based permissions; and
  • Incorporate incident-response plans that address potential breaches involving AI systems.

Additionally, the collection and storage of personal information, even if done by or for use with an AI tool or system, is subject to existing federal and state privacy laws. Therefore, when collecting employee or applicant personal information, employers should ensure that they comply with applicable disclosure and consent requirements and are transparent about the purposes of the collection.

Misuse of AI to Exploit Hiring

Foreign adversaries, including North Korea and China, are increasingly using AI to infiltrate U.S. companies. The U.S. Department of Justice and cybersecurity agencies have warned that operatives are using AI-generated resumes, deepfake profiles and videos, and live video filters to deceive hiring managers—particularly in remote hiring contexts. Once these foreign actors succeed in obtaining employment at a U.S. company, they can use their system access, employer-provided devices, and internal communication systems to steal information, leverage relationships with and infiltrate other companies, and/or cripple the employer’s systems.

Some best practices to thwart these efforts include:

  • Require real-time verification steps during video interviews.
  • Use multi-factor identity verification before granting access to internal systems.
  • Train hiring teams to recognize signs of AI-manipulated visuals or speech latency.
  • Maintain strong endpoint and identity access controls for any remote onboarding processes.
  • Confirm geolocation of access for remote employees.

The Path Forward

AI can enhance efficiency, but only when used with transparency, oversight, and human judgment. Employers that act responsibly will be able to leverage the advantages of AI while building fairer, legally defensible systems and reducing exposure.

This publication is intended for general informational purposes only and does not constitute legal advice or a solicitation to provide legal services. The information in this publication is not intended to create, and receipt of it does not constitute, a lawyer-client relationship. Readers should not act upon this information without seeking professional legal counsel. The views and opinions expressed herein represent those of the individual author only and are not necessarily the views of Clark Hill PLC. Although we attempt to ensure that publications are complete, accurate, and up to date, we assume no responsibility for their completeness, accuracy, or timeliness.

Reprinted with permission from the November 24 edition of the “New Jersey Law Journal” © 2025 ALM Global Properties, LLC. All rights reserved. Further duplication without permission is prohibited, contact 877-256-2472 or asset-and-logo-licensing@alm.com.
