California’s Civil Rights Department (CRD) recently finalized regulations to prevent discriminatory outcomes from automated decision making in the workplace, particularly in hiring, firing, and recruitment. In addition to state laws targeting hiring discrimination, employers must comply with federal requirements enforced by the U.S. Department of Labor (DOL). Hiring, wage, and overtime standards are largely governed by the Fair Labor Standards Act (FLSA), one of the most consequential labor laws affecting most workplaces, and failure to comply with labor or employment laws can result in substantial fines and penalties. At the federal level, the DOL announced guidance on using AI in hiring and recruitment decisions in October 2024.
Overview of California’s Artificial Intelligence (AI) Regulations
Because the new rule is intended to limit discrimination in automated decision making, the regulations provide clear definitions. An “automated decision system” (ADS), for example, is any computational process that makes, or assists in making, employment decisions. These decisions include:
- hiring,
- promotions,
- assignments for training programs, or
- similar activities.
Additionally, the AI regulations cover a broader range of decision-making tools than “machine learning” AI alone; they explicitly extend to “selection criteria” systems. According to the law firm Jackson Lewis, businesses may still use, or begin using, regulated ADS to:
- Screen resumes for particular terms or patterns;
- Direct job advertisements or recruiting materials to targeted groups;
- Assess applicants’ or employees’ skills through questions, puzzles, games, or challenges; and
- Analyze audio or video recordings to evaluate, categorize, or recommend applicants or employees.
The California AI regulations take effect on October 1, 2025.
Prohibited Actions Under the California AI Regulations
To comply with the new rules, employers may not use ADS or selection criteria that discriminate against applicants or employees based on protected categories as defined under California’s Fair Employment and Housing Act (FEHA). These categories include: age (40 and over), ancestry, color, creed, denial of family and medical care leave, disability (mental and physical, including HIV and AIDS), marital status, medical condition (cancer and genetic characteristics), national origin, race, religion, sex, and sexual orientation.
The regulations also impose data collection and retention requirements. Notably, employers must retain ADS-related records, including dataset descriptors, scoring outputs, and audit findings, for four years. These records help employers demonstrate compliance and respond to regulatory or legal challenges.
The Importance of Auditing Automated Decision Systems
Finally, the AI regulations emphasize bias audits as a safeguard against unlawful discrimination. These audits should cover any AI tools used in employee screening, hiring, promotion, selection, and evaluation. To prepare for an audit, employers should contact the vendors that supply the software and request their specific anti-bias testing instructions. Employers should also understand each vendor’s data-use practices and ensure the vendor is aware of potential ADS-related liability. Bias audits should be conducted regularly, both before and after a system is put into use.
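The regulations do not prescribe a particular audit methodology, so the exact metrics are a matter for vendors and counsel. One traditional benchmark in U.S. practice, however, is the EEOC’s “four-fifths rule”: the selection rate for any protected group should be at least 80% of the rate for the group with the highest selection rate. The sketch below illustrates that check only; the group names and applicant counts are hypothetical, and real audits typically involve statistical significance testing as well.

```python
# Illustrative adverse-impact check using the EEOC "four-fifths" (80%) rule.
# Group labels and applicant/hire counts below are hypothetical examples,
# not data from any real hiring system.

def selection_rates(outcomes):
    """Map each group to its selection rate: hires / applicants."""
    return {g: hired / applied for g, (applied, hired) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return {group: (rate, passes)}, where `passes` is True when the
    group's rate is at least `threshold` times the highest group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (rate, rate / top >= threshold) for g, rate in rates.items()}

if __name__ == "__main__":
    # (applicants, hires) per group -- illustrative numbers only
    outcomes = {"group_a": (100, 50), "group_b": (80, 24)}
    for group, (rate, passes) in four_fifths_check(outcomes).items():
        print(f"{group}: rate={rate:.2f} {'OK' if passes else 'FLAG'}")
```

In this example, group_b’s selection rate (0.30) is only 60% of group_a’s (0.50), so it would be flagged for further review. A flag under this rule is a starting point for investigation, not a legal conclusion.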
States with Current, Proposed, or Pending Legislation
Although this blog post focuses on California’s AI regulations, it is worth noting that this is the first overarching law of its kind. As a result, many states across the United States will likely enact their own laws governing AI decision making in hiring.
Currently, and likely influenced by California’s earlier drafts of the AI regulations, New York, New Jersey, Vermont, and Massachusetts all have similar pending or proposed legislation. (Illinois, on a narrower scale, already prohibits the use of AI in recruitment, hiring, promotion, and similar decisions where it results in discrimination based on protected traits.) It would not be surprising to see other states adopt comparable legislation within the next few legislative cycles. Accordingly, any employer considering AI in hiring or recruitment practices should consult legal counsel to verify the obligations it must follow.
To ensure full compliance with evolving employment and AI-related laws, businesses should prioritize proactive education for their leaders. Implementing regular compliance training for managers can help supervisors understand and apply the latest regulatory standards, including fair hiring practices, anti-discrimination policies, and responsible AI usage. This training not only minimizes legal risks but also fosters a workplace culture rooted in transparency, accountability, and ethical decision-making.