California employers have embraced AI hiring tools over the past few years. Resume screening software, automated interview platforms, and performance evaluation algorithms have become standard practice across industries. But California lawmakers are paying attention, and they’re not thrilled with what they’re seeing.
Four significant bills are moving through the legislature right now, plus the California Civil Rights Department has finalized regulations that are set to go into effect on October 1, 2025. If your company uses AI for any employment decisions, these regulations will affect how you operate.
Why the State is Acting Now
The risk isn’t theoretical anymore. Studies continue to show that these systems can systematically exclude qualified candidates based on protected characteristics, producing discrimination and other prohibited outcomes. The pattern is clear enough that California, always quick to regulate workplace issues, decided to step in before problems get worse.
California led the way with laws addressing sexual harassment, family leave, and workplace safety standards. Now it’s applying the same mindset to artificial intelligence in employment.
What’s Coming Down the Pipeline
SB 7: The “No Robo Bosses Act”
This bill, approved by the California Senate and currently in the Assembly, would amend the Labor Code to require disclosure and oversight of automated decision systems (ADS) when they play a role in employment decisions. Employers would be required, for example, to:
- Provide written notice (subject to certain timelines) to workers who are foreseeably and directly affected that an ADS is used to make employment-related decisions (not including hiring);
- Maintain an updated list of ADS in use;
- Notify job applicants that the employer utilizes an ADS in hiring decisions, where the ADS is used in making decisions for the position;
- Allow workers to access the data an ADS collects or uses about them and to correct errors in that data; and
- Provide notice and appeal rights to workers affected by a discipline, termination, or deactivation decision made by an ADS.
The bill would also prohibit ADS with certain functions, or limit how those systems may be used in decision-making, among other requirements.
The transparency and notice requirements go beyond simple disclosure. Companies must explain their AI systems in plain language, demonstrate how the technology influenced specific decisions, and identify a range of other required information, including employee rights. More challenging for employers: anyone affected by an AI decision can demand human review of that choice.
The bill would also prohibit a range of employer actions, including using AI for predictive behavior analysis and relying primarily on an ADS when making promotion, discipline, or termination decisions.
Companies will need to ensure AI systems don’t produce unfair outcomes for protected groups. This won’t be a one-time compliance exercise – it will be an ongoing obligation that requires real technical expertise.
AB 1018: ADS Documentation and Audits
Currently in the Senate, this bill would regulate the development and deployment of ADS used to make “consequential” decisions (including employment-related decisions), with a focus on record-keeping and accountability. It would require certain ADS developers to conduct performance evaluations and disclose the results to parties using the ADS. Under this bill, AI employment decisions would require documentation of, for example, the data the system used, the reasoning behind its conclusion, the alternatives it considered, and any human oversight involved.
Third-party audits would also be required for AI employment systems: independent reviewers must examine whether these tools work fairly across different demographic groups and meet basic performance standards. Subjects of ADS decisions (for example, job applicants) would have disclosure rights, and employers using ADS would have to provide individuals with written disclosures covering a range of information about the use of the ADS.
AB 1221 and AB 1331: Workplace Surveillance and Privacy
These two bills work together to address data protection and employee privacy. AB 1221 (Workplace Surveillance Tools, in committee in the Assembly) regulates the use of workplace surveillance tools and employers’ use of worker data, limiting what information AI systems can collect and process. Among other requirements, employers would have to provide notice to workers 30 days before introducing workplace surveillance tools, would be limited in their ability to share or sell worker data, would be required to retain data used to make employment-related decisions for five years, and could only collect, use, and retain worker data reasonably necessary and proportionate to achieve the permitted and disclosed purposes for which it was collected. Companies can’t just vacuum up every available data point, and they will have to disclose a great deal about their data collection systems.
AB 1331, currently in the Senate, would limit employers’ use of workplace surveillance tools in employee-only, employer-designated areas (such as bathrooms, breakrooms, locker rooms, and cafeterias) and during employees’ off-duty hours. Violations would result in a $500 civil penalty per employee per violation.
California Civil Rights Department Rules
The California Civil Rights Department’s Civil Rights Council has secured final approval of new rules set to go into effect on October 1, 2025. These rules, approved on June 27, 2025, aim to protect against employment discrimination resulting from AI, algorithms, and ADS. They fill gaps left by legislation, applying existing anti-discrimination laws to decisions made or supported by AI and specifying how anti-bias testing should work, what documentation standards apply, and how much human oversight is required. Under these rules, employers can be held liable for discrimination even where it is caused by AI and ADS provided by third-party vendors.
The Civil Rights Council’s regulations aim to:
- Make it clear that the use of an automated decision system may violate California law if it harms applicants or employees based on protected characteristics, such as gender, race, or disability.
- Ensure employers and covered entities maintain employment records, including automated-decision data, for a minimum of four years.
- Affirm that automated-decision system assessments, including tests, questions, or puzzle games that elicit information about a disability, may constitute an unlawful medical inquiry.
- Add definitions for key terms used in the regulations, such as “automated-decision system,” “agent,” and “proxy.”
The timeline is tight. The CRD rules take effect on October 1, 2025, and companies will also have to figure out compliance for pending bills whose final language doesn’t exist yet.
Getting Ready for Compliance
1. Start with an Inventory
First, figure out which AI tools you’re using. This is trickier than it sounds: many HR software platforms include AI features that aren’t labeled as such, and third-party vendors may also be using AI or ADS. Applicant tracking systems, scheduling software, and performance management tools often make algorithmic decisions behind the scenes.
Document each system’s purpose, how it makes decisions, and what data it uses, and request information from your vendors about their use of AI. This record becomes essential for compliance reporting and helps identify potential problem areas before regulators come calling.
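One way to structure that inventory is as a set of consistent records, one per system. The sketch below is purely illustrative; the field names (such as decision_logic and retention_years) are hypothetical choices, not fields required by any of the bills or rules discussed above.

```python
from dataclasses import dataclass

@dataclass
class ADSInventoryEntry:
    """One record per automated decision system (ADS) in use.

    Field names are illustrative, not statutory requirements.
    """
    name: str               # e.g., "Resume screening module in the ATS"
    vendor: str             # who builds or hosts the system
    purpose: str            # the employment decision it supports
    decision_logic: str     # plain-language summary of how it decides
    data_inputs: list[str]  # categories of worker/applicant data used
    human_review: bool      # whether a person reviews its outputs
    retention_years: int    # how long decision data is kept

inventory = [
    ADSInventoryEntry(
        name="Resume screener",
        vendor="Example ATS Co.",
        purpose="Rank applicants for recruiter review",
        decision_logic="Keyword and skills matching against the job posting",
        data_inputs=["resume text", "application answers"],
        human_review=True,
        retention_years=5,  # AB 1221 would require 5 years for decision data
    ),
]
```

Even a lightweight record like this gives you something to hand an auditor, and it makes gaps (a system with no human review, an unknown data input) easy to spot.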
2. Build Internal Knowledge
These new AI laws require technical expertise that most HR departments currently lack. Staff need to understand concepts like algorithmic bias, statistical significance, and data validation. Not at PhD level, but enough to ask intelligent questions and evaluate vendor claims. For example: “How does this system evaluate individuals, and on what basis? How are you measuring fairness? Which fairness metric do you use, and why?”
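To make “fairness metric” concrete, here is a minimal sketch of one widely used screen, the EEOC’s four-fifths (80%) rule for adverse impact. The function names and numbers are illustrative assumptions; none of the pending bills mandates this particular test.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def four_fifths_check(group_rates: dict[str, float]) -> dict[str, bool]:
    """Flag groups whose selection rate is below 80% of the highest rate.

    This is the classic EEOC four-fifths rule screen for adverse impact;
    it is a rough heuristic, not a legal safe harbor.
    """
    top = max(group_rates.values())
    return {group: (rate / top) >= 0.8 for group, rate in group_rates.items()}

# Hypothetical outcomes from an ADS resume screener
rates = {
    "group_a": selection_rate(selected=50, applicants=100),  # 0.50
    "group_b": selection_rate(selected=30, applicants=100),  # 0.30
}
print(four_fifths_check(rates))  # {'group_a': True, 'group_b': False}
```

A failed check doesn’t prove discrimination, and a passed check doesn’t immunize you; it’s simply the kind of question HR staff should know how to ask a vendor to answer.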
Some companies are hiring data analysts or partnering with consultants who understand both AI systems and employment law. Others are training existing HR staff on technical concepts. Either way, building this expertise takes time.
3. Rework Vendor Contracts
Standard software agreements don’t address AI-specific compliance risks. New contracts should include provisions covering bias auditing, algorithmic transparency, compliance support, and indemnity and liability in the event of errors or malfunctions.
Vendors vary widely in their readiness for these requirements. Some offer robust compliance features and welcome transparency demands. Others resist sharing algorithmic details, citing trade secrets. This split will likely determine which AI employment tools survive in California.
4. Establish Oversight Procedures
Human review requirements mean companies need clear policies about when and how people should review and evaluate AI decisions. This creates practical challenges: How do you train managers to review algorithmic choices meaningfully when they don’t fully understand them? What constitutes adequate human oversight?
Documentation becomes crucial throughout this process. Compliance audits will focus on whether companies can demonstrate that they have adhered to their policies and maintained effective oversight of their AI systems.
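One sketch of what that documentation might look like: log every human review of an ADS decision as a consistent record. The structure below is hypothetical; none of the bills or CRD rules prescribes this format.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ADSReviewRecord:
    """Audit-trail entry for one human review of an ADS output.

    Field names are illustrative, not mandated by any statute or rule.
    """
    decision_id: str       # internal reference for the ADS decision
    system_name: str       # which ADS produced the recommendation
    ads_outcome: str       # what the system recommended
    reviewer: str          # who performed the human review
    reviewed_at: datetime  # when the review occurred
    final_outcome: str     # the decision after human review
    rationale: str         # why the reviewer agreed or overrode

record = ADSReviewRecord(
    decision_id="2025-00042",
    system_name="Resume screener",
    ads_outcome="reject",
    reviewer="hr.manager@example.com",
    reviewed_at=datetime.now(timezone.utc),
    final_outcome="advance to interview",
    rationale="Relevant experience listed under a nonstandard job title",
)
```

Records like this answer the audit question directly: who looked, when, and why the final decision matched or departed from the algorithm’s recommendation.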
The Bigger Picture
California’s moves will influence other states. If these regulations do not hinder innovation, expect similar laws to emerge elsewhere. Federal legislation may follow, although it is likely to occur only after state experiments provide more guidance.
The business case for getting this right goes beyond legal compliance. Companies that eliminate AI bias might access talent pools that competitors miss. Those who maintain meaningful human oversight can make better decisions than fully automated systems.
But the risks of getting it wrong are substantial. AI-related discrimination claims could affect hundreds or thousands of employment decisions simultaneously, creating significant potential liability on a class-wide basis. And stories about algorithmic bias spread quickly in today’s social media environment, making reputation management difficult.
The technology will continue to evolve, and so will the regulations. What matters now is building the internal capabilities and vendor relationships needed to adapt as requirements become clearer. Companies that start preparing now will handle the transition much better than those who wait for the final rule language.
Bottom line: AI isn’t going away from employment decisions, but the days of using these tools without oversight and accountability are over. California employers who embrace that reality and plan accordingly will be ahead of the curve when the new rules take effect. If you are a California business owner who needs legal guidance on navigating AI and the employment laws surrounding it, reach out to us today.
DISCLAIMER: Content within this post should not be considered legal advice and is for informational purposes only. Communications made through this post do not create an attorney-client relationship. Hackler Flynn & Associates is not responsible for any content that you may access from third-party resources that may be accessed through or linked to this post. Hackler Flynn & Associates is only licensed to practice in California.