AI in Hiring: Powerful Tool or Legal Liability?
- East West General Counsel
- Apr 21
- 3 min read

A fast-growing tech company recently reached out to our legal counsel team after implementing an AI-powered hiring tool. While the system saved them hours of manual screening, concerns quickly arose when one of the candidates raised questions about how their data had been processed. The company wanted legal advice on whether its use of AI could become a liability.
The Rise of AI in Recruitment
Many employers are now turning to AI tools to streamline hiring. These tools can screen applications, assess interviews, and rank candidates faster than any human ever could. But how AI is used matters just as much as (if not more than) the fact that it's being used at all.
These tools are now facing increased legal scrutiny, especially when they produce discriminatory outcomes or violate privacy laws.
Case Law to Watch
While courts have yet to issue meaningful guidance, litigation tied to AI hiring practices has already set off alarm bells for employers everywhere:
🔹 CVS Health Corp. Settlement (2023)
CVS settled a class action lawsuit alleging that an AI tool assessed job candidates based on their facial expressions, assigning an “employability score” that factored in traits like willingness to learn and personal stability. The lawsuit argued that this amounted to an illegal lie detector test. Although no legal ruling was issued because the case settled, it raised serious concerns around consent and transparency.
🔹 Workday Class Action (Ongoing)
In another class action, a California court allowed a case to move forward against Workday, a company offering HR systems to major employers. Plaintiff Derek Mobley alleges that Workday’s AI system discriminated against him based on his race (Black), age (over 40), and disabilities (anxiety and depression), causing him to be rejected for over 100 jobs.
These cases illustrate that legal exposure from AI use is not just theoretical: it’s very real.
Questions Every Employer Should Ask
Before deploying AI in your hiring process, consider these key questions:
- What problem are we solving with this AI tool?
- Does the system promote fairness, or could it amplify bias?
- Will applicants feel deceived or distrusted if they later learn AI was analyzing their behavior or data?
- Should we disclose our use of AI tools up front to maintain trust?
- Will qualified candidates opt out of applying if they distrust AI evaluations?
Discrimination Risks in AI-Powered Hiring
What is the root cause of many AI failures? Poor training data. It’s a principle known in computer science as “Garbage In, Garbage Out.” A famous example: Amazon scrapped a hiring algorithm that favored male candidates because it was trained on resumes from male-dominated teams. The system “learned” to prefer men, simply by observing patterns in past hiring, regardless of merit.
If AI is trained on biased or incomplete data, it can discriminate on the basis of gender, race, age, or disability, even if no one intended that result.
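To make “Garbage In, Garbage Out” concrete, here is a deliberately minimal Python sketch. Everything in it is invented for illustration: the resume snippets, the word-counting “model,” and the proxy terms. No real hiring tool is this simple, but the failure mode is the same one the Amazon example shows: a scorer trained only on past hiring outcomes can learn a proxy for gender without ever seeing gender as a feature.

```python
# A toy illustration of "Garbage In, Garbage Out" in hiring.
# All data below is synthetic and hypothetical.

from collections import Counter

# Hypothetical historical data from a company whose past hires
# skewed male. Note that gender is never an explicit field.
past_hires = [
    "software engineer rugby team captain",
    "software engineer chess club member",
    "software engineer rugby enthusiast",
]
past_rejections = [
    "software engineer women's chess club captain",
    "software engineer netball team captain",
]

def word_weights(hired, rejected):
    """Weight each word by how often it appears in hires vs. rejections."""
    hired_counts = Counter(w for r in hired for w in r.split())
    rejected_counts = Counter(w for r in rejected for w in r.split())
    vocab = set(hired_counts) | set(rejected_counts)
    return {w: hired_counts[w] - rejected_counts[w] for w in vocab}

def score(resume, weights):
    """Rank a new resume by summing the learned word weights."""
    return sum(weights.get(w, 0) for w in resume.split())

weights = word_weights(past_hires, past_rejections)

# The scorer has quietly learned that "women's" and "netball" predict
# rejection, acting as a proxy for gender.
print(score("software engineer rugby team captain", weights))          # scores 3
print(score("software engineer women's chess club captain", weights))  # scores 0
```

Even in this toy setting, the words most correlated with past rejections end up penalized, so the scorer reproduces the historical bias it was trained on, and equally qualified candidates are ranked apart for reasons no one ever chose.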
In Derek Mobley’s case, the court may eventually determine whether Workday’s AI system had a discriminatory impact. But unless the case goes to a full trial, the inner workings of that tool may never come to light.
The Takeaway for Employers
Don’t trust AI blindly. Use it thoughtfully. Companies must apply human judgment, ask how AI is trained, and monitor outcomes. AI can be a powerful tool in recruitment, but without practical legal safeguards, it can expose businesses to lawsuits, regulatory action, and reputational damage.
If your organization is considering or already using AI hiring tools, it’s critical to seek legal advice. A knowledgeable corporate lawyer can help evaluate risks, ensure compliance, and protect your brand.
Need help implementing AI tools the right way? Our team at East West General Counsel offers legal counsel that blends tech fluency with real-world business sense.