The European Union's Artificial Intelligence Act is the world's first comprehensive legal framework for AI. For recruiting and HR teams, it carries significant implications: AI systems used in hiring decisions are classified as high-risk, triggering strict transparency, documentation, and oversight requirements.
This guide breaks down what the regulation says, when it takes effect, and what recruiting teams need to do to prepare.
## Timeline: Key Dates for Recruiting
| Date | Milestone | Impact on Recruiting |
|---|---|---|
| Aug 1, 2024 | AI Act enters into force | Grace period begins; companies should start assessments |
| Feb 2, 2025 | Prohibited practices banned | Social scoring, manipulative AI, and emotion recognition in workplaces restricted |
| Aug 2, 2025 | General-purpose AI rules apply | LLM-based recruiting tools must meet transparency requirements |
| Aug 2, 2026 | High-risk system rules apply | Full compliance required for AI recruiting tools |
| Aug 2, 2027 | Extended transition ends | High-risk AI embedded in regulated products (Annex I) must also comply; penalties reach up to 35M EUR or 7% of global turnover |
Source: Regulation (EU) 2024/1689 (EU AI Act), Official Journal of the European Union.
Key insight: August 2, 2026, is the critical date for most recruiting teams. By this date, any AI system used for CV screening, candidate ranking, interview analysis, or automated hiring decisions must meet the full set of high-risk system requirements.
## Risk Categories: Where Recruiting AI Falls
The EU AI Act classifies AI systems into four risk tiers. Recruiting applications fall squarely in the high-risk category.
| Risk Level | Examples | Requirements |
|---|---|---|
| Unacceptable | Social scoring, subliminal manipulation, emotion recognition at work (certain uses) | Banned outright |
| High | CV screening, candidate ranking, automated interview analysis, hiring decision support | Conformity assessment, documentation, human oversight, data governance, registration |
| Limited | Chatbots (must disclose AI), job ad optimization | Transparency obligations |
| Minimal | Spam filters, basic scheduling tools | No specific requirements |
Source: EU AI Act, Annex III (High-Risk AI Systems), point 4: Employment, workers' management and access to self-employment.
## What Recruiters Must Do
If your organization uses AI tools that influence hiring decisions, here is what the regulation requires:
1. Conduct a conformity assessment
Before deploying a high-risk AI system, you must assess whether it meets the requirements set out in the regulation. For most recruiting tools, this will be a self-assessment rather than a third-party audit, but it must be documented thoroughly.
2. Maintain technical documentation
You need clear documentation of how the AI system works, what data it was trained on, its intended purpose, known limitations, and the metrics used to evaluate its performance. This applies whether you built the tool in-house or purchased it from a vendor.
3. Ensure human oversight
AI systems cannot make fully autonomous hiring decisions. A qualified human must be able to understand, review, and override the AI's outputs. This means keeping a human in the loop for shortlisting, scoring, and rejection decisions.
4. Implement data governance
Training data must be relevant, representative, and free from errors. Organizations must document data sources, pre-processing methods, and any measures taken to detect and mitigate bias in the training dataset.
5. Provide candidate transparency
Candidates must be informed when AI is used in the recruitment process. They have the right to know that an AI system is being used, what it does, and how to request human review of automated decisions.
6. Register in the EU database
High-risk AI systems must be registered in the EU's publicly accessible database before being placed on the market or put into service. This obligation falls primarily on the provider, but deployers that are public bodies must also register their use.
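The six obligations above all presuppose that you know which AI systems you actually run, which is why an inventory is typically the first readiness step. A minimal inventory record might look like the following sketch; the field names, the tool "CVScreen Pro", and "ExampleVendor" are hypothetical illustrations, not terms from the regulation.

```python
# Hypothetical structure for an internal AI-system inventory record.
# Field names and example values are illustrative, not mandated by the Act.
from dataclasses import dataclass

@dataclass
class RecruitingAISystem:
    name: str                 # tool or model name
    purpose: str              # e.g. CV screening, ranking, interview analysis
    risk_tier: str            # "unacceptable" | "high" | "limited" | "minimal"
    vendor: str = ""          # empty for in-house systems
    human_overseer: str = ""  # role responsible for reviewing outputs
    documented: bool = False  # technical documentation on file
    registered: bool = False  # entered in the EU database (high-risk only)

inventory = [
    RecruitingAISystem(
        name="CVScreen Pro",  # hypothetical tool
        purpose="CV screening and candidate ranking",
        risk_tier="high",
        vendor="ExampleVendor",
        human_overseer="Talent Acquisition Lead",
    ),
]

# Flag high-risk systems still missing documentation or registration:
gaps = [s.name for s in inventory
        if s.risk_tier == "high" and not (s.documented and s.registered)]
print(gaps)  # ['CVScreen Pro']
```

A record like this makes the readiness gaps in the next section directly measurable: each table row corresponds to a field you can audit across the inventory.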
## Compliance Readiness: Where Companies Stand
| Compliance Activity | % of Companies Started | % Completed |
|---|---|---|
| AI system inventory (recruiting) | 62% | 28% |
| Risk classification of tools | 48% | 19% |
| Vendor compliance review | 44% | 15% |
| Technical documentation | 35% | 11% |
| Bias testing and audits | 31% | 9% |
| Candidate transparency notices | 41% | 22% |
| Human oversight procedures | 52% | 24% |
Sources: PwC EU AI Act Readiness Survey 2025, Mercer Global Talent Trends 2026, CIPD estimates.
Key insight: Despite the August 2026 deadline, fewer than 1 in 5 companies have completed risk classification of their recruiting AI tools. Bias testing and technical documentation are the least advanced areas, creating potential exposure for organizations that delay action.
## Penalties for Non-Compliance
The EU AI Act imposes significant penalties for violations:
- Prohibited AI practices: Up to 35 million EUR or 7% of annual global turnover, whichever is higher
- High-risk system violations: Up to 15 million EUR or 3% of annual global turnover
- Providing incorrect information: Up to 7.5 million EUR or 1% of annual global turnover
- SME reductions: For SMEs and start-ups, each fine is capped at the lower of the fixed amount or the turnover percentage
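The caps above can be sketched as a small calculation: for most companies the maximum fine is the higher of the fixed amount and the turnover percentage, while for SMEs the lower of the two applies. This is only an illustration of the statutory ceilings; actual fines are set case by case by national authorities.

```python
# Illustrative sketch of the EU AI Act penalty caps listed above
# (Regulation (EU) 2024/1689, Article 99). Computes the statutory
# maximum only, not an actual fine.

# (fixed cap in EUR, percentage of annual global turnover)
TIERS = {
    "prohibited": (35_000_000, 7),     # prohibited AI practices
    "high_risk": (15_000_000, 3),      # high-risk system violations
    "incorrect_info": (7_500_000, 1),  # supplying incorrect information
}

def penalty_cap_eur(tier: str, global_turnover_eur: int, sme: bool = False) -> int:
    """Maximum fine: the higher of the two caps, or the lower for SMEs."""
    fixed, pct = TIERS[tier]
    turnover_based = global_turnover_eur * pct // 100
    return min(fixed, turnover_based) if sme else max(fixed, turnover_based)

# A company with 1 billion EUR global turnover using a prohibited practice:
print(penalty_cap_eur("prohibited", 1_000_000_000))  # 70000000 (7% exceeds the 35M floor)
```

Note that for large companies the turnover percentage, not the fixed amount, usually determines the ceiling: at 1 billion EUR turnover, 7% is double the 35M EUR floor.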
## Related data
To connect AI Act compliance with real hiring outcomes, see Recruiter Productivity Benchmarks in Europe and Candidate Response Rate Benchmarks in Europe.
