EU AI Act – What HR Needs to Know


The EU AI Act comes into effect in February and will affect any company that develops or deploys large language model tools, including UK employers. James Flint explains.

In the past 18 months, the excitement surrounding advances in AI, coupled with an uncertain economic climate, has prompted HR departments across the board to adopt the technology quickly.

Chatbots can answer candidate questions, schedule interviews and give updates on the status of applications in a way that feels more personal than a website.

AI tools can search job boards, professional and social networking sites, and internal company databases to find candidates with specific skill sets. An AI can also screen and rank CVs more efficiently than an overworked human team.

This technology can automate onboarding, assist in designing and running training, analyse internal surveys, spot trends and gauge employee sentiment. It can even help remove unconscious bias from performance reviews.

Coming legislation

The GDPR’s Article 5 outlines the principles of data protection, including accuracy, fairness and transparency, and Article 22 covers the provisions relating to “automated decisions”.

The EU AI Act comes into effect on 2 February 2025. It classifies AI activities performed in the HR, recruitment or worker management areas as “high-risk”.

The Act will apply to UK businesses that develop AI systems for use in the EU, or that deploy them there.

This will bring forward-thinking HR departments into the realm of additional legal overhead.

This technology should prompt questions such as: if AI is so easily used in all of these areas, what happens when it starts to underperform or go wrong?

The results for employers and employees can be disastrous if the AI fails to identify patterns in cohort data, analyses that data incorrectly, or introduces bias.

It is possible to hire the wrong people, reprimand the wrong teams or post the wrong ads. All of these can damage company culture, and can also lead to expensive litigation, PR disasters and worse once the effects reach people’s daily lives.

High-risk activities

The EU AI Act places AI applications in recruitment, HR and worker management in its “high-risk” category, alongside AI applications in autonomous vehicles, law, biometric identification, critical infrastructure and medical devices.

You are an “AI provider” if you create AI systems to address these use cases. AI providers have a number of obligations including, but not limited to:

  • Implementing an appropriate risk management system
  • Maintaining technical documentation, human oversight and data sets that are suitable for the purpose and free from bias
  • Completing a “conformity assessment”, which is similar to a DPIA (data protection impact assessment), but for AI
  • Registering your AI model in the official EU database for high-risk AI systems
  • Monitoring and adjusting your system’s performance and safety after deployment.

You will still be classified as an AI provider, and given all these responsibilities, if you are simply taking an existing model – say, an open-source large language model (LLM) such as Mistral or Llama – and fine-tuning it with your own datasets or using it for retrieval-augmented generation (RAG).
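To make that distinction concrete, below is a minimal, self-contained Python sketch of the RAG pattern. The document store, the naive keyword scoring and the prompt format are all hypothetical illustrations, not any vendor’s API; the point is that wiring your own data around an existing model assembles a new AI system, and it is that system the Act treats you as providing.

# Minimal sketch of retrieval-augmented generation (RAG). Everything here
# (the documents, the keyword scoring, the prompt format) is an
# illustrative assumption, not a real product's API.

documents = {
    "leave-policy": "Employees accrue 25 days of annual leave per year.",
    "onboarding": "New starters complete IT setup and compliance training.",
    "benefits": "The pension scheme matches contributions up to 5 per cent.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_words = set(query.lower().split())
    scores = {
        name: len(query_words & set(text.lower().split()))
        for name, text in documents.items()
    }
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    return [documents[name] for name in top]

def build_prompt(query: str) -> str:
    """Combine retrieved context with the user's question for the LLM."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# In production the prompt would go to an LLM such as Mistral or Llama.
print(build_prompt("How many days of annual leave do employees get?"))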

Beware: if your team has been working hard to build you some fancy tools using all this new technology, you may be facing more paperwork than expected.

The Act holds you accountable for a number of requirements, including purpose limitation, human supervision, monitoring of inputs, record keeping, incident reporting and transparency to affected parties.
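As one illustration of what record keeping and input monitoring could look like in practice, here is a hedged Python sketch that wraps each model call in an append-only audit log. The file name, field names and call_model stand-in are assumptions made for the example; the Act does not prescribe this format.

# Hedged sketch of record keeping around an AI call: log every input and
# output with a timestamp and model identifier. The log format and the
# call_model stand-in are illustrative assumptions only.

import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_audit_log.jsonl"  # hypothetical append-only log file

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM call (hypothetical)."""
    return f"[model output for: {prompt[:40]}]"

def logged_call(prompt: str, model_version: str = "cv-screening-v1") -> str:
    """Call the model and record the input, output and model version."""
    output = call_model(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input": prompt,
        "output": output,
        "human_reviewed": False,  # flipped to True once a person signs off
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return output

answer = logged_call("Summarise this candidate's CV for the hiring panel.")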

This is all before we get to the ethical issues that users and providers must address.

Good AI governance

AI is different from traditional software. Traditional software is deterministic: it does exactly what you tell it to do, and if it doesn’t, that’s a bug that can (in theory) be fixed by changing the program. AI systems, by their very nature, are probabilistic.

These systems are based on statistical analysis of large data sets. While this makes them more robust in dealing with uncertainty, it also means they produce outputs that are inherently unreliable.
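A toy Python example makes the contrast concrete. The “model” here is just a weighted coin standing in for an LLM’s sampling step, an illustrative assumption rather than how any particular system works: the deterministic function always returns the same answer for the same input, while the probabilistic one may not.

import random

def deterministic_grade(score: int) -> str:
    """Traditional software: same input, same output, every time."""
    return "pass" if score >= 50 else "fail"

def probabilistic_grade(score: int) -> str:
    """AI-style output: sampled from a distribution over answers."""
    p_pass = min(max(score / 100, 0.0), 1.0)
    return random.choices(["pass", "fail"], weights=[p_pass, 1 - p_pass])[0]

print([deterministic_grade(60) for _ in range(5)])  # always identical
print([probabilistic_grade(60) for _ in range(5)])  # can vary run to run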

The governance regime set out in the EU AI Act is a good way to contain this uncertainty and keep it under constant review.

Focus should be placed on ensuring full data oversight, encouraging transparency and designing processes that benefit both the company and its candidates.

This includes conducting thorough audits of existing AI systems, implementing rigorous conformity-by-design practices for new ones, and training teams in the ethical use of AI.

By implementing good AI governance and taking a proactive approach, HR departments can avoid the pitfalls that come with rushed implementation and ensure this exciting technology is a boon to their organisation, not a threat.
