EU human rights agency says we should tread carefully with AI

The EU Agency for Fundamental Rights (FRA) has issued a report warning of confusion about the impact AI and automation can have on people’s rights.

Scott Bicheno

December 14, 2020



As if the whole topic of AI wasn’t already dystopian enough, the report is titled ‘Getting the future right’, as if the FRA reckons it already has. It warns that, while AI might be handy at times, it can also lead to discrimination and be hard to challenge. It calls on policymakers to provide more guidance on how existing rules apply to AI and to ensure any future AI laws protect fundamental rights.

“AI is not infallible, it is made by people – and humans can make mistakes,” said FRA Director Michael O’Flaherty. “That is why people need to be aware when AI is used, how it works and how to challenge automated decisions. The EU needs to clarify how existing rules apply to AI. And organisations need to assess how their technologies can interfere with people’s rights both in the development and use of AI. We have an opportunity to shape AI that not only respects our human and fundamental rights but that also protects and promotes them.”

Here are the specific things it wants all EU stakeholders to have a think about:

  • Make sure that AI respects ALL fundamental rights – AI can affect many rights – not just privacy or data protection. It can also discriminate or impede justice. Any future AI legislation has to consider this and create effective safeguards.

  • Guarantee that people can challenge decisions taken by AI – people need to know when AI is used and how it is used, as well as how and where to complain. Organisations using AI need to be able to explain how their systems take decisions.

  • Assess AI before and during its use to reduce negative impacts – private and public organisations should carry out assessments of how AI could harm fundamental rights.

  • Provide more guidance on data protection rules – the EU should further clarify how data protection rules apply to AI. More clarity is also needed on the implications of automated decision-making and the right to human review when AI is used.

  • Assess whether AI discriminates – awareness about the potential for AI to discriminate, and the impact of this, is relatively low. This calls for more research funding to look into the potentially discriminatory effects of AI so Europe can guard against it.

  • Create an effective oversight system – the EU should invest in a more ‘joined-up’ system to hold businesses and public administrations accountable when using AI. Authorities need to ensure that oversight bodies have adequate resources and skills to do the job.

That all seems fairly sensible, which raises the question of why this report was considered necessary. What safeguards are currently being put in place before we hand over our lives to some pitiless, amoral machine? Most of the time automation is used to make human beings redundant and thus save money. While the morality of doing so is, in itself, worthy of further examination, it should certainly not be used to shield those who employ it from liability if it results in negative outcomes.

About the Author(s)

Scott Bicheno

As Editorial Director, Scott oversees all editorial activity on the site and also manages the Intelligence arm, which focuses on analysis and bespoke content.
Scott has been covering the mobile phone and broader technology industries for over ten years. Prior to that, Scott was the primary smartphone specialist at industry analyst Strategy Analytics. Before that he was a technology journalist, covering the PC and telecoms sectors from a business perspective.
Follow him @scottbicheno

