
EU Watchdog Calls on Nations to Curb Use of Artificial Intelligence

Ross Kelly


The use of artificial intelligence in predictive policing, healthcare and advertising is raising serious concerns.

The European Union’s top civil rights watchdog has raised concerns over the increased use of artificial intelligence and automated systems.

In a report published by the European Union Agency for Fundamental Rights (FRA), the watchdog warns that the use of artificial intelligence in medicine, targeted advertising and predictive policing could pose a threat to civil rights if not curtailed.

The report is part of the FRA’s project on artificial intelligence and big data, and draws upon more than 100 interviews with public and private organisations already using AI.

“AI is not infallible, it is made by people – and humans can make mistakes. That is why people need to be aware when AI is used, how it works and how to challenge automated decisions,” says FRA Director Michael O’Flaherty.

“The EU needs to clarify how existing rules apply to AI. And organisations need to assess how their technologies can interfere with people’s rights both in the development and use of AI.”

Providing additional guidance and establishing a clear-cut framework for how artificial intelligence is deployed must be key priorities for EU member states, the report says.

“The EU should further clarify how data protection rules apply to AI. More clarity is also needed on the implications of automated decision-making and the right to human review when AI is used,” it states.


Similarly, the report says the EU should develop a more ‘joined-up’ oversight system aimed at boosting transparency around the use of artificial intelligence. This proposed oversight system would work closely with public and private organisations to ensure compliance and accountability.

The report also calls for greater investment in research to assess the “potentially discriminatory effects” of artificial intelligence and automated systems.

In the report, the FRA’s full list of recommendations calls on EU countries to:

  • Make sure that AI respects all fundamental rights
  • Guarantee that people can challenge decisions taken by AI
  • Assess AI before and during its use to reduce negative impacts
  • Provide more guidance on data protection rules
  • Assess whether AI discriminates
  • Create an effective oversight system

Critics warn that the use of automated systems, in particular automated facial recognition (AFR) technology, impinges on civil rights and privacy.

In recent years, calls have been made to curtail the use of AI-based systems in law enforcement amid concerns over mass surveillance and discrimination.

UK-based privacy rights campaigners, including the Open Rights Group and Big Brother Watch, have frequently criticised the use of AFR by both the Metropolitan and South Wales police forces.

Ross Kelly

Staff Writer
