
EC: Policy and investment recommendations for trustworthy Artificial Intelligence

The European Commission's High-Level Expert Group on AI (AI HLEG) has published its second deliverable of the year. The document contains the group's proposed Policy and Investment Recommendations for Trustworthy AI, addressed to EU institutions and Member States. Building on their first deliverable, the experts put forward 33 recommendations to guide Trustworthy AI towards sustainability, growth, competitiveness and inclusion, while empowering, benefiting and protecting human beings.

The recommendations of the report focus on four main areas where the experts believe Trustworthy AI can help achieve a beneficial impact, starting with humans and society at large (A), and then focusing on the private sector (B), the public sector (C) and Europe's research and academia (D). In addition, they address the main enablers needed to facilitate those impacts: availability of data and infrastructure (E), skills and education (F), appropriate governance and regulation (G), and funding and investment (H).

Background:

In its various communications on artificial intelligence (AI), the European Commission has set out its vision for AI that is trustworthy and human-centric. Three pillars underpin the Commission's vision:

  • increasing public and private investments in AI to boost its uptake
  • preparing for socio-economic changes
  • ensuring an appropriate ethical and legal framework to protect and strengthen European values

To support the implementation of this vision, the Commission established the High-Level Expert Group on Artificial Intelligence (AI HLEG), an independent group mandated to draft two deliverables: a set of AI Ethics Guidelines and a set of Policy and Investment Recommendations.

In the first deliverable, the Ethics Guidelines for Trustworthy AI published on 8 April 2019 (Ethics Guidelines), the group stated that AI systems need to be human-centric, with the goal of improving individual and societal well-being, and worthy of our trust. To be deemed trustworthy, AI systems, including all actors and processes involved therein, should be lawful, ethical and robust. The Guidelines therefore constituted a first important step in identifying the type of AI that Europe does and does not want, but on their own they are not enough to ensure that Europe can also materialise the beneficial impact that Trustworthy AI can bring.

Source: European Commission
