The European Commission's AI Ethics Guidelines - A Personal Summary


The High-Level Expert Group on Artificial Intelligence (AI-HLEG), appointed by the European Commission, recently presented its proposal for AI Ethics Guidelines, the first of two documents it was assigned to deliver. The second document, still in preparation, will cover "Policy and Investment Recommendations".

I recently presented the content of the proposal at the Kiel.AI Meetup (you can download the presentation slides here) and would like to share my personal take-home messages from studying the proposed Ethics Guidelines:

  • The AI-HLEG wants to distinguish European AI systems from American and Asian approaches by fostering the development and use of trustworthy AI.
  • The AI-HLEG promotes regulation towards trustworthy AI implementations that is to some extent similar to the General Data Protection Regulation (GDPR), including the possible designation of an AI ethics officer or board in each company (similar to GDPR data protection officers), and transparency rules to inform all users about the capabilities and restrictions of each AI system (similar to data privacy policies).
  • The proposed AI Assessment List for verifying the trustworthiness of an AI system shows strong similarities to checklists used in classical ethics approval processes for medical trials.
  • The AI-HLEG already explicitly mentions areas of opportunity for AI: climate action and sustainable infrastructure, health and well-being, and quality education and digital transformation. This might be an indication of its recommendations for future investments in AI.

Having read the Ethics Guidelines, I am now curious to read the upcoming Policy and Investment Recommendations from the AI-HLEG.