Mastering the challenge of trustworthy AI

10th May 2019
Alex Lynn

From treating chronic diseases and reducing fatality rates in traffic accidents, to fighting climate change and anticipating cyber security threats, Artificial Intelligence (AI) is no longer considered a futuristic construct: it is already a reality and is helping humanity solve pressing global challenges.

It significantly improves people's lives, helps with day-to-day tasks and benefits society and the economy. Nevertheless, AI applications should not only be consistent with the law, but also adhere to ethical principles. The ethical dimension of AI is not a luxury feature or an add-on: it needs to be an integral part of AI development.

The European Commission recognises AI as one of the 21st century’s most strategic technologies and is therefore increasing its annual investment in AI by 70% as part of the research and innovation programme Horizon 2020, reaching €1.5 billion for the period 2018 to 2020. The Commission aims to foster cross-border cooperation and mobilise all players to increase public and private investments to at least €20 billion annually over the next decade. 

AI is, like any other tool, here to help people. It is this perspective that underpins the EU's approach and commitment to putting it at the service of citizens and the economy. To make the most of the opportunities AI offers and to address the challenges it brings, the Commission published a European strategy in April 2018.

The strategy places people at the centre of the development of AI, ensuring a human-centric approach: AI is not an end in itself, but a tool that has to serve people’s well-being.

Europe’s approach to Artificial Intelligence shows how economic competitiveness and societal trust must start from the same fundamental values and mutually reinforce each other. The EU has a strong regulatory framework that will set the global standard for human-centric and trustworthy AI. 

To this end, the Commission has set up a high-level expert group on AI representing a wide range of stakeholders (Member States, industry, societal actors and citizens) and has tasked it with drafting AI ethics guidelines as well as preparing a set of recommendations for broader AI policy. According to the guidelines, three components are necessary to achieve ‘trustworthy AI’: (1) it should comply with the law, (2) it should fulfil ethical principles and (3) it should be safe and technically robust since, even with good intentions, AI systems can cause unintentional harm.

By stepping up investment at the European level, preparing a framework for future actions, and supporting the efforts of Member States to prepare for the changes and build trust in human-centric AI, Europe and its citizens should be able to shift their perspective ‘from fear to opportunity’. They will also be equipped to take advantage of AI and use it to co-create a society full of opportunities.
