artificial intelligence

Will coordination and trust be the recipe for Artificial Intelligence ‘Made in Europe’?

On 18 February 2019, the European Competitiveness Council approved the Coordinated Plan on Artificial Intelligence (AI) in the European Union as part of its two-day policy debate on the competitiveness of the EU’s internal market and industry, building on conclusions of the Council of the European Union adopted at COREPER level on 6 February 2019.

Following the adoption of these conclusions, Romanian Minister of Economy Niculae Bădălău stated that “today’s conclusions on Artificial Intelligence serve as [a policy direction] for future EU actions in this field with the aim to place the European Union among the drivers of AI at the global level.” This statement accurately reflects the EU’s converging political ambitions on AI. Beyond that shared ambition, however, several specific emphases in the plan warrant attention.

public spending

Public spending on AI is a critical driver of its development in Europe. There is an urgent need to close the gap with the large-scale investments of the leading AI regions – the United States and China. That is why Commissioner Bieńkowska stressed the need to continue the dialogue among EU Finance Ministers on formal public procurement rules. An efficient system for knowledge transfer is also pivotal to leverage investments and to unlock the benefits of AI for industry, markets, the public sector, and consumers. In that regard, member states are called upon to develop their own AI strategies where possible.

cross-industry collaboration

Regarding the private sector, the coordinated plan calls for cross-industry partnerships and synergies, especially among SMEs. It also highlights the need to further leverage the Copernicus programme across several industries (e.g. health, environment, mobility and security). The priority should be on innovative products and services, including those to be used in the fight against climate change.

For us at Logos, where we specialise in the areas where industries converge, this is not just a high-level remark but a substantive priority. Collaboration across industries pools expertise and experience from the different contexts in which AI will be applied, creating large positive spillovers, including novel applications that would not have been conceived under a ‘silo’ approach. Moreover, a greater range of requirements from adopting sectors can be fed back to developers, yielding solutions with a far more compelling and appropriate business case.

One example area in which Logos is active is 5G, where collaboration across vertical industries (e.g. mobility, industrial automation, IoT and health) is bringing once-isolated sectors into dialogue and effective cooperation on the connected future that 5G will enable.

capacity development

Meanwhile, the coordinated plan drew attention to:

 The evident lack of European skills in AI, and in ICT more broadly, to be addressed by incorporating digital skills with a particular focus on AI at all levels of education (including the tertiary sector, where the impact of AI on labour requirements is a particular concern), as well as by retaining specialists in the field;
 The need to establish a safe and secure system for citizens upon which AI and other connected and automated technologies can be implemented. For example, there is a clear need to develop techniques, platforms and policies that can support central data hubs, responding to “the need to build up and strengthen core Artificial Intelligence capacities”.

upcoming legislative reforms

The coordinated plan has also called for a review of current laws to determine whether they are fit for purpose, and to identify opportunities and challenges ahead. Alongside safety, privacy, and cybersecurity, liability will be another major theme. Actions taking place without human interaction – i.e. in all areas of the Digital Single Market involving automation – will require legal clarity for implementation. Accountability, responsibility, and trust – where AI, machine learning, or automation are involved – will become the cornerstone of an upcoming socio-technical debate. In that regard, the many references to EU ethics laws in the current Draft Ethics Guidelines signal the EU’s intent to differentiate itself from China and the United States.

To deliver this European approach, the three-step method outlined in the Draft Ethics Guidelines, published in December 2018 by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG), consists of:

1) Ensuring ethical purpose by maintaining rights, principles and values;
2) Realising trustworthy AI through design requirements (e.g. accountability, data governance, design for all, governance of AI autonomy (human oversight), non-discrimination, respect for human autonomy, respect for privacy, robustness, safety, transparency); and
3) Operationalising AI through an agreed set of assessment criteria, case studies, and (possible) standards.

The AI HLEG will present the final version of its Ethics Guidelines to the European Commission in March 2019 and to the public in April 2019. The Council will review the implementation of the coordinated plan annually. Overall, AI will gain prominence as part of the EU’s industrial agenda, with many initiatives forthcoming in the next semester as per the EU’s AI Roadmap, available via the Commission’s AI portal.

To fuel progress in this area, Logos will foster dialogue on the many outstanding issues. Please contact us to discuss the recipe for European AI further.

make change. join us.