Anyone who has ever heard of Tesla’s annual ‘AI Day’ knows that the hype around artificial intelligence in the automotive sector is real.
Tesla is, of course, not the only one making major moves in advanced machine learning. Almost all vehicle manufacturers are scaling up their investments in AI and scrambling for partnerships with start-ups in the race to develop self-driving cars built on these technologies.
Toyota has invested 400 million dollars in Chinese-American start-up Pony.ai, and Volkswagen has put a staggering 2.6 billion dollars into US autonomous driving start-up Argo AI. Even outsiders like Google’s parent company Alphabet are joining – or indeed leading – the race. One of its subsidiaries, Waymo, offers self-driving taxi services in the US and is valued at about 30 billion dollars.
All this excitement is understandable given the potential of AI-equipped autonomous vehicles to improve the user experience, significantly reduce transport congestion and emissions, and make driving safer overall.
Although commercially available fully autonomous cars (Level 4+ according to SAE standards) are still many years away, it is estimated that by 2030 highly autonomous vehicles could account for 10 to 15 percent of new car sales, and that almost all cars will include some type of AI technology by then. What is clear is that car OEMs and others know they have to spend money to make money, and that, on the consumer side, trust in these technologies is steadily growing.
The market has largely taken the lead on AI and autonomous vehicles, with regulators playing catch-up. Within the European Union, things are moving forward somewhat more quickly than in other regions: Europe’s regulators are working on at least three laws affecting AI and autonomous driving over the coming years. Member States are also active in their own right; Germany approved an autonomous vehicle law earlier this year, for example.
Probably the best known of these initiatives is the so-called AI Act, proposed in April 2021 by the European Commission. The Act is considered to be the first-ever AI legislation in the world. If approved as written, it would ban certain uses of AI and establish human oversight and transparency requirements for AI systems considered high-risk, such as biometric identification, safety components of products and the management of critical infrastructure.
What does this mean for the automotive sector? Although many intelligent driving applications using AI would, in theory, fall into the high-risk category, the proposed law favours sector-specific legislation. In practice, this means the European Commission would prepare an Implementing Act under Regulation 2018/858 on vehicle type-approval that incorporates the main provisions of the AI Act. But this would have to wait until the European Parliament and the Council have finished their co-legislative process.
Industry associations like ACEA and CLEPA have largely welcomed this approach. It gives the automotive sector more time to prepare for what the Implementing Act will bring, it avoids duplicating administrative requirements and, most importantly, it provides more opportunity to discuss the Act’s contents with the Commission. Technical input is crucial in this context, and it is usually discussed in automotive-specific expert groups rather than in larger, horizontal AI ones.
Although the upcoming Implementing Act is perhaps the more relevant piece of legislation, automotive industry players are naturally also interested in, and working on, the AI Act itself. ACEA has provided feedback on the proposal, calling for a clearer definition of users and providers of AI and for a rethink of the requirements on human oversight, among other recommendations.
The other major piece of AI legislation doing the rounds in Brussels, and potentially the more relevant one for the auto sector, is an initiative known as ‘Adapting liability rules to artificial intelligence’. Though still at a very early stage, it proposes to revise the Product Liability Directive, a piece of legislation dating back to 1985 that governs liability claims for defective products in the Union.
Some of the ideas the Commission is floating in its Inception Impact Assessment include expanding the scope of the Directive to cover software and artificial intelligence as products and to include non-material damages, as well as reversing the burden of proof so that it falls on the producer rather than the consumer.
CLEPA and ACEA are lukewarm about this revision. They believe the Directive is still fit for purpose: it has worked well for over 20 years, so they question why it needs a major overhaul now. Some adjustments, such as adding software to the Directive’s scope, could be justified, but these associations argue that the current framework already adequately covers damage caused by a defective AI product.
The proposed revision is still at a very early stage, and stakeholders are invited to give their views in a public consultation open until January, with the legislative proposal not expected until the third quarter of 2022. Although this initiative is overshadowed by the AI Act, experts predict it may well end up being the ‘bigger battle’ for the automotive sector and others, given the liability risks that AI-powered self-driving cars could pose in road accidents.
Lastly, the Commission is also working on a separate Implementing Act under Regulation 2019/2144, known as the General Safety Regulation, to create a type-approval framework for fully autonomous vehicles. A draft of the Implementing Act is being discussed with representatives of civil society and industry in a dedicated expert group. This framework will set administrative and safety requirements for driverless autonomous vehicles, meaning it will mainly apply to robot-taxis and autonomous shuttles. The proposal will likely face some stakeholder opposition, which may lengthen the regulatory process.
The big decisions are still some months away, and heated debate can be expected over these different regulatory paths. What is certain is that they will have a major impact on the automotive sector and on the prospects of self-driving technologies and artificial intelligence. At best, these changes will protect consumers and their safety on the road, boost consumer trust in these technologies and create legal certainty for companies. At worst, they could hamper EU innovation and tie companies up in red tape.
Despite being a first-mover on the regulatory side, the EU is lagging behind the US and China in terms of investment and development of AI technologies. Commentators are even claiming Europe would struggle to take bronze in this race!
Altogether, though, this new package of measures can help Europe catch up and enable car OEMs not only to cash in but also to unlock the societal benefits these technologies can bring to everyone. The key will be for legislators to strike a balance between protecting consumers and users of AI and leaving enough space for innovation, entrepreneurship and market penetration. The auto industry and other actors should help them find this fine balance.