The agricultural machinery industry calls upon legislators to narrow the scope and to keep the door open for innovation by letting the sector prove its expertise.

***

The AI Regulation puts in place a clear regulatory framework for several types of AI, including high-risk AI. Among high-risk AI it also counts safety-critical systems, meaning systems whose safety can be endangered by the use of AI. The Regulation leaves the decision on what constitutes a “safety-critical system due to the use of AI” to more directly applicable product safety legislation, such as the Machinery Directive, which is currently under revision. Such systems are also subject to third-party assessment.

To cover the topic of AI, the European Commission included in the annex of ‘high-risk machinery products’ safety functions using AI, whether placed on the market within a product or independently.

With this provision, the field of application would be too broad and the burden disproportionate to the risks to be covered, as the two other EU decision-making bodies have already noted.

The agricultural machinery industry calls upon the legislators not to look only at the novelty of the technology but at the specific application, the related possible hazardous situations and the possibility for the manufacturer to remain in control. The scope should therefore be limited to ‘software using machine learning for automated decision making’. In plain words, it has to be restricted to (safety) components that use data sets to learn from experience and to make decisions without being explicitly programmed to do so, and that are programmed to update their prediction capability from data collected after the machine has been placed on the market. This would avoid targeting software that is incapable of learning or evolving independently and that is programmed only to optimally execute certain automated functions of machinery. Such optimal execution of functions could still be further improved by batch learning under the supervision of the manufacturer and be implemented through software updates.

But even with such a limited scope, there are many examples of low-risk situations in which the stringent requirements are disproportionate, and the legislation provides no means of distinguishing them further. We therefore call for the reintroduction in Annex I of the possibility of self-certification, on the condition that a harmonized standard covers all risks. With a successful harmonized standard, industry proves it has the expertise to use AI safely. And if it does, one of the main reasons for introducing additional third-party assessment, i.e. a lack of expertise, becomes obsolete. Industry should then be allowed to use self-certification.

To quote the European Parliament’s AIDA special committee on Artificial Intelligence in a Digital Age: if the ethical risks are tackled, the huge potential of AI can be unleashed; further, the AI Regulation should focus on the level of risk associated with specific uses.

For safety-critical systems, this advice seems equally fit for the Machinery Directive under revision. As it stands, the latter closes the door to innovation rather than being future-proof, as it claims to be. For such novel technologies, legislation should guide, not dictate.

It is not too late to make these changes, on the basis of the growing evidence being put forward.