First, the document is divided into two parts. From page 1 to page 18, there is what I would call the "spirit of the text", where the authors present the overall motivations of the text, along with some definitions and proposals. The rest of the document is the more "article x, alinea y of [insert law here]" kind of stuff.

The interested but hurried reader can probably stick to the first part.

The first thing is that the proposal acknowledges the costs of AI, as well as its potential to harm people. Mentions of social bias and of direct and indirect discrimination are numerous. It puts a lot of emphasis on AI systems put on the "EU Market", and on "Market Overseers".

The proposal defines AI systems as "designed for varying degrees of autonomy" that, "for a given set of human-defined objectives, generate outputs, such as content, predictions, recommendations, or decisions influencing real or virtual environments". That is quite a broad definition, probably to keep pace with the volatility of the AI field. Examples include logistics and dispatching software using AI, or software assisting judges during trials.

In particular, the notion of a high-risk AI system is emphasized, and it is on those systems that most of the regulation focuses.

A high-risk system is defined as software whose malfunction can result in harm to persons or property, damage to the environment, disruption of democratic processes, and so on.

Note the notion of "reasonably foreseeable misuse": malfunctions that do not occur within the system's nominal use, but that could have been foreseen considering human behaviour.


The high-risk status of a system can be assessed either by entities outside the AI system provider (distinct from the manufacturer) or by the provider itself, depending on the circumstances.

The examples of high-risk AI systems are quite diverse: tax calculation software, software used in border control, judicial advice and support.


Some AI systems are straightforwardly considered illegal: "systems designed or used in a manner that manipulates human behaviour", systems "that exploit information on a group of persons to target their vulnerabilities", mass surveillance, and social scoring.

The first three are "of course" authorized for the sake of public safety if proper safeguards exist: I take this to mean that a legal framework will be required.

Also, this regulation does not apply to military AI (why?!).

A strong emphasis is placed on the transparency of systems. Targeted people should be aware of when an AI system is used and to what extent, and should have access to some metrics on its inner workings and design.

Sanctions are proposed to ensure compliance: up to 4% of a company's annual worldwide turnover (and yes, this draft applies extraterritorially to the EU) if compliance is not met after a case is opened by Market Surveillance authorities. I have not found anything allowing citizens to initiate such a case, though.

There are some rules to preserve innovation, especially for startups. They mostly consist of "AI regulatory sandboxing schemes", basically settings where the regulation can be relaxed under legal supervision.

And finally, a European Artificial Intelligence Board, made up of experts and Council representatives, shall be created. Its purpose is to ensure the application of the regulation by the member states, take part in the development of shared standards, and provide advice and expertise on the topics addressed by the regulation, among other things.

To sum up!

It's a good baseline, from an AI safety scientist's point of view. The responsibility of humans is acknowledged, and requirements of robustness, transparency and opting out are proposed.

There is a lack of precision in the definition of "users", though. An advertising company is the one using a recommendation engine, but the targeted persons are the ones whose data is at stake. Some protections are proposed, but I would have liked a stronger emphasis on this.

In particular, if we take "consent" as a basis, just like the GDPR does, the "user's" consent is of little use here. What we want is for the targeted persons to be aware that an AI system is currently running on their data, be it biometric or digital.
