In particular, if we take "consent" as a basis, as the GDPR does, "user" consent is of little use here. What we want is for the targeted persons to be aware that an AI system is currently running on their data, be it biometric or digital.

To sum up!

It's a good baseline, from an AI safety scientist's point of view. The responsibility of humans is acknowledged, and requirements of robustness, transparency and opting out are proposed.

There is a lack of precision in the definition of "users" though. An advertising company is the one using a recommendation engine, but the targeted persons are the ones whose data is compromised. Some protections are proposed, but I would have liked a stronger emphasis on this.

And finally, a European Artificial Intelligence Board, made up of experts and Council representatives, shall be created. Its purpose is to ensure the application of the regulation by the member states, partake in the development of shared standards, and provide advice and expertise on the topics addressed by the regulation, among other things.

There are some rules to preserve innovation, especially for startups. They mostly consist of "AI regulatory sandboxing schemes", basically settings where the regulation can be relaxed under legal oversight.

Sanctions are proposed to ensure compliance: 4% of annual worldwide turnover for companies (and yes, this draft applies extraterritorially beyond the EU) if compliance is not met after a case is opened by market surveillance authorities. I have not found anything allowing citizens to open such a case themselves, though.

A strong emphasis is placed on the transparency of systems. Targeted people should be aware of when an AI system is used and to what extent, and have access to some metrics on its inner workings and design.

Some AI systems are straightforwardly considered illegal: "systems designed or used in a manner that manipulates human behaviour", systems "that exploit information on a group of persons to target their vulnerabilities", mass surveillance and social scoring.

The first three are "of course" authorized for the sake of public safety if proper safeguards exist: I think this means that a legal framework will be required.

Also, this regulation does not apply to military AI (why?!).

High-risk systems can be assessed either by entities outside of the AI system provider (different from the manufacturer), or by the provider itself, depending on the circumstances.

The examples of high-risk AI systems are quite diverse: tax-calculating software, software used in border control, judicial advice and support tools.

In particular, the notion of a high-risk AI system is emphasized, and it is on those systems that most of the regulation focuses.

A high-risk system is defined as software whose malfunction can result in harm to persons or property, damage to the environment, disruption of democratic processes, and so on.

Note the notion of "reasonably foreseeable misuse": malfunctions that do not occur within the system's nominal use, but that could have been foreseen considering human behaviour.

The proposal defines AI systems as "designed for various degrees of autonomy", which, for "a given set of human-defined objectives, generate outputs, such as content, predictions, recommendations, or decisions influencing real or virtual environments". That's quite a broad definition, probably to keep pace with the volatility of the AI field. Examples include logistics and dispatching software using AI, or software assisting judges during trials.

The first thing is that the proposal acknowledges the costs of AI, as well as its potential to harm people. Mentions of social bias and of direct and indirect discrimination are numerous. It places a lot of emphasis on AI systems put on the "EU Market", and on "Market Overseers".

First, the document is divided into two parts. From page 1 to page 18 there is what I would call the "spirit of the text", where the authors present the overall motivations, as well as some definitions and proposals. The rest of the document is the more "article x, alinea y, [insert law here]" kind of stuff.

The interested but hurried reader can probably stick to the first part.

The EU Council leaked/published/made available a draft of a future legal framework on AI. I will make some comments as I read. The full text is available here:

Disclaimer: I am not trained in law, so I will certainly make mistakes while interpreting the text. Also keep in mind that this is not a definitive version, and the source itself is not confirmed.

With all of that in mind, let's jump in.

My ex-intern is about to present at a small workshop. He made the presentation and worked hard to get a grasp on the talk; I'm sure it will go well for him :D

My advising style was initially quite paternalistic, and trying to get out of that mindset is difficult, yeah. But I realized the tremendous importance of enabling autonomy in individuals, and this is something you can't do by just being "the comforting dude guiding all the way through". You need to give space for experimentation, errors and mistakes. And if your planned paper is not ready... well, it's just a paper versus the development of a human being and a future scientist.

mental health 

Even during my depression last year, his advisors and I were able to develop his self-confidence. He is presenting his first scientific contribution in two days, and I'm damn proud of him.

Even during the pandemic, one of my best experiences as a PhD student was overseeing an intern. Seeing him grow and acquire autonomy, both technically and personally, is probably one of the things I am most proud of.

I really appreciate that I have never had a single student who didn't appreciate libraries.
Now, they may not have gone to the library physically, but they all used their resources and they all appreciated them in principle.

Libraries are the best.

Paola Tubaro's seminar was really great (and so was the little social moment at the end)!!!

a few references:


Why are there still so many jobs? The history and future of workplace automation
