First, the document is divided into two parts. From page 1 to page 18, there is what I would call the "spirit of the text", where the authors present the overall motivations of the text, as well as some definitions and proposals. The rest of the document is the more "article X, alinea Y of [insert law here]" kind of stuff.
The interested but hurried reader can probably just read the first part.
The proposal defines AI systems as software "designed for various degrees of autonomy" that, "for a given set of human-defined objectives, generate outputs, such as content, predictions, recommendations, or decisions influencing real or virtual environments". That's quite a broad definition, probably to keep pace with the volatility of the AI field. Examples include logistics and dispatching software using AI, or software assisting judges during trials.
In particular, the notion of a high-risk AI system is emphasized, and it is on these that most of the regulation focuses.
A high-risk system is defined as software whose malfunction can result in harm to persons or property, damage to the environment, disruption of democratic processes, and so on.
Note the notion of "reasonably foreseeable misuse": these are malfunctions that do not occur within the system's nominal use, but that could have been foreseen considering human behaviour.
High-risk systems can be assessed either by entities outside of the AI system provider (distinct from the manufacturer) or within the provider itself, depending on the circumstances.
The examples of high-risk AI systems are quite diverse: tax-calculating software, software used in border control, judiciary advice and support.
A strong emphasis is placed on the transparency of systems. Targeted people should be aware when an AI system is used and to what extent, and should have access to some metrics on its inner workings and design.
And finally, a European Artificial Intelligence Board, made up of experts and Council representatives, shall be created. Its purpose is to ensure the application of the regulation by the member states, to partake in the development of shared standards, and to provide advice and expertise on the topics addressed by the regulation, among other things.
To sum up!
It's a good baseline, from an AI safety scientist's point of view. Human responsibility is acknowledged, and requirements for robustness, transparency, and opting out are proposed.
There is a lack of precision in the definition of "users", though. An advertising company uses a recommendation engine, but the targeted persons are the ones whose data is compromised. Some protections are proposed, but I would have liked a stronger emphasis on this.
Sanctions are proposed to ensure the compliance of actors: up to 4% of annual worldwide turnover for companies (and yes, this draft applies extraterritorially beyond the EU) if compliance is not met after a case is opened by Market Surveillance authorities. I have not found anything allowing citizens to build up such a case, though.