First, the document is divided into two parts. From page 1 to page 18, the authors present what I would call the "spirit of the text": the overall motivations, along with some definitions and proposals. The rest of the document is the more "article X, alinea Y of [insert law here]" kind of stuff.
The interested but hurried reader can probably stick to the first part.
The proposal defines AI systems as "designed for various degrees of autonomy" that, for "a given set of human-defined objectives, generate outputs, such as content, predictions, recommendations, or decisions influencing real or virtual environments". That is quite a vast definition, probably to keep up with the volatility of the AI field. Examples include logistics and dispatching software using AI, or software assisting judges during trials.
High-risk systems can be assessed either by entities outside of the AI system provider (which may differ from the manufacturer) or by the provider itself, depending on the circumstances.
The examples of high-risk AI systems are quite diverse: tax-calculating software, software used in border control, judicial advice and support.
Sanctions are proposed to ensure actors' compliance: up to 4% of annual worldwide turnover for companies (and yes, this draft applies extraterritorially beyond the EU) if compliance is not met after a case is opened by Market Surveillance authorities. I have not found anything allowing citizens to build up such a case, though.
And finally, a European Artificial Intelligence Board, made up of experts and Council representatives, shall be created. Its purpose is to ensure the application of the regulation by the member states, to partake in the development of shared standards, and to provide advice and expertise on the topics addressed by the regulation, among other things.
To sum up!
It's a good baseline, from an AI safety scientist's point of view. Human responsibility is acknowledged, and requirements for robustness, transparency, and opting out are proposed.
There is a lack of precision in the definition of "users", though: the advertising company is the one using a recommendation engine, but the targeted persons are the ones whose data is compromised. Some protections are proposed, but I would have liked a stronger emphasis on this.
Some AI systems are straightforwardly considered illegal: "systems designed or used in a manner that manipulates human behaviour", systems "that exploit information on a group of persons to target their vulnerabilities", mass surveillance, and social scoring.
The first three are "of course" authorized for the sake of public safety if proper safeguards exist; I take this to mean that a dedicated legal framework will be required.
Also, this regulation does not apply to military AI (why?!).