For all women in cybersecurity, here is a workshop that might be of interest: https://casa.rub.de/en/wisc-workshop
Hello Fedi, I'm visiting apartments in Tours on June 1st. Since the train journey is quite long, it would really help if I could sleep in Tours or nearby (as long as I can reach the city fairly quickly by public transport) both the night before and that same evening.
Would anyone be willing to host me for that short stretch? I can offer a meal or something in exchange.
A little #boost would help me out.
Thank you very much.
PS: I'm vaccinated and I still have my immunity from February. I'm very careful about hygiene; I think @kaerhon and @gaelle can confirm.
I do that too, and the lady is telling the truth.
RT @TiphaineAnna@twitter.com
The advantage of writing thesis chapters out of order? It's easier to correct certain phrasings to harmonize the whole, since your writing evolves between the beginning and the end of the thesis.
🐦🔗: https://twitter.com/TiphaineAnna/status/1394276973623525379
If you can read English and you're interested in #artificialIntelligence, I strongly recommend this paper by Melanie Mitchell. She offers an interesting perspective on the history of AI winters and springs, and on the fallacies that generate them. She also shows why a new winter may be coming, and how we haven't learned from history. The paper is accessible to anyone interested in AI; no need to be a researcher yourself.
#SummerSchool will be in session soon!
May 12 we are going to be opening up the forms for y'all to submit your research proposals!
It is an interdisciplinary academic conference. We are so excited for the fediverse to come together and learn and share together.
So keep your eyes peeled and your minds a'thinking of your proposal.
Graphics designed by Csepp! @cspp@merveilles.town
Still not sure why the organisers moved the deadline... four days later, including a weekend? It is absolutely out of the question that I work on weekends, and I imagine researchers with children don't have that luxury either.
Aaaand it's done, my paper for FM 2021 is submitted. I will probably put it on arXiv or HAL soon once the publication dust has settled, along with the code repository.
I don't know if it will be accepted, frankly. But I have been working on it for almost six months, and I already missed one deadline, so I wanted to get it out into the wild and see.
In particular, if we take "consent" as a basis, as the GDPR does, the "user's" consent is of little use here. What we want is for the targeted persons to be aware that an AI system is currently running on their data, be it biometric or digital.
To sum up!
It's a good baseline, from an AI safety scientist's point of view. The responsibility of humans is acknowledged, and requirements for robustness, transparency and opting out are proposed.
There is a lack of precision in the definition of "users", though. An advertising company uses a recommendation engine, but the targeted persons are the ones whose data is compromised. Some protections are proposed, but I would have liked a stronger emphasis on this.
And finally, a European Artificial Intelligence Board, made up of experts and Council representatives, shall be created. Its purpose is to ensure the application of the regulation by the member states, partake in the development of shared standards, and provide advice and expertise on the topics addressed by the regulation, among other things.
There are some rules to preserve innovation, especially for startups. They mostly consist of "AI regulatory sandboxing schemes": settings in which the regulation can be relaxed under legal supervision.
Sanctions are proposed to ensure actors' compliance: 4% of annual worldwide turnover for companies (and yes, this draft is extraterritorial to the EU) if compliance is not met after a case is opened by Market Surveillance authorities. I have not found anything allowing citizens to bring such a case themselves, though.
A strong emphasis is placed on the transparency of systems. Targeted people should be aware of when an AI system is being used and to what extent, and should have access to some metrics on its inner workings and design.
Some AI systems are straightforwardly considered illegal: "systems designed or used in a manner that manipulates human behaviour", systems "that exploit information about a group of persons to target their vulnerabilities", mass surveillance, and social scoring.
The first three are "of course" authorized for the sake of public safety if proper safeguards exist: I take this to mean that a legal framework will be required.
Also, this regulation does not apply to military AI (why?!)
High-risk systems can be assessed either by entities outside of the AI system provider (distinct from the manufacturer) or by the provider itself, depending on the circumstances.
The examples of high-risk AI systems are quite diverse: tax-calculating software, software used in border control, and judicial advice and support tools.
PhD student in formal methods and deep learning. Aiming to bring trust to deep-learning-based software.
Also, I cook when I'm not doing any OCaml.