@Wolf480pl dogs are better than cats so I don't really see where all that doomsday will come from

@Wolf480pl I am a PhD student in deep learning verification, so I am well aware of the issues you pointed out :D
The problem with the "AI will kill us all" discourse is that it somewhat places the technology (AI) as a strawman, without considering the obvious fact that humans design and use those algorithms. That injects our biases and errors into the algorithms, which are then presented as "different" from us in their motivations

@Wolf480pl what is interesting is that in France, most of that discourse is produced by people with strong visibility (Laurent Alexandre, Cedric Villani). Depoliticizing AI could be beneficial for them. Thank you for helping me point out that issue :)

@rhapsodos @Wolf480pl Isn't most of the more sober concern regarding AI to do with self-improving general intelligences being inherently unpredictable and difficult to create safety measures for that couldn't be circumvented by their self-modifications? Granted, that kind of AI is presumably some way off, but it makes sense to consider the dangers now so that we have a good basis of theory by the time they do become a practical reality (if they ever do). But, if I'm not mistaken, it would be a lot more difficult to steer or limit an AGI in the same way that you can for limited-scope AIs, because the design is by definition open-ended and malleable.

I'm *not* a PhD student of any description, so perhaps I'm way off, but that seems like the angle with the greatest potential threat. But I'm sure a lot of people are way out regarding realistic timescales for that kind of thing; for all I know, it may never be feasible.

@rhapsodos @Wolf480pl a: cats are definitely better. I know because that's how they behave.

b: fairly sure the application of machine learning to the problem of animal ranking is a step towards the aipocalypse.

"You think I should farm sheep? No way, sortnet says velociraptors/people are better" (delete as applicable to choose your own sci-fi horror scenario)

@captainbland @Wolf480pl

\begin{itemize}
\item no, from a representative sample consisting of me, that has been shown to be false
\item pretty sure the AI system will then pick buses as targets: lots of people inside, can't go far without fuel... lots of smart correlations going on under the hood
\end{itemize}

@rhapsodos is it maybe because of the green square around the cat's face that says dog?

that would confuse me too

@hirojin would you rather trust labels output by a complex automaton with some yet-to-be-explained unexpected behavior, or your own good sense?

People tend to think of AI as somewhat sentient, when it's really just a bunch of additions, multiplications (and an occasional "if then else"), and people using that bunch of operations to do stuff. Some of that may or may not be shady
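
To make that concrete, here is a minimal sketch, in plain Python, of what a single neural-network "neuron" computes; the function name, the weights, and the toy "cat vs dog" framing are made up for illustration.

# One "neuron": just multiplications, additions, and an if-then-else (ReLU)
def neuron(inputs, weights, bias):
    total = bias
    for x, w in zip(inputs, weights):
        total += x * w          # multiplications and additions
    if total > 0:               # the occasional "if then else"
        return total
    else:
        return 0.0

# Toy "cat vs dog" score from three made-up pixel features
print(neuron([0.2, 0.7, 0.1], [1.5, -0.3, 0.8], bias=0.05))

Stack millions of those and you get a deep network: still nothing sentient, just arithmetic arranged by the people who train and deploy it.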

@rhapsodos It happens to the best. When I search for "Cat" in Google Photos, I also see some Dogs.

@lerk you probably misspelled "good boys" in your search query

@rhapsodos I tried that - Google Photos found nothing and thus tried to convince me to do unpaid work* for them.

*"Crowdsourcing"

@lerk come on, you could always put your Amazon Mechanical Turk participation on your resume; it will certainly help you find a meaningful**, well-paid* job.

* that is, engineer at Google or Facebook or any company of that kind
** meaningful in merit; money may come as a bonus since you are doing it out of passion

@AIchkat3r @rhapsodos Ah, i see you performed what computer vision research calls "instance segmentation".
