@rhapsodos it will destroy the world _precisely_ because it's classifying a cat as a dog.

@Wolf480pl dogs are better than cats so I don't really see where all that doomsday will come from

@rhapsodos when people use AI for recommendation algorithms, the AI will start tricking people into getting cats instead of dogs, because it doesn't know the difference. And then all humans will end up equipped with cats, and nobody will have dogs anymore. If that isn't the start of a doomsday, I don't know what is.

@rhapsodos
More seriously:

The issue is that the AI will make weird mistakes that we don't notice and can't predict. It will seem that the AI is doing a good job, so it will be trusted to make important decisions, but then it'll make its very weird mistakes once every 1000 very important decisions, and that may have catastrophic effects.

We know what kinds of mistakes people make, and we can account for them.
But not only do we not know what kind of mistakes AI will make, we're actively trying to pretend it doesn't make mistakes. And that is IMO a recipe for disaster.

@Wolf480pl I am a PhD student working on deep learning verification, so I am well aware of the issues you pointed out :D
The problem with the "AI will kill us all" discourse is that it somewhat sets up the technology (AI) as a strawman, without considering the obvious fact that humans design and use those algorithms. That injects our biases and errors into the algorithms, which are then presented as "different" from us in their motivations

@Wolf480pl what is interesting is that in France, most of that discourse is produced by people with strong visibility (Laurent Alexandre, Cedric Villani). Depoliticizing AI could be beneficial for them. Thank you for helping me point out that issue :)

@rhapsodos
I'm happy to hear that a field like "deep learning verification" exists.

I agree that it's wrong to say that AI is an evil technology. AI is just a tool that humans use. And often misuse.

There are only two complaints I have about machine learning as a tool:

1. It's a heuristic. It's not meant to give you the correct answer 100% of the time, and it never will.

2. It's hard to predict how it will behave. With a hand-written heuristic algorithm, you have a set of hand-picked rules that give you predictable results, including predictable failures. With ML, you don't know what heuristics the matrix of numbers represents.
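The contrast in (2) can be sketched in a few lines: a hand-written heuristic is a readable rule you can audit, while a learned model is the same kind of decision encoded as opaque numbers. A toy illustration, with all names, weights, and thresholds invented for the sake of the example:

```python
# Hand-written heuristic: the rule is explicit, so its failures are predictable.
def is_spam_handwritten(subject: str) -> bool:
    return "free money" in subject.lower() or subject.isupper()

# "Learned" model: the same kind of decision, but the rule is implicit in
# a bunch of numbers. These weights are made up for illustration; a real
# model would have millions of them, none individually interpretable.
WEIGHTS = [0.8, -0.3, 1.2]

def is_spam_learned(features: list[float]) -> bool:
    score = sum(w * x for w, x in zip(WEIGHTS, features))
    return score > 0.5  # why 0.5? why these weights? the matrix won't tell you

print(is_spam_handwritten("FREE MONEY NOW"))  # True, and you can see exactly why
print(is_spam_learned([1.0, 0.0, 0.2]))       # True, but the "why" is buried
```

Both classifiers can be wrong; the difference is that you can read the first one's mistakes off the source code, while the second one's mistakes only show up empirically.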

Now, because of (2), some people are like "it wasn't me, the algorithm did that", and then the reaction is "well then the algorithm is evil".

So I guess if ML users didn't use ML to shield themselves from responsibility, you wouldn't see all this "AI is evil" stuff.
IOW, by blaming the AI, people fall into the trap set by those who deploy the AI.

@pjm @rhapsodos
I'm no PhD either, but IMO the core problem is the approach that people (and corporations) using machine learning take.

Even if we develop ways to control AGI, we can be sure that someone with a lot of money will be careless enough to ignore all of them and deploy an unrestricted AGI. Unless we teach the devs and/or executives to behave responsibly or sth.

@rhapsodos @Wolf480pl a: cats are definitely better. I know because that's how they behave.

b: fairly sure the application of machine learning to the problem of animal ranking is a step towards the aipocalypse.

"You think I should farm sheep? No way, sortnet says velociraptors/people are better" (delete as applicable to choose your own sci-fi horror scenario)

@captainbland @Wolf480pl

\begin{itemize}
\item no, from a representative sample formed with me, it has been shown to be false
\item pretty sure the AI system will then propose buses as targets: lots of people inside, and they cannot go far without fuel... lots of smart correlations going on under the hood
\end{itemize}

@rhapsodos is it maybe because of the green square around the cat's face that says dog?

that would confuse me too

@hirojin would you rather trust labels output by a complex automaton with some yet-to-be-explained unexpected behavior, or your own good sense?

People tend to think of AI as somewhat sentient, when it's just a bunch of additions and multiplications (and an occasional "if then else"), plus people using that bunch of operations to do stuff. Some of that stuff may or may not be shady
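For what it's worth, the "additions and multiplications plus an if-then-else" description is quite literal: one artificial neuron is a weighted sum followed by an activation such as ReLU. A minimal sketch, with all the numbers invented for illustration:

```python
# One artificial "neuron": multiply, add, then an if-then-else (ReLU).
def neuron(inputs, weights, bias):
    total = bias
    for x, w in zip(inputs, weights):
        total += x * w                   # the multiplications and additions
    return total if total > 0 else 0.0   # the occasional "if then else"

# Toy numbers, purely illustrative: 1.0 + 1.0*2.0 + 2.0*(-0.5) = 2.0
print(neuron([1.0, 2.0], [2.0, -0.5], 1.0))  # 2.0
```

A deep network is just many of these stacked, which is why "sentience" is a misleading frame: everything it does reduces to this arithmetic.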

@rhapsodos It happens to the best. When I search for "Cat" in Google Photos, I also see some Dogs.

@lerk you probably misspelled "good boys" in your research query

@rhapsodos I tried that - Google Photos found nothing and thus tried to rope me into doing unpaid work* for them.

*"Crowdsourcing"

@lerk come on, you could always put your Amazon Mechanical Turk participation on your resume, it will certainly help you find a meaningful**, well-paid* job.

* that is, engineer at Google or Facebook or any company of that kind
** in merit, money may come as a bonus since you are doing that out of passion
