Are you with Musk or Hawking on AI

Seriously, has anyone seen Ex Machina yet? I want to discuss it so badly. I love how the character created the mappings of the AI brain and put it all together. Go see it!

So I just saw it. Cool film, and I liked the key elements of the story, but to me the movie wasn't really great, because a lot of the social interactions that were crucial to the plot just weren't believable.

I don't want to give away too much of the plot, but if you're smart enough to create an AI, you'd think you would have given the control problem a bit more thought.
 
I liked the film (Ex Machina), but it was not as good as I expected. IMHO, Transcendence is a much better ASI film.
Ex Machina sells AI capability short, I think. The plot is much more of a social-engineering game, two humans and an AI outsmarting and manipulating each other (I hope I made my statement vague enough not to spoil it for anyone), rather than an AI-versus-human confrontation.
 
I'm starting to dread all these rather silly AI movies (I just saw Ex Machina). Seriously guys, just ... carry ... a ... gun. At the end of the day, AI is embodied in hardware. One bullet will do wonders.

And don't say that the AI can escape to the network or the cloud. It just doesn't work that way. You need massive time and space coherence for the billions or trillions of synaptic-like computations and communications that need to occur every millisecond.
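To put rough numbers on that coherence claim, here's a back-of-envelope sketch in Python. Every figure in it is an assumption on my part, just the commonly cited human-brain ballpark, not a measurement:

```python
# Back-of-envelope: what it would take to run brain-scale computation
# over a network. All figures below are rough assumptions.

neurons = 1e11              # assumed neuron count (human-brain ballpark)
synapses_per_neuron = 1e4   # assumed synapses per neuron
event_rate_hz = 100         # assumed effective synaptic update rate
bytes_per_event = 4         # assumed payload per synaptic message

events_per_second = neurons * synapses_per_neuron * event_rate_hz
bandwidth = events_per_second * bytes_per_event

print(f"synaptic events per second: {events_per_second:.0e}")  # ~1e17
print(f"naive network traffic: {bandwidth / 1e15:.0f} PB/s")   # ~400 PB/s
```

Hundreds of petabytes per second, with sub-millisecond latency between all the pieces. That's many orders of magnitude beyond any internet link, which is the point: the computation has to stay on tightly coupled local hardware.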

Anyway, all that doomsayers like Musk are doing is inviting government regulation into what is still a nascent pure-research field. Not helpful.
 
I agree with you regarding the movies.

When it comes to real life and the AI research actually being done, I think it's telling that some of the top research groups have come together to discuss how to proceed in a cautious and controlled way, so that nobody suddenly creates strong AI without having thought the control problem through. It doesn't have to be government regulation; it could instead be wise researchers trying to stay one step ahead.
 
Carry a gun? Really? You have simply not grasped any of the ways an AI might gain control, which is rather the whole point. Even those of us who have considered some of the many possibilities can't predict what it could do. You won't see it coming until it's too late.
 
But in this particular movie, carrying a gun actually would have sufficed... which is why it wasn't a great movie.
 
I always ask in these forums: give me an example. And no one ever does. If you can't explain it, it isn't a very good argument.
 
Actually, it's been explained in these forums, as well as in a number of books, but again, you've missed the main point: none of us can likely explain what a higher intelligence would do, any more than a chimp could explain what humans might do.

Ah yes, the fear of the unknown. Hard to argue against a complete unknown.

And no, it hasn't been explained in these forums, or if it has, please point me to the post (I mean, other than just saying that you can't come up with a scenario, i.e. fear of the unknown).

Look, no one is going to develop a completely self-aware superintelligence all of a sudden. Like any engineering problem, AI will be built in incredibly small steps. And it won't have human desires, or heck, any desires at all, because what's the use of building that? It would be like building a car designed to randomly swerve into oncoming traffic. No, smart machines will be purpose-built for specific tasks.
 
I think it has been clearly explained that a recursively self-improving AI system could, in theory, develop superintelligence on its own, on a time scale humans could not recognize. While at this point that is only a theory, the potential negative consequences are so serious from a human point of view (an existential threat) that the possibility must be considered.
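As a toy illustration of that "time scale humans could not recognize" point, here's a minimal model of recursive self-improvement. Every rate in it is invented for illustration, not a prediction:

```python
# Toy model of recursive self-improvement. All parameters are invented
# for illustration: each cycle multiplies capability and shortens the
# time the next cycle takes.

capability = 1.0     # arbitrary units; 1.0 = the initial human-built system
cycle_days = 30.0    # assumed length of the first improvement cycle
gain = 1.5           # assumed capability multiplier per cycle
speedup = 1.5        # assumed shrink factor for each successive cycle

elapsed = 0.0
for cycle in range(1, 21):
    elapsed += cycle_days
    capability *= gain
    cycle_days /= speedup
    print(f"cycle {cycle:2d}: day {elapsed:6.1f}, capability x{capability:,.1f}")
```

Because the cycle times form a geometric series, the total elapsed time converges (here to about 90 days) while capability grows without bound. Under these assumptions, most of the growth happens in a final burst too fast for anyone watching to react to. That's the "fast takeoff" intuition, whatever you think of the premises.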
 
Really? Have you read the theories in the book Superintelligence? It is very possible that it will happen quickly once we get to a certain point.

I really liked the movie compared to Chappie, which I thought was going to be more thought-provoking but was more violent than anything.

Why would he carry a gun? There was no threat until there was. Did he think he would eventually be outsmarted? Yes, I think at some level he knew, but he wasn't expecting violence.
 
Why would he carry a gun? Because he willfully made an autonomously moving machine with a strong desire to escape, and he was its jailer. He'd have to be completely stupid not to realize that the machine might rise up against him.

And no, I haven't read Superintelligence, as no one has been able to succinctly outline a credible threat to me.
 
With all due respect, you should read it and give it some thought before you ridicule the potential danger of AI, especially the chapters on the theoretical methods for giving an AI its motivations, and on the control problem.
 
I know it's been linked before in this thread and elsewhere. This two-part summary has some thought-provoking stuff in it.

The AI Revolution: Road to Superintelligence - Wait But Why
 
The problem with superintelligence theories is that people forget you need very, very specialized hardware to run the AI on, and hardware always has limits. The first AGI machine won't be able to get super smart, because of hardware limits. And a computer can't build anything unless you give it a very advanced robotic body. Then there is the problem of motivation: an artificial brain isn't going to have all the legacy crap a human brain has, like a will to live. It's only going to have what is useful.

I suppose in theory some really rich mad genius (because these machines aren't going to be cheap) could hack together a malevolent superintelligent machine... but even then, one suitably targeted bullet or, if needed, bomb will take it out pretty easily.
 
Your mistake is assuming malevolence, instead of a simple drive to complete a task in ways that may not be beneficial to humans. A simple analogy: consider a wood chipper. It's a dumb machine with no intelligence, designed simply to shred whatever is fed into it. Sometimes people get pulled in along with the wood. No bad intent needed. Also consider computer viruses released onto the web that infect millions of machines; they can never be fully eradicated unless every connected computer on the internet is wiped clean or destroyed.
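That last point is easy to demonstrate. Here's a toy SIS-style (susceptible-infected-susceptible) simulation of a worm on a network; the infection and cleanup rates are invented for illustration:

```python
import random

# Toy SIS model of a self-copying worm. Parameters are invented for
# illustration: each step, an infected host infects one random peer with
# probability beta, and is itself cleaned up with probability gamma.

random.seed(0)
hosts = 10_000
infected = {0}   # patient zero
beta = 0.3       # assumed per-step infection probability
gamma = 0.1      # assumed per-step cleanup probability

for step in range(200):
    next_infected = set(infected)
    for h in infected:
        if random.random() < beta:
            next_infected.add(random.randrange(hosts))  # cleaned hosts can be reinfected
        if random.random() < gamma:
            next_infected.discard(h)
    infected = next_infected

# With beta > gamma the worm settles at an endemic level (~2/3 of hosts
# here) instead of dying out, even though individual machines keep
# getting cleaned.
print(f"still infected after 200 steps: {len(infected):,} of {hosts:,}")
```

The only way the simulated worm ever disappears is if every host is cleaned in the same step, which mirrors the "wipe everything at once" condition above.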