Full AI - The End of Humanity?

There have been a number of sci-fi stories on that theme. It's certainly possible; it might even be inevitable.
 
I don't see why we couldn't catch up, either through genetic manipulation or by combining biological and mechanical components.

I have no idea what the long term result will be, but life could certainly be different. You might have people in different camps too, like biological purists who refuse genetic enhancement.
 
Isn't AI going to be limited by the parameters we set for it, though?

I know the human brain is comparable to a computer, but we can at least think outside the box and expand our understanding. Unless we develop a computer model that can learn and adapt as well as a human brain can.
 
That's the basis of Hawking's comments: we already develop AI that learns from the things we do, such as the program behind Hawking's computer. His theory is that we may develop an AI that begins learning for itself, not from us.
 
We already have biological purists.

It will be worse in the future. And the chasm between the "haves" and "have-nots", whether they're voluntarily "not" or... not... will just become wider.


While AI could definitely wreak havoc on humanity... we'd need to equip them with the tools, first. That's still a long ways away.

At worst, I can see AI, eventually, killing off the need for low-level clerical workers and accountants, as robots have done for menial factory workers. Give them the right prosthetics and remotes, and they can do most menial jobs that aren't on the factory line, as well.

Eventually, they could supplant us up to the highest levels (managers, researchers, etcetera), if the learning algorithm is strong enough... turning us all out of a job.

-

But only if we build enough of them, if we have enough materials to build enough of them, and even then, only if they have enough resources to run themselves and multiply without us.

Lots of ifs. And with the gross excesses of energy surpluses levelling off... that scenario just keeps getting pushed further into the future.
 
@niky, I'm sure that 90% of production line operatives can already tell you that it would take greater AI to replace them than it would to replace their manager :D

Overall I guess we presume that the AI we're talking about is capable of thought, self-preservation and replication. If that's the case then it will mimic a fully intelligent biological organism; if it doesn't, then I guess it's not yet full "AI".

If the AI can think then it has a clear enemy: the human. It would understand that we believed we had the right to disable it ("I'm sorry, Dave, I cannot allow that", etc.) and that we had the ability. It's difficult to imagine that we'd achieve a real "partnership"; there could only be AI-is-lesser-than-us or AI-is-greater-than-us. In the former case we haven't reached "full AI"; in the latter we have created a genuinely threatening organism.

The only mitigation against the risk is therefore control of the additional resources required to operate (like putting a town under siege): we have to retain control of the physical installations that such "machines" need, presuming that they haven't become able to replicate their own power sources and hardware.

Parp.

There's an Asimov story where we see computer-controlled bombers leaving on a sortie to attack the enemy while a return attack takes place. The conflict goes on daily, the machines unaware that their human "masters" all died out millennia before. As @BobK noted there are plenty of SF stories covering AI themes, but I remembered the Asimov one in particular (despite there being no suggestion that the machines are self-programming) because it occurs to me that we may not make a single "AI"; rather, we might make lots of different sorts, depending on who we are.

Say that the US and Russia were each to develop their own along their own lines of thought/policy. How would those AI handle each other?
 
While AI could definitely wreak havoc on humanity... we'd need to equip them with the tools, first. That's still a long ways away.

Eventually, they could supplant us up to the highest levels (managers, researchers, etcetera), if the learning algorithm is strong enough... turning us all out of a job.

Most of the AI developments seem to be done by the military - so they will come tooled up and earlier than you think.

What sort of world are we going to live in when no one has a job? How will the world economy work at that point? Does anybody who does this sort of research ever ask themselves these kinds of questions?
 
Say that the US and Russia were each to develop their own along their own lines of thought/policy. How would those AI handle each other?
They would continue the legacy of Capitalist America vs Communist Russia. :D

In all seriousness, I doubt both countries would achieve self-aware AIs at exactly the same time. If they did, then one AI would probably kill the other off, just like humans killed each other off in ancient times. Or maybe, since they're self-aware AIs (which would mean they're far closer to perfection than any human can be and aren't influenced by irrational thoughts), they'd recognize that the threat posed by the common enemy far outweighs the threat they pose to each other. And then we die.

Also, relevant I think is The Animatrix: Second Renaissance Part I & II. 👍
 
This is a fascinating interview for the BBC with Stephen Hawking... his headline thought is in the thread title, of course.

What are your thoughts?

 
What sort of world are we going to live in when no one has a job?

What would you need "a job" for?
 
We seem to be approaching a society of those with resources, those with skills, and those who are unemployable due to automation.

Damn, that video's difficult to listen to; computer voices have a lot of catching up to do :D EDIT: Or is it a human with weird compression?

I'd ask you the same question that I asked @Tired Tyres; what would you need "a job" for?
 
What sort of world are we going to live in when no one has a job? How will the world economy work at that point?
Great questions!

Personally, I don't need a job, since I have income from pensions, annuities, stocks and CDs, rents and royalties, and government social security retirement benefits.

I suspect many people get by without a job, relying on friends, family, government, or even illegal or black market sources of income.

Even so, many people around the world are employed in doing some kind of paid activity, or job.

We are probably a very, very long way from seeing this basic condition, or economy, change very much.

However, there may come a time when there are large global surpluses of young men and women who lack the income to support and entertain themselves.

Here is where we begin to worry about religious or political movements appealing to disenfranchised masses to take direct action which may threaten the established political order. The stage is set for conflict between the haves and the have-nots, just as it always has been.

But the future is not set in stone. Something always comes along to upset the predictions.
 
Well, like it or not, technology will likely leave all of us - or rather, 99.9% of us - unemployed within 50 years. And if it isn't there in fifty years, it will be in a hundred. Automation is an unstoppable force - automating any productive process presents so many advantages that it would be downright idiotic not to. At some point, we'll have to change our economic and political systems for something entirely different that will better fit a society of consumers rather than producers, or else we will have to live in a world of many have-nots and very few haves.

As for the original topic of the thread - which is not automation, but rather sentient artificial intelligence and how it may be the end of humanity: futurism as an aspiring scientific discipline has always disappointed and always will, because the future is not predictable. And even if it were, we don't have enough elements to predict what the end result of prolonged interaction between man and man-made intelligence will be - if, of course, we ever manage to create sentient artificial constructs.

However, the eventual assimilation of mankind by its own creations seems a very likely and, under the right conditions, even desirable outcome. After all, we already use the Internet as an augmentation of our cognitive capabilities every day when we Google something, or look it up on Wikipedia through our smartphone - an electronic device that has almost become part of our bodies and always follows us.
Technological singularity doesn't necessarily mean the end of mankind; it simply means that mankind will have to change. I believe how we choose to change will be up to us more than anything else.

Of course, step one is avoiding a gray swarm / gray goo scenario. An intelligent, self-replicating AI would likely turn on us not for ethical reasons, but because we could be used as building material.
 
Humans, and by extension machines, do not possess ubiquitous energy just yet. So unless nano and (presumed) AI are combined, "pulling the plug" remains an option.
 
Personally, I think AI, as in an actual, conscious, self-aware entity, would be a very bad idea. I don't think it would get like Terminator, but maybe more like Revolution, where nanobots started taking over humans.
I am not some purist; I am missing a lot of one of my fingers and would love to have a Go Go Gadget finger. I am not opposed to mechanical implants or other such things. But on the notion of AI, I think we should stop at ultra-adaptive programming: something that can learn and adapt at a set task, or set of tasks. Once you give a computer self-awareness, though, I think it is going to go bad fast.
 
What sort of world are we going to live in when no one has a job? How will the world economy work at that point? Does anybody who does this sort of research ever ask themselves these kind of questions?
It does not have to be bad. Just imagine life now, take away 8 hours of work, and maybe have machines do all the cooking and cleaning. You now have a full 24 hours a day to productively chase any meaningless task you want.

Also, while AI and machinery may do a lot of things better than us, they seem to have trouble being human. We will lose cashiers before engineers, but athletes and comedians might be relatively secure for a longer period. In the past people imagined a future where other people would pay to see a horse. In our future, people may pay to see other people filling in for machines out of novelty.
 
...maybe more like Revolution, where nanobots started taking over humans...

That's interesting, now you say it I realise I'm thinking about AI as large control-servers of some kind, and I see that other people think of humanoid things... but nano is a real possibility too. I guess we haven't really got any idea what form it might take.

I saw some fungus that could play the piano the other day (not Jamie Cullum, bless him), so anything's possible :)
 
I don't think human developed AI is the threat to be concerned with, simply because the development is so slow.

I'd be more concerned with Artificial Life (AL?), because imagine that you would programme a single piece of code, a "seed of life", and send it to a computer. What this code would do is tell the computer to create billions and billions of copies, all with tiny random variations, and send these to other computers. The successful variations would be able to keep reproducing - still with tiny variations made - and eventually you'd have a system of evolution developing. Only, with the help of computers each generation could take a fraction of a second to complete, making the evolutionary process a million or even a billion times faster than real-life evolution. Imagine that after one year of evolution in this system, the code would have gone through as many generations as real life has after one billion years. Give the code 3.5 years and it would have gone through as many generations as the difference between us and the first original life form.

That would also probably be the easiest way to create a real artificial intelligence. Rather than manually creating a highly complex code that we don't even understand fully ourselves, we'd use natural selection to do the job for us. Sooner or later we'd get an intelligent form. Of course it would evolve so rapidly that after half a second it'd be replaced by a superior code. Which in turn doesn't have to be more intelligent, it could just be a simple and highly successful virus attacking the code from within.
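Out of curiosity, the core of that idea fits in a few lines of code. Here's a minimal toy sketch in Python - the fitness target, mutation rate and population size are made-up illustration values, and it obviously isn't spreading to other computers, just evolving bit strings in memory:

import random

# Toy sketch only: evolve bit-string "programs" by copy + tiny random variation,
# keeping the most successful ones each generation. GENOME_LEN, POP_SIZE,
# MUTATION_RATE and TARGET are invented for the example.
GENOME_LEN = 32
POP_SIZE = 200
MUTATION_RATE = 0.02
TARGET = [1] * GENOME_LEN  # stand-in for "whatever happens to work" in the environment

def fitness(genome):
    # "Success" here is just similarity to TARGET; in the post it would be
    # the ability to keep spreading to other computers.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Each copy carries tiny random variations.
    return [1 - b if random.random() < MUTATION_RATE else b for b in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for generation in range(10_000):  # each generation takes a fraction of a second
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:
        print("fully adapted variant after", generation, "generations")
        break
    survivors = population[:POP_SIZE // 10]  # only the successful variations keep reproducing
    population = [mutate(random.choice(survivors)) for _ in range(POP_SIZE)]

On a typical run a toy loop like this converges within a few hundred generations at most, which is really the point of the post: simulated generations tick over incomparably faster than biological ones.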
 
I don't think human developed AI is the threat to be concerned with...that would also probably be the easiest way to create a real artificial intelligence. Rather than manually creating a highly complex code that we don't even understand fully ourselves, we'd use natural selection to do the job for us. Sooner or later we'd get an intelligent form. Of course it would evolve so rapidly that after half a second it'd be replaced by a superior code. Which in turn doesn't have to be more intelligent, it could just be a simple and highly successful virus attacking the code from within.

The premise is that the singularity has been reached: there is no difference between the AI system and human intelligence. I'm not aware of an actual definition for "full AI" but I'd say that was pretty much it.

How we got there is a different question - although your post's a good one in that regard, imo.
 
That's interesting, now you say it I realise I'm thinking about AI as large control-servers of some kind, and I see that other people think of humanoid things... but nano is a real possibility too. I guess we haven't really got any idea what form it might take.

I saw some fungus that could play the piano the other day (not Jamie Cullum, bless him), so anything's possible :)
I would imagine something like Transcendence. Maybe not at first; perhaps through its own form of evolution. That is another interesting thought path too. How fast would it develop once consciousness is reached? We have quirks that may act to nudge us down certain paths, or keep us from seeing others: perceptions of time and scale, sickness, concepts of morality, mortality and so on. What happens when you no longer have these? Does it cause a hyper-expansion in its evolution? Does it have a possible opposite effect, where perhaps there is no evolution or change because, without some of those traits we have, it has no will or want to be more? IDK man, but I am not sure we should find out either.
Edit: Yeah Eran, that's kinda the way I am going with my thought process too. Just get it to a point where it can start evolving on its own. After that, it is out of our hands and, honestly, I think out of our control.
 
The premise is that the singularity has been reached: there is no difference between the AI system and human intelligence. I'm not aware of an actual definition for "full AI" but I'd say that was pretty much it.

How we got there is a different question - although your post's a good one in that regard, imo.

But for some reason we value intelligence as the most important trait, when that doesn't necessarily have to be true. Evolution is only interested in what works, and it's not certain that intelligence will always be the best card.

Look at the Ebola virus, that thing is not even a proper form of life (much less an intelligent one) and yet it's a real threat to us.

Regarding intelligent robots, before we reach super-intelligent AI we'd probably get dumb-ass AI that would create some amateur plot to have us all killed, and which fails miserably. It would leave digital fingerprints all over the crime scene and maybe even be stupid enough to wear a hat that says "It was me, I did it". We'd get on to it, shut it down and make sure that we're more careful next time.

It all makes sense: before we get to super-advanced hyper-minds capable of plotting behind our backs, we'd get the not-so-smart AI that makes all kinds of mistakes. And if AI is capable of becoming evil (in our eyes), why would it wait to become evil until it's super-smart?
 
If I think about it, it is based on the basic "if", "then" and "else".
Human thinking is based on that, and so will AI or AL be, when programmed by humans.
Trying to secure humanity would be a simple rule: if obstacle == human, then leave the obstacle alone; else, remove it.
If you remove that simple line, humanity is in trouble.
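Just to make that concrete, here's a minimal sketch of such a hard-coded guard line in Python - the Obstacle type, the action strings and the clear_path function are invented for the example, not any real robotics API:

from dataclasses import dataclass

# Illustration of the single "if obstacle == human" safety line described above.
# Everything here is made up for the example.

@dataclass
class Obstacle:
    kind: str  # e.g. "human", "rock", "crate"

def clear_path(obstacle: Obstacle) -> str:
    if obstacle.kind == "human":
        return "leave it alone"       # the one line protecting us
    else:
        return "remove the obstacle"  # everything else gets cleared away

print(clear_path(Obstacle("human")))  # leave it alone
print(clear_path(Obstacle("crate")))  # remove the obstacle

Delete that first branch and the machine no longer treats us any differently from a crate, which is exactly the worry here.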

Take Ebola, for example: we see Ebola as a threat and call it a virus.
AI or AL will see humanity as a virus/threat when it recognises that humans restrain its (the AI's or AL's) potential.
It will see us as a virus because we kill living beings (animals and, in war/hate, fellow humans).
It will see us as a threat because we want to control the AI/AL and, if in danger, pull the plug.

It is very basic, but let's be honest, humanity is still not as smart as it thinks it is.
If full AI is based on humans, forget it - it will be the end of humanity.
If full AI is based on its own programming? It will leave this planet, when it can, to learn and explore.

That's what I think.
 


I don't think it should be overlooked that we ourselves are machines - we're just biological. You can program a machine however you want. It can be as intelligent as 20 Einsteins but have the curiosity of a doorstop. Self-preservation is not inevitable, even if something is self-aware.

Also, a smart AI should recognize other intelligence. It could be a huge waste of time and resources to "combat" humanity when you can negotiate.
 
There's an Asimov story where we see computer-controlled bombers leaving on a sortie to attack the enemy while a return attack takes place. The conflict goes on daily, the machines unaware that their human "masters" all died out millennia before.

Gut-dammit... now you've got me wracking my head... I remember that story, but not the title...

What sort of world are we going to live in when no one has a job? How will the world economy work at that point?

Look up "HOW2" by Clifford D. Simak... as a funny take on that... but with a chilling undertone.

Also answers @TenEightyOne's question. :D
 