This is a fascinating interview for the BBC with Stephen Hawking... his headline thought is in the thread title, of course.
What are your thoughts?
Isn't AI going to be limited by the parameters we set them to? I know the human brain is comparable to a computer, but we can at least think outside the box and expand our understanding. Unless we develop a computer model that can learn and adapt as well as a human brain can.
That's the basis behind Hawking's comments: we develop AI that already learns from the things we do, such as the program behind Hawking's computer. His theory is that we may develop an AI that begins learning for itself, not for us.
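To be fair, a program that gets better at guessing your next word the more you type is the mundane end of "learning from the things we do". A toy sketch of that kind of usage-driven prediction (purely illustrative: the class, the bigram model and the example data are mine, not the actual software on Hawking's machine):

```python
# Toy next-word predictor: counts which word follows which in everything
# the user has typed, then suggests the most frequent continuations.
from collections import Counter, defaultdict

class BigramPredictor:
    def __init__(self):
        # word -> Counter of the words observed to follow it
        self.counts = defaultdict(Counter)

    def learn(self, sentence):
        """Update the model from one sentence the user actually typed."""
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            self.counts[current][nxt] += 1

    def predict(self, word, k=3):
        """Suggest the k most common next words seen so far."""
        return [w for w, _ in self.counts[word.lower()].most_common(k)]

predictor = BigramPredictor()
predictor.learn("the development of full artificial intelligence")
predictor.learn("the development of thinking machines")
print(predictor.predict("of"))  # ['full', 'thinking']
```

Feed it everything a user has ever typed and it "learns" their habits, but it can only ever predict: nothing in it can start learning for itself, which is exactly the jump Hawking is worrying about.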
We already have biological purists.
It will be worse in the future. And the chasm between the "haves" and "have-nots", whether they're voluntarily "not" or... not... will just become wider.
While AI could definitely wreak havoc on humanity... we'd need to equip them with the tools, first. That's still a long ways away.
At worst, I can see AI, eventually, killing off the need for low-level clerical workers and accountants, as robots have done for menial factory workers. Give them the right prosthetics and remotes, and they can do most menial jobs that aren't on the factory line, as well.
Eventually, they could supplant us up to the highest levels (managers, researchers, etcetera), if the learning algorithm is strong enough... turning us all out of a job.
-
But only if we build enough of them, if we have enough materials to build enough of them, and even then, only if they have enough resources to run themselves and multiply without us.
Lots of ifs. And with our gross energy surpluses levelling off... that scenario just keeps getting pushed further into the future.
Say that the US and Russia were each to develop their own AI along their own lines of thought/policy. How would those AI handle each other?
They would continue the legacy of Capitalist America vs Communist Russia.
Most of the AI developments seem to be done by the military - so they will come tooled up and earlier than you think.
What sort of world are we going to live in when no one has a job? How will the world economy work at that point? Does anybody who does this sort of research ever ask themselves these kinds of questions?
We seem to be approaching a society of those with resources, those with skills, and those who are unemployable due to automation.
Damn, that video's difficult to listen to; computer voices have a lot of catching up to do. EDIT: Or is it a human with weird compression?
What sort of world are we going to live in when no one has a job? How will the world economy work at that point?
Great questions!
What sort of world are we going to live in when no one has a job? How will the world economy work at that point? Does anybody who does this sort of research ever ask themselves these kinds of questions?
Does not have to be bad. Just imagine life now, take away 8 hours of work, and maybe have machines do all the cooking and cleaning. You now have a full 24 hours a day to productively chase any meaningless task you want.
...maybe more like Revolution, where nano bots started taking over humans...
I don't think human-developed AI is the threat to be concerned with... that would also probably be the easiest way to create a real artificial intelligence. Rather than manually creating highly complex code that we don't even fully understand ourselves, we'd use natural selection to do the job for us. Sooner or later we'd get an intelligent form. Of course, it would evolve so rapidly that after half a second it'd be replaced by superior code. Which in turn wouldn't have to be more intelligent; it could just be a simple and highly successful virus attacking the code from within.
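"Using natural selection to do the job for us" has a standard shape in software: a genetic algorithm. A minimal sketch of the idea, with a deliberately silly stand-in goal (a bit string of all ones) and every name and parameter invented for illustration:

```python
# Minimal genetic algorithm: random genomes, selection of the fittest,
# crossover and mutation, repeated until a genome hits the target.
import random

TARGET = [1] * 32  # stand-in for "whatever behaviour we actually want"

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.02):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break  # a genome meeting the goal has evolved
    parents = population[:10]  # selection: only the fittest reproduce
    population = [mutate(crossover(random.choice(parents),
                                   random.choice(parents)))
                  for _ in range(50)]

best = max(population, key=fitness)
print(f"generation {generation}: fitness {fitness(best)}/{len(TARGET)}")
```

Nobody writes the winning genome by hand; selection, crossover and mutation find it. Swap the toy fitness function for "behaves intelligently" and you have the scenario above, complete with its catch: whatever scores well gets kept, whether or not we understand it.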
I would imagine something like Transcendence. Maybe not at first; perhaps through its own form of evolution. That's another interesting thought path too: how fast would it develop once consciousness is reached? We have quirks that may act to nudge us down certain paths, or keep us from seeing others: perceptions of time and scale, sickness, concepts of morality, mortality and so on. What happens when you no longer have these? Does it cause a hyper-expansion in its evolution? Or does it have the opposite effect, where perhaps there is no evolution or change, because without some of those traits it has no will or want to be more? IDK man, but I am not sure we should find out either.
That's interesting; now you say it, I realise I'm thinking about AI as large control-servers of some kind, and I see that other people think of humanoid things... but nano is a real possibility too. I guess we haven't really got any idea what form it might take.
I saw some fungus that could play the piano the other day (not Jamie Cullum, bless him), so anything's possible.
The premise is that the singularity has been reached: there is no difference between the AI system and human intelligence. I'm not aware of an actual definition for "full AI", but I'd say that was pretty much it.
How we got there is a different question - although your post's a good one in that regard, imo.
If I think about it, it is all based on the basic "if", "then" and "else".
Human thinking is based on that, and so will AI or AL (artificial life) be when programmed by humans.
Trying to secure humanity would be as simple as: if obstacle == human, then leave the obstacle alone, else remove it.
If you remove that simple line, humanity is in trouble.
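Rendered literally as code, the safeguard really is a single branch (every name here is hypothetical, just to make the pseudocode concrete):

```python
# One hard-coded rule: humans are the special case; everything else is
# fair game. Deleting the first branch deletes the protection.
from dataclasses import dataclass

@dataclass
class Obstacle:
    name: str
    is_human: bool

def handle(obstacle: Obstacle) -> str:
    if obstacle.is_human:                      # "if obstacle == human"
        return f"leave {obstacle.name} alone"  # "then leave obstacle"
    return f"remove {obstacle.name}"           # "else remove it"

print(handle(Obstacle("pedestrian", is_human=True)))      # leave pedestrian alone
print(handle(Obstacle("fallen branch", is_human=False)))  # remove fallen branch
```

The constraint is one line, and it only exists as long as someone writes it and keeps it there.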
Take Ebola, for example: we see Ebola as a threat and call it a virus.
AI or AL will see humanity as a virus/threat when it recognises that humans restrain its (the AI's or AL's) potential.
It will see us as a virus because we kill living beings (animals and, in war/hate, fellow humans).
It will see us as a threat because we want to control the AI/AL and, if in danger, pull the plug.
It is very basic, but let's be honest, humanity is still not as smart as it thinks it is.
If full AI is based on humans, forget it: it will be the end of humanity.
If full AI is based on its own programming? It will leave this planet, when it can, to learn and explore.
That's what I think.
There's an Asimov story where we see the computer-controlled bombers leaving on a sortie to attack the enemy while a return attack takes place. The conflict goes on daily, neither side knowing that the human "masters" all died out millennia before.