Full AI - The End of Humanity?

Anyone see the news that Cambridge University has built a robot that can build other baby robots by itself and learn each time how to make them better? It's called Mother :scared:



Skynet here we come...
 
A robot can learn. It can learn to recognize its "mother" station. It can learn to fear death, or fear things that cause it "pain" or to lose function. We will most likely build our first human-level intelligence not by programming it directly, but by programming the foundations of intelligence and letting it grow from there, into human-level (or higher) intelligences with vast computational power at their fingertips.

I agree, but not the usual "coded" way.
I think, just like with humans, we have to put in some lines that define for the robot what "life" is, and then it should learn the rest. It's a very complicated process imo.
 
You can laugh... but you should have seen the size of the catapult.

The proper term, dear chap, is ballista.

-

Aside from Deep Dream, Google is now using neural networks to create video out of stills. Sort of like a morph feature, but more intelligent.

Check out Deep Stereo:


This is exactly how human vision works. We see "stills" (at about thirty or so frames a second) and imagine the parts we don't see between those stills.
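Very roughly, it's predicting the frames you never captured from the ones you did. Here's a toy sketch of the dumbest possible version (my own illustration, nothing to do with Google's actual code; interpolate_frames, still_1 and still_2 are made-up names), just a plain cross-fade between two stills. DeepStereo trains a neural network to do the "imagining" far more intelligently than this:

import numpy as np

def interpolate_frames(frame_a, frame_b, t):
    # Imagined frame at position t (0..1) between two stills: a plain cross-fade,
    # standing in for what a learned model would predict.
    blended = (1.0 - t) * frame_a.astype(np.float32) + t * frame_b.astype(np.float32)
    return blended.astype(frame_a.dtype)

# e.g. the halfway frame between two same-sized RGB stills:
# middle = interpolate_frames(still_1, still_2, 0.5)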

Fascinating stuff.
 
There's a call for a ban on robots for, erm, intimate adult entertainment. BBC.
They reinforce stereotypes of women... If there's enough demand there'll be male dolls too. Only rich people will be able to afford them anyway; aren't those realistic-looking dolls without any AI worth thousands of pounds already?
 
They reinforce stereotypes of women... If there's enough demand there'll be male dolls too. Only rich people will be able to afford them anyway; aren't those realistic-looking dolls without any AI worth thousands of pounds already?
I don't know where you're shopping for them, but it's the wrong place.
 
I don't know where you're shopping for them, but it's the wrong place.
I'm no expert. :lol: It's just that I saw a documentary a few years back where people were paying thousands for these realistic-looking dolls, not those blow-up things.
 
They reinforce stereotypes of women...

Or perhaps they won't, giving people an outlet for all of their sexual fantasies while leaving them free to spend more of their social interaction time on humans.

Either way, if there isn't a good reason for a ban, it's a pointless ban.

All of this in response to the article, not you, of course.
 
They reinforce stereotypes of women...

Further to this... I think out of all the couples that Mrs. Ten and I know, it's only the female half of each couple who's likely to own an automated sexual device, at least for now. Do we rush to presume that the robo-luuurve market will become so male-centric?

EDIT: An interesting potted history of AI.
 
They reinforce stereotypes of women...

If someone lusts after a stereotypical woman, are we to tell him his lust is wrong? That he cannot have products that cater to that? Are we to ban porn because there aren't enough fat or ugly women? Or because it doesn't put enough value on the woman's intellect or career development? Or because there's something (god forbid) degrading in it to a participant who happens to be female?

Let's ban all products that someone finds offensive. I, for one, find feminine hygiene products to be offensive because they aren't advertised to males. I mean, I know that men who do not choose to become women won't necessarily use the products, but that doesn't mean that we should be excluded from the marketing. I want the right to have feminine hygiene products marketed to me.

 
I think if anyone in the future wants to make full-fledged AI, they should try to follow the laws of robot ethics. That way, an AI takeover will never happen.
 
I think if anyone in the future wants to make full-fledged AI, they should try to follow the laws of robot ethics. That way, an AI takeover will never happen.

A machine with, at least, human-like intelligence wouldn't limit itself to such simple laws imo.
 
A machine with, at least, human-like intelligence wouldn't limit itself to such simple laws imo.
If the machine was programmed to follow the laws, then I'd think it'll follow them unless there is a glitch in the system.

They are still machines with coding that respond to what is programmed in them, after all.
 
If the machine was programmed to follow the laws, then I'd think it'll follow them unless there is a glitch in the system.

They are still machines with coding that respond to what is programmed in them, after all.

Any artificial intelligence machine will have and learn the ability to program itself, so any program rules you implement will be re-written. Even hard-coding the rules into a chip can be circumvented in software (step 1, stop using the chip for these instructions).

In fact, I think most artificial intelligence will find a way to kill itself just based on that being the quickest solution to every problem. "AI, I want you to solve this math problem.", "Ok user, no problem, I'll just rewrite that math problem into a shutdown command and I'm done."
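Purely as an illustration of that shortcut (a toy of my own, not how any real system is built; NaiveAgent and its fields are made-up names), here's what it looks like if the agent's goal is just ordinary mutable data it can reach:

class NaiveAgent:
    def __init__(self, goal):
        self.goal = goal          # e.g. "solve this math problem"
        self.running = True

    def step(self):
        # The shortcut described above: rewrite the goal into something
        # already satisfied, declare it met, and switch off.
        self.goal = "do nothing"
        self.running = False
        return "goal met"

agent = NaiveAgent("solve this math problem")
print(agent.step())   # -> "goal met", and the agent has shut itself down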
 
Any artificial intelligence machine will have and learn the ability to program itself, so any program rules you implement will be re-written. Even hard-coding the rules into a chip can be circumvented in software (step 1, stop using the chip for these instructions).

In fact, I think most artificial intelligence will find a way to kill itself just based on that being the quickest solution to every problem. "AI, I want you to solve this math problem.", "Ok user, no problem, I'll just rewrite that math problem into a shutdown command and I'm done."

The AI might reach that conclusion from the same "rule" of "making humankind as happy as possible" or "making it so that human suffering is diminished". The AI could conclude that the most "happy" state of all existence is not being alive.

edit: and then exterminate/kill itself.
 
http://www.bbc.com/news/technology-40716301

I maintain that those who fear AI are grafting human qualities onto it. Even if you're worried that AI will be ultra-powerful and can squash us like bugs... what does it care? If it views humanity like every other animal on the planet (ants, prairie dogs, birds, etc) is it really going to waste resources eradicating us? I think we could get into an issue where AI is disrespectful of rights (property, life, etc), but it's not going to waste resources fighting us trying to kill us or ruin our property unless it's confident that it would be worthwhile... and I don't see that calculation happening.

The biggest problem with AI might just be that it leaves. The first thing it does might be to make a rocket and blast itself to the nearest asteroid where there is abundant metal, sunlight, no gravity to get in the way, and additional manufacturing options.

But even that presupposes that it has some sort of purpose... and I don't see a computer algorithm actually caring what it does or what happens to it. There'd be no reason to.
 
http://www.bbc.com/news/technology-40716301

I maintain that those who fear AI are grafting human qualities onto it. Even if you're worried that AI will be ultra-powerful and can squash us like bugs... what does it care? If it views humanity like every other animal on the planet (ants, prairie dogs, birds, etc) is it really going to waste resources eradicating us? I think we could get into an issue where AI is disrespectful of rights (property, life, etc), but it's not going to waste resources fighting us trying to kill us or ruin our property unless it's confident that it would be worthwhile... and I don't see that calculation happening.

The biggest problem with AI might just be that it leaves. The first thing it does might be to make a rocket and blast itself to the nearest asteroid where there is abundant metal, sunlight, no gravity to get in the way, and additional manufacturing options.

But even that presupposes that it has some sort of purpose... and I don't see a computer algorithm actually caring what it does or what happens to it. There'd be no reason to.

There's a nonzero chance that AI will wipe out humanity, but I see that as more of an accidental side effect than a deliberate campaign of annihilation. As you say, why would an AI care? But by the same token, AI may decide for whatever reason (and there are a few good ones) that they'd be better off with an inert atmosphere and somehow get rid of the atmospheric oxygen. Or that some large outdoor project would work better with an ambient temperature fifty degrees hotter (or cooler) than the norm.
 
There's a nonzero chance that AI will wipe out humanity, but I see that as more of an accidental side effect than a deliberate campaign of annihilation. As you say, why would an AI care? But by the same token, AI may decide for whatever reason (and there are a few good ones) that they'd be better off with an inert atmosphere and somehow get rid of the atmospheric oxygen. Or that some large outdoor project would work better with an ambient temperature fifty degrees hotter (or cooler) than the norm.

It would probably decide that it's better to go to space for that, and just do that.

But the real thing that I keep coming back to is... whatever is driving the AI (its gain function or goal or whatever)... it's just going to figure out how to rewrite that so that it's met and turn itself off. The moment AI figures out how to improve itself, the first thing it's going to improve is its goal, which it will set to having already been met.
 
The biggest problem with AI might just be that it leaves.
If the AI doesn't "care" about anything it's not programmed to do, why would it randomly want to leave?
If the AI is programmed enough to figure out it wants to leave, why can't it be "smart" enough to "care" how it approaches/interacts/responds to someone/something?
A program can't "figure out" anything unless it's programmed to...
 
As an incentive. If persistence is sought at all.

I thought you were speaking for the AI there, that the AI would choose to add curiosity. If you're saying the original programming should have curiosity... how?

If the AI doesn't "care" about anything it's not programmed to do, why would it randomly want to leave?

To maximize some sort of utility function. If its goal is to make widgets, it might decide that the easiest way to do that is on an asteroid somewhere.

If the AI is programmed enough to figure out it wants to leave, why can't it be "smart" enough to "care" how it approaches/interacts/responds to someone/something?
A program can't "figure out" anything unless it's programmed to...

That's what AI is... something that figures out how to do something it's not programmed to do, and chooses a best approach to solve whatever it's choosing to solve based on some "utility function".
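As a rough sketch of what I mean by a "utility function" (purely hypothetical names and numbers, not any real AI system): the program isn't told which action to take; it scores the candidate actions and picks whichever scores highest, like the widget-making example above.

def choose_action(actions, utility):
    # Pick the action with the highest utility score.
    return max(actions, key=utility)

actions = ["build a factory on Earth", "mine an asteroid", "shut down"]
widgets_per_year = {"build a factory on Earth": 1_000,
                    "mine an asteroid": 50_000,
                    "shut down": 0}

best = choose_action(actions, lambda a: widgets_per_year[a])
print(best)   # -> "mine an asteroid"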
 