Full AI - The End of Humanity?

In a world where AI has taken over all of the automatable jobs, humans will have jobs entertaining each other and will enjoy a massive standard of living, since we don't have to pay for any of the work that the AI is doing. There's no reason to think that AI machines won't see working as their calling, or to think that they'd want to be compensated somehow. Most of the traits that people project onto artificial intelligence that concern them are human traits.

Yeah. Creative jobs for everyone. It'd be great. I don't see any reason why peaceful coexistence with true AI (synthetics, if you will) would not be possible.
 
Yeah. Creative jobs for everyone. It'd be great. I don't see any reason why peaceful coexistence with true AI (synthetics, if you will) would not be possible.

Well, it would not be possible because of the humans. But it would not be difficult (for the cybernetic organisms) to deal with such an issue. :mischievous:


Danoff's vision is very reminiscent of H.G. Wells' Time Machine.
 
Well, it would not be possible because of the humans. But it would not be difficult (for the cybernetic organisms) to deal with such an issue. :mischievous:

Deal with what issue? What is it that cybernetic organisms would decide to deal with?
 
Why would AI want to exterminate humans? They'd let us do it to ourselves, lol.
 
I see a very Ratchet & Clank type life, if we get that far. A mutual coexistence, minus the third game. For those who haven't played it (Up Your Arsenal), the main villain turns organic life into robots because he believes they're superior.
 
Yeah. Creative jobs for everyone. It'd be great. I don't see any reason why peaceful coexistence with true AI (synthetics, if you will) would not be possible.

Lovely, a world of artists, musicians and writers! Sounds like the Eloi in H.G. Wells' Time Machine. It also sounds like BS.

I fail to understand the economy of such a world, a world of capital but without labor. Machines and AI cost money and energy to build and maintain. Raw materials need to be converted into goods. People will need goods and services. But since people would no longer labor, how and why would they be paid or be supported in the luxury you rhapsodize about? Would the capitalists give away goods and services without receiving labor in return? What would 7 billion hungry, restless people really be like if they lacked a job, a purpose, a productive use?

Those who dream of a fully automated world need to explain its economy in some convincing detail. I don't think you can do it.
 
Simple, the AI provide for themselves, for the most part. Generate their own electricity. Build their own replacement parts. And keep a million or two human slaves as miners and knob polishers... while a few thousand lucky creatives get stuck in zoos and laboratories, where their "illogical" thought processes are probed, studied and dissected.

-

But realistically... a human-created AI world will not be a full ecosystem, and thus, not self-sustaining... they'll still need us in some capacity to take care of them.
 
Don't you think AI would be fascinated by us? They'd probably be stuck utterly obsessed with epistemology, constantly studying us to understand just what the heck natural intelligence really is.

 
But realistically... a human-created AI world will not be a full ecosystem, and thus, not self-sustaining... they'll still need us in some capacity to take care of them.

Will they?

It's not that hard to imagine a system with its own repair bots and such that are able to take care of any problems and manufacture new units as necessary. Humans are nominally self-sustaining; I don't see why a mechanical variation can't be made that would be self-sustaining.
 
I fail to understand the economy of such a world, a world of capital but without labor.

Art and entertainment require labor.

Machines and AI cost money and energy to build and maintain.

No they don't. Machines and AI can do it.

Raw materials need to be converted into goods.

...by machines.

People will need goods and services.

...from machines.

But since people would no longer labor, how and why would they be paid or be supported in the luxury you rhapsodize about?

They'd work entertaining each other.

Would the capitalists give away goods and services without receiving labor in return?

...machines...

What would 7 billion hungry, restless people really be like if they lacked a job, a purpose, a productive use?

...more fun.

Those who dream of a fully automated world need to explain its economy in some convincing detail. I don't think you can do it.

We no longer need telephone operators - that job is automated. Do you understand the economics of that? That's all you need to understand.


The lack of peaceful coexistence between humans and what Omnis defines as "true AI (synthetics if you will)"

I think we miscommunicated. For there to be a non-peaceful coexistence, machines must be fighting for some reason. Why? What issue is it that machines decide that they need to fix by fighting?
 
What issue is it that machines decide that they need to fix by fighting?

[image]
 
I don't have a very deep or well-founded opinion on this subject, but I really like Sam Harris' approach to AI.

And this is a video with some thoughts on the topic. There are more around the web, but this one is shorter.



If you want the written version: here.

There's also a very interesting TED talk on this subject (although a bit awkward ^^) :D

Also, more from Sam Harris. Very interesting.
 
@zzz_pt so much stupid in that video. Lack of understanding of economics, morality, artificial intelligence, natural and artificial selection, and incentives. It's an attempt to bring together a wide range of thought to a problem where the people trying it don't understand any of it. I don't even know where to start on that video. I think I could argue with every 30 seconds of it for days.
 
@zzz_pt so much stupid in that video. Lack of understanding of economics, morality, artificial intelligence, natural and artificial selection, and incentives. It's an attempt to bring together a wide range of thought to a problem where the people trying it don't understand any of it. I don't even know where to start on that video. I think I could argue with every 30 seconds of it for days.

I entirely agree. I haven't got to the TED talk yet, that may be better - they sometimes are.
 
It seems inconceivable you could make and presumably agree with post #74, then minutes later disagree with post #75. :confused:

Because by definition they're apples and oranges. The YouTube video takes a very wide social scope and makes a lot of presumptions, while the topic that Hawking, Musk, Wozniak et al. raise is much narrower in scope (autonomous, 'intelligent' weaponry).

That's not to say that I agree entirely with the BBC article; I posted it here to further debate.
 
Because by definition they're apples and oranges. The YouTube video takes a very wide social scope and makes a lot of presumptions, while the topic that Hawking, Musk, Wozniak et al. raise is much narrower in scope (autonomous, 'intelligent' weaponry).

That's not to say that I agree entirely with the BBC article; I posted it here to further debate.
Many of those very same folks arguing against autonomous weapons in post #74 are showing up in post #75 extending their concerns to AGI. Yet you almost entirely agree with the one, and disagree entirely with the other. You are a difficult man to understand sometimes.
 
@zzz_pt so much stupid in that video. Lack of understanding of economics, morality, artificial intelligence, natural and artificial selection, and incentives. It's an attempt to bring together a wide range of thought to a problem where the people trying it don't understand any of it. I don't even know where to start on that video. I think I could argue with every 30 seconds of it for days.

This is him (Sam Harris) sharing some thoughts on the singularity - in Dennett's words, "the fateful moment when AI surpasses its creators in intelligence" - and the concerns or possible questions it could raise.

Being something that could happen in 100 or 1,000 years, I can't see how any of the subjects you mention could have any importance. The text he submitted to the Edge Question 2015 was only focused on the possible prospect of AI reaching a point where we, humans, couldn't tame or control it, or would have serious problems doing so. With only a few hundred words he couldn't expand on every single thing connected to the subject, and that was not his objective, I think.

I should have written the title of the text: "Can we avoid the digital apocalypse?" It's not even an affirmative proposition. It's him trying to address a very unlikely but possible scenario. As he said, you only have to accept two things: 1) we will continue to produce better computers/technology/AI, and 2) our brains don't have any magical powers.

I don't find it stupid at all. It would be stupid, IMO, to mix our current economics, morality or knowledge of any other subject with something that, if it happens, will happen in a future whose context can be very different - morally, economically, and in our overall knowledge as well. None of these issues has any influence, IMO, on the discussion of this particular subject inside the overall AI topic. EDIT: When I say none of these issues has any influence, I'm referring to the point when a superintelligent AI would be autonomous and therefore could learn and change whatever values or morals we could put into it. I don't know if I'm making myself understood here. :) The TED video touches on this (very briefly).

But as I stated earlier, I've only read and listened to some articles and conferences. If you think this is stupid, it's OK. I would like to know why, though.
 
But as I stated earlier, I've only read and listened to some articles and conferences. If you think this is stupid, it's OK. I would like to know why, though.

Because of the improper application of economics, morality, artificial intelligence, natural and artificial selection, and incentives.

I don't find it stupid at all. It would be stupid, IMO, to mix our current economics, morality or knowledge of any other subject with something that, if it happens, will happen in a future whose context can be very different - morally, economically, and in our overall knowledge as well. None of these issues has any influence, IMO, on the discussion of this particular subject inside the overall AI topic. EDIT: When I say none of these issues has any influence, I'm referring to the point when a superintelligent AI would be autonomous and therefore could learn and change whatever values or morals we could put into it. I don't know if I'm making myself understood here. :) The TED video touches on this (very briefly).

Well, the YouTube video you posted disagrees. The YouTube video thinks that the moral and economic concerns of AI are significant. The moral and economic dilemma subsequently presented requires a misunderstanding of economics and morality.

A small example (and I have no interest in going through the entire video unless folks here are interested in any particular aspect of it): he wonders whether we should program cars to prefer hitting white people over hitting black people (supposing a car had no choice). To motivate this he cites the subconscious preference of some people to sacrifice white people's lives over black people's.

To even pose that as a potential dilemma is to suggest that there is some merit to the notion that morality here is unclear. This falls prey to one of the basic traps of logic, the notion that disagreement suggests a lack of an answer. There is an answer to this question. No dilemma.

That's a 30 second piece of the video that was stupid. There are more.
 
Yes, you're right. He speaks about morality and ethics. I was more focused on the part where he reads the text he submitted; I find that part more interesting. Of course morality and ethics have to go along with AI development. I was saying that, when talking about the singularity, that's not as relevant - and probably that's why he didn't focus so much on it in the text.

About the stupid 30 seconds you mention, this is what he said:

Even designing self-driving cars presents potential ethical problems that we need to get straight about. Any self-driving car needs some algorithm by which to rank-order bad outcomes. So if you want a car that will avoid a child who dashes into the road in front of it, perhaps by driving up on the sidewalk, you also want a car that would avoid the people on the sidewalk and preferentially hit a mailbox instead of a baby carriage. So you need some intelligent ranking of outcomes. These are moral decisions. Do you want a car that is unbiased with respect to the age and size of the people, or the color of their skin? Would you like a car that is more likely to run over white people than people of color?
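
(A purely illustrative aside: here's a minimal sketch, in Python, of what "some algorithm by which to rank-order bad outcomes" could look like. It isn't from Harris or any real self-driving system, and every outcome and cost value in it is invented; the point is simply that choosing the numbers is itself the moral decision.)

```python
# Hypothetical sketch only -- not a real autonomous-driving algorithm.
# Every outcome and cost value below is invented for illustration.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    cost: float  # higher = worse; assigning these numbers is the moral decision

def least_bad(outcomes: list[Outcome]) -> Outcome:
    """Return the outcome with the lowest assigned cost."""
    return min(outcomes, key=lambda o: o.cost)

options = [
    Outcome("hit the mailbox", cost=1.0),
    Outcome("swerve onto the sidewalk near pedestrians", cost=50.0),
    Outcome("hit the baby carriage", cost=1000.0),
]
print(least_bad(options).description)  # -> hit the mailbox
```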

This entire segment came after he wondered whose values would be put into the machine. As he stated, "the advent of this technology would cut through moral relativism like a laser", mentioning theocracy and religious morality.

He wasn't wondering if a car should choose between black and white people. Then he talked about a psychological study with liberals and conservatives that showed very different results in similar situations - some people would do one thing and others would do the opposite. The question was: if someone has to "build" a machine (a car was just an example; it can be any other machine) with "our" morals and values, which ones would be chosen?

Like he wrote:

But whose values should count? Should everyone get a vote in creating the utility function of our new colossus? If nothing else, the invention of an AGI would force us to resolve some very old (and boring) arguments in moral philosophy.

However, a true AGI would probably acquire new values, or at least develop novel—and perhaps dangerous—near-term goals. What steps might a superintelligence take to ensure its continued survival or access to computational resources? Whether the behavior of such a machine would remain compatible with human flourishing might be the most important question our species ever asks.



The content I posted was mainly related to the superintelligence / singularity issue inside the broader AI theme. Of course you can find it stupid.

I find it an interesting subject to think about. Steven Pinker thinks it's a utopia (like nuclear-powered cars or cities under water) and it will never be a real thing. No one has a monopoly on speculating about the future of AI, and more specifically on how it will develop or whether we'll be capable of making it an "ally" instead of a "headache".
 
The content I posted was mainly related to the superintelligence / singularity issue inside the broader AI theme. Of course you can find it stupid.

I find it an interesting subject to think about. Steven Pinker thinks it's a utopia (like nuclear-powered cars or cities under water) and it will never be a real thing. No one has a monopoly on speculating about the future of AI, and more specifically on how it will develop or whether we'll be capable of making it an "ally" instead of a "headache".

Ok, so you don't want to talk about the moral or economic challenges referenced in the YouTube video. I don't find the discussion in general stupid; I found much of the discussion in the video to be sidetracked by the speaker's lack of understanding of the issues he claimed were significant.

What is a scenario that you're seriously concerned about when it comes to AI?
 
In a world where AI has taken over all of the automatable jobs, humans will have jobs entertaining each other and will enjoy a massive standard of living, since we don't have to pay for any of the work that the AI is doing. There's no reason to think that AI machines won't see working as their calling, or to think that they'd want to be compensated somehow. Most of the traits that people project onto artificial intelligence that concern them are human traits.

It would be rather beneficial for the employers at companies to adopt AIs for work instead of humans, as AI robots will not whine about their work no matter how hard the labor is, and, most notably, AI robots are capable of toiling away at their tasks around the clock without taking rests - giving them more chance of producing goods in a shorter period of time and maximizing profits by selling them.

Some people in my country also suggest using robots rather than recruiting workers as a "promising" workforce, for exactly the same reason - robots with just a hint of human traits would suffice as a prospective workforce in place of human workers, who cannot possibly keep themselves engaged in the work ceaselessly (and much less avoid grumbling about the salary they earn depending on how well they performed during work hours...).
 
AI robots will not whine about their work no matter how hard the labor is

I disagree - I think that with full AI there will be a cost/reward mechanism in the function. Remember that by "full" we're not talking about a machine that can decide which bolt to put into a GM chassis, we're talking about an intelligence that "matches or exceeds" our own. If an AI entity is to grow and develop then it needs to understand what is bad for it and what is good for it.
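
(As a purely hypothetical illustration of what I mean by a cost/reward mechanism - not any real AI architecture, and every state and number below is made up - the idea is just that the system scores situations by what is good or bad for itself:)

```python
# Hypothetical sketch of a cost/reward signal -- illustrative only.
def reward(state: dict) -> float:
    """Score how good or bad a situation is for the agent itself."""
    score = 0.0
    if state.get("power_low"):         # running out of energy is bad for it
        score -= 10.0
    if state.get("hardware_damage"):   # damage is very bad for it
        score -= 50.0
    if state.get("task_completed"):    # finishing its work is good for it
        score += 5.0
    return score

print(reward({"power_low": True, "task_completed": True}))  # -5.0
```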
 
Good for the AI? What about good for humanity?

That would be up to any human design element to incorporate what they believe to be ethical safety controls. Would we expect that full AI would consider humans over itself otherwise?
 
That would be up to any human design element to incorporate what they believe to be ethical safety controls. Would we expect that full AI would consider humans over itself otherwise?
I'm no expert, but I wouldn't "expect that full AI would consider humans over itself". How any "ethical safety controls" might be compatible with AGI, I don't know either. Even if human design elements were present, there would be a difference between US and Russian designs, ISIS and Revolutionary Guards designs, etc.
 
Even if human design elements were present, there would be a difference between US and Russian designs, ISIS and Revolutionary Guards designs, etc.

In early days, naturally... but then there'd be a difference between MIT and Google too. Once AI starts specifying and replicating, who's to say what the flavours will be?
 