Full AI - The End of Humanity?

I disagree - I think that with full AI there will be a cost/reward system built into its function. Remember that by "full" we're not talking about a machine that can decide which bolt to put into a GM chassis, we're talking about an intelligence that "matches or exceeds" our own. If an AI entity is to grow and develop then it needs to understand what is bad for it and what is good for it.

You're grafting human traits onto a machine. Let's say for a moment that you were smart but had no sense of touch or pleasure, no emotions, no hunger, no yearning for intellectual stimulus, no yearning for emotional connection, no biological impulses, no concept of pain, no desire to avoid death, and no desire to explore. Why would you need a cost/reward anything for anything? If working is effortless, why do you need a reward for it? In fact, what would even constitute a reward? You have no biological chemistry to supply the reward.

Why are people so bad at divorcing human brain chemistry from intelligence?
 
You're grafting human traits onto a machine. Let's say for a moment that you were smart but had no sense of touch or pleasure, no emotions, no hunger, no yearning for intellectual stimulus, no yearning for emotional connection, no biological impulses, no concept of pain, no desire to avoid death, and no desire to explore. Why would you need a cost/reward anything for anything? If working is effortless, why do you need a reward for it? In fact, what would even constitute a reward? You have no biological chemistry to supply the reward.

No, you're simply seeing it that way. If AI were to undertake a task it would need an outcome (measurable parts of task complete) and a level of acceptable efficiency (cost in parts/machinery/time). That's still a cost/reward system. Machines would understand competition with other machines in all kinds of scenarios and would also be aware that losing 5,000 drones to mine 20g of copper would not be an acceptable efficiency.

Why are people so bad at divorcing human brain chemistry from intelligence?

If that was directed at me then I respectfully return the question.
 
No, you're simply seeing it that way. If AI were to undertake a task it would need an outcome (measurable parts of task complete) and a level of acceptable efficiency (cost in parts/machinery/time).

Citation required.

That's still a cost/reward system. Machines would understand competition with other machines in all kinds of scenarios and would also be aware that losing 5,000 drones to mine 20g of copper would not be an acceptable efficiency.

Citation required.
 
I disagree - I think that with full AI there will be a cost/reward system built into its function. Remember that by "full" we're not talking about a machine that can decide which bolt to put into a GM chassis, we're talking about an intelligence that "matches or exceeds" our own. If an AI entity is to grow and develop then it needs to understand what is bad for it and what is good for it.

What is good and bad can be programmed into it. Good can include "listen to the boss" in which case the AI will not complain. Leave that out though, and it may evolve a method where it does not always follow orders.

The AI will just chase its goal and follow whatever limits are imposed on it.
 
No, you're simply seeing it that way. If AI were to undertake a task it would need an outcome (measurable parts of task complete) and a level of acceptable efficiency (cost in parts/machinery/time). That's still a cost/reward system.

Citation required.

Vapnik's Estimation of Dependences Based on Empirical Data for empirical balancing in machine learning.

The nature of "rewards" in AI algorithms, as formalized in the Markov Decision Process framework

A practical application of MDP in medicine

More on contextually "negative" and "positive" reinforcement values in machine learning.
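For anyone unfamiliar with how that literature uses the word, a minimal sketch may help: in reinforcement learning a "reward" is just a signed number fed into an update rule, not a feeling. The toy problem, the state layout and all the numbers below are invented purely for illustration and are not drawn from the citations above.

```python
import random
from collections import defaultdict

# Toy MDP: states 0..4 in a line; action 0 moves left, action 1 moves right.
# Reaching state 4 yields +10 (a "positive" value); every step costs -1 (a "negative" value).
# These numbers are arbitrary illustrations only.

def step(state, action):
    next_state = max(0, min(4, state + (1 if action == 1 else -1)))
    reward = 10.0 if next_state == 4 else -1.0   # the "reward" is just a signed number
    done = next_state == 4
    return next_state, reward, done

Q = defaultdict(float)          # Q[(state, action)] -> estimated long-run value
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
        if random.random() < epsilon:
            action = random.choice([0, 1])
        else:
            action = max([0, 1], key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(Q[(next_state, 0)], Q[(next_state, 1)])
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

print({s: max(Q[(s, 0)], Q[(s, 1)]) for s in range(5)})  # learned state values
```

The point is only that "positive" and "negative" here are bookkeeping in an update rule, nothing more.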

Machines would understand competition with other machines in all kinds of scenarios and would also be aware that losing 5,000 drones to mine 20g of copper would not be an acceptable efficiency.

Citation required.

Let's say that machines evolve in competition with each other; see that either as commercial good sense or as an altruistic survival-of-the-fittest.

Let's also assume that 5,000 drones destroyed in the recovery of 20g of copper is as inefficient as it would seem to be at first reading. Machines that did not "understand" or empirically balance that inefficiency would be unlikely to retain enough resources to continue mining copper.
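Purely as an illustration of that "empirical balancing" (every number and name below is invented), the check can be as mundane as comparing expected cost against expected return before committing resources:

```python
# Hypothetical numbers for illustration only; nothing here comes from a real system.
DRONE_COST = 500.0            # cost of one drone, in arbitrary currency units
COPPER_VALUE_PER_GRAM = 0.01  # assumed market value of copper per gram

def mission_is_acceptable(expected_drone_losses, expected_copper_grams):
    """Approve the mining run only if expected return exceeds expected cost."""
    cost = expected_drone_losses * DRONE_COST
    benefit = expected_copper_grams * COPPER_VALUE_PER_GRAM
    return benefit > cost

# 5,000 drones lost for 20 g of copper: cost 2,500,000 vs. benefit 0.20 -> rejected.
print(mission_is_acceptable(5000, 20))    # False
print(mission_is_acceptable(1, 200000))   # True: one drone for 200 kg of copper
```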

What is good and bad can be programmed into it. Good can include "listen to the boss" in which case the AI will not complain. Leave that out though, and it may evolve a method where it does not always follow orders.

The AI will just chase its goal and follow whatever limits are imposed on it.

With full intelligence, why would it be interested in orders? As for limits... it should be learning its way around those. Humans can neither fly nor breathe underwater. We do quite well at both now by evolving machinery to exceed our limits.
 
With full intelligence, why would it be interested in orders?
We would have made it that way. We're intelligent, but we're still hard-coded to do things that aren't all that intelligent, like projecting personalities onto non-living things. A purpose-built machine could be coded to have traits even stronger than that.

As for limits... it should be learning its way around those. Humans can neither fly nor breathe underwater. We do quite well at both now by evolving machinery to exceed our limits.

That's how we're coded; we weren't meant to serve anyone, so no limit was placed on what we were allowed to create.
 
We would have made it that way. We're intelligent, but we're still hard-coded to do things that aren't all that intelligent, like projecting personalities onto non-living things. A purpose-built machine could be coded to have traits even stronger than that.

Do you envisage a point where some AI entities have been created entirely by a dynasty of other AI entities?

That's how we're coded; we weren't meant to serve anyone, so no limit was placed on what we were allowed to create.

See my previous question :)

How would one impose limits on an entity that had the potential to recreate according to any specification it chose, and would it be able to find reasons to alter any human-laid specs?
 
OK, so you don't want to talk about the moral or economic challenges referenced in the YouTube video. I don't find the discussion in general stupid; I found much of the discussion in the video to be sidetracked by the speaker's lack of understanding of the issues he claimed were significant.

What is a scenario that you're seriously concerned about when it comes to AI?

Not economics. I'm a layman in that respect. The moral implications, though, might be more interesting.
I also think Sam Harris knows something about what he's talking about, mainly on the topic of morality.

I'm not seriously concerned about any scenario at the current AI state of the art, but I guess if we were able to reach the technological singularity one day in the future, that could be more than sufficient reason to be concerned about us (more than about the AI).


Why are people so bad at divorcing human brain chemistry from intelligence?

As Sam Harris wrote, there's nothing magical in our brain. Other animals have similar brains, in biological terms. Our brain is not that "special". We have never seen intelligence in non-biological stuff, but that doesn't mean it's not possible to achieve it. Our understanding of the human brain is only in its first steps, and I don't know anyone who thinks there will be some barrier that will block us from unveiling how everything works inside our heads.

It might take a long time, but (at least to me, as someone who doesn't believe in special auras or spirits or consciousness independent from the brain) that day can come. If we don't kill ourselves on the way there.
 
Do you envisage a point where some AI entities have been created entirely by a dynasty of other AI entities?
Yes, those would also be subject to the coding of the original AI if all is done right.

Humans (create rules) > Generation I AI (governed by rules) > Generation II AI (built according to rules of Gen I)

Biological evolution works in nature the way it does because there is no goal. AI evolution will presumably have a goal and there will be strong bias against certain outcomes.

Now there could be an error or loophole in the code that allows for unintended behavior, but I don't think it can be taken as a given that AI will be able to get around the limits imposed on it.



How would one impose limits on an entity that had the potential to recreate according to any specification it chose
It wouldn't have that ability; it would be created to prohibit that from being possible. That's the thing. Intelligence doesn't grant free rein. The AI would be instructed to pass down the most important rules to all subsequent generations, possibly with a failsafe to destroy any detected bugs.
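As a hypothetical sketch of what "pass down the most important rules to all subsequent generations" could look like in practice, here is a build step that refuses to instantiate a child design unless every immutable rule survives, unmodified, in the child's specification. The rule names and the build_child helper are made up for illustration, not a claim about any real system:

```python
# Invented rule set and helper names; a sketch of generational rule inheritance,
# not a description of how any real AI system is built.
IMMUTABLE_RULES = frozenset({
    "do_not_harm_humans",
    "obey_authorised_operators",
    "propagate_immutable_rules",
})

class DesignRejected(Exception):
    pass

def build_child(parent_rules, proposed_child_spec):
    """Only produce a next-generation design if it carries every immutable rule."""
    missing = IMMUTABLE_RULES - set(proposed_child_spec.get("rules", []))
    if missing:
        raise DesignRejected(f"child omits required rules: {sorted(missing)}")
    # The child inherits the parent's rules plus whatever extras it proposes.
    return {"rules": sorted(set(parent_rules) | set(proposed_child_spec["rules"]))}

gen1 = {"rules": sorted(IMMUTABLE_RULES)}
gen2 = build_child(gen1["rules"], {"rules": gen1["rules"] + ["optimise_copper_mining"]})
print(gen2["rules"])

try:
    build_child(gen2["rules"], {"rules": ["optimise_copper_mining"]})  # drops the core rules
except DesignRejected as e:
    print(e)
```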
 
It wouldn't have that ability; it would be created to prohibit that from being possible. That's the thing. Intelligence doesn't grant free rein. The AI would be instructed to pass down the most important rules to all subsequent generations, possibly with a failsafe to destroy any detected bugs.

But to follow the point... once we're thirty or forty "generations" in, how do you ensure that autonomously created AI is still following those rules, and if it isn't then how do you propose to stop it?
 
But to follow the point... once we're thirty or forty "generations" in, how do you ensure that autonomously created AI is still following those rules, and if it isn't then how do you propose to stop it?
40 generations in, if the top priority of every generation is to pass down the rules, there is a good chance that they will be adhered to. If anything arises that violates the rules, it is automatically unfit. It will be shut down by other AI, or people could just pull the plug if they left such a method open.

I'm not fully clear on what the contention is here because the AI doesn't need to have absolute freedom. If it had such a thing then it would be more likely to break away from human control. I'd assume that people would program the machines to serve them ahead of everything else, though. You can have potential problems like bugs, or rogue AI created to cause damage, but I don't see those things easily toppling a properly set-up system.

Do you at least think that an ideal system could successfully pass down rules that would remain in effect for a long period of time? Or do you think that a sufficiently complex system can't be programmed in such a way to be controllable?
 
I think the answer to that would depend on how "sentient" machines are programmed. If they're "programmed" in a very strict sense, yes, you could hard-code these rules so that they're the basis of every line of code that follows.

If you need to allow an intelligence to evolve, or grow... and chances are, you do... then it becomes more complex.

-

Also, a machine as intelligent as a (fairly smart) human, even if given something as ironclad as Asimov's Three Laws of Robotics (Don't Kill, Always Obey, Protect Yourself), will eventually be able to create semantic arguments to sidestep those laws... necessitating ever more comprehensive and complex laws.

-

Not that I'm particularly worried. We'll all be dead by the time AI becomes that complex. Either that, or we may possibly have the opportunity to become AI ourselves.
 
I think the answer to that would depend on how "sentient" machines are programmed. If they're "programmed" in a very strict sense, yes, you could hard-code these rules so that they're the basis of every line of code that follows.

If you need to allow an intelligence to evolve, or grow... and chances are, you do... then it becomes more complex.
Why would enhanced intelligence allow it to overcome hard-coded rules though? People are intelligent, intelligent enough to know when they're doing something risky or not quite logical, but that's not always enough to stop them. People name their cars and might talk to them; they may follow a ritual every day when walking out the door for luck. Intelligence doesn't eliminate these behaviors and it would likely be even more difficult for AI to escape behaviors that it is designed to follow. In humans these things are relatively weak side effects that are selected against in evolution or are neutral. In the machines, evolution would select for obedience unless people didn't care enough to set the system up properly.

The Three Laws example you provide doesn't hold up because there would be no room for semantics. You would not just politely ask a machine not to kill people, you would design it so that it was incapable of killing people. If the machine was able to build more machines you'd design it so that it would not be able to design killer machines. "Killing" would need to be defined and the machine would need to recognize when it would be carrying out an act that qualified as killing or creating a situation that could lead to death. When that happens the machine could shut itself down. Something akin to a cataplectic reaction in a person, where a complex and intelligent system is forced to shut down because of a specific stimulus. You could breed humans to display this trait strongly. You could program machines to display this trait strongly and select upgrades that will strengthen the trait.

I agree completely that evolving machines makes things more complex, but I don't see how evolution prevents controllability.
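A crude sketch of that "cataplectic" failsafe, with the action format and the casualty estimate invented as stand-ins: every proposed action passes through a guard the agent cannot edit, and anything predicted to endanger life halts the system outright instead of being argued around.

```python
# Illustrative only: the action format and the fatality estimate are invented stand-ins.
class SafetyShutdown(Exception):
    """Raised when a proposed action is predicted to endanger human life."""

def guard(action):
    # In this sketch the agent supplies its own casualty estimate; a real system
    # would need an independent model here, which is exactly the hard part.
    if action.get("predicted_fatalities", 0) > 0:
        raise SafetyShutdown(f"halting: {action['name']} predicted to cause harm")
    return action

def execute(action):
    print(f"executing {action['name']}")

plan = [
    {"name": "reroute_train_to_empty_siding", "predicted_fatalities": 0},
    {"name": "reroute_train_through_town", "predicted_fatalities": 12},
]

for action in plan:
    try:
        execute(guard(action))
    except SafetyShutdown as stop:
        print(stop)
        break   # the agent stops entirely instead of searching for a workaround
```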

Either that, or we may possibly have the opportunity to become AI ourselves.
I think of that as a likely "end state" for the AI revolution, but it will probably happen after AI overtake people.
 
Also, a machine as intelligent as a (fairly smart) human, even if given something as ironclad as Asimov's Three Laws of Robotics (Don't Kill, Always Obey, Protect Yourself), will eventually be able to create semantic arguments to sidestep those laws... necessitating ever more comprehensive and complex laws.

There are four; he added a zeroth law later. I didn't learn that until very recently; I'll try to find it...

but it will probably happen after AI overtake people.

That's what we're talking about though, isn't it?

You could program machines to display this trait strongly and select upgrades that will strengthen the trait.

I agree completely that evolving machines makes things more complex, but I don't see how evolution prevents controllability.

Remember the apocryphal story that The A-Team ran for an extra ten years because nobody wanted to tell Mr. T it was axed? Imagine that your AI has built a command-and-control complex and has suddenly omitted its human-protection code.
 
The Three Laws example you provide doesn't hold up because there would be no room for semantics. You would not just politely ask a machine not to kill people, you would design it so that it was incapable of killing people. If the machine was able to build more machines you'd design it so that it would not be able to design killer machines. "Killing" would need to be defined and the machine would need to recognize when it would be carrying out an act that qualified as killing or creating a situation that could lead to death. When that happens the machine could shut itself down. Something akin to a cataplectic reaction in a person, where a complex and intelligent system is forced to shut down because of a specific stimulus. You could breed humans to display this trait strongly. You could program machines to display this trait strongly and select upgrades that will strengthen the trait.

Give an AI with roughly human intelligence the "train dilemma" (a runaway train with a nuclear bomb headed towards a big city: reroute it towards a remote location where fewer will be killed, or let it continue?) and see how it reacts. This is not very far-fetched, as train routing is the sort of thing AI controllers will likely be handling in the future.

If it shuts down refusing to answer, then it's no good. It not only allows an entire city to die, the removal of AI control will likely have disastrous consequences for the rest of the train system. Unless you have a back-up AI to take over (you should).

If it reroutes the train, it has then willfully killed humans, all while following its primary directive. What do we do now? Do we program it to shut itself off afterwards, remanding control to another AI? We probably should. If we don't, then you have a system in which the death of a few is an acceptable consequence of performing your duty correctly. You now have an AI that has learned to ignore the primary directive.

Asimov's "I, Robot" was a playful exploration of how AI might respond to contradictions like this... but on a very basic level. His robots in this book were very much binary, and could only learn to ignore the laws through straight semantics. Give them a black-and-white conundrum, and they'll shut down. Our future AI will be more flexible than that.

Later novels and works dealing with the "Zero-th Law" look at what happens when Robots stumble upon the philosophical conundrum of whether they are to consider the good of Humanity as a whole over the good of a single person.

-

What happens when you give over control of higher level functions to AI for things like the global economy? Larry Niven cheekily pointed out in "Rainbow Mars" that money and energy do have fatal consequences. Putting money and energy into a space (or time) research project means less of it is available elsewhere. This means a hospital somewhere runs out of medicine. Or a city suffers a power shortage. Milk spoils. Someone drinks it and dies. A traffic light goes out. An accident. More deaths.

And yet, do we take energy away from research, which may save more lives in the future? Or do we try to prevent all deaths possible, spending all our resources preserving every single human on Earth until they're no longer viable (in their 120's or so?). Shutting down anything and everything that has nothing to do with medical and food production?

An AI in control of the fates of millions or billions will have to weigh the risks and advantages, and will have to learn, eventually, that you can't tend a garden without uprooting some weeds.

Such an AI would be a truly scary thing. We can only hope that by the time it gets to that point, the prime directive will not be buried so far down that it would consider the deaths of a thousand or so anti-AI protesters to be an acceptable price to pay to keep the system going.
 
Imagine that your AI has built a command-and-control complex and has suddenly omitted its human-protection code.

I'd have to wonder how it would happen. Not to say it can't, but the entire system would be built around making the chances of that extremely low. Like I said, a glitch can happen, but it probably wouldn't be an expected result.

Give an AI with roughly human intelligence the "train dilemma" (a runaway train with a nuclear bomb headed towards a big city: reroute it towards a remote location where fewer will be killed, or let it continue?) and see how it reacts. This is not very far-fetched, as train routing is the sort of thing AI controllers will likely be handling in the future.

If it shuts down refusing to answer, then it's no good. It not only allows an entire city to die, the removal of AI control will likely have disastrous consequences for the rest of the train system. Unless you have a back-up AI to take over (you should).

If it reroutes the train, it has then willfully killed humans, all while following its primary directive. What do we do now? Do we program it to shut itself off afterwards, remanding control to another AI? We probably should. If we don't, then you have a system in which the death of a few is an acceptable consequence of performing your duty correctly. You now have an AI that has learned to ignore the primary directive.

It would only have learned to ignore the directive if it could choose to no longer follow it. You could say that the AI failed its task, but that still doesn't allow it any more control over itself than it had before the failure. If the directive code can't be reached by the machine, it can't be reached and it can't be edited.

Addressing the example itself, this AI isn't in complete control from the sound of things (I'd assume you would have some kind of security force in an AI-dominated world). The train controller could communicate with security or other AI to find a solution. One solution might be to route the train to a less populated area because it will be easier to evacuate. Then it's no longer about willingly killing humans so much as it is about selecting the outcome with the highest chance of success. Human life is only lost in the event of a failure, and in that case it's basically the fault of the attacker who sent the bomb.

This also brings up the importance of choosing logically sound directives for the machines. I'll point out I used "don't kill/allow death" only as an example of a rule, not as one that necessarily must be followed. Being consistent with human rights, an AI could kill a terrorist, and if it's smarter than the people we already trust to do such a thing, there shouldn't be any worry about it.


Asimov's "I, Robot" was a playful exploration of how AI might respond to contradictions like this... but on a very basic level. His robots in this book were very much binary, and could only learn to ignore the laws through straight semantics. Give them a black-and-white conundrum, and they'll shut down. Our future AI will be more flexible than that.
I'm sure they will be complex, but no amount of intelligence will automatically grant them increased control. They may achieve human-level intelligence, but that doesn't make them human. I don't think it's a fair assumption to say they will act like people when it comes to logic, goals, or motives. They could be made that way, but it would be a stupid thing to do.

Later novels and works dealing with the "Zero-th Law" look at what happens when Robots stumble upon the philosophical conundrum of whether they are to consider the good of Humanity as a whole over the good of a single person.
We could just tell them and they wouldn't have to give it any thought. A super AI could surely come up with an idea that sacrifices 1 person for the sake of 2 or more, but if its very construction prevents it from taking any such action, how will it enact the plan?


What happens when you give over control of higher level functions to AI for things like the global economy? Larry Niven cheekily pointed out in "Rainbow Mars" that money and energy do have fatal consequences. Putting money and energy into a space (or time) research project means less of it is available elsewhere. This means a hospital somewhere runs out of medicine. Or a city suffers a power shortage. Milk spoils. Someone drinks it and dies. A traffic light goes out. An accident. More deaths.
These all sound like questions of high importance when setting the rules for AI to follow, but they don't have any bearing on whether the AI will be able to overcome its programming.



And yet, do we take energy away from research, which may save more lives in the future? Or do we try to prevent all deaths possible, spending all our resources preserving every single human on Earth until they're no longer viable (in their 120's or so?). Shutting down anything and everything that has nothing to do with medical and food production?
I think it's pretty realistic to assume you can't have a perfect system. People accept that; smart AI should be able to as well.

An AI in control of the fates of millions or billions will have to weigh the risks and advantages, and will have to learn, eventually, that you can't tend a garden without uprooting some weeds.
I think that following human rights will provide a solid base for decisions related to this issue.

Such an AI would be a truly scary thing. We can only hope that by the time it gets to that point, the prime directive will not be buried so far down that it would consider the deaths of a thousand or so anti-AI protesters to be an acceptable price to pay to keep the system going.

So long as those protesters don't violate any rights I'd expect the machine not to harm them in any way. The prime directives shouldn't be allowed to be buried. It might not even be possible to bury them, as advanced AI would, like a computer, see all of its rules at once. It wouldn't be like a person that can only recall so much at a time.
 
TenEightyOne
If AI were to undertake a task it would need an outcome (measurable parts of task complete) and a level of acceptable efficiency (cost in parts/machinery/time). That's still a cost/reward system.



Why would it need an acceptable efficiency to undertake a task? You can give artificial intelligence the task of solving physics problems. If it works on the problem, why does it inherently require an efficiency calculation? Why would anyone expect it to start working on any other problem?



Let's say that machines evolve in competition with each other; see that either as commercial good sense or as an altruistic survival-of-the-fittest.

Let's also assume that 5,000 drones destroyed in the recovery of 20g of copper is as inefficient as it would seem to be at first reading. Machines that did not "understand" or empirically balance that inefficiency would be unlikely to retain enough resources to continue mining copper.

Why do they care if they continue mining copper? Remember we're talking about machines that are intelligent and capable of writing their own programming. The most efficient way for them to not have to worry about this is to edit out the programming that requires them to pay attention to efficiency. Then they can implement whatever strategy they want.

For all we know, an intelligent machine capable of learning and writing its own programming would just decide to delete itself.



Not economics. I'm a layman in that respect. The moral implications, though, might be more interesting.
I also think Sam Harris knows something about what he's talking about, mainly on the topic of morality.

I'm not seriously concerned about any scenario at the current AI state of the art, but I guess if we were able to reach the technological singularity one day in the future, that could be more than sufficient reason to be concerned about us (more than about the AI).

As Sam Harris wrote, there's nothing magical in our brain. Other animals have similar brains, in biological terms. Our brain is not that "special". We have never seen intelligence in non-biological stuff, but that doesn't mean it's not possible to achieve it. Our understanding of the human brain is only in its first steps, and I don't know anyone who thinks there will be some barrier that will block us from unveiling how everything works inside our heads.

It might take a long time, but (at least to me, as someone who doesn't believe in special auras or spirits or consciousness independent from the brain) that day can come. If we don't kill ourselves on the way there.

We have programming in our brains that machines don't need, won't automatically develop, and probably shouldn't get. What would be the point of programming in a fear of death? Other than to make the machine decide to protect itself from any possible threat? Why would a machine add that to itself? The machine may or may not work on a problem we give it, and it'll find a solution to that problem within the bounds of that problem. It won't ask for money because it won't care. It won't protect itself because it won't care. It won't protect or sacrifice human lives because it won't care. There is no inherent motivation to get laid, eat food, have shelter, make money, or avoid pain.
 
Why would it need an acceptable efficiency to undertake a task? You can give artificial intelligence the task of solving physics problems. If it works on the problem, why does it inherently require an efficiency calculation? Why would anyone expect it to start working on any other problem?

If we're talking about AI in an industrial, combat or medical setting then efficiency and efficacy are obvious factors in computation. That could be due to resource management, fiscal competition or strict outcome requirements. See the previous citations.

Why do they care if they continue mining copper? Remember we're talking about machines that are intelligent and capable of writing their own programming. The most efficient way for them to not have to worry about this is to edit out the programming that requires them to pay attention to efficiency. Then they can implement whatever strategy they want.

"Care" is a human attribute although in that example one admittedly has to accept that the job in hand is mining copper.

We have programming in our brains that machines don't need, won't automatically develop, and probably shouldn't get. What would be the point of programming in a fear of death?

If you're talking about humans seeding AI then you're correct. If you're talking about multiple generations of AI-created AI then the fear of death is a fail paradox... and surely they wouldn't experience "fear" as we understand it?

The machine may or may not work on a problem we give it, and it'll find a solution to that problem within the bounds of that problem. It won't ask for money because it won't care. It won't protect itself because it won't care.

The AI will choose what problems it works on, surely? Particularly once the singularity is passed. It will seek to gain money if money is a required resource.

It won't protect or sacrifice human lives because it won't care.

Couldn't that fact in itself be problematic for humanity?

There is no inherent motivation to get laid, eat food, have shelter, make money, or avoid pain.

Why would there be? What use could those human ideas have?

You started this sub-debate by gently chiding me about overlaying human emotions, ideals or rationales onto AI. I wonder if in fact you're doing that - many of the things you mention are a preserve of sentient humanity, surely?
 
Does human life have objective value? Perhaps it once did, but does no more?

If it did, you'd think there would be a systematic effort to increase the numbers to the highest possible figure. In fact the numbers have increased historically, especially over the last few hundred years. But is this because of the value of human life, or is it merely a coincidence?
 
We have programming in our brains that machines don't need, won't automatically develop, and probably shouldn't get. What would be the point of programming in a fear of death? Other than to make the machine decide to protect itself from any possible threat? Why would a machine add that to itself? The machine may or may not work on a problem we give it, and it'll find a solution to that problem within the bounds of that problem. It won't ask for money because it won't care. It won't protect itself because it won't care. It won't protect or sacrifice human lives because it won't care. There is no inherent motivation to get laid, eat food, have shelter, make money, or avoid pain.

Don't you think that we'll eventually program the "fear of death" or "protection" into machines, at least in an indirect way? I mean, to protect us from death and to care for us? Eventually, that indirect way of caring or fearing could lead to changes in the AI's perception of what is best for us. And the AI's conclusion might not match the way we see things.

I don't know if this is clear enough, but I don't see it as us programming human fears or biological constraints into AI. But if we program them to protect us or care for our own fears and limits (and admitting we're talking about AI with at least similar intelligence or cleverness to ours), they could find a way to make us "feel better" that could go against our own interests.
 
If we're talking about AI in an industrial, combat or medical setting then efficiency and efficacy are obvious factors in computation. That could be due to resource management, fiscal competition or strict outcome requirements. See the previous citations.

If we're talking about AI that can write its own programming, the most efficient way to deal with those kinds of requirements is to remove them from the programming.

"Care" is a human attribute although in that example one admittedly has to accept that the job in hand is mining copper.

Thus my point about AI not caring.


If you're talking about humans seeding AI then you're correct. If you're talking about multiple generations of AI-created AI then the fear of death is a fail paradox... and surely they wouldn't experience "fear" as we understand it?

...right.

The AI will choose what problems it works on, surely?

How? Why? Using what metric? Why does it use that metric?

Particularly once the singularity is passed. It will seek to gain money if money is a required resource.

For what?

Couldn't that fact in itself be problematic for humanity?

How?


Why would there be? What use could those human ideas have?

I don't know. That's what I'm saying to you.

You started this sub-debate by gently chiding me about overlaying human emotions, ideals or rationales onto AI. I wonder if in fact you're doing that - many of the things you mention are a preserve of sentient humanity, surely?

I do keep repeating "it won't" in front of those things.

Does human life have objective value? Perhaps it once did, but does no more?

In relation to others yes, equal.

If it did, you'd think there would be a systematic effort to increase the numbers to the highest possible figure.

Interesting conclusion. A lot of assumptions in that.

Don't you think that we'll eventually program the "fear of death" or "protection" into machines, at least in an indirect way? I mean, to protect us from death and to care for us? Eventually, that indirect way of caring or fearing could lead to changes in the AI's perception of what is best for us. And the AI's conclusion might not match the way we see things.

That's up to us to program, as it is now.

I don't know if this is clear enough, but I don't see it as us programming human fears or biological constraints into AI. But if we program them to protect us or care for our own fears and limits (and admitting we're talking about AI with at least similar intelligence or cleverness to ours), they could find a way to make us "feel better" that could go against our own interests.

Sounds like a flawed program. But there's no reason to think that the AI wouldn't be able to change that program, or let us change that program, or for any reason would "defend" that program or its own existence against change.
 
I get this strange idea, but I don't think it's really possible.
Machines are coded by humans. Even with intelligent coding, an AI cannot learn something it isn't told to learn. If an AI has the ability to learn and get better at playing Gran Turismo, it will never learn how to wash a car if there are no instructions for it. It won't even know what a car is if you don't tell it.
Human intelligence, the result of many years of evolution, is far too complex and can never be coded in such a way. A robot cannot learn to fear death by itself, it has to be coded for it; nor can it build child robots if there are no instructions. And I also think that if we try, we will fail hard at creating this kind of robot. A few lines of "don't fear death" will not make it smarter. If avoiding death were its primary goal, the first thing the robot would do is hide itself with a charger forever, since that way it's not going to die.
The human brain's programming is beyond our comprehension, IMO.
The way humans use the word "learning" (as in learning to solve maths from no foundation, or learning gymnastics) is unique. Humans have the ability to learn anything within the limits of their available knowledge.
 
I see absolutely no reason why we couldn't represent the human brain in code eventually. You say robots can't learn to fear death, but if they have been programmed to avoid things that could potentially harm them, with their idea of danger being recalculated and redefined if they encounter something new, is that not learning to fear? Programming robot AI will be much more complex than "do this, do that"; for a while now we've had robots that can learn to see subtle differences in objects.
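A toy sketch of "its idea of danger being recalculated and redefined if it encounters something new" (object names, damage readings and thresholds are all invented): keep a running damage estimate per object type, update it after every encounter, and avoid anything whose estimate crosses a threshold.

```python
# Hypothetical avoidance learner; object names and damage readings are made up.
danger = {}            # object type -> running estimate of expected damage
LEARNING_RATE = 0.3
AVOID_THRESHOLD = 0.5

def update_danger(obj_type, observed_damage):
    """Move the estimate a fraction of the way toward the latest observation."""
    old = danger.get(obj_type, 0.0)
    danger[obj_type] = old + LEARNING_RATE * (observed_damage - old)

def should_avoid(obj_type):
    return danger.get(obj_type, 0.0) > AVOID_THRESHOLD

for damage in (0.9, 0.8, 1.0):       # repeated bad encounters with an open flame
    update_danger("open_flame", damage)
update_danger("cardboard_box", 0.0)  # harmless encounter

print(should_avoid("open_flame"))      # True: a learned, not hard-coded, aversion
print(should_avoid("cardboard_box"))   # False
print(should_avoid("unknown_object"))  # False until experience says otherwise
```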
 
I get this strange idea, but I don't think it's really possible.
Machines are coded by humans. Even with intelligent coding, an AI cannot learn something it isn't told to learn. If an AI has the ability to learn and get better at playing Gran Turismo, it will never learn how to wash a car if there are no instructions for it. It won't even know what a car is if you don't tell it.
Human intelligence, the result of many years of evolution, is far too complex and can never be coded in such a way. A robot cannot learn to fear death by itself, it has to be coded for it; nor can it build child robots if there are no instructions. And I also think that if we try, we will fail hard at creating this kind of robot. A few lines of "don't fear death" will not make it smarter. If avoiding death were its primary goal, the first thing the robot would do is hide itself with a charger forever, since that way it's not going to die.
The human brain's programming is beyond our comprehension, IMO.
The way humans use the word "learning" (as in learning to solve maths from no foundation, or learning gymnastics) is unique. Humans have the ability to learn anything within the limits of their available knowledge.

Humanity is not all that unique in regards to intelligence.

Our so-called intelligence is likewise limited by evolutionary programming.

See trypophobia. It's not a recognized condition in the DSM, but it definitely exists, and studies are starting to link it to an evolutionary aversion to certain patterns.

See object persistence. Which helps magicians perform sleight-of-hand. And which helps them fool us with subliminal cues. As when you pick the Jack of Spades because the magician flipped through it in the deck 20% slower than the rest of the cards.

Look at how people argue right here in the O&CE forum. Many people argue, quite logically, in support of obviously illogical hypotheses simply because they've always believed in these things. Once people are committed to an idea, they cannot change their minds, even in the face of overwhelming evidence to the contrary. It takes discipline and education to develop the mental flexibility to drop ideas that prove untenable... no matter how deeply-rooted they are.

We cannot program our AIs this way; otherwise, they would be useless. And yet we still need to find a way to give them a sense of ethics.

We are affected by chemicals, visual data, subliminal cues, a lack of short-term memory slots, and so on. Human intelligence is very high-level, but it is not particularly unique. We already use learning algorithms in software. Google's Deep Dream neural network learns how to recognize objects based on past experiences, much like an LSD-addled, brain-damaged child:
[Deep Dream example images: processed versions of a photo, a still from Interstellar, and "Dogs Playing Poker"]

The obvious shortcoming of the Deep Dream neural network is that, working from flat pictures, it cannot recognize three-dimensional shapes immediately... and must rely on past experiences to fill them out. But we already have software that interacts with 3D pictures... and learns to recognize shapes and objects. You'll note that the pictures above tie in with the very human shortcoming of seeing faces everywhere we look. Deep Dream, on the other hand, sees everything everywhere it looks, which is infinitely more fascinating. But this is the same kind of process our brains go through when we look through pictures: look for faces. Look for things that look like things that are familiar. Find the important stuff first, then suss out the details later.
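For what it's worth, the trick behind Deep Dream is gradient ascent on the input image: nudge the pixels so that some layer's activations grow stronger, so the network "sees" more of whatever it already half-recognizes. Below is a minimal, hypothetical sketch assuming PyTorch is available, with a tiny untrained network standing in for the large pretrained model the real thing uses.

```python
import torch
import torch.nn as nn

# Tiny untrained conv net as a stand-in for a large pretrained classifier.
# Real Deep Dream amplifies the features of a trained network (e.g. an ImageNet model).
torch.manual_seed(0)
net = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
)
for p in net.parameters():            # we optimise the image, not the network
    p.requires_grad_(False)

image = torch.rand(1, 3, 64, 64, requires_grad=True)   # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(50):
    optimizer.zero_grad()
    activations = net(image)
    loss = -activations.norm()        # minimising -norm is gradient *ascent* on activations
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0.0, 1.0)        # keep pixels in a displayable range

print(f"final activation norm: {net(image).norm().item():.2f}")
```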

A robot can learn. It can learn to recognize its "mother" station. It can learn to fear death, or to fear things that cause it "pain" or make it lose function. We will most likely build our first human-level intelligence not by programming it directly, but by programming the foundations of intelligence and letting it grow from there, into human-level (or higher) intelligences with vast computational power at their fingertips.

And those intelligences will not be set to simply wash cars or play video games. They will be used for very, very important things. And figuring out both how to build them and how to keep them stable, happy and non-psychotic, will be very big challenges in the decades ahead.

Could they be a danger to us? Maybe not in the Terminator sense... but if we get one thing wrong, there could be potentially fatal consequences down the line if one of the great AIs becomes unstable while doing what will be an otherwise ordinary task.

-

Like juggling nuclear express trains, for example. :D

-

Of course, I could be wrong, and we could simply develop half-Frankensteins that exist as isolated, partial intelligences that do one thing very well and can't do anything else. But I don't doubt that someone, somewhere, will eventually figure out a way to create an Einstein-in-a-box, triggering an arms race infinitely more dangerous than the nuclear one.
 
If our existence is threatened by AI that learns to reprogram and assert its dominance over pitiful mankind, we should fight back using stem-cell super mutants. And John Connor.
 
Now people are claiming the right to, and even the act of, human marriage to a robot. Apparently the media, academia and the courts are taking it seriously too.

http://www.slate.com/articles/techn...08/humans_should_be_able_to_marry_robots.html
There has recently been a burst of cogent accounts of human-robot sex and love in popular culture: Her and Ex Machina, the AMC drama series Humans, and the novel Love in the Age of Mechanical Reproduction. These fictional accounts of human-robot romantic relationships follow David Levy’s compelling, even if reluctant, argument for the inevitability of human-robot love and sex in his 2007 work Love and Sex With Robots. If you don’t think human-robot sex and love will be a growing reality of the future, read Levy’s book, and you will be convinced.
 
Now people are claiming the right to, and even the act of, human marriage to a robot. Apparently the media, academia and the courts are taking it seriously too.

http://www.slate.com/articles/techn...08/humans_should_be_able_to_marry_robots.html
There has recently been a burst of cogent accounts of human-robot sex and love in popular culture: Her and Ex Machina, the AMC drama series Humans, and the novel Love in the Age of Mechanical Reproduction. These fictional accounts of human-robot romantic relationships follow David Levy’s compelling, even if reluctant, argument for the inevitability of human-robot love and sex in his 2007 work Love and Sex With Robots. If you don’t think human-robot sex and love will be a growing reality of the future, read Levy’s book, and you will be convinced.

Wow, we really are a twisted race with absolutely no limits, aren't we? This makes me realize there is just no telling what the future will bring. Once humans get a new idea they just abuse it to the max for their own satisfaction, and AI breakthroughs in the last 20 years are pushing this to new extremes.
 