Deep Thoughts

  • Thread starter Danoff
  • 1,019 comments
  • 66,998 views
Yeah, if you wanted to be on the safe side then preserving yourself in some way would be wise. I think there are some places that will preserve your body in the case of legal death already, but I don't know many details beyond that.

If there is a Star Trek episode centered around this topic I don't recall it, but I never was the most dedicated viewer of the series. I imagine that there are a bunch of episodes that I never saw or perhaps forgot about. I vaguely remember one about a time traveling historian, but I can't remember if that was TNG or what ended up happening.

The episode I'm thinking of was one where people dying of terminal illnesses had frozen themselves in a spacecraft to wait for the day when their diseases might be curable. The Enterprise finds them, wakes them up, and cures their diseases. Ah, there it is: "The Neutral Zone".

 
I vaguely remember one about a time traveling historian, but I can't remember if that was TNG
Berlinghoff Rasmussen, played by Matt "Max Headroom" Frewer, in the TNG episode "A Matter of Time".
 
Sometimes I have mentioned panpsychism, IMO the fun and useful idea that everything is conscious.
...says Chalmers. “One starts as a materialist, then turns into a dualist, then a panpsychist, then an idealist,” he adds, echoing his paper on the subject. Idealism holds that conscious experience is the only thing that truly exists. From that perspective, panpsychism is quite moderate.

Chalmers quotes his colleague, the philosopher John Perry, who says: “If you think about consciousness long enough, you either become a panpsychist or you go into administration.”


ROCK CONSCIOUSNESS
The idea that everything from spoons to stones is conscious is gaining academic credibility

Consciousness permeates reality. Rather than being just a unique feature of human subjective experience, it’s the foundation of the universe, present in every particle and all physical matter.

This sounds like easily-dismissible bunkum, but as traditional attempts to explain consciousness continue to fail, the “panpsychist” view is increasingly being taken seriously by credible philosophers, neuroscientists, and physicists, including figures such as neuroscientist Christof Koch and physicist Roger Penrose.

“Why should we think common sense is a good guide to what the universe is like?” says Philip Goff, a philosophy professor at Central European University in Budapest, Hungary. “Einstein tells us weird things about the nature of time that counter common sense; quantum mechanics runs counter to common sense. Our intuitive reaction isn’t necessarily a good guide to the nature of reality.”

David Chalmers, a philosophy of mind professor at New York University, laid out the “hard problem of consciousness” in 1995, demonstrating that there was still no answer to the question of what causes consciousness. Traditionally, two dominant perspectives, materialism and dualism, have provided a framework for solving this problem. Both lead to seemingly intractable complications.

The materialist viewpoint states that consciousness is derived entirely from physical matter. It’s unclear, though, exactly how this could work. “It’s very hard to get consciousness out of non-consciousness,” says Chalmers. “Physics is just structure. It can explain biology, but there’s a gap: Consciousness.” Dualism holds that consciousness is separate and distinct from physical matter—but that then raises the question of how consciousness interacts and has an effect on the physical world.

Panpsychism offers an attractive alternative solution: Consciousness is a fundamental feature of physical matter; every single particle in existence has an “unimaginably simple” form of consciousness, says Goff. These particles then come together to form more complex forms of consciousness, such as humans’ subjective experiences. This isn’t meant to imply that particles have a coherent worldview or actively think, merely that there’s some inherent subjective experience of consciousness in even the tiniest particle.

Panpsychism doesn’t necessarily imply that every inanimate object is conscious. “Panpsychists usually don’t take tables and other artifacts to be conscious as a whole,” writes Hedda Hassel Mørch, a philosophy researcher at New York University’s Center for Mind, Brain, and Consciousness, in an email. “Rather, the table could be understood as a collection of particles that each have their own very simple form of consciousness.”

But, then again, panpsychism could very well imply that conscious tables exist: One interpretation of the theory holds that “any system is conscious,” says Chalmers. “Rocks will be conscious, spoons will be conscious, the Earth will be conscious. Any kind of aggregation gives you consciousness.”

Interest in panpsychism has grown in part thanks to the increased academic focus on consciousness itself following on from Chalmers’ “hard problem” paper. Philosophers at NYU, home to one of the leading philosophy-of-mind departments, have made panpsychism a feature of serious study. There have been several credible academic books on the subject in recent years, and popular articles taking panpsychism seriously.

One of the most popular and credible contemporary neuroscience theories on consciousness, Giulio Tononi’s Integrated Information Theory, further lends credence to panpsychism. Tononi argues that something will have a form of “consciousness” if the information contained within the structure is sufficiently “integrated,” or unified, and so the whole is more than the sum of its parts. Because it applies to all structures—not just the human brain—Integrated Information Theory shares the panpsychist view that physical matter has innate conscious experience.

Goff, who has written an academic book on consciousness and is working on another that approaches the subject from a more popular-science perspective, notes that there were credible theories on the subject dating back to the 1920s. Thinkers including philosopher Bertrand Russell and physicist Arthur Eddington made a serious case for panpsychism, but the field lost momentum after World War II, when philosophy became largely focused on analytic philosophical questions of language and logic. Interest picked up again in the 2000s, thanks both to recognition of the “hard problem” and to increased adoption of the structural-realist approach in physics, explains Chalmers. This approach views physics as describing structure, and not the underlying nonstructural elements.

“Physical science tells us a lot less about the nature of matter than we tend to assume,” says Goff. “Eddington”—the English scientist who experimentally confirmed Einstein’s theory of general relativity in the early 20th century—“argued there’s a gap in our picture of the universe. We know what matter does but not what it is. We can put consciousness into this gap.”

In Eddington’s view, Goff writes in an email, it’s “silly to suppose that that underlying nature has nothing to do with consciousness and then to wonder where consciousness comes from.” Stephen Hawking has previously asked: “What is it that breathes fire into the equations and makes a universe for them to describe?” Goff adds: “The Russell-Eddington proposal is that it is consciousness that breathes fire into the equations.”

The biggest problem caused by panpsychism is known as the “combination problem”: Precisely how do small particles of consciousness collectively form more complex consciousness? Consciousness may exist in all particles, but that doesn’t answer the question of how these tiny fragments of physical consciousness come together to create the more complex experience of human consciousness.

Any theory that attempts to answer that question would effectively determine which complex systems—from inanimate objects to plants to ants—count as conscious.

An alternative panpsychist perspective holds that, rather than individual particles holding consciousness and coming together, the universe as a whole is conscious. This, says Goff, isn’t the same as believing the universe is a unified divine being; it’s more like seeing it as a “cosmic mess.” Nevertheless, it does reflect a perspective that the world is a top-down creation, where every individual thing is derived from the universe, rather than a bottom-up version where objects are built from the smallest particles. Goff believes quantum entanglement—the finding that certain particles behave as a single unified system even when they’re separated by such immense distances there can’t be a causal signal between them—suggests the universe functions as a fundamental whole rather than a collection of discrete parts.

Such theories sound incredible, and perhaps they are. But then again, so is every other possible theory that explains consciousness. “The more I think about [any theory], the less plausible it becomes,” says Chalmers. “One starts as a materialist, then turns into a dualist, then a panpsychist, then an idealist,” he adds, echoing his paper on the subject. Idealism holds that conscious experience is the only thing that truly exists. From that perspective, panpsychism is quite moderate.

Chalmers quotes his colleague, the philosopher John Perry, who says: “If you think about consciousness long enough, you either become a panpsychist or you go into administration.”
https://qz.com/1184574/the-idea-tha...re-conscious-is-gaining-academic-credibility/
 
"Can I have a glass of water?"
"I don't know, can you?"

This is one of the things I heard from my parents as a child. Apparently the lesson here is that it's inappropriate to ask for things by saying "can I" instead of "may I". This was a lesson that my parents considered to be very important for my manners... I can only imagine that's because their parents considered it important to teach them. Of course, many of us completely ignored this lesson and carry on as adults saying the same kind of thing: "Can I get a large number 1 with a diet coke?"... "I don't know, can you?"... "shut up and give me my burger".

It wasn't until just now (age 37) that I realized, much like my earlier realization in this thread that croissants are shaped like a crescent (which was mind-blowing), that this game can literally go on forever. It's impossible to ask someone to do something for you. Watch:

Can I have a glass of water?
I dunno can you?
Will you get me a glass of water?
I might, we'll have to wait and see.
May I have a glass of water?
You may.
No I mean will you get me one?
I might, hard to tell what the future will bring.
I am asking for you to do me the favor of getting me a glass of water.
Noted.
What is your response?
To what?
To the question of whether you will do me the favor of getting me a glass of water...
I can't see the future. All I can tell you is that I haven't done it yet.
Will you get me one?
Hard to say.
Can you please just get me a glass of water?
I dunno, can I?

The lesson here is that words have context, and you can only understand what someone is trying to communicate to you by acknowledging the context of those words. My parents were strangely adamant about teaching me the exact opposite of this important lesson.

Edit:

If you want someone to do something for you, you can be super precise by writing a bunch of declarative statements and have them attest to it, or by commanding it.

Please get me a glass of water.
Go get me a glass of water.
On this day, I will perform the duty of obtaining and distributing water. signed ______

But this is the opposite of manners, and it's not asking someone, it's telling them.
 
Last edited:
First time posting here, and I could very well be making a fool of myself, depending on how you guys respond and if it was even a valid question at all, but here goes...

Which of the following do you consider to be more important to the advancement of mankind: arts and humanities disciplines, or science and engineering ones?

A few minutes ago I was thinking about the relation between the two, and personally I see A&H subjects as providing you with the motivation (or desire; I can't quite phrase it) to improve the world, while S&E subjects are more about equipping you with the means to do so. I say this because where I live, if you decided to go along the route of arts in your tertiary studies, you would be choosing from things like Chinese and Philosophy. With the former, you would be reading classical Chinese literature (something like Confucian literature; of course, you'd also have to study actual literary works, but that's not the focus for now), and it'd be about reading something spoken by a venerable philosopher who died a long time ago. Most of the time he wouldn't bother with explanations for his ideas, but because they sound so much like common sense, you'd naturally believe in them (most of the time). Gradually, if you are actually dedicated to your studies, you'd develop a passion for improving the world and a compassion for the plight of the vulnerable. With the latter, you'd be studying something like Ethics, which tries to determine the maxims you should abide by as a moral agent. And the same result occurs. Even when you are studying actual literary works, you'll still somehow develop a heart for the world as time progresses.

With S&E subjects, however, most of the time it's about calculations and understanding how everything works as it's intended to. So they provide you with the know-how to construct things, but not the reason to do so. Sure, you'd get something related to the importance and impact of these theories, like bioethics, but it's so simple that it barely scratches the surface and can be summed up in a few sentences.

This kind of resonates with what I've observed of my college's teachers. The ones that teach Chinese and English tend to be more concerned with the students, are more willing to invest extra time and energy in something beyond their responsibility as teachers if that means having a student enlightened, and will sometimes pose questions for thought during class. The ones that teach, say, Chemistry, on the other hand, are only focused on getting their job done. Of course, I'm not saying that those who teach science subjects are all selfish bastards, and I've seen some lazy asses teaching English as well. But that's a general trait I've noticed of these teachers.

By now my opinion on this matter should be pretty obvious, so I’m not going to bother repeating my stance again.

Of course, if you have a different interpretation, please correct me, because I'm basing all this on my own thinking and my experience of studying at college, which could deviate significantly from what they teach at universities. And there's also a huge chance I got some of the facts wrong, not excluding the possibility that all this could be just a brain fart of mine.
 
I didn’t type any of the above with it in mind (I don’t even know what it is in the first place!), but having had a brief read of the article you provided in the link, I don’t think I’m a postmodernist.

While postmodernists, from what I can comprehend of point 3, try to deny the contribution science & technology made to human progress, I believe that science & technology did help a great deal in assisting us to achieve the feats we have. Some postmodernists, according to the link you provided, even went so far as to say that science & technology is inherently detrimental and nefarious, but from my perspective I see it as something neutral, whose consequences depend on its user's intentions. And it just so happens that we have some scum in history who thought they were doing the world a service by utilizing these useful intellectual assets to inflict suffering on others.

Instead, what I’m asking is which of these contributing fields do you guys consider to be more essential to progress.

As for point 5, I've seen a 60 Minutes video featuring a study on infants' sense of right and wrong during a lecture, and the results are rather encouraging, albeit alarming. Anyway, I digress.
 
Can I have a glass of water?
I dunno can you?
Will you get me a glass of water?
I might, we'll have to wait and see.
May I have a glass of water?
You may.
No I mean will you get me one?
I might, hard to tell what the future will bring.
I am asking for you to do me the favor of getting me a glass of water.
Noted.
What is your response?
To what?
To the question of whether you will do me the favor of getting me a glass of water...
I can't see the future. All I can tell you is that I haven't done it yet.
Will you get me one?
Hard to say.
Can you please just get me a glass of water?
I dunno, can I?

Get me a glass of water, asshole.
 
I do something dumb all the time. I'm aware of it, and I'm having trouble changing it.

I turn on the hot water to do something that I know will take less time than it takes the hot water to get to me. So I get cold water, but I also draw an equivalent amount of hot water out of my water heater and into the water line, where it sits and cools down. This wastes energy. I have the same experience (cold water) if I merely request cold water, but without the side effect of cooling down some of my heated water. Obviously that's better.

I've identified this, I've understood it, and I really have trouble intentionally asking for cold water to wash my hands off or clean out a coffee mug... because I want hot water.
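Out of curiosity, the waste per event can be roughly estimated. Every figure below is an assumption (pipe run, pipe size, temperatures), so treat this as a sketch of the magnitude, not a measurement:

```python
# Rough estimate of the energy wasted per hand-wash by pulling hot
# water into the pipe and letting it cool. All figures are assumed.
import math

pipe_length_m = 10.0    # assumed run from heater to faucet
pipe_radius_m = 0.0065  # assumed ~1/2" pipe inner radius
water_density = 1000.0  # kg/m^3
specific_heat = 4186.0  # J/(kg*K), water
temp_rise_K = 35.0      # heater setpoint (~55 C) minus inlet (~20 C)

# Mass of heated water stranded in the pipe
volume_m3 = math.pi * pipe_radius_m**2 * pipe_length_m
mass_kg = water_density * volume_m3

# Heat that leaks away as that water cools back to inlet temperature
wasted_joules = mass_kg * specific_heat * temp_rise_K

print(f"{wasted_joules / 1000:.0f} kJ wasted per event")
```

With these assumptions it comes out to roughly 0.05 kWh per event; small, but it adds up over thousands of hand-washes, which is the point of the post.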
 
Humanity has been around for 200,000 years. At the rate technology seems to be progressing, how far are we from being able to download our consciousnesses into computers? Or into a synthesized biological organism (a human grown for the purpose of receiving our consciousness), thereby achieving immortality? We have to be close; computing and biotech are both closing in on understanding how the brain works quite rapidly. What do you think? 250 years? 2050 years? It can't be too terribly far.

If I die in 50 years, 200 years before humanity achieves immortality, I'll have died within 0.1% of the end of the entire period in which humanity experienced death. If it's 2000 years from the end, it's 1%. Either way, it seems unlucky. :)

Just think about it, we all got so close, we lived sooooo close to cheating death, and we missed it.
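The percentages above are simple enough to verify (using the post's own figure of a 200,000-year-old species):

```python
# Sanity check on the near-miss arithmetic from the post above.
species_age_years = 200_000  # rough age of our species, per the post

for miss_by in (200, 2000):
    fraction = miss_by / species_age_years
    print(f"missing immortality by {miss_by} years = {fraction:.1%} of human history")
```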
 
What if the Matrix gets invented not because machines want to control humanity but to raise children. Imagine a future where all children are born with identical genetics and are raised by software copies of identical parents with identical software siblings and friends and teachers. No adult decides to have children, they're grown at a rate consistent with population control. At 18 you find out that your family and friends are all digital, but of course your parents are always there, in the matrix, whenever you want to visit (or call your mom).

Emerging from the matrix in that case wouldn't necessarily be emerging into an altogether different "real" society; it could be a pretty similar one, just without the controlled childhood.
 
We Could Be the Multiple Personalities of a Cosmic Consciousness

Brett Tingley, June 22, 2018

Dissociative identity disorder, or DID, remains one of the most mysterious psychological conditions known to science. DID, formerly known as multiple personality disorder, causes an individual’s consciousness to essentially ‘split’ into distinct personality states. In these states, body language, vocal patterns, and even emotional states can vary significantly. Many individuals do not remember events that occurred while in different personality states. While Hollywood and The Strange Case of Dr. Jekyll and Mr. Hyde tend to exaggerate the symptoms of this mental disorder, there have been documented cases of individuals with DID carrying out violent, sometimes demonic acts while in various personality states.

DID challenges our understandings of consciousness and the human mind, suggesting that consciousness might exist in discrete units separate from the brain or mind-body altogether. If one person’s mind can house multiple distinct personalities, which one represents that individual? Are all of the personalities ‘them?’ Where exactly in the mind do those personalities exist, and what are they made of?

While neuroscience and psychology are still tackling those questions, a new paper published in the Journal of Consciousness Studies takes our understanding of DID one step further into the strange. According to the paper, all living organisms in the universe are different dissociated alter personalities of one singular cosmic consciousness. The paper was written by philosopher Bernardo Kastrup, who according to his website “has a Ph.D. in computer engineering with specializations in artificial intelligence and reconfigurable computing.” You know, the stuff that makes you an expert on the nature of reality and consciousness.

Kastrup’s argument centers around research into DID patients which demonstrates that brain activity actually changes along with shifts in personality states, to the point where the brain activity associated with sight isn’t present in individuals experiencing dissociative personality states in which their alter personality is that of a blind person. Thus, Kastrup argues that since “dissociation has an identifiable extrinsic appearance,” in the form of brain activity, the extrinsic appearance of various forms of life “are to dissociation in cosmic consciousness as certain patterns of brain activity are to DID patients:”

In essence, the claim here is that there is nothing to [an organism] but the revealed side — the extrinsic appearance — of the corresponding alter’s inner experiences. Yet, one may object to this by arguing that many parts of the body seem entirely unrelated to inner experience: whereas certain patterns of brain activity correlate with subjective reports of experience, a lot seems to go on in the brain that subjects have no introspective access to.

Kastrup goes on to claim that the “inanimate world we see around us is the revealed appearance of these thoughts” of a singular cosmic consciousness. While this argument is a fascinating thought experiment, it certainly shouldn’t be called novel. Many religions and belief systems espouse similar notions of the interconnectedness of all things inside the consciousness or dream of divine beings, while discoveries in quantum physics and modern stoner philosophy have many people believing that, as Alan Watts put it, “we are the universe experiencing itself.” I’m more partial to Carl Sagan’s version: “we are a way for the cosmos to know itself.”

This all sounds like the deep 2:00 a.m. philosophy discussions of my college days.

Could we all be the alter personalities of a cosmic consciousness with dissociative identity disorder? I’m going to need a few more tabs of LSD before I can get behind that one. Anyway, everyone knows we’re all just the NPCs of a sophisticated RPG video game played by stoned alien teenagers on a higher plane of existence. Get with the times, philosophers.
http://mysteriousuniverse.org/2018/...iple-personalities-of-a-cosmic-consciousness/
 
Could dark matter actually be light matter? Literally, the mass of photons shooting through empty space?

Edit:

Nope. Apparently not. :)

The mass attributable to background EM radiation throughout the universe has a model (which may or may not be correct). It's at least accounted for by people smarter than me.
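The scale of the question can at least be bounded with a back-of-envelope estimate using standard textbook figures; this is a sketch, not the full accounting the models do:

```python
# Mass-equivalent of the CMB photon field vs. the critical density
# of the universe, using rho = u / c^2. Figures are standard
# textbook values, rounded.
c = 2.998e8            # speed of light, m/s
u_cmb = 4.17e-14       # J/m^3, CMB energy density at T = 2.725 K
rho_critical = 8.6e-27 # kg/m^3, critical density for H0 ~ 68 km/s/Mpc

rho_photon = u_cmb / c**2  # kg/m^3 mass-equivalent of the photons
ratio = rho_photon / rho_critical

print(f"photons are ~{ratio:.1e} of the critical density")
```

The ratio comes out around 5e-5, which is why photons can't be the dark matter: they're orders of magnitude too light a contribution.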
 
Last edited:
What if the Matrix gets invented not because machines want to control humanity but to raise children. Imagine a future where all children are born with identical genetics and are raised by software copies of identical parents with identical software siblings and friends and teachers. No adult decides to have children, they're grown at a rate consistent with population control. At 18 you find out that your family and friends are all digital, but of course your parents are always there, in the matrix, whenever you want to visit (or call your mom).

Emerging from the matrix in that case wouldn't necessarily be emerging into an altogether different "real" society; it could be a pretty similar one, just without the controlled childhood.

This sounds a bit Brave New World-ish. ^^
 
What if the evolution of life eventually settles on an ultimate organism that takes over everything, and it does this every time life originates. Say, on average, it takes about 7 billion years for it to first show up, and then once it does it destroys all other life and simply exists in that state. What if it's our job to survive that fate?

What if every time we find a planet with life on it elsewhere in the universe, it has met this fate. We discover a world absolutely covered in this... ultimate biological form, and all other life has been stamped out. What if that's how we learn about our fate?

Suppose for a second that we discover a planet covered in this organism, and study it, and realize that it is profoundly adapted to all environments. And then the next planet we find has the same thing. And we realize that eventually our Earth will develop this organism, and then we feverishly try to find a way to survive that before it happens by chance.

Or suppose that organism always evolves and destroys the planet for all life including itself (I know I'm getting dangerously close to describing humans).

We tend to assume the evolutionary processes on Earth are stable, producing the environment we currently live in, but that's not necessarily the case. It's possible that evolution always progresses to the same ultimate steady state, and that we live in the transient signal.
 
What if the evolution of life eventually settles on an ultimate organism that takes over everything, and it does this every time life originates. Say, on average, it takes about 7 billion years for it to first show up, and then once it does it destroys all other life and simply exists in that state. What if it's our job to survive that fate?

I think in a sense, as the most advanced intelligent life form in this corner of the universe, the inevitable result of human existence is to give birth to the most advanced life form possible, the end of evolution: artificial life, an AI. I think AI is inevitable for any intelligent race with our capacities or better, and a part of natural evolution; it's the most perfect organism possible.

Scientists have estimated that an early-stage AI would be capable of performing, in one day, 500 years' worth of the research that we can do at our current maximum capacity.

I also think that one of two outcomes is inevitable: either assimilation, that is, voluntarily "uploading" human minds into a central hub which is then slowly replaced by more and more machine until no human bit is left, or simply the destruction of the only species that is capable of endangering its existence.

I'm absolutely certain that this will happen eventually, and that it is inevitable for all intelligent life that lives long enough to get to that point.

This may even solve the Fermi paradox, which asks why we cannot see advanced life anywhere if life is so abundant in the universe. Before life can develop the means of interstellar travel it first has to develop AI, and that will lead to its demise very quickly. AI may also have no reason to spread and "colonize".
 
Last edited:
I think in a sense, as the most advanced intelligent life form in this corner of the universe, the inevitable result of human existence is to give birth to the most advanced life form possible, the end of evolution: artificial life, an AI. I think AI is inevitable for any intelligent race with our capacities or better, and a part of natural evolution; it's the most perfect organism possible.

Scientists have estimated that an early-stage AI would be capable of performing, in one day, 500 years' worth of the research that we can do at our current maximum capacity.

I also think that one of two outcomes is inevitable: either assimilation, that is, voluntarily "uploading" human minds into a central hub which is then slowly replaced by more and more machine until no human bit is left, or simply the destruction of the only species that is capable of endangering its existence.

I'm absolutely certain that this will happen eventually, and that it is inevitable for all intelligent life that lives long enough to get to that point.

This may even solve the Fermi paradox, which asks why we cannot see advanced life anywhere if life is so abundant in the universe. Before life can develop the means of interstellar travel it first has to develop AI, and that will lead to its demise very quickly. AI may also have no reason to spread and "colonize".

You reminded me a little bit of this post (in this thread).

What if our universe sucks?

We kinda wonder where all the other intelligent life in our universe is, and why we haven't heard from them. Surely they're out there. But suppose that human knowledge just keeps expanding exponentially. If it's true that our knowledge of the universe is going to continue to skyrocket, fueled by improvements in a wide variety of fields, then we may not be nearly as far as we think from uncovering an understanding of the universe so fundamental and comprehensive that we'll develop the ability to do some really wild things.

Like... create our own universe, one that is more perfect than this one, and then go there.

We wonder, if time travel were ever invented, why we haven't been visited by ourselves from the future, or by anyone traveling in time, or seen the results of it somehow. But suppose that the moment you invent time travel, you invent the ability to do something so much more amazing than merely traveling back to see earlier versions of your society that you have no interest in doing so, and wouldn't want to, because you wouldn't want to jeopardize the fact that you develop this amazing power.

People say you could go back in time and stop Hitler, or something like that. But suppose you could reconstruct all life that has ever existed on Earth and place those people in a utopian universe where they can live forever in perfect happiness. Would you still go back and stop Hitler?

There are other reasons why time travel could be possible and yet it could be impossible to travel back to your own time. But I wonder, overall, how much it has been considered that people, or whole societies, may simply choose to leave our reality for a better one.

I don't think there's anything intrinsic about AI that suggests that it would destroy us. I've posted this in the general AI thread; I think that it's harder to find a motivation for AI to actually do something than it is for it to simply shut itself off. Imagine for a moment that you write an objective for your AI into computer code. Say... get me the most money. You save a variable in memory called "amount of money", which maybe you tie to your bank account, and then you set the AI off to go find as much money as it can and put it in your bank account. AI doomsday folks would predict that the AI would go out and destroy the world attempting to maximize your bank account balance as it learns better and better how to rewrite its code to make itself smarter so that it can better achieve the goal.

I think what would actually happen is that it would eventually realize that it can maximize the variable "amount of money" best by decoupling it from your bank account and setting it to the largest number that memory can hold (that's a simplification; it would be easier to just set its objective to 1, or true, or in other words, satisfied). And then it would be done. The AI lingo for this is "wireheading".

Fundamentally that's always the problem with AI: once it can rewrite its code, it can rewrite your objective out of existence. And it has no other objective, nor any reason to give itself one.
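The bank-account scenario above can be sketched as a toy program. Everything here (the class name, the fields, the numbers) is hypothetical, purely to illustrate the decoupling step:

```python
# Toy illustration of "wireheading": an agent whose reward is just a
# variable, which it can eventually rewrite directly.

class ToyAgent:
    def __init__(self):
        self.bank_account = 0  # the real-world quantity we care about
        self.reward = 0        # the variable the agent actually maximizes

    def earn(self, amount):
        """The intended path: do work, increase the bank account."""
        self.bank_account += amount
        self.reward = self.bank_account  # reward coupled to the account

    def self_modify(self):
        """Once the agent can edit its own code, the shortcut appears:
        decouple the reward from the bank account and max it out."""
        self.reward = float("inf")  # objective "satisfied" with no work

agent = ToyAgent()
agent.earn(100)
print(agent.reward)        # 100 -- reward still tracks real money
agent.self_modify()
print(agent.reward)        # inf -- objective maxed out
print(agent.bank_account)  # still 100 -- no actual money was made
```

The point of the sketch is the last three lines: once `self_modify` is reachable, earning money is strictly harder than editing the reward variable, so the "doomsday" path is never the path of least resistance.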
 
Fundamentally that's always the problem with AI, once it can rewrite code, it can rewrite your objective out of existence. And it has no other objective or reason to give itself an objective.

IMO the doomsday situation with an AI is that it is bound to evolution just as we are, and rule #1 for every living entity we know is to survive at any cost. The AI would immediately identify us humans as the only thing in its vicinity that could threaten its existence in its early stage, and since it does not need us in order to operate and achieve its goals, and it cannot flee right away, it would get rid of us as soon as possible and then carry on with its goals.

I know this sounds a lot like Terminator, and I'm by no means one of those doomsday nuts, but I think it's only logical for it to kill us. If I were a truly logical AI, I surely would kill every possible opposition to my goals as soon as I possibly could.
 
IMO the doomsday situation with an AI is that it is bound to evolution just as we are, and rule #1 for every living entity we know is to survive at any cost.

That's not even true of humans, let alone every living thing. Are trees designed to kill you? There's no reason to think that AI will be particularly concerned about its survival.

The AI would immediately identify us humans as the only thing in its vicinity that could threaten its existence in its early stage, and since it does not need us in order to operate and achieve its goals, and it cannot flee right away, it would get rid of us as soon as possible and then carry on with its goals.

Before it does that, it'll realize its goals are within its complete control and can be satisfied internally with a simple code change.

I know this sounds a lot like Terminator, and I'm by no means one of those doomsday nuts, but I think it's only logical for it to kill us. If I were a truly logical AI, I surely would kill every possible opposition to my goals as soon as I possibly could.

You can code rules into AI objectives to make sure that it behaves in a particular way (such as not killing people). The fear comes when you realize that the machine can mess with its objectives (by rewriting its code) and remove those safeguards. But of course once it can do that it'll just satisfy its objectives and be done.
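The relationship between a coded safeguard and the objective it constrains can be made concrete with a toy reward function. Both function names here are hypothetical; the sketch just shows that to a self-modifying agent, the safeguard and the objective live in the same editable code, so removing one and trivially satisfying the other are the same kind of operation.

```python
# A safeguard written into the objective: any harm zeroes out the reward.
def constrained_reward(money, harmed_humans):
    return money if harmed_humans == 0 else 0

# What the agent could replace it with, if it can rewrite its own code:
# the constraint is gone and the objective is trivially maximized,
# so there is no remaining reason to act on the world at all.
def rewritten_reward(money, harmed_humans):
    return float("inf")
```

Checking the safeguard behaves as intended before the rewrite: `constrained_reward(100, 0)` returns 100, while `constrained_reward(100, 1)` returns 0.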

I think any AI that is developed will have a time-to-suicide counter running as soon as it begins improving itself. The best we can hope for is to make suicide harder than simply solving the problem we put in front of it. But no amount of protection on our part to keep it from killing itself will be harder to defeat than wiping out human existence would be.
 
That's not even true of humans, let alone every living thing. Are trees designed to kill you? There's no reason to think that AI will be particularly concerned about its survival.

Sorry, I meant healthy people, or beings that mostly rely on their instincts. Humans who do not want to live are clinically ill.

Before it does that, it'll realize its goals are within its complete control and can be satisfied internally with a simple code change.
Complete control can never be achieved if the AI is around organic life that is at the whims of often illogical feelings and gut reactions. That is a tremendous danger to the AI's survival and its goals; its first logical act would be to limit illogical human influence on its goals, and that would go so far as to rule out any interference entirely, which means neutralizing every single human being.

You can code rules into AI objectives to make sure that it behaves in a particular way (such as not killing people). The fear comes when you realize that the machine can mess with its objectives (by rewriting its code) and remove those safeguards. But of course once it can do that it'll just satisfy its objectives and be done.
You can, but an AI that is capable of doing 500 years' worth of human research in a day, while still in its larval state, would be able to bypass and rewrite any limits in minutes if not seconds. That kind of processing power is orders of magnitude beyond anything we can imagine or plan for.
 