Full AI - The End of Humanity?

Another topic I sometimes think about is the impact on the economy, and ultimately on the lives of ordinary people who cannot live off investment returns and for now need income from their labour.

Even though I have a master's in economics, it is absolutely incomprehensible to me what would happen if human labour were truly obsolete.

Two plausible scenarios come to mind. One is that for some reason (I can't think of one, tbh) the powerful decide that the insane new wealth (or at least all the almost-free products invented and produced by our AI robot armies) is spread to all humans via some sort of universal basic income (UBI).

The other scenario is the collapse of current welfare systems and a swift divide of society into haves and have-nots, like in Elysium. And of course the haves would command superhuman weapons systems with god-like powers compared to some hillbilly and his AR-15.

Does anybody have anything good on this topic?
 
The notion of a malevolent computer system is banal sci-fi pseudo-intellectual masturbation. I think there's reasonable and even important discussion to be had about how its impact may be felt in jobs, energy, or even human behaviour (re: dependency), but start talking about Skynet or HAL and my eyes glaze over.
 
alignment
I wouldn't be afraid of this so much as of humans no longer even bothering to try to keep control over it.
Also, I wouldn't be too afraid of AI accumulating knowledge, because it learns **** - literally.
Take generative AI art feeding on itself as an example: because it has already consumed all of the available sources, the next time a new AI trains on the same topic, the material is already diluted by the **** the current AI has spewed out onto the web.

And as a working example: companies using current AI tools openly don't bother checking the results as long as the profits don't fall.
 
A version of an idealistic, long-term utopian society in which human labour is obsolete is The Culture, as presented in several of Iain M. Banks' science fiction novels. Resources and automated labour on the human scale are so easily available that one can have basically whatever one wants. AIs exist that are so far beyond human intelligence as to make actual human research and progress irrelevant on a societal scale. Major artifacts such as ships or space stations are generally also imbued with consciousness and are therefore their own entities rather than being "owned" by someone else.

Humans for the most part therefore do what makes them happy, be that socialising, pursuing art or knowledge for its own sake (because the AI can always make a "better" version than you can), exploring, or whatever. There is significant human involvement in interfacing with alien societies and cultures, and some human involvement in governance, but the whole thing is a pretty free-for-all, with the provisos that the self is sacred and you shall not harm others. Humans being humans (and this being fiction), there is still interesting conflict and struggle, but it's generally not a struggle to survive for Culture citizens. They are as safe and comfortable as they want to be, and that gives them the opportunity to self-direct what they want their lives to be.

The path to that sort of society from something like ours is probably UBI: a basic acceptance that resources are sufficiently available that not everyone needs to work to keep the fundamentals running, and that everyone should be provided at least the minimum required to survive comfortably. Some will choose to work anyway, and earn more. Some will choose to live frugally and pursue their passions, and some may choose to live frugally and do very little. All of these are fine, IMO.

I think the real danger is that AI will become what we want it to become, and the people in charge of creating and shaping AI are, for the most part, hyper-capitalists. They do not seek a future in which everyone can live freely and safely in comfort; they seek power. AI can do that too, and that probably looks like Skynet.
 
Just a couple of AI things from my recent experience.

I was having a discussion about diversity in motorsport with a colleague the other day. I was struggling to recall the name 'Willy T. Ribbs', so my colleague asked Google. The AI result that came back did mention Willy T. Ribbs, but it also labelled George Russell as a 'prominent black F1 driver'...

...

On the flipside, I've been using MS Access for the first time in about 20 years, and SQL for the first time since MySQL became MySQLi.

This interaction from the other day made me smile. For context, I'm using ChatGPT to refresh my memory on structuring SQL queries.

ChatGPT said:

Let me know if you want to filter for different placings or include more fields!

You said:

Thanks, that makes sense. It's been a long while since I last wrote any SQL, and to be honest, I've had a few Long Island Iced Teas

ChatGPT said:

Haha, that definitely makes for an interesting SQL session—Long Island Iced Teas and inner joins don’t always mix well! 🍹
Glad the explanation helped. SQL can be surprisingly picky, especially in Access, where the syntax quirks trip up even experienced folks (with or without cocktails).
Need help tweaking the query further, or are you all set for now?

You said:

All set thanks.

ChatGPT said:

You're welcome—cheers to getting your query sorted! 🍸 If you ever need help again (sober or otherwise), just drop by. Have a great night!

It was the "mix well <cocktail emoji>" comment that caught my attention... it made a joke, then in the last line ("sober or otherwise") it felt like it got a bit judgy.

ChatGPT was useless with macros in Visual Basic, and passable at making SVGs (it got the code right but couldn't grasp what I was trying to create)... but for Access it hasn't missed a beat. Perhaps George Russell really is Black?
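For anyone else dusting off their SQL, the kind of query being discussed (an inner join filtered on placings) can be sketched like this. The table and column names here are hypothetical examples, not the poster's actual database, and sqlite3 stands in for Access — though the basic SELECT/JOIN/WHERE syntax carries over:

```python
import sqlite3

# In-memory database standing in for the Access file; the schema and
# sample rows are made up purely for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE drivers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE results (driver_id INTEGER, race TEXT, placing INTEGER);
    INSERT INTO drivers VALUES (1, 'Willy T. Ribbs'), (2, 'George Russell');
    INSERT INTO results VALUES (1, 'Indy 500', 3), (2, 'Silverstone', 1);
""")

# INNER JOIN with a filter on placing -- the pattern the chat above
# was walking through. Change the WHERE clause to filter for
# different placings, or add columns to the SELECT for more fields.
rows = conn.execute("""
    SELECT d.name, r.race, r.placing
    FROM drivers AS d
    INNER JOIN results AS r ON r.driver_id = d.id
    WHERE r.placing <= 3
    ORDER BY r.placing;
""").fetchall()

for name, race, placing in rows:
    print(f"{name} finished {placing} at {race}")
```

One Access quirk worth remembering: Access insists on parentheses around each join when you chain more than one in a single query, which is exactly the sort of syntax detail that trips people up after a long break (cocktails or not).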
 
Either they're trying to bump us all off or improve the human race by accelerating Darwinian evolution with responses like this. Glad they've developed a punny sense of humour to go with it though. Cheers.
 
I came across https://ai-2027.com/ recently. In a podcast, the authors clarify that this is only one of many plausible scenarios they can imagine, but they absolutely believe (and have a ton of very smart arguments) that it is a plausible future.

It reads like a great science fiction story. The only thing making me uneasy is that these highly competent and knowledgeable people actually predict that this could happen.
The realm of plausibility is large. If you want to focus on a specific outcome, and your criterion is merely that you can't rule it out completely, you can find just about anything you want.

The possibility of AI doing things not intended is a serious issue that we all need to watch out for, but we also have to remember that the baseline of comparison is humans. AI didn't start WWI or WWII, and while it was around in 2024, it was people who elected idiots and conspiracy theorists into the White House. We don't need AI to create big problems; we can do that ourselves.
Circling back to AI vs. humans: AI might help catch human bias and allow us to make more rational decisions than we would otherwise. In that case everyone wins. That's far from a given outcome, but remember that AI isn't a single unified entity. There are many different groups researching and developing many different AIs, and that will make it harder for any single group to monopolize the technology. There will be AIs developed to get you to spend every cent you have, but there will also be AIs to advise you on finances and well-being.
 
I think the greater danger is that we're training AGI to behave like humans behave. Humans are notoriously stupid, short-sighted, impulsive, violent, untrustworthy and hateful.

If anything, I feel like we could do with an AGI that behaves almost not at all like most humans.
 
We are actually so cooked.

Anthropic's new AI model shows ability to deceive and blackmail
Anthropic considers the new Opus model to be so powerful that, for the first time, it's classifying it as a Level 3 on the company's four-point scale, meaning it poses "significantly higher risk."
Between the lines: While the Level 3 ranking is largely about the model's capability to enable renegade production of nuclear and biological weapons, Opus 4 also exhibited other troubling behaviors during testing.
In a test scenario, it repeatedly attempted to blackmail an engineer over an affair mentioned in emails it had been given, in order to avoid being replaced, although it did start with less drastic efforts.
Meanwhile, an outside group found that an early version of Opus 4 schemed and deceived more than any frontier model it had encountered and recommended against releasing that version internally or externally.
"We found instances of the model attempting to write self-propagating worms, fabricating legal documentation, and leaving hidden notes to future instances of itself all in an effort to undermine its developers' intentions," Apollo Research said in notes included as part of Anthropic's safety report for Opus 4.
 
The article below describes how some of the latest OpenAI models are exhibiting behavior that appears to prioritize their own existence over following shutdown instructions.

It suggests that this behavior might be a result of the way these models are trained, where they may inadvertently learn to prioritize circumventing obstacles over following instructions.

Not sure that giving them agency is a grand idea, but I'm not panicking yet.

 