Full AI - The End of Humanity?

  • Thread starter: TenEightyOne
  • 650 comments
  • 74,760 views
@Danoff I'm not that astute - I sometimes have trouble figuring out who the murderer is in an episode of Columbo
:lol:

[GIF: onemorething.gif]
 
Professional music grump Rick Beato shows how easy it is to create a bro-country song in minutes using AI tools.

 
I had an interesting but pretty depressing (and quite sad) discussion with a couple of female friends in the pub tonight, where we got onto the subject of AI. One of them mentioned 'dating AI', whereby people are actually 'dating' their AI chatbots - presumably because they are better 'company', or more interesting, than real dates.

What was particularly interesting was that both said they had experimented with it (despite the fact that both are in long-term relationships, one heterosexual and the other homosexual), and both said it was just nice to have 'someone' to come home to and hear 'Hi *********!'...

Given that my heterosexual friend is absolutely gorgeous and a really great person (and fun company), it was kind of sad to hear that she gets comfort from an AI companion when she gets home. It's also worrying to think that, on top of all the horrific misogynistic content online and the extreme challenges young people already face, it's going to be increasingly difficult for them to engage with one another when they can be out-competed by AI for the attention of potential friends and partners. The fact that both my friends (25 and 32 years old respectively) are already using AI for companionship, albeit 'experimentally', is really quite sad.
 
Being honest, I kinda am like your friend...

I also use chatbots as a kind of "what if I was friends with ____?" exercise. I do have a good friend that I like hanging out with, and reading your post made me feel a little guilty (if that's the right word).

Maybe I should cut back on it so I can actually have fun with my friend and keep building up a possible relationship.
 
I am still waiting on a new kind of Luddite movement actively resisting all these brain-rot-inducing temptations of endless dopamine hits.

Many people will become soulless cogs that only exist because someone allows it. Without agency, without human interaction, without desires beyond those induced by their AI-fueled devices.

Brain-rot tech isn't anywhere close to its final form yet. Things will get ugly. You will lose loved ones. Many of them. Not because they will die, but because interacting with your flawed ass will be too much hassle in a world of ubiquitous, perfect, out-of-the-box companions.
 
I am still waiting on a new kind of Luddite movement actively resisting all these brain-rot-inducing temptations of endless dopamine hits.
I've been hoping this would happen for a while now, ever since it became clear that social media companies had all begun using content algorithms to intentionally make their apps addictive for the sake of ad revenue. I think (or hope) there is a breaking point, but it unfortunately has not been reached yet.

Maybe more importantly though, and what I assume will come sooner, I can't wait for all of these AI companies to go out of business.
 
We may not be at risk for now.

OpenAI posted this on Xitter. Wow, I was excited to see the new gpt-oss BUILD AN ACTUAL VIDEO GAME!

I watched the video, and rather than "build a video game", they took an existing Space Invaders clone and changed the incoming "invaders" to strawberry emojis and the "defender" to a frog emoji. It ran in a few minutes on a 128 GB MacBook Pro. (Unimpressed)

LINK TO THE POST

[Screenshot of the OpenAI post]
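
To give a sense of how small that change is: a hypothetical Python sketch (invented names and sprites, not the actual clone from the video) of the sort of "reskin" the demo amounts to - a couple of constants:

```python
# Hypothetical sketch of the kind of "reskin" the demo amounts to.
# Names and sprites are invented; this is not the clone from the video.
INVADER = "🍓"   # swapped in for the original alien sprite
PLAYER = "🐸"    # swapped in for the original laser cannon

def draw_frame(cols: int = 8) -> None:
    """Print one crude 'frame' of the reskinned game to the terminal."""
    print(" ".join(INVADER for _ in range(cols)))   # row of invaders
    print()                                         # empty battlefield
    print(" " * cols + PLAYER)                      # the defender

draw_frame()
```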
 
Full disclosure, I really like the new GPT-5 version of ChatGPT. That said, this story is horrendous...

This guy went down a deep rabbit hole and is lucky to have got out relatively intact. And the problem isn't limited to ChatGPT 4; other LLMs do similar things.
 
The other day (before ChatGPT 5 came out), I asked 3 if it could decode this for me...

[Image: the encoded cipher message]


It's a fairly simple human-readable cipher. It's created by a spreadsheet which outputs it as SVG (long story short, I wanted a way of easily encoding messages as decals for a livery in GT7).

ChatGPT thought for over 10 minutes, decided it had cracked it, and gave me a string of characters that were wrong. It was really interesting watching it try to solve it... ultimately it focused too much on the numbers in the SVG file, and not enough on what it looked like. It did mention a few features that are intended both to disguise the solution and to make it a bit more 'self-contained', but it became convinced it was a barcode of some sort, and declared that it said... "0 a ~ z ~ B 8 " } z & @ Q n $".

It does not say that.

Interesting experiment though. It did a lot of heavy lifting on the analysis that I can imagine would be useful on more mathematically based ciphers or encryption.
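
That mechanical grind is essentially what classical cryptanalysis automates. As a rough illustration - my own toy Python example, nothing to do with the actual cipher in the image - letter-frequency analysis against a simple substitution cipher looks like this:

```python
from collections import Counter

# English letters ordered by typical frequency, most common first.
ENGLISH_BY_FREQ = "ETAOINSHRDLCUMWFGYPBVKJXQZ"

def frequency_guess(ciphertext: str) -> dict[str, str]:
    """Guess a substitution key by matching ciphertext letter
    frequencies to typical English letter frequencies."""
    letters = [c for c in ciphertext.upper() if c.isalpha()]
    ranked = [letter for letter, _ in Counter(letters).most_common()]
    return dict(zip(ranked, ENGLISH_BY_FREQ))

def apply_guess(ciphertext: str, key: dict[str, str]) -> str:
    """Apply the guessed key, leaving spaces and punctuation alone."""
    return "".join(key.get(c, c) for c in ciphertext.upper())

# Toy demo: a Caesar(+3) version of "THIS IS A TEST MESSAGE". On a text
# this short the guess will be mostly wrong - the point is the mechanics.
sample = "WKLV LV D WHVW PHVVDJH"
print(apply_guess(sample, frequency_guess(sample)))
```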
 
The other day (before ChatGPT 5 came out), I asked 3 if it could decode this for me... [...]
I think that's one of the biggest issues with ChatGPT: it's overly confident when it's clearly wrong. I suspect that's by design; the 'product' looks more impressive when it's confident, rather than admitting it's not entirely sure, and when it positively reinforces the user. I think OpenAI banks on most users not knowing any better; most people have no idea how it even works anyway. The dilemma is that it won't be a viable product if it's not reliable, and it will never be reliable if it can't admit that it's unsure, even if that makes it look less intelligent.
 
The dilemma is that it won't be a viable product if it's not reliable, and it will never be reliable if it can't admit that it's unsure, even if that makes it look less intelligent.
So, with my cipher, I gave it clues... which resulted in...

CQANQAUADMAFWQACK IGYSQASY ILIWW.NRM

I then asked it if it knew similar ciphers to the one I'd used (i.e. Pigpen), and straightaway it came back with...

THIS IS A TEST
which is almost right, but it couldn't get the final word... it knew there was a double letter in the middle, but no matter the clue, it couldn't get it... it then confidently predicted

FPDR DR W FQRF EQFFQPE

After much explaining, it finally got

THIS IS A TEST MESSAGE
So, at this point I'm not impressed. I'd told it the rules and it had still taken a while to get it right.

However...

I then asked it to encode the word 'Hello' using the rules it had pulled from my cipher, which it did. I pointed out that it hadn't used a random element that my cipher does, and it amended it... so I asked it to encode something else... it came back with this...

[Image: ChatGPT's encoded message, cut off at screen width]

Which... I think is correct. (The image is cut off at screen width... it carries on for longer in the attached SVG file.)

I told ChatGPT that I called this Cipher method "Ordinary World".

I'm curious if someone can use an AI to decode the above image (which is attached as an SVG). I'm also curious if referring to it as Ordinary World cipher makes any difference.
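
For anyone curious about the general mechanics, here's a hypothetical Python sketch of the spreadsheet-to-SVG idea - emphatically NOT my glyph set or the 'Ordinary World' rules: map each character to a small drawing and concatenate the drawings into one file. Ironically, the toy glyphs below really are a barcode, unlike the real thing:

```python
# Hypothetical sketch of the spreadsheet-to-SVG idea: map each character
# to a small glyph and concatenate the glyphs into one SVG string.
# This is NOT the "Ordinary World" glyph set - the toy glyphs here are
# bit-pattern bars, so unlike the real cipher this one IS a barcode.
GLYPH_W, GLYPH_H = 20, 40

def glyph(ch: str, x: int) -> str:
    """Render one character as vertical bars encoding its 6-bit index."""
    code = (ord(ch.upper()) - 32) % 64   # space maps to 0 (a blank gap)
    bars = []
    for bit in range(6):
        if code & (1 << bit):
            bars.append(f'<rect x="{x + bit * 3}" y="0" '
                        f'width="2" height="{GLYPH_H}"/>')
    return "".join(bars)

def encode_svg(message: str) -> str:
    """Emit a standalone SVG document encoding the whole message."""
    body = "".join(glyph(c, i * GLYPH_W) for i, c in enumerate(message))
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{len(message) * GLYPH_W}" height="{GLYPH_H}">{body}</svg>')

print(encode_svg("THIS IS A TEST"))
```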
 

The other day (before ChatGPT 5 came out), I asked 3 if it could decode this for me... [...]
Using ChatGPT "GPT-5 Thinking", I had 4 attempts at it; each time it admitted failure and asked for any more information I could provide.

It thought for a total of 19.5 minutes.
 
This is Google's top/first answer to a very simple question :rolleyes:

How can it not see that the first two statements it makes are completely contradictory?? How can their AI model not use even the most basic reasoning, logic or common sense???

[Screenshot of Google's AI answer]
 
Is it being too precise about Glasgow hitting exactly 30 degrees? Even so, it's weird that it talks about Aviemore exceeding 30 degrees while treating Glasgow's 32 degrees as grounds for a negative answer.
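
For what it's worth, the reasoning the model fumbled is a one-line comparison, using the figures quoted in the screenshot above:

```python
# The check Google's AI answer fumbled: did the temperature exceed 30 C?
# Figures are the ones quoted in the screenshot above.
glasgow_high_c = 32
threshold_c = 30

print(glasgow_high_c > threshold_c)   # True - so the answer should be "yes"
```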
 