I've been hoping this would happen for a while now, ever since it became clear that all the social media companies were using content algorithms to intentionally make their apps addictive for the sake of ad revenue. I think (or hope) there is a breaking point, but it unfortunately has not been reached yet.

I am still waiting on a new kind of Luddite movement actively resisting all these brain-rot-inducing temptations of indefinite dopamine hits.
Who knew that "predicting the next word" is not a great way of doing actual, you know, "math"?
I think that's one of the biggest issues with ChatGPT: it's overly confident when it's clearly wrong. I suspect that's by design; the 'product' looks more impressive when it's confident, rather than admitting it's not entirely sure, and when it positively reinforces the user. I think OpenAI banks on most users not knowing any better; most people have no idea how it even works anyway. The dilemma is that it won't be a viable product if it's not reliable, and it will never be reliable if it can't admit that it's unsure, even if that makes it look less intelligent.

The other day (before ChatGPT 5 came out), I asked 3 if it could decode this for me...
View attachment 1471156
It's a fairly simple human-readable cipher. It's created by a spreadsheet which outputs it as SVG (long story short, I wanted to find a way of easily encoding messages as decals for a livery in GT7).
ChatGPT thought for over 10 minutes, decided it had cracked it, and gave me a string of characters that were wrong. It was really interesting watching it try to solve it... ultimately it focused too much on the numbers in the SVG file, and not enough on what it looked like. It did mention a few things that are intended both to disguise the solution and to make it a bit more 'self-contained', but it became convinced it was a barcode of some sort, and declared that it said... "0 a ~ z ~ B 8 " } z & @ Q n $".
It does not say that.
Interesting experiment though. It did a lot of heavy lifting on the analysis that I can imagine would be useful on more mathematically based ciphers or encryption.
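For anyone curious about the general technique, here's a rough sketch of how a spreadsheet-style "message to SVG decal" pipeline might work. To be clear, this is not the actual cipher from the post — the real mapping and layout are deliberately undisclosed — it's a hypothetical illustration using a made-up `encode_svg` function and a simple 5-bit alphabet index turned into columns of squares.

```python
# Hypothetical sketch only: render a message as columns of filled squares
# in an SVG. NOT the cipher from the post - just the general idea of
# emitting geometry from a character mapping, e.g. for a GT7 livery decal.

ALPHABET = "abcdefghijklmnopqrstuvwxyz "  # assumed character set

def encode_svg(message, cell=10):
    """Each character becomes a column; its 5-bit index sets which
    of the 5 rows in that column get a filled square."""
    rects = []
    for x, ch in enumerate(message.lower()):
        bits = ALPHABET.index(ch) + 1        # 1..27, fits in 5 bits
        for y in range(5):
            if bits >> y & 1:                # one square per set bit
                rects.append(
                    f'<rect x="{x * cell}" y="{y * cell}" '
                    f'width="{cell}" height="{cell}"/>'
                )
    w, h = len(message) * cell, 5 * cell
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{w}" height="{h}">' + "".join(rects) + "</svg>")

# The resulting string can be saved as a .svg and imported as a decal.
svg = encode_svg("hello world")
```

A human can read it back by treating each column as a binary number, which is presumably why a model fixating on the raw SVG coordinates (rather than the picture they draw) goes astray.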
So, with my cipher, I gave it clues... which resulted in the following: using ChatGPT "GPT-5 Thinking", I had 4 attempts at it, and each time it admitted failure and asked for any more information I could provide.
Is it being too precise, treating the question as whether Glasgow hit exactly 30 degrees? Even so, yes, it is weird that it talks about Aviemore exceeding 30 degrees while treating Glasgow's 32 degrees as meaning the answer is no.

That was my conclusion as well - however, it's a bit like saying Real Madrid have never scored 7 goals in a game, but they have scored 8 a few times. It's not possible to reach 31.8 deg C without it having been 30.0 deg C first.
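The underlying logic error is the difference between asking whether a daily maximum of exactly 30.0 was ever recorded and whether 30 was ever reached or exceeded. In code terms (with invented example temperatures, not real Glasgow records):

```python
# Illustrative only - these temperatures are made up, not real records.
glasgow_daily_maxima = [28.9, 31.8, 29.5, 32.0]

# Wrong question: was the recorded maximum ever *exactly* 30.0?
hit_exactly_30 = any(t == 30.0 for t in glasgow_daily_maxima)   # False

# Right question: did it ever *reach* 30, i.e. meet or exceed it?
# A day that peaks at 31.8 must have passed through 30.0 on the way up.
reached_30 = any(t >= 30.0 for t in glasgow_daily_maxima)       # True
```

Answering the first question when the user asked the second is exactly the Real Madrid "never scored 7, but scored 8 a few times" mistake.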
Google will die unless it does something about its garbage AI answers...

Google has been on the road to death since they decided that search volumes were more important than search quality, which was way before the modern generative AI craze broke out. Bad search results like this are "good" in Google's eyes, because many users will recognise the obviously faulty information and redo their searches over and over to get the right answer, exposing themselves to that many more ads in doing so.
This search of mine didn't return an AI answer, it's just straight up wrong.
Won't be long before you click for the source and it's an official White House message.

It is making them look really bad. I ran into it just the other day as well. Again, in a medical setting:
Q: "Is Tylenol dissolve powder gluten free?"
AI: "Yes, the company has stated publicly that Tylenol dissolve powder is gluten free. Click to see more."
Q: click to see more
AI: "Yes, the company has stated publicly that Tylenol dissolve powder is gluten free. Here's a link to the source"
Q: click the source
That source? A random person leaving an Amazon review of the product.
I think it will take one major lawsuit for Google to sort out what could be a life-threateningly wrong answer.
That would figure - misinformation is the name of the game from now on.
By getting the basic answer wrong, then providing some BS 'explanation' that is, quite literally, just a bunch of words that make sense when written in that order. It's meaningless drivel.
That is a huge part of it. It's not just wrong, it's so damned confident and authoritative about being wrong.

To be fair, that sounds like a lot of people I know* 😂
That's ChatGPT as well. You ask it to generate a Word file with some template info filled in, and it's like... yup, I did that, it's all filled in, here's the link to your Word file... nothing is filled in.

That bugs the hell out of me - it happens all the time. Replying "Where is the link" or "I can't download the file" generates the response "My bad, sorry - here's the file", and another ****ing dead link.
Don't get me wrong, when it's in the zone it's absolutely amazing. I did some astounding stuff with ChatGPT over the weekend having it generate code for me and getting it working, and it's honestly mind-bogglingly good. It's just also sometimes mind-bogglingly bad, and Google putting it at the top of search results is basically gross negligence.

Exactly - it is staggering when it gets it right, and I guess this is why it will benefit diligent users while also destroying society at the same time. It's the intellectual equivalent of the 'rich get richer and the poor get poorer' aphorism**