I've been hoping this would happen for a while now, ever since it became clear that social media companies were using content algorithms to intentionally make their apps addictive for the sake of ad revenue. I think (or hope) there is a breaking point, but it unfortunately has not been reached yet. I am still waiting on a new kind of Luddite movement that actively resists all these brain-rot-inducing temptations of indefinite dopamine hits.
Who knew that "predicting the next word" is not a great way of doing actual, you know, "math"?
I think that's one of the biggest issues with ChatGPT: it's overly confident even when it's clearly wrong. I suspect that's by design; the 'product' looks more impressive when it's confident and positively reinforces the user, rather than admitting it's not entirely sure. I think OpenAI banks on most users not knowing any better; most people have no idea how it even works anyway. The dilemma is that it won't be a viable product if it's not reliable, and it will never be reliable if it can't admit that it's unsure, even if that makes it look less intelligent.

The other day (before ChatGPT 5 came out), I asked 3 if it could decode this for me...
View attachment 1471156
It's a fairly simple human-readable cipher. It's created by a spreadsheet which outputs it as SVG (long story short, I wanted a way of easily encoding messages as decals for a livery in GT7).
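To be clear, I'm not going to give away how my cipher works, but here's a hypothetical sketch of the general idea: a trivial substitution scheme (7-bit ASCII, one bit per cell, which is my invention for this example and not the actual cipher) rendered straight to SVG rectangles, the way a spreadsheet formula might emit a decal.

```python
# Hypothetical example only -- NOT the actual cipher from the post.
# Each character becomes a column of up to 7 cells; a filled cell
# marks a 1 bit of its 7-bit ASCII code. The result is a barcode-like
# SVG decal you could import into a livery editor.

def encode_svg(message: str, cell: int = 10) -> str:
    rects = []
    for col, ch in enumerate(message):
        bits = format(ord(ch), "07b")  # 7-bit ASCII, one bit per row
        for row, bit in enumerate(bits):
            if bit == "1":
                rects.append(
                    f'<rect x="{col * cell}" y="{row * cell}" '
                    f'width="{cell}" height="{cell}" fill="black"/>'
                )
    width = len(message) * cell
    height = 7 * cell
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'width="{width}" height="{height}">' + "".join(rects) + "</svg>"
    )

print(encode_svg("GT7"))
```

A real version would add the kind of disguising tricks mentioned below, but even this toy shows why an LLM can get lost: the SVG numbers alone don't tell you the scheme; you have to look at what the shapes represent.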
ChatGPT thought for over 10 minutes, decided it had cracked it, and gave me a string of characters that were wrong. It was really interesting watching it try to solve it... ultimately it focused too much on the numbers in the SVG file, and not enough on what it looked like. It did mention a few things that are intended both to disguise the solution and to make it a bit more 'self-contained', but it became convinced it was a barcode of some sort, and declared that it said... "0 a ~ z ~ B 8 " } z & @ Q n $".
It does not say that.
Interesting experiment though. It did a lot of heavy lifting on the analysis that I can imagine would be useful on more mathematically based ciphers or encryption.
So, with my cipher, I gave it clues... which resulted in...

Using ChatGPT "GPT-5 Thinking", I had 4 attempts at it; each time it admitted failure and asked for any more information I could provide.