Full AI - The End of Humanity?



 


I was dealing with this new "feature" just the other day when I googled my daughter's congenital deformity to see if it was linked with some of the symptoms she has been having, and Google AI proudly explained that it was indeed linked. The only problem was that none of its source articles said that. It took unrelated sources and melded them together to give me an incorrect answer.

This isn't just a bad idea, it's negligent.
 

I love that AI was trained in large part on internet content and therefore the answers that you get largely reflect the answers you'd get if you asked the internet a question. Mostly helpful, often inaccurate, occasionally pure troll.
I was dealing with this new "feature" just the other day when I googled my daughter's congenital deformity to see if it was linked with some of the symptoms she has been having, and Google AI proudly explained that it was indeed linked. The only problem was that none of its source articles said that. It took unrelated sources and melded them together to give me an incorrect answer.

This isn't just a bad idea, it's negligent.
I think it's a bit like Wikipedia used to be. Useful as an overview, do not take any specific points seriously and do your own research from primary sources before you actually use the information for anything. I don't think as an information source it will ever really get much better than that. The source material (the entire internet) has too much jank in it for a system with no comprehension to be able to totally filter it out.

I think we'll have to be able to restrict the source material and ask for summaries to get decent answers, like for a medical question you could ask it to take information only from certain qualified medical information and research repositories and summarise the findings for you. AI is an amazing tool, but it's still a tool and I think at the moment people are still discovering what it is and isn't good at.
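Something like that is already doable if you wire it up yourself. Here's a rough sketch in Python of the "restrict the sources, then summarise" idea; the fetch_excerpts helper and the list of trusted domains are hypothetical placeholders you'd swap for your own retrieval step, while the OpenAI call itself is standard SDK usage.

```python
# Rough sketch of "restrict the sources, then summarise": only excerpts pulled
# from a whitelist of trusted repositories ever reach the model.
# fetch_excerpts() and TRUSTED_DOMAINS are hypothetical placeholders.
from openai import OpenAI

TRUSTED_DOMAINS = ["pubmed.ncbi.nlm.nih.gov", "cochranelibrary.com", "nice.org.uk"]


def fetch_excerpts(question: str, domains: list[str]) -> list[str]:
    """Hypothetical retrieval step: search only the whitelisted domains and
    return short text excerpts. Plug in your own search index or API here."""
    raise NotImplementedError


def restricted_summary(question: str) -> str:
    excerpts = fetch_excerpts(question, TRUSTED_DOMAINS)
    context = "\n\n".join(excerpts)
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarise ONLY the excerpts provided. If they do not answer "
                    "the question, say so. Do not use outside knowledge."
                ),
            },
            {"role": "user", "content": f"Question: {question}\n\nExcerpts:\n{context}"},
        ],
    )
    return response.choices[0].message.content
```

The point being that the model never sees the open web: the worst case is a poor summary of a decent source, rather than unrelated sources melded together.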
 
I think we'll have to be able to restrict the source material and ask for summaries to get decent answers, like for a medical question you could ask it to take information only from certain qualified medical information and research repositories and summarise the findings for you.
ChatGPT has specific GPTs which are more narrowly trained than the general "ChatGPT" chat starting point. For example, for technical discussions, I use the "Tech Support Advisor". For food and cooking chats, I use "Sous Chef".

Look for such less generalized GPTs under "Explore GPTs".

I asked ChatGPT 4o to explain this:-

The specialized GPT model called "Tech Support Advisor" is typically trained on a wide range of technical documentation, manuals, troubleshooting guides, and user support interactions. This training data includes information about hardware, software, networking, and other technical domains, allowing the model to provide more precise and informed responses to technical queries. Here are some reasons why technical discussions might be better conducted with the "Tech Support Advisor" GPT:
  1. Expertise in Technical Domains: "Tech Support Advisor" is tailored to understand and respond to technical issues with greater depth, leveraging extensive training on relevant materials.
  2. Accurate Troubleshooting: The model is adept at guiding users through troubleshooting steps, identifying common issues, and providing effective solutions based on known patterns and best practices.
  3. Detailed Technical Knowledge: It possesses in-depth knowledge about various technologies, systems, and platforms, enabling it to address specific technical questions with precision.
  4. Efficient Problem-Solving: "Tech Support Advisor" can quickly identify and suggest appropriate actions or configurations, reducing the time and effort required to resolve technical problems.
  5. Contextual Understanding: The specialized training allows the model to understand the context of technical terms and scenarios better, ensuring more relevant and applicable advice.
Overall, using a model like "Tech Support Advisor" for technical discussions ensures that the information provided is accurate, relevant, and specifically tailored to address technical issues, making it an ideal choice for such interactions.
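For what it's worth, you can approximate the same narrowing outside the ChatGPT UI with a plain system prompt. Here's a minimal sketch, assuming the OpenAI Python SDK and gpt-4o; the prompt wording is just an illustration, not how the actual "Tech Support Advisor" GPT is built.

```python
# Minimal sketch: a narrowly-scoped "tech support" assistant built with a
# system prompt. The model name and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def tech_support_chat(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a technical support advisor. Ask clarifying questions, "
                    "walk through troubleshooting steps one at a time, and say "
                    "clearly when a problem needs a human technician."
                ),
            },
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content


print(tech_support_chat("My laptop won't detect my Wi-Fi network."))
```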
 
I think it's a bit like Wikipedia used to be. Useful as an overview, do not take any specific points seriously and do your own research from primary sources before you actually use the information for anything. I don't think as an information source it will ever really get much better than that. The source material (the entire internet) has too much jank in it for a system with no comprehension to be able to totally filter it out.

The top result summary used to be at least a little useful - because it was at least summarized from a single source. Maybe that source was wrong, but it was a popular hit for the search and all of the summary came from it. Far less garbage than the current version.
 
[Image: google-gasoline.jpg]


Pasta alla Petroliana

[Image: Screenshot-20240525-091526-Samsung-Internet.jpg]


Who could have guessed Google leaning on AI would result in en****tification?
 
Who could have foreseen that a search engine, but worse, would be absolutely horrible?
 
If Google AI says I can gaze at the sun for 15-20 minutes a day, it must be safe. Right?
 
From what I understand, a lot of these stupid replies come from posts made on Reddit, Quora and anywhere else like that. The glue pizza post came from a child. This AI-assisted search engine is unable to distinguish between good advice and bad advice so... how is it better than a 'basic' search engine that gives you information based on your exact keywords?
 
From what I understand, a lot of these stupid replies come from posts made on Reddit, Quora and anywhere else like that. The glue pizza post came from a child. This AI-assisted search engine is unable to distinguish between good advice and bad advice so... how is it better than a 'basic' search engine that gives you information based on your exact keywords?
Hard to find a 'basic' search engine these days though. Most of them give you information based vaguely on your keywords and what they think they can advertise to you.

The en****tification is all around us. The internet peaked at least ten years ago.
 
AI will replace sex within 10 years. Instead of going to all the trouble of hooking up, people will just send their AIs to knock virtual boots, thereby removing all the hassle, drama and risk while making the process of increasing one's body count much more efficient.

The techbros need to touch some ****ing grass.
 
AI will replace sex within 10 years. Instead of going to all the trouble of hooking up, people will just send their AIs to knock virtual boots, thereby removing all the hassle, drama and risk while making the process of increasing one's body count much more efficient.

The techbros need to touch some ****ing grass.
My bodycount:

[Image: spermlead.jpg]
 
The top result summary used to be at least a little useful - because it was at least summarized from a single source. Maybe that source was wrong, but it was a popular hit for the search and all of the summary came from it. Far less garbage than the current version.
I think the less outright bleak, but plausible and still bad, eventuality of widespread AI use is just mountains and mountains of absolute garbage content. Absolute garbage music, absolute garbage films, absolute garbage research, etc., just everywhere, to the point where finding anything legitimately good will be extremely difficult. Netflix is already halfway there without AI.
 
I think the less outright bleak, but plausible and still bad, eventuality of widespread AI use is just mountains and mountains of absolute garbage content. Absolute garbage music, absolute garbage films, absolute garbage research, etc., just everywhere, to the point where finding anything legitimately good will be extremely difficult. Netflix is already halfway there without AI.

AI summaries based on AI-generated answers
 
[Image: ai-prediction.jpg]


The techbros are not alright.

Duh, they'll take up sniffing glue.

These sorts of predictions are always immature and pointless.

I rarely drink anymore but that's more a personal choice than anything else (having spent exactly zero seconds knowingly with a chat-bot). But by the time someone loses their second or third job to "AI", I can understand them taking up any bad habit.
 
There are a few going round that could be fakes, but there's literally no way to tell the difference between a totally fake AI answer and an AI answer that the AI itself has made up.



I think we need a new law in the vein of Poe's Law, which (effectively) states that without a clear indication of intent it's impossible to tell the difference between parody and a genuine expression of extreme views.

I propose the name "Dick's Law" in honour of regular reality wrangler and robot-invoker Philip K. Dick, to read: "Without evidence to the contrary, it's impossible to discern an inaccurate answer genuinely created by a large language model AI from a parody thereof."
 
Speaking of copyright infringement or other violations of intellectual property ownership by LLMs, what do you think Peugeot should do about this message from ChatGPT? "There was an error preparing the conversation. Please try again. APIError 403"

A lame attempt at humor intended for gearheads who know why Porsche's 901 was renamed 911.
 
What tasks would you give an AI assistant?

Suppose you have a general AI embodied in a human-like robot that is programmed to carry out your suggested tasks. Let's propose some limitations on the intelligence of this entity such that it can't really innovate new technologies or solve problems that humans have not tackled. So you couldn't ask it to solve climate change or to derive a unifying theory of physics, but you could ask it to do things that humans can do, such as design a circuit board (provided it had the tools) or solve problems whose solutions are easily found.

Let's also assume super-human strength, but not necessarily unlimited strength. So you can't ask it to lift up your house or your car, but you can ask it to move something that weighs a few hundred pounds. Basically, stay realistic in our unrealistic scenario. Let's assume that building one of these requires tooling that you can't make and don't have, so you can't ask it to build another one. Let's also assume your neighbors have one, so you can't just make money by renting it out. We're talking about personal use, not commercial.

There are a nearly infinite number of things I might want such an assistant to do.
  • cook
  • clean
  • build onto or modify the house. I might ask it to do carpet replacement while I sleep for example, or install solar panels.
  • car maintenance
  • yard maintenance
  • In a large city I could see asking it to carry you places. In old age, perhaps you want it to carry you up the stairs.
  • If your space is limited in your house you might want to ask it to re-arrange a room for an activity such as exercise, or even be a trainer throughout exercise. When finished, put the room back together.
  • Fix/sew clothing - "copy this article of clothing in a different color"
  • drive/bicycle my kids to school (no, not teach school... they need peer interaction)
  • plant a veggie garden
  • teach things/answer questions "teach me to paint"
  • bookkeeping/financial analysis
  • shopping


At first I was thinking that it would replace things, like a dishwasher or washing machine. But after thinking about it, I think I have too many tasks for it to replace those items.

Honestly I think step 1 would be cook, step 2 would be clean, and step 3 would be repairs/maintenance. That would probably keep it pretty busy for a while. Step 4 might be tasking it with getting me in better shape.

Alright your turn.
 
What tasks would you give an AI assistant?

Suppose you have a general AI embodied in a human-like robot that is programmed to carry out your suggested tasks.
I'm going on a tangent already, but I have to wonder if the human form (what I'm interpreting human-like to mean) is ideal. I agree that cooking is a good task for a reliable AI to handle, but I'd envision my cook as a self-contained assembly line that could store ingredients and cookware internally. Mobility wouldn't be required, though it could be nice if it could bring food to me, I guess. I'd like the ability to do things for myself if I felt like it, so that might be an advantage to having normal appliances with a completely separate robot operator, but human operation should still be possible with a purpose-built AI cook station. The same applies to a few other points on your list. If AI can drive cars, shouldn't the AI be in the car itself?


Moving on to the question asked, I agree with your list and your priorities. I might add something like asking it to imitate my routine to look for inefficiencies in my home layout or opportunities to improve usage of time or fitness and similar things. For example the AI might notice that I move between floors a lot and suggest moving something from one floor to another to cut down on that.

The AI could also be a physical stand in for me in certain situations. This could range from giving me a remote body when I'm not able to travel to serving as a personal representative if I'm preoccupied with something else. These would also benefit from the human form, especially if the robot was sized to match me. The AI might be able to learn my preferences and then combine this with appropriate sizing to make reliable guesses on what kind of items I might use by testing them in place of me.

Serving as a personal health assistant is another potential role. The AI could spend more time with me than a human doctor and might be able to spot something abnormal quickly. It could also answer basic questions, and if it were to become normal in healthcare, serve as a remote body for a doctor or receive specialized training from a doctor to do simple tasks if needed. Expanding this beyond routine healthcare, the AI could serve as a companion during nature outings. It could provide instant assistance if I ended up lost or injured.
 
I'm going on a tangent already, but I have to wonder if the human form (what I'm interpreting human-like to mean) is ideal. I agree that cooking is a good task for a reliable AI to handle, but I'd envision my cook as a self-contained assembly line that could store ingredients and cookware internally. Mobility wouldn't be required, though it could be nice if it could bring food to me, I guess. I'd like the ability to do things for myself if I felt like it, so that might be an advantage to having normal appliances with a completely separate robot operator, but human operation should still be possible with a purpose-built AI cook station. The same applies to a few other points on your list. If AI can drive cars, shouldn't the AI be in the car itself?
All of this is true, but I wanted the AI assistant to be a general-purpose assistant that could stand in on a wide variety of tasks rather than a specific task. If the hypothetical brought up very specific items that everyone wanted (like cooking) then it might make sense to design a specialist for that task.
It could provide instant assistance if I ended up lost or injured.
Awesome. Yes I like that idea a lot. Essentially a bodyguard or medical safety staff. My AI assistant needs to go skiing with me.
 
No, you guys are all wrong. What you want is for AI to take away all your enjoyable tasks, hobbies & careers!

Design, art, writing, coding and photography is what AI development should focus on; not health, cooking, cleaning or yard maintenance. /end sarcasm.


In truth, an I, Robot-style machine has huge value and potential, especially for the elderly, disabled and time-poor. Personally, I’d rather do my home tasks (even the annoying ones) myself. I find them good for character, fitness, learning and sense of achievement.

There’s the chance of a very slippery slope here, where anything remotely difficult becomes T2’s job, leaving the owner with a pretty vacant, lonely and purposeless life. More time to scroll on their phone, yes, less actual living done in a day.

I know it’s very pessimistic, but I don’t see the wider societal benefit of able-bodied people outsourcing their own chores to a machine.
 
No, you guys are all wrong. What you want is for AI to take away all your enjoyable tasks, hobbies & careers!

Design, art, writing, coding and photography is what AI development should focus on; not health, cooking, cleaning or yard maintenance. /end sarcasm.


In truth, an I, Robot-style machine has huge value and potential, especially for the elderly, disabled and time-poor. Personally, I’d rather do my home tasks (even the annoying ones) myself. I find them good for character, fitness, learning and sense of achievement.

There’s the chance of a very slippery slope here, where anything remotely difficult becomes T2’s job, leaving the owner with a pretty vacant, lonely and purposeless life. More time to scroll on their phone, yes, less actual living done in a day.

I know it’s very pessimistic, but I don’t see the wider societal benefit of able-bodied people outsourcing their own chores to a machine.
Doing the dishes does not provide me with purpose. If anything, chores present me with time confetti that prevents a longer, more personally fulfilling hobby.
 