Full AI - The End of Humanity?

OpenAI just announced ChatGPT plugins



OpenAI plugins connect ChatGPT to third-party applications. These plugins enable ChatGPT to interact with APIs defined by developers, enhancing ChatGPT's capabilities and allowing it to perform a wide range of actions.
Plugins can allow ChatGPT to do things like:
  • Retrieve real-time information; e.g., sports scores, stock prices, the latest news, etc.
  • Retrieve knowledge-base information; e.g., company docs, personal notes, etc.
  • Perform actions on behalf of the user; e.g., booking a flight, ordering food, etc.
The AI model acts as an intelligent API caller. Given an API spec and a natural-language description of when to use the API, the model proactively calls the API to perform actions. For instance, if a user asks, "Where should I stay in Paris for a couple nights?", the model may choose to call a hotel reservation plugin API, receive the API response, and generate a user-facing answer combining the API data and its natural language capabilities.
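
To make that flow concrete, here's a toy sketch in Python of what the loop might look like. To be clear, this is not OpenAI's actual plugin code; the plugin spec, the function names, and the canned hotel data are all made up purely to illustrate the "model decides to call an API, gets the response back, and writes the answer" pattern described above.

```python
# Toy illustration of the "intelligent API caller" loop, NOT OpenAI's real
# plugin implementation. The spec, functions, and data below are invented.

PLUGIN_SPEC = {
    "name": "hotel_search",  # hypothetical plugin
    "description": "Find hotels in a city. Use when the user asks where to stay.",
    "params": ["city", "nights"],
}

def model_decides_call(user_message):
    """Stand-in for the model reading the plugin description and choosing to call it."""
    if "stay" in user_message.lower():
        return {"city": "Paris", "nights": 2}
    return None  # no plugin needed; the model would answer directly

def call_plugin(args):
    """Stand-in for the real HTTP request to the developer's API."""
    return [{"hotel": "Hôtel Exemple", "price_per_night": 180, "rating": 4.5}]

def model_writes_answer(api_result):
    """Stand-in for the model combining the API data with its language abilities."""
    best = api_result[0]
    return (f"For a couple of nights in Paris, {best['hotel']} "
            f"(~€{best['price_per_night']}/night, rated {best['rating']}) is a solid pick.")

user_message = "Where should I stay in Paris for a couple nights?"
args = model_decides_call(user_message)
if args is not None:
    answer = model_writes_answer(call_plugin(args))
else:
    answer = "(the model answers from its own knowledge, no plugin call)"
print(answer)
```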

Over time, we anticipate the system will evolve to accommodate more advanced use cases.

This essentially allows you to use plain natural language to guide ChatGPT into interacting with third-party services and doing what you want.

There's already a plugin for Wolfram Alpha


Using Wolfram Alpha, ChatGPT is able to correctly answer all the "nontrivial calculations" that it would fail at before.

[Screenshot: ChatGPT answering a calculation using the Wolfram Alpha plugin]


Before, you had to have specific knowledge and expertise to know what to look up, and the tools needed to get what you want. Now, using natural language, you can get the same result with zero prior knowledge or skill in a fraction of the time.
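
To make the "before" concrete, this is roughly what getting an answer out of Wolfram|Alpha programmatically looks like, using its Short Answers API and the requests library. The APP_ID is a placeholder you would have to sign up for, and the query is just an example; the point is that you had to know the tool existed, get a key, and phrase the query yourself.

```python
import requests  # pip install requests

# Query Wolfram|Alpha's Short Answers API directly (APP_ID is a placeholder).
APP_ID = "YOUR_APP_ID"

resp = requests.get(
    "https://api.wolframalpha.com/v1/result",
    params={"appid": APP_ID, "i": "distance from the Earth to the Moon right now"},
)
print(resp.text)  # a single short plain-text answer
```

With the plugin, ChatGPT does the equivalent of this behind the scenes while you just ask the question in plain English.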

Here's a comparison between using just ChatGPT and using ChatGPT with Wolfram Alpha:

[Screenshots: the same question answered by ChatGPT alone vs. ChatGPT with the Wolfram Alpha plugin]


At the end of the blog post, Wolfram briefly discusses his history developing Wolfram|Alpha and his outlook on the future:

[...] And now ChatGPT + Wolfram can be thought of as the first truly large-scale statistical + symbolic “AI” system. In Wolfram|Alpha (which became an original core part of things like the Siri intelligent assistant) there was for the first time broad natural language understanding—with “understanding” directly tied to actual computational representation and computation. And now, 13 years later, we’ve seen in ChatGPT that pure “statistical” neural net technology, when trained from almost the entire web, etc. can do remarkably well at “statistically” generating “human-like” “meaningful language”. And in ChatGPT + Wolfram we’re now able to leverage the whole stack: from the pure “statistical neural net” of ChatGPT, through the “computationally anchored” natural language understanding of Wolfram|Alpha, to the whole computational language and computational knowledge of Wolfram Language.

[...] I see what’s happening now as a historic moment. For well over half a century the statistical and symbolic approaches to what we might call “AI” evolved largely separately. But now, in ChatGPT + Wolfram they’re being brought together. And while we’re still just at the beginning with this, I think we can reasonably expect tremendous power in the combination—and in a sense a new paradigm for “AI-like computation”, made possible by the arrival of ChatGPT, and now by its combination with Wolfram|Alpha and Wolfram Language in ChatGPT + Wolfram.

Tim Urban drew these diagrams in his 2015 article on AI (which I highly recommend reading, especially now: https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html [language warning]):

[Tim Urban's diagrams of exponential technological progress]


It seems like we've reached a point with ChatGPT where progress has jumped exponentially in the past few years. GPT-3 was released in 2020. DALL-E 2 and GPT-3.5 were released in 2022. Bing Chat was released last month, giving ChatGPT access to the internet. GPT-4 was released just a week ago. And now they've announced plugins today, allowing it to work with third-party services.

As Wolfram noted in his article, all of today's technologies were pioneered back in the '40s and '50s, when statistical models like the perceptron and early neural networks were first conceptualized. It has taken the roughly 80 years since then for processing power and data to catch up and make all of this possible.

There's a lot of other good commentary on these Hacker News posts



Here's a comment I like:

[Screenshot of a Hacker News comment]



There's a research paper by a bunch of Microsoft researchers evaluating GPT-4's capabilities as an AGI, titled "Sparks of Artificial General Intelligence: Early experiments with GPT-4". It's 95 pages long, excluding references and appendices. I plan to read some of it this week, but here is what the abstract says:

We contend that (this early version of) GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google's PaLM for example) that exhibit more general intelligence than previous AI models. We discuss the rising capabilities and implications of these models. We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4's performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system. In our exploration of GPT-4, we put special emphasis on discovering its limitations, and we discuss the challenges ahead for advancing towards deeper and more comprehensive versions of AGI, including the possible need for pursuing a new paradigm that moves beyond next-word prediction.

 
I’m intrigued by all this stuff, but I can’t wrap my small brain around a lot of it.

At what point will any of this become useful to someone like myself (layman)? and in what form? There has to be a point where it’s filtered into something an everyday person uses but doesn’t realise and takes it for granted.

Take touchscreen phones. Back when I registered for GT Planet I had an old Nokia. It had a monochrome screen, basic keyboard, no camera and the height of its fancy features was you could program your own ringtone and change the outer front cover for something new.

Touchscreens, multi megapixel cameras, gigabytes of storage and gaming on a small device you could put in your pocket seemed like an age away. Now we all take it for granted and any phone which doesn’t have these features or improve upon them is seen as inferior.

At what point do general AI of this scale become common place and in what form? Serious question, I have no idea and really can’t see what benefit to me it all is, yet.
 
I’m intrigued by all this stuff, but I can’t wrap my small brain around a lot of it.

At what point will any of this become useful to someone like myself (layman)?
It is right now. At the very least get familiar with interacting with existing AI models to prepare for more advanced ones. People are already using it to write code, letters, as a substitute for imagination, etc. Don't wait, you'll just fall behind.

 
At what point will any of this become useful to someone like myself (layman)? and in what form? There has to be a point where it’s filtered into something an everyday person uses but doesn’t realise and takes it for granted.
I think it can be useful right now. The beauty, and horror, of ChatGPT is that it works with natural language. You can write to it like you'd write to another human, with the same slang and words you use every day, without having to rephrase things or jump through hoops in specific ways for a computer to understand.

A very basic example people use, and one that's mentioned in Tom Scott's video (which is very good and highly recommended), is writing emails. You can tell ChatGPT what sort of email you would like to write (formal, informal, etc.), and within seconds it will spit out paragraphs that you can quickly proofread and send.

Or you can give ChatGPT a complex article and have it summarize it for you. You don't need any background knowledge, and you can keep chatting with it as it walks you through the article.
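
For anyone curious about scripting the same thing rather than using the chat window, here's a minimal sketch using the openai Python library's ChatCompletion interface (as it exists at the time of writing). The API key and prompt are placeholders you would swap for your own.

```python
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder; requires an OpenAI account

# Describe the email you want in plain English; the prompt is the whole "interface".
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": ("Write a short, formal email to my landlord asking to "
                     "renew my lease for another year.")},
    ],
)
print(response.choices[0].message.content)  # proofread before sending!
```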

(Side note: these AI systems, at the moment, can be confidently incorrect about a lot of things. You definitely should do your own research when asking it high risk questions such as medical, legal, financial advice, etc.)

At the same time, ChatGPT can be, and already has been, integrated into things people use every day. Some of the headlining companies are Microsoft, Goldman Sachs, Slack, Coca-Cola, Duolingo, Shopify, Discord, etc. A lot of these are still quite rudimentary integrations.


At what point do general AI of this scale become common place and in what form? Serious question, I have no idea and really can’t see what benefit to me it all is, yet.
I think we're so early on the exponential progress curve that we truly don't know how commonplace this will become or in what forms we will be using it. At the moment, we're using ChatGPT for quite simple things: asking it trivial questions, summarizing articles, writing emails, debugging code, etc. We don't actually know what its true capabilities are. We don't know how far we can take it. We no longer know what the future could look like with AI.

As Tom Scott said in his video, this is similar to the beginnings of the internet and Napster. We see the new technology and we know it can be highly capable, but at this point, we truly don't know what we can do with it. We can only see in retrospect where we were and where we were heading.


I use AI regularly at work at this point. It isn't required of me yet, but I think it will be soon.
Out of curiosity, without doxing yourself or revealing too much, could you say how you are using AI at work?
 
I’m intrigued by all this stuff, but I can’t wrap my small brain around a lot of it.

At what point will any of this become useful to someone like myself (layman)? and in what form? There has to be a point where it’s filtered into something an everyday person uses but doesn’t realise and takes it for granted.

Take touchscreen phones. Back when I registered for GT Planet I had an old Nokia. It had a monochrome screen, basic keyboard, no camera and the height of its fancy features was you could program your own ringtone and change the outer front cover for something new.

Touchscreens, multi megapixel cameras, gigabytes of storage and gaming on a small device you could put in your pocket seemed like an age away. Now we all take it for granted and any phone which doesn’t have these features or improve upon them is seen as inferior.

At what point do general AI of this scale become common place and in what form? Serious question, I have no idea and really can’t see what benefit to me it all is, yet.
Help write South Park episodes (maybe)

(Side note: these AI systems, at the moment, can be confidently incorrect about a lot of things. You definitely should do your own research when asking it high risk questions such as medical, legal, financial advice, etc.)
This is definitely true.

While it can pass the USMLE, it does get medical information wrong that any duly diligent doctor should know.
 
Out of curiosity, without doxing yourself or revealing too much, could you say how are you using AI at work?
Basically it's a search algorithm that uses AI to generate good search results without a specific query. Think of it kinda like Google image searching using a seed image.
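
Purely guessing at the shape of that (this is an illustrative sketch of embedding-based similarity search, not the poster's actual system): embed every item as a vector, then rank the catalog by similarity to a "seed" item instead of matching an explicit text query.

```python
import numpy as np

# Toy "search by example": rank items by cosine similarity to a seed item's
# embedding. The random vectors stand in for a real learned embedding model.
rng = np.random.default_rng(0)
catalog = {f"item_{i}": rng.normal(size=128) for i in range(1000)}

def most_similar(seed_vec, catalog, k=5):
    names = list(catalog)
    mat = np.stack([catalog[n] for n in names])
    sims = mat @ seed_vec / (np.linalg.norm(mat, axis=1) * np.linalg.norm(seed_vec))
    top = np.argsort(-sims)[:k]
    return [(names[i], float(sims[i])) for i in top]

seed = catalog["item_42"]  # the "seed image" equivalent
print(most_similar(seed, catalog))
```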
 
Thanks guys for the replies. Much appreciated.

I’m excited to see how quickly AI advances and what we eventually see trickle down into our everyday lives.
 
A lot of people spell out doom and gloom over AI development. In some cases, because AI will take our jerbs, and in other cases, because AI will become sentient and attack. I suppose there's yet another category of concern that is centered around poorly programmed AI doing something awful and being very difficult to stop. I'm actually not super worried about any of those things, and I have laid out reasons for each in this thread.

It occurred to me that there is another threat, and one which I've only just started seriously considering. A rapid increase in technological development.

You might think I'm going to say that this is a problem because we can't adapt culturally, or because it'll be unequal, or something like that. Nope. I'm mostly concerned with what we'll do with the rapid increase in technology. With AI, we could potentially engineer a super virus, or find a way to fundamentally alter spacetime in a cascade (it is possible) and end the entire universe.

Perhaps another thing to be concerned about is the Beggars in Spain scenario (good book, worth a read). Suppose we turn AI on genetic engineering of our species. We may be able to very rapidly do things like eliminate the need for sleep, or eliminate aging. Perhaps we could fundamentally improve the functioning of the brain, memory, or the immune system. Now that might sound amazing. In fact, that might sound exactly like the kind of thing we should turn AI onto solving. But consider what happens when it's not you who benefits. Consider what happens when it's a new generation of superhumans born from synthetic DNA that can't be offered to already existing humans. You and I could find ourselves considered to be an "inferior" species altogether. We've seen, in the past, what happens to even other humans who are considered inherently inferior in some way. I would definitely feel unsettled being surrounded by a new generation of immortal, super smart, untiring superhumans. It'd be great for them, I just hope they're nice.

I'm sure there are many other ways that AI could bring us some paradigm altering technology that we end up ruining our lives with.
 
But consider what happens when it's not you who benefits. Consider what happens when it's a new generation of superhumans. You and I could find ourselves considered to be an "inferior" species altogether.
Without turning to sci-fi too much, the Eugenics Wars spring to mind. I agree AI would be a good candidate for speeding up genetic modification/alteration.

I’m more intrigued to see what AI can do in other areas like designing green solutions to energy, transport and space travel.

If eventually AI does help us solve some of these things (especially energy) then I’m all for it.

My worry is AI in the wrong hands being used to wage war. While humans using cyber attacks can already wreak havoc, an AI would be devastating.
 
Someone connected their brain to GPT-4.

I guess now that GPT-4 allows for plugins that access the internet, it could be considered the first way people can become a robot. I haven't tried the Bing version of GPT-4 yet, but apparently it can create websites, so I guess people can at the very least create websites with their mind. It would be cool if all this could be used with Midjourney to visualize dreams, and maybe someday people could livestream their dreams.
 
A lot of people spell out doom and gloom over AI development. In some cases, because AI will take our jerbs, and in other cases, because AI will become sentient and attack. I suppose there's yet another category of concern that is centered around poorly programmed AI doing something awful and being very difficult to stop. I'm actually not super worried about any of those things, and I have laid out reasons for each in this thread.

It occurred to me that there is another threat, and one which I've only just started seriously considering. A rapid increase in technological development.

You might think I'm going to say that this is a problem because we can't adapt culturally, or because it'll be unequal, or something like that. Nope. I'm mostly concerned with what we'll do with the rapid increase in technology. With AI, we could potentially engineer a super virus, or find a way to fundamentally alter spacetime in a cascade (it is possible) and end the entire universe.

Perhaps another thing to be concerned about is the Beggars in Spain scenario (good book, worth a read). Suppose we turn AI on genetic engineering of our species. We may be able to very rapidly do things like eliminate the need for sleep, or eliminate aging. Perhaps we could fundamentally improve the functioning of the brain, memory, or the immune system. Now that might sound amazing. In fact, that might sound exactly like the kind of thing we should turn AI onto solving. But consider what happens when it's not you who benefits. Consider what happens when it's a new generation of superhumans born from synthetic DNA that can't be offered to already existing humans. You and I could find ourselves considered to be an "inferior" species altogether. We've seen, in the past, what happens to even other humans who are considered inherently inferior in some way. I would definitely feel unsettled being surrounded by a new generation of immortal, super smart, untiring superhumans. It'd be great for them, I just hope they're nice.

I'm sure there are many other ways that AI could bring us some paradigm altering technology that we end up ruining our lives with.
I can see a lot of issues. The problem is that it's basically out of the bag now*, so we're at the “well, I guess we'll just see what happens” phase.

*Even if we were to regulate AI, China won't, and it's too widespread to try to regulate anyway.
 

Researchers into artificial intelligence (AI) have worked for decades on the problem of creating “believable agents” — engaging, human-like entities that appear to have lives of their own.


The latest effort comes from a team at Stanford University and Google who used ChatGPT, a powerful AI tool, to create 25 characters who “live” in a fictional world called Smallville. The researchers supplied the AI with scant information on each agent, including their name, relationships, goals and job. Left to their own devices, the agents behaved in unexpected ways.


They slept, cooked meals and went to work. They formed opinions, made new friendships, nurtured grudges and gossiped. They discussed a forthcoming election, reflected on the past and made plans that required them to co-ordinate with each other.

Joon Sung Park of Stanford, who led the research, said: “I think we’re on a trajectory to creating agents that are so believable in their social environment that people will interact with them in a manner similar to how we interact with each other.”
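
For a sense of how little scaffolding that description implies, here's a toy sketch of the idea. This is not the Stanford/Google implementation, just an illustration of seeding an agent with a sparse profile and turning that profile plus recent memories into a prompt that a language model could act on.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # Scant seed information, per the article: name, job, goals, relationships.
    name: str
    job: str
    goals: list[str]
    relationships: dict[str, str]
    memories: list[str] = field(default_factory=list)

    def build_prompt(self) -> str:
        relations = ", ".join(f"{k} ({v})" for k, v in self.relationships.items())
        recent = "; ".join(self.memories[-3:]) or "none"
        return (f"You are {self.name}, a {self.job}.\n"
                f"Goals: {'; '.join(self.goals)}\n"
                f"Relationships: {relations}\n"
                f"Recent memories: {recent}\n"
                "What do you do next? Answer in one sentence.")

alice = Agent(
    name="Alice", job="cafe owner",
    goals=["open the cafe on time", "plan the town election party"],
    relationships={"Bob": "regular customer"},
    memories=["Bob mentioned the election is next week."],
)

print(alice.build_prompt())
# In the real study, a prompt along these lines would go to ChatGPT, and the
# reply would be added to the agent's memories, driving the next simulation step.
```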
 
I don't really delve in messing around with AI programs like chatGPT, but I have been thinking about how advanced AI is getting.

granted, this is a dream, but how real it was caused me to snap back into reality. :lol:

Imagine this: AI becomes so advanced, that we'll have something akin to VEGA from DOOM 2016 and Eternal. Silly, I know, but think about it. Within the next decade, maybe even sooner, we'll have AI programs capable of using voice that are so far advanced, the line between real and fake will literally become nonexistent unless you're involved in the development of said program.

To summarize my dream in a few short sentences, it's like a Twilight Zone episode. A friend called me up whom I hadn't talked to since high school. Did some catching up. Wanted to hang out, but kept getting put on the backburner, then found out said friend died a few years back. Confronted the voice on the phone, and they revealed to me they were an AI program.

This is just me projecting, but we're at the "Novelty" phase right now. Like how the Automobile and Motorcycle were viewed when they first came out.
 
I don't really delve in messing around with AI programs like chatGPT, but I have been thinking about how advanced AI is getting.

granted, this is a dream, but how real it was caused me to snap back into reality. :lol:

Imagine this: AI becomes so advanced, that we'll have something akin to VEGA from DOOM 2016 and Eternal. Silly, I know, but think about it. Within the next decade, maybe even sooner, we'll have AI programs capable of using voice that are so far advanced, the line between real and fake will literally become nonexistent unless you're involved in the development of said program.

To summarize my dream in a few short sentences, it's like a Twilight Zone episode. A friend called me up whom I hadn't talked to since high school. Did some catching up. Wanted to hang out, but kept getting put on the backburner, then found out said friend died a few years back. Confronted the voice on the phone, and they revealed to me they were an AI program.

This is just me projecting, but we're at the "Novelty" phase right now. Like how the Automobile and Motorcycle were viewed when they first came out.
You should read Silver Screen by Justina Robson.

It toys with the idea of a ghost/human consciousness in AI form being part of a bigger AI system.

Edit

Sorry double post, thought I’d quoted it in the post above. I’m up past my normal bed time.
 
TruthGPT seems more like a cryptocurrency scam than useful AI, but it might generate enough controversy to at least get more people noticing AI and finding AI that can actually help.
 