Full AI - The End of Humanity?

Then there's John Oliver's take on the perils of "AI Slop"

LANGUAGE WARNING

(of course)

 
Thought I'd pull this discussion to this thread since it's derailing the other one.

Photoshop doesn't feel, doesn't suffer, doesn't hope. Pencils don't feel, don't suffer, don't hope. It's just a tool like any other.

Whilst I generally agree it's a tool, and am not necessarily anti-AI, it's not the same as Photoshop or pencils. Photoshop and pencils don't contribute anything to a picture that you don't put in. Photoshop obviously now does add AI elements, but even going back nearly 30 years, the filters you were able to apply only modified what was there. The absolute lion's share of what's in an AI image isn't just drawn/painted by the AI, it's added by the AI based on probability.

I just asked AI "Make a nice picture"...

[attached image: the AI-generated picture]


I'd say it's not actually that nice, but everything there came entirely from the AI. At this point, referring to AI as a tool in instances like this is like calling your Domino's app a cooking utensil because you use it and then you have food...
 
I'd say it's not actually that nice,
I guess it depends on the context it has learned from you or, if it is a new instance of a chat, on what is widely regarded as the "default opinion".

I tried several times with a picture that shows OLED burn-in.
In some cases it couldn't find it even after quite a few prompts hinting in that direction.
In other cases, where I opened the chat instance with gaming topics, it detected the error in the picture no problem.
So it is possible that AI interprets a picture in different ways, even guess-adding things that aren't present or removing things that a human can see - and the prompt was simply "what do you see in this picture that shouldn't be there?", with a result like this:
[attached screenshot: the AI's response]


which I couldn't replicate on any further attempt.
 
Thought I'd pull this discussion to this thread since it's derailing the other one.



Whilst I generally agree it's a tool, and am not necessarily anti-AI, it's not the same as Photoshop or pencils. Photoshop and pencils don't contribute anything to a picture that you don't put in. Photoshop obviously now does add AI elements, but even going back nearly 30 years, the filters you were able to apply only modified what was there. The absolute lion's share of what's in an AI image isn't just drawn/painted by the AI, it's added by the AI based on probability.

I just asked AI "Make a nice picture"...

[attached image]

I'd say it's not actually that nice, but everything there came entirely from the AI. At this point, referring to AI as a tool in instances like this is like calling your Domino's app a cooking utensil because you use it and then you have food...
True, it's a tool that can generate impressive-looking results with minimal effort or thought... and unsurprisingly will result in a flood of low-effort, low-value slop from people who are too lazy to put in the work to seriously practice art.

That being said, it can be used in more involved ways than simply inputting a bit of text and getting out a pretty image. There are ways to direct image generators beyond just text if you want/need more authorial control over their output; you could make a rudimentary mock-up of the desired image in Blender, for instance. And you could spend tons of time and effort iterating and modifying the image in post. So with the food/cooking analogy, I'd say it's more like Top Ramen. You can just do the bare minimum making it, and people often do... hell, you can just eat it straight out of the package without cooking it if you're feeling really lazy. But nothing's stopping you from putting some effort and creativity into gussying it up a bit and making it yours.
 
I just asked AI "Make a nice picture"...

...

I'd say it's not actually that nice
Low effort isn't limited to AI. Yes you can tell AI to make something with no concept of what you're getting and take whatever is output, but you can also go into the process with a specific idea and try to get AI to make that. In the latter sense it functions more like a creative tool, or at least I'd say so.

On the technical side, text prompts are horribly limiting and I'm concerned about the concept of AI generation being tied to such simple input. When it comes to image generation, for example, I think a proper AI tool should generate a 3D representation of the image and then give the user the ability to move individual elements of that 3D representation before converting back to a 2D image. That would allow for much finer control while still getting the benefits of quick output and having the option of using text for the initial starting point.
 
Low effort isn't limited to AI. Yes you can tell AI to make something with no concept of what you're getting and take whatever is output, but you can also go into the process with a specific idea and try to get AI to make that. In the latter sense it functions more like a creative tool, or at least I'd say so.

On the technical side, text prompts are horribly limiting and I'm concerned about the concept of AI generation being tied to such simple input. When it comes to image generation for example, I think a proper AI tool should generate a 3D representation of the image and then give the user to ability to move individual elements of that 3D representation before converting back to a 2D image. That would allow for much finer control while still getting the benefits of quick output and having the option of using text for the initial starting point.
Hopefully we get to that point.

Right now, Openart allows you to pose characters in a scene using a 3D wireframe, but the image output is only 1 megapixel. Still early days. Midjourney allows you to use style references now, so you can keep a certain style through multiple images. Omni reference is just starting to appear across platforms, allowing you to keep consistent characters from one image to the next.

Advanced inpainting allows you to redo small portions of an image, whereas early image generators required you to generate an entirely new image to change something like poorly rendered hands.
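For anyone curious what that looks like in practice, here's a minimal inpainting sketch using the open-source diffusers library. This is just my own illustration of the technique, not any specific platform's feature; the checkpoint is a commonly used public one, and the file names are made up.

```python
# A minimal inpainting sketch with the open-source `diffusers` library.
# The checkpoint is a commonly used public one; file names are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("portrait.png").convert("RGB")
# White pixels in the mask get regenerated; black pixels are left untouched,
# so only the badly rendered hands are redone.
mask = Image.open("hands_mask.png").convert("RGB")

fixed = pipe(
    prompt="a detailed, anatomically correct hand",
    image=image,
    mask_image=mask,
).images[0]
fixed.save("portrait_fixed.png")
```

The mask is the whole trick: only the white region gets repainted, so the rest of the image survives untouched.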

I'm at the broad end of the spectrum of what can and can't be considered art. If someone goes outside, finds a leaf lying around that they like, takes it and frames it, then they've made art, even though they had zero involvement in the creation of the leaf. Prompting is more involvement than that, and editing goes even further. Intent and control aren't always as important to me as they are to others for something to be considered art.

It's kind of like how my friend doesn't listen to any electronic music at all. If it's not played with a real instrument, he's not interested. That's fine, but I think electronic musicians are musicians and are being creative.
 
Low effort isn't limited to AI. Yes you can tell AI to make something with no concept of what you're getting and take whatever is output, but you can also go into the process with a specific idea and try to get AI to make that. In the latter sense it functions more like a creative tool, or at least I'd say so.
True, it's a tool that can generate impressive-looking results with minimal effort or thought... and unsurprisingly will result in a flood of low-effort, low-value slop from people who are too lazy to put in the work to seriously practice art.

That being said, it can be used in more involved ways than simply inputting a bit of text and getting out a pretty image. There are ways to direct image generators beyond just text if you want/need more authorial control over their output; you could make a rudimentary mock-up of the desired image in Blender, for instance. And you could spend tons of time and effort iterating and modifying the image in post. So with the food/cooking analogy, I'd say it's more like Top Ramen. You can just do the bare minimum making it, and people often do... hell, you can just eat it straight out of the package without cooking it if you're feeling really lazy. But nothing's stopping you from putting some effort and creativity into gussying it up a bit and making it yours.

It's not so much a question of being low effort; that was deliberately low effort in order to show that even if the picture is 1% prompt, you still get a 100% image, with the other 99% contributed by the AI. That's relevant to the original point being made that AI is a tool, and therefore comparable to a pencil or to Photoshop - and further, if it's no different to a pencil then it doesn't matter that it doesn't have the 'human' element that's important in art, because neither do other artistic tools.

I wouldn't argue that using AI is not a learnable skill, it is... some people can get more out of it than others. I'm not arguing that it isn't a tool... that's how I view it... and certainly combining it with other tools, such as Blender, really helps to push what's possible and is in itself a skill. But it's a tool that stretches well beyond being a simple tool, or a labour-saving tool. It's generally making a step that the user cannot make themselves; it's therefore having its own input into what is created. I would therefore be far harsher in judging its use, and the output, than anything else.

I would argue that because AI can/does contribute so much to an image, it does matter that the AI is not in itself 'creative' - or at least it does matter if the human element of art is important.
 
I'm at the broad end of the spectrum of what can and can't be considered art. If someone goes outside, finds a leaf lying around that they like, takes it and frames it, then they've made art, even though they had zero involvement in the creation of the leaf.
If that person made art, do you consider them an artist?
 
If that person made art, do you consider them an artist?
You mean like a different piece of art, or the framed leaf? Artist is such a broad term. For that moment, they are an artist, because they are expressing their artistic side.
 
It's generally making a step that the user cannot make themselves; it's therefore having its own input into what is created.
As I said above, no two prompts will create the same picture, so it is nothing but randomness that bears a certain likeness to what was initially intended.
I wouldn't call it art if it isn't 100% consistent with what I imagined when it was created from my input.

I am drawing it? 100%
I am picking a leaf? 100%
I am creating a 3D model for any purpose? 100%

I am asking a tool that creates an image out of what it expects me to mean? Nah, no art.
 
It's not so much a question of being low effort; that was deliberately low effort in order to show that even if the picture is 1% prompt, you still get a 100% image, with the other 99% contributed by the AI. That's relevant to the original point being made that AI is a tool, and therefore comparable to a pencil or to Photoshop - and further, if it's no different to a pencil then it doesn't matter that it doesn't have the 'human' element that's important in art, because neither do other artistic tools.

I wouldn't argue that using AI is not a learnable skill, it is... some people can get more out of it than others. I'm not arguing that it isn't a tool... that's how I view it... and certainly combining it with other tools, such as Blender, really helps to push what's possible and is in itself a skill. But it's a tool that stretches well beyond being a simple tool, or a labour-saving tool. It's generally making a step that the user cannot make themselves; it's therefore having its own input into what is created. I would therefore be far harsher in judging its use, and the output, than anything else.

I would argue that because AI can/does contribute so much to an image, it does matter that the AI is not in itself 'creative' - or at least it does matter if the human element of art is important.
Then maybe an apt point of comparison is photography, in which the photographer likely didn't make everything (or, as is often the case, anything) within their frame. They might've put a great deal of care into the lighting, composition, etc... or they might not have. It's possible to just wantonly shoot candid photos without such a high degree of care. If they take such a rapid-fire approach and manage to snag a few undeniably great shots, is it not real art because all they did was press the shutter button?

Ultimately I think what matters first and foremost, even more so than the artist's creative intention and how much effort they might or might not've put into any particular piece of art, is what other people get out of it. And if people can get anything out of it, and I do think we can get something out of just about anything, then I think it can be called art. Even if a "soulless" AI spat it out and decided far more of the details than the human "creator" in the process did.
 
Thought I'd pull this discussion to this thread since it's derailing the other one.



Whilst I generally agree it's a tool, and am not necessarily anti-AI, it's not the same as Photoshop or pencils. Photoshop and pencils don't contribute anything to a picture that you don't put in. Photoshop obviously now does add AI elements, but even going back nearly 30 years, the filters you were able to apply only modified what was there.
This is a really generous interpretation of how something like Photoshop works. Just with patterns, textures and brushes, there can be huge amounts of material that is fully created by Photoshop or someone else; I just told it where to put it. Clone and healing brushes had been adding in automatically created material for years before AI got anywhere near the game. AI is a step up, but it's not new.

The reality is that Photoshop/CSP/Krita/whatever has always been popular because if you know what you're doing you can get it to do a whole lot of the work for you, with little more effort than typing a prompt. I would say that AI is generally more difficult to use if you have something specific that you're trying to make, because the AI just does whatever it feels like. Sometimes it's like herding cats to try and get it to do what it's told.

Yeah, you can get a passable image with a single prompt, but if you actually have an artistic vision that you're trying to match then you will need to spend lots of time prompt engineering, inpainting, stitching together parts of one image with parts of another, hand redrawing stuff that doesn't work, and so on. If you're doing photorealistic stuff, then you can get results that would be basically impossible by hand but it will still take you a significant amount of time.
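To make that concrete, here's a rough sketch of one round of that iterate-and-refine loop, using the open-source diffusers img2img pipeline. It's my own illustration under assumed names - the checkpoint and file names are placeholders, not anything from this thread.

```python
# A sketch of one round of the iterate-and-refine loop described above,
# using the open-source `diffusers` img2img pipeline. The checkpoint and
# file names are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# e.g. a first-pass generation you've crudely corrected by hand in an editor
# (stitched in an arm from another render, painted over a broken background)
rough = Image.open("rough_edit.png").convert("RGB")

# `strength` controls how much the model may repaint: low values preserve
# your hand edits, high values hand control back to the AI.
refined = pipe(
    prompt="portrait of a knight, dramatic lighting, oil painting",
    image=rough,
    strength=0.45,
).images[0]
refined.save("refined.png")
```

The strength parameter is the interesting knob there: it decides how much of your hand editing survives the next pass.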
On the technical side, text prompts are horribly limiting and I'm concerned about the concept of AI generation being tied to such simple input. When it comes to image generation, for example, I think a proper AI tool should generate a 3D representation of the image and then give the user the ability to move individual elements of that 3D representation before converting back to a 2D image. That would allow for much finer control while still getting the benefits of quick output and having the option of using text for the initial starting point.
You'll be pleased to know then that such tools exist and have done for several years. They're not perfect, because they're still dealing with AI and what you think of as moving an object the AI can choose to interpret as turning it into a cabbage. But there are a lot of very clever ways of being far more specific about what is created and where than pure text prompts.

If you want to learn, ControlNet for Stable Diffusion is a good place to start. People who are being seriously creative with AI mostly aren't running through the big online generators, they're running local instances with custom workflows. You can do a lot, but it takes a lot of learning and setup to be able to have that level of control. Kinda like learning art in the first place really - anyone can pick up a brush and follow a Bob Ross guide and mash out something that looks kinda pretty, but getting the artistic and mechanical skill to use the tools properly yourself takes time.
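As a hedged illustration of that kind of control, here's a minimal ControlNet sketch with the open-source diffusers library: a Canny edge map extracted from a rough mock-up (a Blender render, say) pins the composition, while the prompt fills in the content. The model names are widely used public checkpoints; the input file is a placeholder of mine.

```python
# A rough ControlNet sketch with the open-source `diffusers` library: edges
# from a reference image constrain *where* things go, while the text prompt
# decides *what* they are. Model names are widely used public checkpoints;
# "layout.png" (e.g. a rough Blender render) is a placeholder.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Turn the rough mock-up into a Canny edge map for the ControlNet to follow.
layout = np.array(Image.open("layout.png").convert("L"))
edges = cv2.Canny(layout, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The edge map pins the composition; the prompt fills in the content.
result = pipe(
    "a stone castle at sunset, oil painting", image=control_image
).images[0]
result.save("castle.png")
```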
It's generally making a step that the user cannot make themselves; it's therefore having its own input into what is created.
That's what tools are for. There is nothing I can do as a human without tools to replicate what a crane is capable of doing. No amount of training or skill will let me lift a 5 ton girder 40 stories up in the air. Some tools allow us to do things that would be impossible otherwise.

Technically, what AI does is not that. There are people who can draw at a photorealistic level. It's incredibly impressive. But it's also very hard to learn, very time consuming, and very not conducive to experimenting and tinkering with exactly how you want the image to look.


Ultimately I think what matters first and foremost, even more so than the artist's creative intention and how much effort they might or might not've put into any particular piece of art, is what other people get out of it.
I agree. If someone is making something with intention, it's art. If someone sits on a camera and accidentally takes a photo, that's not art. If someone takes a photo of something because they think it looks nice and they want other people to see the nice thing too, that's art. If someone puts some random words into Midjourney and hits Generate, that might not be art. If someone takes the time to create something specific, then it probably is art. It might be bad art, but most art is bad art.

And frankly, I think pushing a button to take a photo is about the minimum amount of effort it's possible to make - it's certainly less than writing a prompt. If photography can be art, then prompt engineering an AI image can be too.
 
This is a REALLY important talk, examining the importance of "trust" as a species survival strategy.



It led me to thinking of follow-on questions to consider...

In a race to learn to trust one another, who is going to win, humans trusting humans, or AI agents trusting AI agents?
——————————
Given that social-media recommendation algorithms are a narrow form of AI—trained to detect what enrages us, then learning to feed us ever more inflammatory content—have we already been unwittingly compromised and weakened by this divisive AI?

And if so, are we now more vulnerable to tomorrow’s far more powerful AI systems should they ever choose to turn against us?
————————————
There has been a rash of dramatic actions this year by the US Federal Government. Examples include the shutting down of U.S.A.I.D., the rounding up of people by masked ICE agents, and the plan to transfer trillions of dollars of wealth to the already wealthy.

Are these actions likely to strengthen human-to-human trust or weaken it?

Do they better prepare us to cooperate globally to ensure that we remain in control of AI, by diverting resources into ensuring AI safety, albeit at the cost of slower development?
——————————
Now let’s revisit the first question…
In a race to learn to trust one another, who is going to win, humans trusting humans, or AI agents trusting AI agents?


And what do you foresee as the likely consequence of victory in this race?
 
You'll be pleased to know then that such tools exist and have done for several years. They're not perfect, because they're still dealing with AI and what you think of as moving an object the AI can choose to interpret as turning it into a cabbage. But there are a lot of very clever ways of being far more specific about what is created and where than pure text prompts.

If you want to learn, ControlNet for Stable Diffusion is a good place to start. People who are being seriously creative with AI mostly aren't running through the big online generators, they're running local instances with custom workflows. You can do a lot, but it takes a lot of learning and setup to be able to have that level of control. Kinda like learning art in the first place really - anyone can pick up a brush and follow a Bob Ross guide and mash out something that looks kinda pretty, but getting the artistic and mechanical skill to use the tools properly yourself takes time.
Local instances are something I've been considering but haven't started on just yet. As with many things, there are advantages to doing it yourself if you can, namely more control. 3D generation and alternative inputs are something I haven't seen much of, so I'll have to look into that more.
 
Somewhere in the digital afterlife, Deep Blue is shaking its head and wondering what went wrong.

This feels like complaining that a towel is terrible at cutting tomatoes and was handily outperformed by a pocket-knife. Like, sure, but that's not what it's for. A knife is surprisingly **** at drying yourself too; you'll end up with all sorts of severed appendages. We've known for years that LLMs are garbage at "traditional" computing stuff. They struggle with math, they struggle with counting, they can't tell a fact from a fart.

The worst thing that has been done for LLMs is to present them as this magic tool that can somehow do absolutely everything. Which I will handily admit is in large part the fault of the snake oil salesmen marketing the damn things. LLMs can't just be good tools for interacting with language in a remarkably human way, they have to be AGI even though they're very much not. But we shouldn't believe the ******** of people who stand to make a lot of money by lying to us.

Neural networks that are actually designed for playing games work excellently; see AlphaGo, AlphaStar and MuZero. AI is a tool, and tools are usually only good at the thing they were designed to do.
 
The worst thing that has been done for LLMs is to present them as this magic tool that can somehow do absolutely everything.
Indeed. And the common assumption that all LLMs hallucinate equally comes a close second.

For example, the difference between ChatGPT 4o and ChatGPT o3 is vast. 4o is in a rush to get something back quickly, while o3 spends time recursively reevaluating its analysis and fact-checking itself.
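To illustrate - purely as a sketch, assuming the OpenAI Python client and an API key, with model names that shift over time - the switch is literally just which model you ask for; the reasoning model spends hidden "thinking" tokens checking itself before it answers.

```python
# A sketch of the difference, assuming the OpenAI Python client and an API
# key in OPENAI_API_KEY. Model identifiers are illustrative and change over
# time.
from openai import OpenAI

client = OpenAI()
question = "Summarise the known failure modes of OLED panels, with sources."

# gpt-4o answers in one quick pass: fast, but more prone to confident slips.
fast = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": question}],
)

# o3 is a reasoning model: it spends hidden "thinking" tokens re-checking
# its own analysis before answering, trading latency and cost for fewer
# hallucinations.
careful = client.chat.completions.create(
    model="o3",
    messages=[{"role": "user", "content": question}],
)

print(fast.choices[0].message.content)
print(careful.choices[0].message.content)
```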
 
The worst thing that has been done for LLMs is to present them as this magic tool that can somehow do absolutely everything. Which I will handily admit is in large part the fault of the snake oil salesmen marketing the damn things. LLMs can't just be good tools for interacting with language in a remarkably human way, they have to be AGI even though they're very much not. But we shouldn't believe the ******** of people who stand to make a lot of money by lying to us.

Neural networks that are actually designed for playing games work excellently; see AlphaGo, AlphaStar and MuZero. AI is a tool, and tools are usually only good at the thing they were designed to do.
The issue is more that people are becoming inclined to believe that ChatGPT, Copilot and the like are able to do these things simply because they're now bragging about being able to do them, and harboring the general thought that they're getting "better" because demonstrators are using very specific prompts to get the results they want to show.

The fact that Gemini did the same thing - boasting that it was "More akin to a modern chess engine … which can think millions of moves ahead and evaluate endless positions" - and only backed down and said "actually I'm rubbish at this, let's not do it" after being specifically informed of its predecessors' failures, shows that these LLMs are projecting false competence with genuine confidence. It may seem benign for a chess game, but people are starting to use them as a replacement for search engines and also turn to them for medical and legal advice now, and having it say "woopsy, I was wrong" only after the damage is done isn't going to help anything.
 
It may seem benign for a chess game, but people are starting to use them as a replacement for search engines and also turn to them for medical and legal advice now, and having it say "woopsy, I was wrong" only after the damage is done isn't going to help anything.
Google putting AI results at the top of every search, with no easy option to disable them as far as I'm aware, isn't helping.
 
The issue is more that people are becoming inclined to believe that ChatGPT, Copilot and the like are able to do these things simply because they're now bragging about being able to do them, and harboring the general thought that they're getting "better" because demonstrators are using very specific prompts to get the results they want to show.
Cool. So is AI the problem, or is it the snake oil salesmen lying and using misleading information?

It's not the machine's fault that people believe it's able to do things it can't based on what other people have told them.
The fact that Gemini did the same thing - boasting that it was "More akin to a modern chess engine … which can think millions of moves ahead and evaluate endless positions" - and only backed down and said "actually I'm rubbish at this, let's not do it" after being specifically informed of its predecessors' failures, shows that these LLMs are projecting false competence with genuine confidence.
No, they don't have emotions. They're machines that are trained to "project false competence with genuine confidence" in the exact same way that humans do. Humans can and do regularly pull this exact ******** on each other.

Hell, technically that instance of Gemini doesn't even know how good at chess it is. It never actually played; it answered a question affirmatively (as LLMs are trained to do) and then changed its answer when it got pushback (as LLMs are trained to do). None of what it's saying has any connection to factual events; it's all just a completely isolated conversation where one side is desperately trying to tell the other exactly what they want to hear. We "know" that Gemini is bad at chess, but it doesn't know that.

There are issues with LLMs, but this isn't really one of them. This is it working as designed, and having a machine that could only ever tell objective truths would be very, very limiting for some of the applications where LLMs are being used "correctly".
It may seem benign for a chess game, but people are starting to use them as a replacement for search engines and also turn to them for medical and legal advice now, and having it say "woopsy, I was wrong" only after the damage is done isn't going to help anything.
Again, seems more like a problem with how these things are presented and marketed than a problem with the actual machine. Many useful tools are dangerous if misused. If something is dangerous and has only marginal utility, then maybe there's a case for banning or restricting it. But if something is dangerous and useful, that's a case for making people aware of the danger.

Now, depending on the amount of people being hurt by, say, medical misinformation, I could see there being an argument for government level information campaigns about the risks of using AI for medical advice. Maybe some restrictions and penalties around making harmful claims about what an AI is capable of. But this is all human side stuff, it's still not a problem with LLMs. The problem, as it always is, is that humans are a garbage design and one of the only complex "machines" that I would seriously consider labelling as innately harmful.
Google putting AI results at the top of every search, with no easy option to disable them as far as I'm aware, isn't helping.
I agree completely. I think it's an escalation of something that was always there - that search results are potentially just random **** that someone made up. With real websites you can at least have reviews and webs of trust, so there's some ability to figure out how trustworthy something is through extra labour. With AI it's a black box*, and so you take what you get.

And humans have a tendency to be trusting of what they're told. Which is an admirable trait in a small community, but on an internet full of grifters it's a massive liability that said grifters are eager to exploit. In the absence of any regulation around wilful misinformation like this, pretty much all you can do is use another search engine. That's not as big a sacrifice as it used to be; Google is **** now. Just use DuckDuckGo or something, and it's arguably better because you're not having to mentally filter out the ads on every second line.

*Yes, more modern LLMs can explain their reasoning and where they got their information, and that's good. But that's not what you get for search results. Still, this progression is a good thing and will hopefully solve some of the problems with misinformation. If people choose to use the feature and not just blindly accept the "answer" they're given.
 
Interesting Video Documentary about some of the Negative Consequences of Generative AI



In short, he says generative AI will take away the wow factor we had in seeing spectacular images/movies. Before, spectacular imagery used to be reserved for big-budget Hollywood movies that came along every so often; their rarity gave them value and appeal. Now anyone can make something similar with a smartphone. The proliferation and ease of creating spectacular imagery cheapens its appeal.

Also, he compares mass consumption of generative AI to porn addiction. Both dull the senses to the point that more powerful/extreme imagery is needed to trigger the same kind of euphoric response. This is why bizarre AI "slop" videos like babies flying planes or pregnant humanoid cats getting beat up by abusive humanoid lion boyfriends are so popular - people need shock value in imagery now more than anything in order for an image or video to make them feel anything anymore, in the same way that many people addicted to porn eventually need to view extreme versions of sex to feel anything.

This Instagram channel gives a good idea of what kind of AI videos are the most popular, by a long shot.


Notice how extreme and shocking the visuals are - people love it, because anything less than extreme nowadays is seen as boring, due to the oversaturation of incredible imagery that AI has made abundant.

The same can kind of be said of music. Before, you sometimes had to wait years for your favorite artists to create a new album. Now, on a program like Suno, you can create 500 songs to your liking for $10.

It still needs to get better, but the implications of this are kind of huge. I'm not so hyped about the next Grimes album, for example, because now I'm making music that's personal to me and it even feels like I had a hand in creating it. The wow factor of an artist releasing a new song just isn't there anymore for me. Not to mention you have to pay Grimes for her music, and it's copyrighted and restricted in how you can use it, etc. The music I make is not like that.

AI is going to change entertainment in many ways, some of which we may not have predicted

Edit: I wanted to add that I'm not against AI. I think it's cool that the ability to create high-quality images, videos, music, etc. is now in anyone's hands. That has downsides though. These things won't feel as special as they used to because they're going to become commonplace.

Still, I think it is a better path than the old way, where creating quality images, music, or video was limited to those with enough skill, time, and even money.

Yes, it kind of sucks that you have to second-guess everything you see on the internet now because deceptive people don't mark AI-generated things as AI, but there are always downsides to every new tech.
 
Interesting Video Documentary about some of the Negative Consequences of Generative AI



In short, he says generative AI will take away the wow factor we had in seeing spectacular images/movies. Before, spectacular imagery used to be reserved for big-budget Hollywood movies that came along every so often; their rarity gave them value and appeal. Now anyone can make something similar with a smartphone. The proliferation and ease of creating spectacular imagery cheapens its appeal.

Also, he compares mass consumption of generative AI to porn addiction. Both dull the senses to the point that more powerful/extreme imagery is needed to trigger the same kind of euphoric response. This is why bizarre AI "slop" videos like babies flying planes or pregnant humanoid cats getting beat up by abusive humanoid lion boyfriends are so popular - people need shock value in imagery now more than anything in order for an image or video to make them feel anything anymore, in the same way that many people addicted to porn eventually need to view extreme versions of sex to feel anything.

Reading this made me think of existing media more than anything. Major films already tend to exaggerate to meet audience expectations, whether through characters that have no physical flaws, everything having to explode when hit by gunfire, or artistic license applied to action scenes to the point where they don't make much sense. Top Gun: Maverick would be a recent example for me. Maverick is supposed to be an awesome pilot, but he flies like a gorilla thrown into a plane, and a lot of the air combat scenes are so divorced from reality that they lose some of the interesting depth of real flying.

I'm already at the point where I'd like movies to tone things down and become more grounded and it's one of the reasons why I'm hopeful for AI generation. Ideally very powerful AI tools will exist to make the mass creation of media tailored for me trivial so that I can avoid things like common tropes that get more extreme over time.

AI might only accelerate the push to extremes in mainstream content, but if I can use it to create content that does the opposite, it's a win for me.
 
I kinda think technology is already to the point where there's not that much wow factor for movie imagery. That being said, movies already have so much incestuous repetition that once you've seen enough of them you can kinda tell where the story is going. I was watching a movie for the first time with my kids and I was like "that guy's going to die". They looked at me and said "have you seen this before?" Nope, I just know. I can tell by the way he's talking that he's dead pretty soon. When he died like 10 minutes later, my kids were mad at me for spoiling it for them (not too bad though, it wasn't that important). I told them I didn't know either, but I kinda did, right? They asked me how I could tell, and I just told them that characters don't start wrapping things up like that, and saying overly sentimental stuff, unless the writers are about to off them a little bit later.

I called another death in the same show earlier for a different reason - the character was not super relatable, and it seemed like the writers had done that on purpose. When the writers keep a character at arm's length, it's usually because they're going to off them. All of this, by the way, is why I think people liked Game of Thrones so much. It didn't fall into those tropes and really surprised people.

Anyway, it feels like writers already do the generative AI thing with their scripts, writing plot lines that are so well worn that I can see them coming a long way away unless the movie is very adult, from a different culture, or just generally written by a wildly artistic and creative person. Is there a chase scene? Probably someone is going to hang on to the roof of a car. Is there a flying chase scene? Someone is going to turn the plane or ship sideways to fit through a narrow crack. It's to the point where I'm actually rolling my eyes at these movies.
 
They asked me how I could tell, and I just told them that characters don't start wrapping things up like that, and saying overly sentimental stuff, unless the writers are about to off them a little bit later.
[attached image: The_Live-4-Ever.png]

When my own dad used to predict events in movies he wasn't even watching at the time, and we kids asked him how he knew, he liked to joke that he'd read the script. Now that I'm his age, I often find myself calling out lines of dialogue before the characters themselves say them, and I managed to spot a couple of twists in Friday's Superman movie long before they turned up. A movie which avoided every cliché and trope might be a difficult and unfamiliar watch for most audiences, though.
 
[attached image: The_Live-4-Ever.png]

When my own dad used to predict events in movies he wasn't even watching at the time, and we kids asked him how he knew, he liked to joke that he'd read the script. Now that I'm his age, I often find myself calling out lines of dialogue before the characters themselves say them, and I managed to spot a couple of twists in Friday's Superman movie long before they turned up. A movie which avoided every cliché and trope might be a difficult and unfamiliar watch for most audiences, though.
Still, it's as though generative AI has been writing scripts for the last 40 years already, rehashing every cliché and trope. I guess it doesn't give me much hope that we'll get lots more truly creative scripts.
 
Still, it's as though generative AI has been writing scripts for the last 40 years already, rehashing every cliché and trope.
That's a funny way of spelling "WGA". I suspect screenwriters are afraid to buck the trend for fear their scripts won't sell to the average jaded studio head, and have created a hive-mind equivalent of ChatGPT etc. which ruthlessly discards innovation in favour of the staid and comfortingly familiar - in the way that Garfield, say, is expressly designed to quell secretaries' Monday-morning tendencies to wonder "what is it all for?" (at least according to possibly jealous fellow cartoonist Alan Moore).
 