Then there's John Oliver's take on the perils of "AI Slop"
LANGUAGE WARNING
(of course)
Photoshop doesn't feel, doesn't suffer, doesn't hope. Pencils don't feel, don't suffer, don't hope. It's just a tool like any other.
I guess it depends on the context it has learned from you, or if it is a new instance of a chat, what is widely regarded as "default opinion".

> I'd say it's not actually not that nice,
True, it's a tool that can generate impressive-looking results with minimal effort or thought... and unsurprisingly will result in a flood of low effort, low value slop from people who are too lazy to put in the work to seriously practice art.

Thought I'd pull this discussion to this thread since it's derailing the other one.
Whilst I generally agree it's a tool, and am not necessarily anti-AI, it's not the same as Photoshop or Pencils. Photoshop and pencils don't contribute anything to a picture that you don't put in. Photoshop obviously now does add AI elements, but even going back nearly 30 years, filters that you were able to apply only modified what was there. The absolute lion's share of what's in an AI image isn't just drawn/painted by AI, it's added by AI based on probability.
I just asked AI "Make a nice picture"...
View attachment 1461881
I'd say it's not actually not that nice, but everything there came entirely from the AI. At this point referring to AI as a tool in instances like this, is like calling your Domino's App a cooking utensil, because you use it, and then you have food...
Low effort isn't limited to AI. Yes you can tell AI to make something with no concept of what you're getting and take whatever is output, but you can also go into the process with a specific idea and try to get AI to make that. In the latter sense it functions more like a creative tool, or at least I'd say so.

> I just asked AI "Make a nice picture"...
> ...
> I'd say it's not actually not that nice
Hopefully we get to that point

> Low effort isn't limited to AI. Yes you can tell AI to make something with no concept of what you're getting and take whatever is output, but you can also go into the process with a specific idea and try to get AI to make that. In the latter sense it functions more like a creative tool, or at least I'd say so.

On the technical side, text prompts are horribly limiting and I'm concerned about the concept of AI generation being tied to such simple input. When it comes to image generation for example, I think a proper AI tool should generate a 3D representation of the image and then give the user the ability to move individual elements of that 3D representation before converting back to a 2D image. That would allow for much finer control while still getting the benefits of quick output and having the option of using text for the initial starting point.
> Low effort isn't limited to AI. Yes you can tell AI to make something with no concept of what you're getting and take whatever is output, but you can also go into the process with a specific idea and try to get AI to make that. In the latter sense it functions more like a creative tool, or at least I'd say so.
> True, it's a tool that can generate impressive-looking results with minimal effort or thought... and unsurprisingly will result in a flood of low effort, low value slop from people who are too lazy to put in the work to seriously practice art.
That being said, it can be used in more involved ways than simply inputting a bit of text and getting out a pretty image. There are ways to direct image generators beyond just text if you want/need more authorial control over their output; you could make a rudimentary mock-up of the desired image in Blender, for instance. And you could spend tons of time and effort iterating and modifying the image in post. So with the food/cooking analogy, I'd say it's more like top ramen. You can just do the bare minimum making it, and people often do... hell, you can just eat it straight out of the package without cooking it, if you're feeling really lazy. But nothing's stopping you from putting some effort and creativity into gussying it up a bit and making it yours.
If that person made art, do you consider them an artist?

> I'm on the broad spectrum of what can and can't be considered art. If someone goes outside, finds a leaf laying around that they like, takes it and frames it, then they've made art, even though they had zero involvement in the creation of the leaf.
You mean like a different piece of art or the framed leaf? Artist is such a broad term. For that moment, they are an artist, because they are expressing their artistic side.

> If that person made art, do you consider them an artist?
As I said above, no two prompts will create the same picture, so it is nothing but randomness that will have a certain likeness to what was initially intended.

> It's generally making a step that the user cannot make themselves, it's therefore having its own input into what is created.
Then maybe an apt point of comparison is photography, in which the photographer likely didn't make everything (or, often the case, anything) within their frame. They might've put a great deal of care into the lighting, composition, etc... or they might not have. It's possible to just wantonly shoot candid photos without such a high degree of care. If they take such a rapid fire approach and manage to snag a few undeniably great shots, is it not real art because all they did was press the shutter button?

> It's not so much a question of being low effort, that was deliberately low effort in order to show that even if the picture is 1% prompt, you still get a 100% image, with the other 99% contributed by the AI. That's relevant to the original point being made that AI is a tool, and therefore comparable to a pencil or to Photoshop - and further if it's no different to a pencil then it doesn't matter that it doesn't have the 'human' element that's important in art - because neither do other artistic tools.
I wouldn't argue that using AI is not a learnable skill; it is... some people can get more out of it than others. I'm not arguing that it isn't a tool... that's how I view it... and certainly combining it with other tools, such as Blender, really helps to push what's possible and in itself is a skill. But it's a tool that stretches well beyond being a simple tool, or a labour saving tool. It's generally making a step that the user cannot make themselves; it's therefore having its own input into what is created. I would therefore be far harsher on judging its use, and the output, than anything else.
I would argue that because AI can/does contribute so much to an image, it does matter that the AI is not in itself 'creative' - or at least it does matter if the human element of art is important.
This is a really generous interpretation of how something like Photoshop works. Just with patterns, textures and brushes there can be huge amounts of material that is fully created by Photoshop or someone else; I just told it where to put it. Clone and healing brushes had been adding in automatically created material for years before AI got anywhere near the game. AI is a step up, but it's not new.

> Thought I'd pull this discussion to this thread since it's derailing the other one.
> Whilst I generally agree it's a tool, and am not necessarily anti-AI, it's not the same as Photoshop or pencils. Photoshop and pencils don't contribute anything to a picture that you don't put in. Photoshop obviously now does add AI elements, but even going back nearly 30 years, filters that you were able to apply only modified what was there.
You'll be pleased to know then that such tools exist and have done for several years. They're not perfect, because they're still dealing with AI, and what you think of as moving an object the AI can choose to interpret as turning it into a cabbage. But there are a lot of very clever ways of being far more specific about what is created and where than pure text prompts.

> On the technical side, text prompts are horribly limiting and I'm concerned about the concept of AI generation being tied to such simple input. When it comes to image generation for example, I think a proper AI tool should generate a 3D representation of the image and then give the user the ability to move individual elements of that 3D representation before converting back to a 2D image. That would allow for much finer control while still getting the benefits of quick output and having the option of using text for the initial starting point.
That's what tools are for. There is nothing I can do as a human without tools to replicate what a crane is capable of doing. No amount of training or skill will let me lift a 5 ton girder 40 stories up in the air. Some tools allow us to do things that would be impossible otherwise.

> It's generally making a step that the user cannot make themselves, it's therefore having its own input into what is created.
I agree. If someone is making something with intention, it's art. If someone sits on a camera and accidentally takes a photo, that's not art. If someone takes a photo of something because they think it looks nice and they want other people to see the nice thing too, that's art. If someone puts some random words into Midjourney and hits Generate, that might not be art. If someone takes the time to create something specific, then it probably is art. It might be bad art, but most art is bad art.

> Ultimately I think what matters first and foremost, even more so than the artist's creative intention and how much effort they might or might not've put into any particular piece of art, is what other people get out of it.
Local instances are something I've been considering but haven't started on just yet. As with many things there are advantages to doing it yourself if you can, namely more control. 3D generation and alternative inputs are something that I haven't seen much of, so I'll have to look into that more.

> You'll be pleased to know then that such tools exist and have done for several years. They're not perfect, because they're still dealing with AI and what you think of as moving an object the AI can choose to interpret as turning it into a cabbage. But there are a lot of very clever ways of being far more specific about what is created and where than pure text prompts.
If you want to learn, ControlNet for Stable Diffusion is a good place to start. People who are being seriously creative with AI mostly aren't running through the big online generators, they're running local instances with custom workflows. You can do a lot, but it takes a lot of learning and setup to be able to have that level of control. Kinda like learning art in the first place really - anyone can pick up a brush and follow a Bob Ross guide and mash out something that looks kinda pretty, but getting the artistic and mechanical skill to use the tools properly yourself takes time.
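To make the "more inputs than text" idea above concrete, here is a minimal, purely illustrative sketch. The class and field names are hypothetical (this is not the real Stable Diffusion or ControlNet API); it only shows the shape of a workflow where a text prompt is pinned down by extra conditioning inputs, such as the depth maps and pose skeletons that ControlNet-style tools accept.

```python
# Illustrative sketch only: names here are made up for this example,
# not the API of diffusers/ControlNet. It models the difference between
# a bare text prompt and a prompt constrained by extra control inputs.
from dataclasses import dataclass, field


@dataclass
class GenerationJob:
    prompt: str                      # the familiar text input
    negative_prompt: str = ""        # things to steer away from
    # extra, non-text conditioning: (kind, source file) pairs
    control_inputs: list = field(default_factory=list)
    denoising_strength: float = 0.75  # how far the model may stray from the controls

    def describe(self) -> str:
        """Summarise what constrains this generation."""
        kinds = ", ".join(kind for kind, _ in self.control_inputs) or "text only"
        return f"{self.prompt!r} conditioned on: {kinds}"


# A text-only job versus one pinned down by a Blender depth render and a pose
plain = GenerationJob(prompt="a nice picture")
guided = GenerationJob(
    prompt="a nice picture",
    control_inputs=[("depth_map", "scene_from_blender.png"),
                    ("openpose", "figure_pose.json")],
)
print(plain.describe())
print(guided.describe())
```

The point of the extra inputs is exactly the authorial control discussed above: the same prompt produces wildly different images on its own, but far more predictable ones when the composition is fixed by a control image.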
This feels like complaining that a towel is terrible at cutting tomatoes and was handily outperformed by a pocket-knife. Like sure, but that's not what it's for. A knife is surprisingly **** at drying yourself too, you'll end up with all sorts of severed appendages. We've known for years that LLMs are garbage at "traditional" computing stuff. They struggle with math, they struggle with counting, they can't tell a fact from a fart.

> Somewhere in the digital afterlife, Deep Blue is shaking its head and wondering what went wrong.
> Google’s Gemini refuses to play Chess against the Atari 2600: Warned that ChatGPT and Copilot had already lost, it stopped boasting and packed up its pawns (www.theregister.com)
Indeed. And the common assumption that all LLMs hallucinate equally comes a close second.

> The worst thing that has been done for LLMs is to present them as this magic tool that can somehow do absolutely everything.
The issue is more that people are becoming inclined to believe that ChatGPT, Copilot and the like are able to do these things simply because they're now bragging about being able to do them, and harboring the general thought that they're getting "better" because demonstrators are using very specific prompts to get the results they want to show.

> The worst thing that has been done for LLMs is to present them as this magic tool that can somehow do absolutely everything. Which I will handily admit is in large part the fault of the snake oil salesmen marketing the damn things. LLMs can't just be good tools for interacting with language in a remarkably human way, they have to be AGI even though they're very much not. But we shouldn't believe the ******** of people who stand to make a lot of money by lying to us.
Neural networks that are actually designed for playing games work excellently, see AlphaGo, AlphaStar and MuZero. AI is a tool, and tools are usually only good at the thing they were designed to do.
Google putting AI results at the top of every search, with no easy option to disable them as far as I'm aware, isn't helping.

> It may seem benign for a chess game, but people are starting to use them as a replacement for search engines and also turn to them for medical and legal advice now, and having it say "woopsy, I was wrong" only after the damage is done isn't going to help anything.
Two ways:

> Google putting AI results at the top of every search, with no easy option to disable them as far as I'm aware, isn't helping.
Cool. So is AI the problem, or is snake oil salesmen lying and using misleading information the problem?

> The issue is more that people are becoming inclined to believe that ChatGPT, Copilot and the like are able to do these things simply because they're now bragging about being able to do them, and harboring the general thought that they're getting "better" because demonstrators are using very specific prompts to get the results they want to show.
No, they don't have emotions. They're machines that are trained to "project false competence with genuine confidence" in the exact same way that humans do. Humans can and do regularly pull this exact ******** on each other.

> The fact that Gemini did the same thing, boast that it was "More akin to a modern chess engine … which can think millions of moves ahead and evaluate endless positions", and only backed down and said "actually I'm rubbish at this, let's not do it" after being specifically informed of its predecessors' failures, is showing that these LLMs are projecting false competence with genuine confidence.
Again, seems more like a problem with how these things are presented and marketed than a problem with the actual machine. Many useful tools are dangerous if misused. If something is dangerous and has only marginal utility, then maybe there's a case for banning or restricting it. But if something is dangerous and useful, that's a case for making people aware of the danger.

> It may seem benign for a chess game, but people are starting to use them as a replacement for search engines and also turn to them for medical and legal advice now, and having it say "woopsy, I was wrong" only after the damage is done isn't going to help anything.
I agree completely. I think it's an escalation of something that was always there - that search results are potentially just random **** that someone made up. With real websites you can at least have reviews and webs of trust, so there's some ability to figure out how trustworthy something is through extra labour. With AI it's a black box*, and so you take what you get.

> Google putting AI results at the top of every search, with no easy option to disable them as far as I'm aware, isn't helping.
Interesting Video Documentary about some of the Negative Consequences of Generative AI
In short he says generative AI will take away the wow factor we had in seeing spectacular images/movies. Before, spectacular imagery used to be reserved for big budget Hollywood movies that came along every so often; their rarity gave them value and appeal. Now anyone can make something similar with a smartphone. The proliferation and ease it takes to create spectacular imagery cheapens its appeal.
Also, he compares mass consumption of generative AI to porn addiction. Both dull the senses to the point that more powerful/extreme imagery is needed to trigger the same kind of euphoric response. This is why bizarre AI "slop" videos like babies flying planes or pregnant humanoid cats getting beat up by abusive humanoid lion boyfriends are so popular - people need shock value in imagery now more than anything in order for an image or video to make them feel anything anymore, in the same way many addicted to porn eventually need to view extreme versions of sex to feel anything.
They asked me how I could tell, and I just told them that characters don't start wrapping things up like that, and saying overly sentimental stuff, unless the writers are about to off them a little bit later.
Still, it's as though generative AI has been writing scripts for the last 40 years already, rehashing every cliché and trope. I guess it doesn't give me much hope that we'll get lots more truly creative scripts.

View attachment 1465156
When my own dad used to predict events in movies he wasn't even watching at the time and we kids asked him how he knew he liked to joke that he'd read the script. Now I'm his age I often find myself calling out lines of dialogue before the characters themselves say them and managed to spot a couple of twists in Friday's Superman movie long before they turned up. A movie which avoided every cliché and trope might be a difficult and unfamiliar watch for most audiences, though.
That's a funny way of spelling "WGA". I suspect screenwriters are afraid to buck the trend for fear their scripts won't sell to the average jaded studio head, and have created a hive-mind equivalent to ChatGPT etc. which ruthlessly discards innovation in favour of the staid and comfortingly familiar, in the way that Garfield, say, is expressly designed to quell secretaries' Monday morning tendencies to wonder "what is it all for?" (at least according to possibly jealous fellow cartoonist Alan Moore).

> Still, it's as though generative AI has been writing scripts for the last 40 years already, rehashing every cliché and trope.