I don’t think that toil makes good art, but choices do. Artists make choices and those choices add up to something unique and tasteful. The choices are influenced by constraints and the desire to communicate something. There is a “why”, not just a “what”.
I find that AI art misses both of those, and that makes it feel soulless. No decisions were made by a thinking, feeling human. There is no “why”.
I find there is often very much a why with lots of the AI art I see. I often use it to try and visualize a dream I or one of my friends had. When I get the prompt just right, it's sometimes striking how close I get to what I was seeing.
My friends will often use it for humorous situations that are happening to us in the moment: over-exaggerations, or putting one of us in a crazy situation. That's about as meaningful as using a GIF keyboard, sure, but it gets a pretty good laugh out of me sometimes. There are recurring ones that have become a type of shorthand for us.
I've used it for highly personalized Christmas cards and anniversary invites for other people, things they just otherwise would never have made because they don't have $500 to pay an artist and clip art is too impersonal to bother with.
I've turned my mother's drawing into a beautiful, realistic picture that she absolutely adored. I turned a baby picture from decades ago into a movie of a kid laughing, which made my mother instinctively grab the phone from my hand with a big grin.
To me, AI art is only about the why. It brings entertainment, comfort, hilarity and agency. I would never charge anybody for it; I'm just doing it because I find it a neat set of tools to mess around with. And the tools aren't just asking ChatGPT to make something. You can spend months refining the craft of it, similar to learning something like Photoshop.
I am professionally interested in the narrative structure of texts, as a teacher and researcher, so I ask this out of genuine interest. I often see posts like this in discussions of LLM use and I wonder where they are coming from. They read like ad copy from a marketing department rather than a non-professional expressing personal feelings. Are you in marketing, or perhaps was this post generated by an LLM?
Neither. And this is another bias I keep seeing. Anyone who actually uses the technology in the way I do must be using it to write (I don't) or must be a shill. Nope, I've just been using it for various things since the early days, and I wonder why people are so allergic to being able to express themselves more clearly and with more agency. Not everything has to be about money. LLMs and image generators can just be fun.
I like that my writing is considered well-structured / grammatically correct enough to be LLM-worthy, though. I often think I make several obvious errors when I write off-the-cuff internet posts (and probably did).
I do think it's, like, really long, but he makes some good points.
Yeah, he kind of lied about the length.
I would let it slide, but it's obvious that the content is optimised for smartphones/tablets too, so it's kind of painful to scroll on a computer.
As such, I sentence this post to be ignored (by me).
The irony is that an AI would do a great job summarising it too...
The actual title is "A cartoonist's review of AI art".
This is true, but I thought the interpreted title might be something that could pique interest. I guess I could change it, but I don't think it's gonna hit the front page anyway.
UPDATED: I asked them to change it; not that it will make much of a difference, but you've got a point.
Ok, changed now. (Submitted title, for those who care to track such things, was "A Long Screed About AI Art, by Matthew Inman".)
Thanks!
So tired of pointing out that there is something in between "marketeer story tellers" and "real artists".
Bots in the Hall, Neural Viz, GossipGoblin, even Joel Haver's animations. All made with genAI, and all undeniably creative works that could not have been created, at least not in that time frame, by a single person without it.
I love Matt's work and often agree with him, but the "no heart" take is just too harsh.
People being able to express their imaginations more clearly always seems like a net positive to me. You can use it without heart and for the exact wrong reasons, for sure.
The keyboard example is telling, because that's basically what a bunch of our favorite musicians now do. The drum track for that song you're listening to? That's a piece of software, not drums.
And plenty of people complain about "overtuned" music with robotic drums and overcorrected vocals that drain all the imperfections out of the sound.
(Here's a musician's take on it: https://youtu.be/BDJF4lR3_eg?si=kKJVF2hqSd-TOEzX)
"Being able to express your imagination more clearly" is fine for what it is but that's not how people use it. His point about the least talented kinds of people using it to pretend to art is dead accurate. People think it's snobbishness about art being this superior untouchable thing, and that is absolutely not it. You see the same thing with people going on about how SWEs will be obsolete, too, because something something code generation.
Making anything worthwhile involves thinking and choices and experience, and that sort of person consider that too hard and boring and just want something that looks worthwhile.
The idea that you can't both use AI heavily and also think, make choices, and draw on experience is the part I think I disagree with. People sometimes use AI to make thoughtless slop, but that doesn't mean everyone does.
I think we're going to come into contact with brilliant projects that people have spent years making, and there would just be no way they could've done them without AI. It will be a human thinking it through, making thousands of choices, just like the OP said about Jurassic Park. You can, and basically have to, guide the AI like crazy to get any worthwhile output.
Indeed. I think the issue is that so much of what people see is just the 'slop', which can be produced in such volume by people who really don't care much about the outcome. It distorts the perception of it a lot.
(Similarly with the comment about 'it's about as hard as using Google'. Yeah, if you're trying about as hard as googling something, then the result is probably not going to be very interesting, but that's more effort than went into most AI-generated images that you'll see on the internet.)
AI art mostly doesn't make use of the medium. There are things you can do with computer vision and user interaction that just aren't possible with traditional media.
I remember the game PlaneShift, which has an interesting not-for-profit development model, and which uses natural language processing to handle text. The possibilities here are really impressive! You could fully voice player characters, parse user intent with better accuracy, etc.
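For what it's worth, the "parse user intent" part doesn't need much machinery. Here's a minimal Python sketch of the idea; call_llm is a hypothetical stand-in for whatever model or API you'd actually wire up, and the verb list and example output are invented purely for illustration:

    import json

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in: connect this to whatever chat model or API you actually use.
        raise NotImplementedError

    VERBS = ["attack", "talk", "trade", "move", "examine"]  # the game's known commands

    def parse_player_intent(utterance: str) -> dict:
        """Map free-form player text onto a known verb plus an optional target."""
        prompt = (
            "You are the command parser for a fantasy RPG.\n"
            f"Allowed verbs: {', '.join(VERBS)}.\n"
            'Reply with JSON only, e.g. {"verb": "talk", "target": "blacksmith"}.\n'
            f"Player said: {utterance!r}"
        )
        reply = call_llm(prompt)
        try:
            intent = json.loads(reply)
        except (json.JSONDecodeError, TypeError):
            intent = {}
        if intent.get("verb") not in VERBS:  # never trust the model's output blindly
            intent = {"verb": "examine", "target": None}
        return intent

    # parse_player_intent("could you ask the guy at the forge to fix my sword?")
    # -> {"verb": "talk", "target": "blacksmith"}   (illustrative, not a guaranteed output)

The important bit is the validation at the end: the model proposes, the game engine decides what actually happens.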
There's all sorts of stuff you could do. Introspecting the models themselves gave us DeepDream. Setting early models up to talk to each other caused headlines about AI inventing its own language. Leaning into the unhinged nature of the medium interests me, as does displaying "thought" graphically.
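To unpack the "introspecting the models" bit: the core DeepDream trick is just gradient ascent on the input image so that a chosen layer activates more strongly. Here's a rough sketch assuming PyTorch and torchvision; the layer index, step size, iteration count and filenames are arbitrary choices, and the real thing adds octaves, jitter and normalization on top:

    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    # Pretrained classifier, used only to "look inside", never to classify.
    model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
    LAYER = 20  # which conv layer to amplify; different layers give different textures

    def dream_step(img, lr=0.05):
        img = img.clone().requires_grad_(True)
        x = img
        for i, layer in enumerate(model):
            x = layer(x)
            if i == LAYER:
                break
        x.norm().backward()  # "how strongly does this layer respond to the image?"
        with torch.no_grad():
            g = img.grad
            img = img + lr * g / (g.abs().mean() + 1e-8)  # nudge the image, not the weights
        return img.detach()

    img = T.Compose([T.Resize(512), T.ToTensor()])(Image.open("photo.jpg")).unsqueeze(0)
    for _ in range(30):
        img = dream_step(img)
    T.ToPILImage()(img.squeeze(0).clamp(0, 1)).save("dream.jpg")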
Even back in the ML days, I had high hopes for this sort of thing. Discarding those hopes because of AI slop feels like throwing the baby out with the bathwater. The whole problem is that people take an amazing (but glitchy and imperfect) technology and use it to make bad art in existing fields, instead of pioneering good art in new fields. It's herd mentality from both sides.