this post was submitted on 05 Oct 2023
171 points (89.4% liked)
Technology
14 / 20 here. I dunno why there are so many people, particularly on Reddit, who absolutely hate AI art. Yeah, some of it can look janky or uncanny-valley, but a lot of it looks really damn cool.
And not all of us have the talent to create visual art of our own, or the money to commission pieces from human artists, so text prompting is a much more accessible way to explore our imaginations.
I suspect they hate it not because of any features of the actual images themselves, but for what it means to how society as a whole treats art.
For some it's simply financial. Their career is at stake, an industry that they thought was a stable source of employment is now on the leading edge of a huge shake-up that might not need them at all in the future.
For others it's seen as an attack on their personal self-worth. For years - for generations - there has been a steady drumbeat insistence that art is what makes humans "special." Both specific artists, and humanity in general. It was supposed to be a special skill that we had that set us above the animals and the machines. And now that's been usurped.
It's like the old folk tale of John Henry, the steel-driving man who made a heroic last stand against Skynet's forces in the railroad construction industry. People want to think humans are irreplaceable, and art seemed like a rock-solid anchor for that. Turns out it wasn't.
Spot on!
Agreed, and I sympathize with all the points.
On the financial point, we, as a society, badly need to stop depending on jobs for survival before it's too late. But I know that we're unlikely to change until a lot of people get hurt.
And on the self-worth point, it feels awful to be replaced, even if the money isn't an issue. People take pride in their work and want their work to be celebrated. Yet we're quickly approaching a point where it's going to be very difficult for people to create art by hand that can hold a candle to AI art. Sure, there are still many master artists, but they got where they are through hard work. How many new potential artists will be willing to put in that hard work when any random Joe Blow can generate something better in seconds? Human-made art (from scratch) won't go away, but it's harder to feel good about what you create when it feels like your art has no place anymore.
I suspect that society isn't going to stop depending on jobs for survival until it's too late. That is, it'll only implement something like UBI or an equivalent solution once most jobs have been replaced and there's a legion of the permanently unemployed forcing the issue to be addressed. Unfortunately that just seems to be the way of things; very few problems ever get addressed preemptively.
IMO this isn't really a reason to try to slow down AI, because that will only slow down the eventual UBI-like solution to it. At this point I don't think "change human nature first" is a viable approach.
A lot of Redditors don't even know why they think a certain way; they think that way because everyone else around them thinks that way. There are some legit criticisms of AI art, but most of the time it's just bullshit lip service to artists from people who don't actually care.
Yeah I've had posts deleted on Reddit before because "ew AI art". Like, I'm just trying to share interesting images. I'm not profiting off them in any way. But they take it so personally.
Personally, I have no issue with models made from stuff obtained with explicit consent. Otherwise you're just exploiting labor without consent.
(Also if you're just making random images for yourself, w/e)
((Also also, text models are a separate debate and imo much worse considering they're literally misinformation generators))
Note: if anybody wants to reply with "actually AI models learn like people so it's fine", please don't. No they don't. Bugger off. Here, have a source: https://arxiv.org/pdf/2212.03860.pdf
This paper is just about stock photos and video game art with enough duplicates or variations that they weren't cut from the training set. The repeated images appeared frequently enough to cause overfitting, which is something we already knew. That doesn't really prove whether diffusion models learn like humans or not. Not that I think they do.
Sure, it's not proof, but it gives a good starting point. Non-overfitted images would still have this effect (to a lesser extent), and this would never happen to a human. And it's not like the prompts were the image labels, the model just decided to use the stock image as a template (obvious in the case with the painting).
This is a bold claim to make with no evidence, when every trained image accounts for less than one byte of data in the model. Even the tiniest image files contain many thousands of bytes, and one byte isn't even enough to store a single character of text outside basic ASCII; accented Latin letters and most symbols take two or more bytes in UTF-8.
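The "less than one byte per trained image" figure is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch, assuming a ~2 GB fp16 Stable Diffusion checkpoint and the commonly cited ~2.3 billion image-text pairs of LAION-2B-en (both numbers are assumptions for illustration, not figures from this thread):

```python
# Back-of-the-envelope check: how many bytes of model weight exist
# per training image? Both constants below are assumptions.
TRAINING_IMAGES = 2_300_000_000   # assumed dataset size (~LAION-2B-en)
CHECKPOINT_BYTES = 2 * 10**9      # assumed ~2 GB fp16 checkpoint

bytes_per_image = CHECKPOINT_BYTES / TRAINING_IMAGES
print(f"~{bytes_per_image:.2f} bytes of weights per training image")
# Under these assumptions the result is below one byte, far smaller
# than even a heavily compressed thumbnail of any single image.
```

Under these assumed numbers the model simply cannot be storing its training images wholesale; only heavily duplicated images leave enough of a footprint to be reproduced, which is the overfitting case the paper above documents.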
There are plenty of artists that get stuck with same-face. Like Sam Yang for instance. Then there are the others who can't draw disabled people or people of color. If it isn't a beautiful white female character, they can't do it. It can take a lot of additional training for people to break out of their rut, some don't.
I'm not going to tell you that latent diffusion models learn like humans, but they are still learning. Here, have a source: https://arxiv.org/pdf/2306.05720.pdf
I recommend reading this article by Kit Walsh, a senior staff attorney at the EFF if you haven't already. The EFF is a digital rights group who most recently won a historic case: border guards in the US now need a warrant to search your phone.
This guy also does a pretty good job of explaining how latent diffusion models work. You should give this a watch too.