this post was submitted on 16 Sep 2023
65 points (88.2% liked)
Games
Kind of. The AI doesn't go out and find or do anything; people include images in its training data. So it's the human that's finding the art and plugging it in, most likely through automated processes that scrape massive numbers of images and add them to the corpus used for training.
Sorry, this is wrong. You definitely can train an AI to produce works that are very nearly a direct copy. How "original" the AI's output is depends on the size of the corpus it was trained on. If you train the AI on (or heavily weight it toward) just a couple of works from one specific artist, it's going to output stuff that's very similar. If you train it on 1,000,000 images from many different artists, the output isn't really going to resemble any specific artist's style or work.
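A toy sketch of the corpus-size point (this is not how real image generators work; it's a deliberately crude caricature where a "model" is nothing but the pixel-wise average of its training corpus — names like `train_toy_model` are made up for illustration). With a one-image corpus the "model" memorizes that image exactly; with a large, varied corpus its output isn't close to any single training image:

```python
import random

def train_toy_model(corpus):
    # Toy "generative model": its output is just the pixel-wise average of
    # the training corpus. Real models are vastly more complex, but the
    # caricature shows how corpus size affects memorization.
    n = len(corpus)
    return [sum(pixels) / n for pixels in zip(*corpus)]

def distance(a, b):
    # Euclidean distance between two flattened "images".
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

random.seed(0)

# Corpus of a single 64-"pixel" image: the model reproduces it exactly.
one_image = [random.random() for _ in range(64)]
assert train_toy_model([one_image]) == one_image

# Corpus of 1,000 varied images: the output resembles none of them.
corpus = [[random.random() for _ in range(64)] for _ in range(1000)]
output = train_toy_model(corpus)
nearest = min(distance(img, output) for img in corpus)
print(f"nearest training image is {nearest:.2f} away")  # well above zero
```

The averaging stands in for whatever a real model learns: the smaller and more homogeneous the corpus, the more the "learned" output collapses onto the specific training examples.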
That's why the company emphasized they weren't training the AI to replicate a specific artist's (or design company's, etc.) works.
As a general statement: no, I am not. You're constructing an overly specific scenario to make it true. Sure, if I take one image and train a model on just that one image, it'll produce that exact same image. But that's no different from me pressing copy and paste on a single image file, and the latter does the job a whole lot better. This entire counterargument is nothing more than pedantry.
Furthermore, if I'm giving the AI such specific instructions, then I am the one replicating the art. It doesn't matter whether I trace the existing art with a pencil, use Photoshop, or build a purpose-made AI model: I am the one doing it.
You didn't qualify your original statement. Either it has the capability or it doesn't: you said it didn't, and it actually does.
Not really. It isn't far-fetched that a company would see an artist they'd like to use, not want to pay that artist's fees, and so train an AI on the artist's portfolio to churn out very similar artwork. Training on one or two images is obviously contrived, but a situation like the one I just described is very plausible.
So this isn't true. What you said isn't accurate under the literal interpretation, and it doesn't hold under the more general interpretation either. The person higher in the thread called it stealing; in that case it wasn't, but AI models do have the capability to do what most people would call "stealing" or infringing on an artist's rights. I think recognizing that distinction is important.
Yes, that's kind of the point. A lot of people (me included) would be comfortable calling that sort of thing stealing or plagiarism. That's why the company in the OP took pains to say they weren't doing it.