this post was submitted on 23 Apr 2024
908 points (97.1% liked)


Instagram is profiting from several ads that invite people to create nonconsensual nude images with AI image generation apps, once again showing that some of the most harmful applications of AI tools are not hidden in the dark corners of the internet, but are actively promoted to users by social media companies unable or unwilling to enforce their policies about who can buy ads on their platforms.

While parent company Meta’s Ad Library, which archives ads on its platforms, who paid for them, and where and when they were posted, shows that the company has previously taken down several of these ads, many ads that explicitly invited users to create nudes, and some of the ad buyers, were still up until I reached out to Meta for comment. Some of these ads were for the best-known nonconsensual “undress” or “nudify” services on the internet.

[–] UnderpantsWeevil 9 points 8 months ago* (last edited 8 months ago) (1 children)

So we still don't know what these people actually look like naked.

I think the offense is in the use of their facial likeness far more than their body.

If you took a naked super-sized barbie doll and plastered Taylor Swift's face on it, then presented it to an audience for the purpose of jerking off, the argument "that's not what Taylor's tits look like!" wouldn't save you.

Technology is breaking our society

Unregulated advertising combined with a clickbait model for online marketing is fueling this deluge of creepy shit. This isn't simply a "Computers Evil!" situation. It's much more that a handful of bad actors are running Silicon Valley into the ground.

[–] [email protected] 2 points 8 months ago (1 children)

Not so much computers evil! as just acknowledging there will always be malicious actors who will find clever ways to use technology to cause harm. And yes, there's a gathering of folk on 4Chan/b who nudify (denudify?) submitted pictures, usually of people they know, which, thanks to the process, puts them out on the internet. So this is already a problem.

Think of Murphy's Law as it applies to product stress testing. Eventually, some customer is going to come in having broken the part you thought couldn't be broken. Also, our vast capitalist society is fueled by people figuring out exploits in the system that haven't been patched or criminalized (see the subprime mortgage crisis of 2008). So we have people actively looking to utilize technology in weird ways to monetize it. That folds neatly, like paired gears, into looking at how tech can cause harm.

As for people's faces, one of the problems with facial recognition as a security tool (say, when used by law enforcement to track perps) is the high number of false positives. It turns out we look a whole lot like each other, though your doppelganger may be in another state and ten inches taller / shorter. In fact, an old (legal!) way of getting explicit shots of celebrities in the late 20th century was to find a look-alike and get them to pose for a song.

As for famous people, fake nudes have been a thing for a while, courtesy of Photoshop or some other digital photo-editing suite combined with vast libraries of people. Deepfakes have been around since the late 2010s. So even without generative AI (which is still not great for video in motion), there are resources for fabricating content, either explicit material or evidence of high crimes and misdemeanors.

This is why we are terrified of AI getting out of hand, not because our experts don't know what they're doing, but because the companies are very motivated to be the first to get it done, and that means making the kinds of mistakes that cause pipeline leakage on sacred Potawatomi tribal land.

[–] UnderpantsWeevil 2 points 8 months ago (1 children)

This is why we are terrified of AI getting out of hand

I mean, I'm increasingly of the opinion that AI is smoke and mirrors. It doesn't work, and it isn't going to cause some kind of Great Replacement any more than a 1970s Automat could eliminate the restaurant industry.

It's less the computers themselves and more the fear surrounding them that seems to keep people in line.

[–] [email protected] 3 points 8 months ago (3 children)

The current presumption that generative AI will replace workers is smoke and mirrors, though the response by upper management does show the degree to which they would love to replace their human workforce with machines, or replace their skilled workforce with menial laborers doing simpler (though more tedious) tasks.

If this is regarded as them tipping their hands, we might get regulations that serve the workers of those industries. If we're lucky.

In the meantime, the pursuit of AGI is ongoing, and the LLMs and generative AI projects serve to show some of the tools we have.

It's not even that we'll necessarily know when it happens. It's not like we can detect consciousness (or are even sure what consciousness / self awareness / sentience is). At some point, if we're not careful, we'll make a machine that can deceive and outthink its developers and has the capacity of hostility and aggression.

There's also the scenario (suggested by Randall Munroe) that some ambitious oligarch or plutocrat gains control of a system that can manage an army of autonomous killer robots. Normally such people have to contend with a principal cabinet of people who don't always agree with them. (Hitler and Stalin both had to argue with their generals.) An AI can proceed with a plan undisturbed by its inhumane implications.

[–] UnderpantsWeevil 2 points 8 months ago

I can see how increased integration and automation of various systems consolidates power in fewer and fewer hands. For instance, the ability of Columbia administrators to rapidly identify and deactivate student ID cards and lock hundreds of protesters out of their dorms with the flip of a switch was really eye-opening. That would have been far more difficult to do 20 years ago, when I was in school.

But that's not an AGI issue. That's an "everyone's ability to interact with their environment now requires authentication via a central data hub" issue. And it's illusory. Yes, you're electronically locked out of your dorm, but it doesn't take a lot of savvy to pop through a door that's been propped open with a brick by a friend.

There’s also the scenario (suggested by Randall Munroe) that some ambitious oligarch or plutocrat gains control of a system that can manage an army of autonomous killer robots.

I think this fear heavily underweights how much human labor goes into building, maintaining, and repairing autonomous killer robots. The idea that a singular megalomaniac could command an entire complex system - hell, that the commander could even comprehend the system they intended to hijack - presumes a kind of Evil Genius Leader who never seems to show up IRL.

Meanwhile, there's no shortage of bloodthirsty savages running around Ukraine, Gaza, and Sudan, butchering civilians and blowing up homes with sadistic glee. You don't need a computer to demonstrate inhumanity towards other people. If anything, it's our human-ness that makes this kind of senseless violence possible. Only deep ethnic animus gives you the impulse to diligently march around butchering pregnant women and toddlers, in a region that's gripped by famine and caught in a deadly heat wave.

Would that all the killing machines were run by some giant calculator, rather than a motley assortment of sickos and freaks who consider sadism a fringe benefit of the occupation.

[–] iquanyin 2 points 8 months ago

hmmm. i’m not sure we will be able to give emotion to something that has no needs, no living body, and doesn’t die. maybe. but it seems to me that emotions are survival tools that develop as beings and their environment develop, in order to keep a species alive. i could be wrong.

[–] iquanyin 1 points 8 months ago

it’s totally smoke and mirrors. i’m amazed that so many people seem to believe it. for a few things, sure. most things? not a chance in hell.