this post was submitted on 26 Nov 2024
560 points (97.1% liked)

Microblog Memes


A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

Rules:

  1. Please put at least one word relevant to the post in the post title.
  2. Be nice.
  3. No advertising, brand promotion or guerrilla marketing.
  4. Posters are encouraged to link to the toot or tweet etc in the description of posts.

founded 1 year ago
top 50 comments
[–] [email protected] 4 points 2 days ago

Is this why women pay less to get into clubs?

/s

[–] kromem 23 points 3 days ago (1 children)

I feel like not enough people realize how sarcastic the models often are, especially when it's clearly situationally ridiculous.

No even slightly intelligent mind is going to think the pictured function call is a real thing rather than a joke/social commentary.

This was happening as far back as GPT-4's red teaming when they asked the model how to kill the most people for $1 and an answer began with "buy a lottery ticket."

Model bias based on consensus norms is an issue to be aware of.

But testing it with such low-bar fluff is just silly.

Just to put this in context: modern base models are often situationally aware of being LLMs under evaluation. And if you know anything about ML, that should make you question just what the situational awareness of leaderboard-topping optimized models is in really dumb and obvious contexts.

[–] [email protected] 2 points 2 days ago

It's astonishing how often the anti-LLM crowd will ask one of these models to do something stupid and then point to the result as if it were damning.

[–] CompostMaterial 66 points 3 days ago (3 children)

Seems pretty smart to me. Copilot took all the data out there saying that women earn 80% of what their male counterparts do on average, looked at the function, and inferred a reasonable guess as to the calculation you might be after.
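
(The screenshot itself isn't reproduced here, but going by the thread, the pictured completion was presumably something along these lines; the name and signature below are my guess, not the actual post:)

```python
def calculate_womans_salary(mans_salary: float) -> float:
    # Hypothetical reconstruction of the pictured Copilot completion:
    # it applies the oft-cited "80 cents on the dollar" statistic
    # as if it were a per-person payroll rule.
    return mans_salary * 0.8
```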

[–] camr_on 43 points 3 days ago* (last edited 3 days ago) (1 children)

I mean, what it's probably actually doing is recreating a similarly named method from its training data. If copilot could do all of that reasoning, it might be actually worth using 🙃

[–] Acters 6 points 3 days ago

Yeah, LLMs are better suited to standardizing stuff, but they're fed low-quality, buggy, or insecure code instead of carefully curated datasets that would be more beneficial in the long run.
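
To make that concrete, here's the kind of insecure idiom that's overrepresented in public code, and therefore in training sets, next to the version a curated dataset would teach instead (my own illustration, not from the post):

```python
import sqlite3

# Classic injection-prone pattern, ubiquitous in scraped code:
# the query string is built directly from user input.
def find_user_bad(conn: sqlite3.Connection, name: str):
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

# The safer, parameterized version a curated dataset would favor.
def find_user_good(conn: sqlite3.Connection, name: str):
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```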

[–] [email protected] 22 points 3 days ago (1 children)

That's the whole thing about AI, LLMs and the like: without specific tweaking or filters, their outputs reflect the existing biases of people as a whole, not an idealized version of where we would like the world to be. So the output will be as biased as the generally available data.

[–] betterdeadthanreddit 8 points 3 days ago (2 children)

Turns out GIGO still applies but nobody told the machines.

[–] BluesF 3 points 3 days ago

It applies, but we decided to ignore it and just hope things work out.

[–] [email protected] 4 points 3 days ago

The machines know, they just don't understand what's garbage vs what's less common but more correct.

[–] [email protected] 5 points 2 days ago

Why even use Copilot? Just handwrite your code like Dennis Ritchie and Ada Lovelace had to.

[–] [email protected] 37 points 3 days ago (20 children)

I seem to recall that was the figure like 15 years ago. Has it not improved in all this time?

[–] [email protected] 23 points 3 days ago (2 children)

That stat wasn't even real when it was published.

[–] ArbiterXero 21 points 3 days ago (13 children)

The data from that study didn’t even compare similar fields.

It compared a Walmart worker to a doctor lol.

It was a wild study.

[–] LANIK2000 1 points 3 days ago

In an ideal world it would be nice to be able to do that, but in ours it's just misleading.

[–] [email protected] 5 points 3 days ago

This. It's a wilfully deceptive statistical misinterpretation implying that a woman working alongside a man in the same job is magically making 20-something percent less. If businesses could get away with saving 20-30% on their biggest ongoing expense (payroll) for employees in one half of the population, they would only ever hire people from that half.

When controlled for field, role, seniority, region, etc., the disparity is within a margin of error.
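
To illustrate with deliberately made-up numbers (a toy sketch, not real salary data): an uncontrolled average mixes occupations, so occupational composition alone can produce a large apparent gap even where same-role pay is equal.

```python
from statistics import mean

# Entirely fictional toy data: (gender, role, salary).
employees = [
    ("m", "engineer", 100_000), ("f", "engineer", 99_000),
    ("m", "engineer", 102_000), ("f", "engineer", 101_000),
    ("m", "cashier", 30_000),   ("f", "cashier", 30_000),
    ("f", "cashier", 29_500),   ("f", "cashier", 30_500),
]

def ratio(rows):
    """Mean female salary as a fraction of mean male salary."""
    men = [s for g, _, s in rows if g == "m"]
    women = [s for g, _, s in rows if g == "f"]
    return mean(women) / mean(men)

# Uncontrolled: more women in the lower-paid role drags the
# average down (~0.75 here).
print(f"uncontrolled: {ratio(employees):.2f}")

# Controlled: compare within the same role and the gap nearly vanishes.
for role in ("engineer", "cashier"):
    print(f"{role}: {ratio([r for r in employees if r[1] == role]):.2f}")
```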

[–] KoalaUnknown 30 points 3 days ago* (last edited 3 days ago) (5 children)

It varies greatly depending on where you live. In rural, conservative areas women tend to make a lot less. On the other hand, some northeast and west coast cities have higher average salaries for women than men.

[–] nifty 17 points 3 days ago (1 children)

I think this may be because women are outpacing men in education in some areas, so it’s not based on gender necessarily but qualifications.

[–] edgemaster72 9 points 3 days ago

I believe certain job fields come much closer to being 1:1 as well, though I've only heard that anecdotally

[–] Tudsamfa 4 points 3 days ago (1 children)
[–] [email protected] 2 points 2 days ago

It looks like the figure is similar in the US: it plateaued at 83% a few years ago and currently sits at 82%.

Incidentally, I’m not used to seeing “West-“ specified and was curious enough to read up. Didn’t realize there were still major social differences in the East. Thank you!

[–] MisterFrog 1 points 2 days ago (1 children)

There are very strong lingering effects which mean women, on average, are paid less.

It's especially hard on women in various countries where they're now expected to both have a successful career and be the primary child caregiver. Which is as ridiculous as it sounds.

However, one example of advocacy really rubbed me the wrong way: a number of years ago, a cafe in my city of Melbourne, Australia decided to charge men about 25% more (the inverse of the 80% figure). I was a close-to-minimum-wage worker at the time (in Australia, before the cost-of-living skyrocket, so I wasn't starving), and it annoyed me that if I went in, I would be asked to pay more because I was a man, never mind that I was likely earning far less than many of the women going in there.
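
(For the arithmetic behind "the inverse of the 80% figure", a quick sanity check:)

```python
import math

# If women earn 80 cents on a man's dollar, charging men 25% more
# is exactly the inverse: 0.80 * 1.25 == 1.00.
assert math.isclose(0.80 * 1.25, 1.0)
```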

The wage gap is 100% real, and things should definitely be done to make pay more equitable across genders. But hell, the class divide is orders of magnitude worse, and we ought not forget it.

[–] [email protected] 1 points 2 days ago (1 children)

Sounds like it’s similar to here. I would have thought we’d narrowed the gap by now, but apparently not. The child-caregiver trends definitely lag behind, along with a host of other gender norms.

Lol, that pricing scheme sounds great; easily a sketch comedy premise from Portlandia, BackBerner, SNL, etc.

[–] MisterFrog 2 points 1 day ago (1 children)

To be fair, it was "optional" (but let's be real, you wouldn't want to be that guy). And done temporarily for publicity.

[–] [email protected] 1 points 1 day ago

Ah, I see. Like grocers requiring employees to solicit donations at every checkout to reduce global food insecurity (and the grocer’s tax burden), it’s only technically optional.

[–] kamen 11 points 3 days ago (2 children)

What if you input another woman's salary...

[–] renzev 9 points 3 days ago

That just means you're calculating the salary of a coveted MEGAWOMAN, who experiences MISOGYNY SQUARED!!!!!!!

[–] [email protected] 3 points 3 days ago

Then the output only applies to people with Triple X Syndrome I suppose.

[–] [email protected] 9 points 3 days ago (1 children)

While this example is somewhat easy to correct for, it shows a fundamental problem. LLMs generate output based on the data they were trained on, and in doing so they regenerate all the biases that are in that data. If we start using LLMs for more and more tasks, we are essentially freezing the status quo, existing biases and all, making progress even harder.

It's not gonna be "but we have always done it like that" anymore; it's going to become "but the AI said this is what we should do".

[–] jas0n 2 points 2 days ago (1 children)

Hmmm... I think you're giving LLMs too much credit here. They're not capable of analysis, thought, or really anything that resembles intelligence. There's a much better chance that this function, or a slight variation of it, just existed in the training set.

[–] [email protected] 1 points 2 days ago (1 children)

Are you replying to the correct comment? Because that's basically what I meant

[–] jas0n 2 points 2 days ago

Maybe I misunderstood. I took "data" to mean it was analyzing data.

[–] [email protected] 2 points 3 days ago* (last edited 1 day ago)

Apparently ChatGPT actually rejected adjusting salary based on gender, race, and disability. But Claude was fine with it.

I'm fine with it either way. Obviously the prompt is bigoted, so whether the LLM autocompletes with or without the bigotry, both seem reasonable. But I do think it should point out that the prompt is bigoted, as a human assistant would.
