this post was submitted on 08 Sep 2024
435 points (98.0% liked)

Microblog Memes

top 36 comments
[–] GardenVarietyAnxiety 86 points 3 months ago* (last edited 3 months ago) (6 children)

This is being done by PEOPLE. PEOPLE are using AI to do this.

I'm not defending AI, but we need to focus on the operator, ~~not the tool.~~

The operator as much as the tool.

[–] TootSweet 51 points 3 months ago (1 children)
[–] GardenVarietyAnxiety 17 points 3 months ago (1 children)

I had the same thought after I posted it, lol

[–] TexasDrunk 1 points 3 months ago (2 children)

Step one for gun control should be a fully functioning mental healthcare system. That's not the final step by any means, but if people are getting the mental help they need, there will be fewer shootings.

[–] [email protected] 6 points 3 months ago

Step one for gun control should be gun control.

Sure, a functioning mental healthcare system is important and should be pursued in parallel. But, clearly, there's a major issue with the availability of powerful guns. That needs to be addressed before, or at least at the same time as, mental health.

[–] GardenVarietyAnxiety 1 points 3 months ago

Preach! lol

[–] [email protected] 30 points 3 months ago* (last edited 3 months ago) (1 children)

Technology is not neutral.

Especially for a tool that’s specifically marketed as something people can delegate decision-making to, we need to seriously question the person-tool separation.

That alleged separation is what lets gig-economy apps abuse their workers in ways no flesh-and-blood boss would get away with, what enables RealPage’s decentralized price-fixing cartel, and what underlies any number of instances of “math-washing” used to justify discrimination.

The entire big tech ethos is basically to do horrible shit in such tiny increments that there is no single instance to meaningfully prosecute. (Edit: As always, Mike Judge is relevant: https://youtu.be/yZjCQ3T5yXo)

We need to take this seriously. Language is perhaps the single most important invention of our species, and we’re at risk of the social equivalent of Kessler Syndrome. And for what? So we can write “thank you” notes quicker?

[–] GardenVarietyAnxiety 2 points 3 months ago* (last edited 3 months ago)

Respect.

Also: I just realized I need a Mike Judge marathon night.

[–] zib 28 points 3 months ago (1 children)

You bring up a good point. In addition to regulating the tool, we should also punish the people who maliciously abuse it.

[–] GardenVarietyAnxiety 4 points 3 months ago* (last edited 3 months ago)

Regulate it because it's being abused, and hold the abusers accountable, yeah.

I always see the names of the models being turned into boogeymen, but the only people whose names we ever see are the ones behind the big, seemingly untouchable companies.

"Look at this scary model" vs "Look at this person being a dick"

We're being told what to be afraid of and not who is responsible for it, because fear sells and we can't do anything with it.

Just my perception, of course.

[–] [email protected] 5 points 3 months ago (3 children)

I mean, the tool is also being made by people. And there are people who pointed out that a tool which is great at spitting out plausible-sounding things with no factual grounding could be badly abused for spreading misinformation. There have been ethics boards among the people who make these tools who took those concerns seriously and raised them within their companies, subsequently getting ousted for putting ethical concerns before short-term profits.

The question is: how much of it is just a tool, and how much of it is intrinsically linked to the unethical, greedy people pushing it onto the world?

E.g., a Cybertruck is also just a car, and one could say the truck itself is not to blame. But it is the very embodiment of the problems of the people behind it.

[–] Anticorp 2 points 3 months ago

Corporate ethics only exist in the realm of the theoretical, and in training videos. Ethics will not be tolerated in actuality.

[–] GardenVarietyAnxiety 1 points 3 months ago

It is all intrinsically linked. But we need to see who the people behind it are or it's just a boogey-man.

[–] [email protected] 1 points 3 months ago

> subsequently getting ousted for putting ethical concerns before short-term profits.

The irony is that there are no profits. The companies selling generative AI are losing sums of money so vast it's difficult to wrap your head around them.

What they're focused on isn't short-term profits, it's being the biggest, most dominant firm whenever AI does eventually become profitable, which might take decades.

[–] [email protected] 1 points 3 months ago (1 children)

People seem to've already forgotten about Transmetropolitan. 🤷🏽‍♂️

I mean, sure, fuck Ellis, but still. Idiocracy came after, and even that's fading from modern awareness, it seems. 😶‍🌫️

[–] CitizenKong 4 points 3 months ago

Ellis is like Gaiman; at some point you have to separate the work from the author.

[–] saltesc 1 points 3 months ago (1 children)

Yep. Machines will only ever do what they're told to do. This is AI literally doing the job it's been instructed to do under the rules it has been given.

[–] [email protected] 2 points 3 months ago* (last edited 3 months ago)

Machines are not designed by hermits who have no knowledge of the outside world. They're tools, but they're tools designed with a purpose and with or without safeties designed to keep them from maiming or killing people. The design of the machine can be used to talk about the responsibility and morals of the machine's designer. And, certain machines are so unsafe that even if they theoretically can have a useful purpose, the dangers of the machine being misused are so great that the machine shouldn't be permitted to be sold.

In Arrested Development, George Bluth designs and sells the Cornballer, a machine to deep-fry cornballs. It was made illegal after it caused serious burns to anybody who used it. Part of the purpose of showing this device on the show is to reveal the character of George Bluth. It shows that he's the kind of guy who doesn't care enough to design a safe device, and who continues to try to sell it in Mexico even after it's made illegal in the US because of how unsafe it is.

Yes, in this case it is people who are submitting papers full of fabricated data using ChatGPT as a tool. But, that doesn't mean that ChatGPT is simply "neutral" in this whole thing. They've released a tool that lacks safeties and that is effectively "burning" science. The positive potential uses of ChatGPT are what, writing a dirty limerick in the style of Shakespeare? Meanwhile, the potential pitfalls of using it are things like having it convince a suicidal person to kill themselves, sowing confusion and making it harder to find good science, giving people unsafe medical diagnoses?

[–] [email protected] 50 points 3 months ago (1 children)

Leaving the information age and entering the disinformation age.

[–] franklin 6 points 3 months ago* (last edited 3 months ago)

A deadly weapon given how much the ruling class is trying to turn a class war into an identity war.

[–] [email protected] 18 points 3 months ago (2 children)

AI content, AI bots in the forums, AI telemarketing, AI answering machines, AI everything. AI will make IRL and stuff like audited national encyclopedias important again. Gone is the promise of the internet. And this is the real reason why anonymity will not be possible online. If we can't identify the poster as a human, it will mean nothing...

[–] Anticorp 8 points 3 months ago

And since we don't have any of our constitutional rights to privacy online, an internet without anonymity isn't an acceptable solution either. What a waste. The most exciting and promising creation of the last hundred years, squandered for advertising and selfish ends.

[–] [email protected] 0 points 3 months ago* (last edited 3 months ago)

I honestly think anonymity is even better now, given the fact that you can self-host LLMs that can change the linguistic style of your writing into something entirely different.
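
A rough sketch of what I mean, assuming a local Ollama server with a pulled model (the model name and the prompt wording are just placeholders):

```python
# Sketch only: assumes Ollama is running locally on its default port (11434)
# and that a model tagged "llama3" has been pulled.
import json
import urllib.request

def restyle(text: str, model: str = "llama3") -> str:
    """Ask a locally hosted model to rewrite text in a different voice."""
    payload = {
        "model": model,
        "prompt": f"Rewrite the following in a dry, formal register:\n\n{text}",
        "stream": False,  # return one JSON object instead of a token stream
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(restyle("can't believe they pulled that stunt again, absolute clowns"))
```

Point the wrapper at whatever local model you prefer; nothing leaves your machine.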

[–] peopleproblems 10 points 3 months ago

The good news: think of all the possibilities for funding new research to prove AI wrong!

Or think of the millions the rest of the world will have to spend on software engineers fixing their fucked up AI generated code!

It's like outsourcing to an even worse firm!

[–] Pacattack57 7 points 3 months ago (1 children)

Does anyone else hate it when words are cut with hyphens, especially longer words? Just wrap the whole word to the next line. It makes it easier to read. A tiny sketch of what I mean follows.
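
For comparison, plain whole-word wrapping never has to invent a hyphen; a small sketch using Python's standard textwrap module (the sample string is just illustrative, not the actual post title):

```python
import textwrap

sample = "Questionable, GPT-fabricated scientific articles on Google Scholar"
# textwrap breaks at whitespace (and existing hyphens) but never inserts a hyphen mid-word
print(textwrap.fill(sample, width=28))
```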

[–] PriorityMotif 2 points 3 months ago

It should have said "questionable, gpt fabricated, scientific"

[–] darthelmet 5 points 3 months ago

Question: Does Google Scholar only list published papers from reputable journals or does it just grab anything people throw out there? We have already seen that some journals will publish complete nonsense without looking at it. AI or not, there's a core problem with how academic work gets peer reviewed and published at the moment.

[–] whotookkarl 4 points 3 months ago (2 children)

Imagine if peer review actually had to include a reproducible study and reproduce the same result.

[–] cucumber_sandwich 3 points 3 months ago

That basically doubles the money necessary for everything.

[–] AnUnusualRelic 1 points 3 months ago

If you run the same prompt through the engine, you ought to get fairly similar results. There's your reproducibility.
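
Roughly, the closest you can get is pinning the sampling settings. A sketch assuming the OpenAI Python client (the model name is a placeholder, and the seed is best-effort rather than a guarantee):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # always pick the most likely token
        seed=42,        # best-effort determinism; identical output is not promised
    )
    return resp.choices[0].message.content

first = ask("List three limitations of the study's methodology.")
second = ask("List three limitations of the study's methodology.")
print(first == second)  # often very close, sometimes identical, never guaranteed
```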

[–] [email protected] 3 points 3 months ago (1 children)

What app is this that justifies the text with hyphens? Is it in a fixed-width display, or does it detect syllables automatically?

[–] [email protected] 1 points 3 months ago

Tusky for Android.

[–] Anticorp 3 points 3 months ago (1 children)

There's no point in existing anymore. ChatGPT can take it from here.

[–] TriflingToad 1 points 3 months ago

Homie, we're not out yet. ChatGPT can't even do basic math.

[–] TheBat 1 points 3 months ago

Skynet skipped Judgement Day and chose a different method to ruin humanity.

[–] homesweethomeMrL 1 points 3 months ago

Me: What’s your source for this?

Them: Google Scholar

Me: fires