submitted 2 months ago* (last edited 2 months ago) by jamyang to c/technology
 

Not everyone needs to have an opinion on AI


National Novel Writing Month is an American initiative that has become a worldwide pastime, in which participants attempt to write a 50,000-word manuscript in the month of November. Some of these first drafts eventually become novels — the initial version of what became Erin Morgenstern’s The Night Circus started life as a NaNoWriMo effort — but most don’t. And many participants cheerfully admit they are writing for the pleasure of creation rather than out of any expectation that they will gain either money or prestige from the activity.

In recent years, NaNoWriMo has been plagued by controversies. This year, the organisation has been hit by an entirely self-made argument, after declaring that while it does not have an explicit position on the use of generative artificial intelligence in writing, it believes that to “categorically condemn the use of AI writing tools” is both “ableist and classist”. (The implication that working-class people and people with disabilities can only write fiction with the help of generative AI, however, is apparently A-OK.)

The resulting blowback saw one of its board members, the writer Daniel José Older, resign in disgust. (NaNoWriMo has since apologised for, and retracted, its initial statement.)

There is very little at stake when you participate in NaNoWriMo, other, perhaps, than the goodwill of the friends and relations you might ask to read your work afterwards. Sign-ups on the website can talk to other participants on their discussion forums and are rewarded for hitting certain milestones with little graphics marking their achievement. If you want to write an experimental novel called A Mid-Career Academic’s Reflections Upon His Divorce that is simply the same four-letter expletive repeated over and over again, nothing is stopping you from doing so. If you want to type the words “write the first 50,000 words of a coming-of-age novel in the style of Paul Beatty” into ChatGPT and submit the result, you can do so. In both cases, it is your own time you are wasting.

The whole argument is exceptionally silly but does hold two useful lessons.

One is that organisations and companies should have fewer opinions. Quite why NaNoWriMo needs to have an opinion about the use of generative AI is beyond me. Organisations should have a social conscience, but that should be limited to things they actually directly control. They should care about fairness when hiring and about the effects that their supply chains have on the world, just as NaNoWriMo should care about whether its discussion forums are well moderated (the subject of a previous controversy). But they should have little or no interest in issues they have no meaningful way to influence, like what participants do with AI.

A good rule of thumb for an organisation considering whether to make a statement about a topic is to ask itself what material changes within its control it proposes to make as a result of doing so — and why. Those changes might range from donating money to changing how it hires. For example, the cosmetics retailer Lush has given large amounts of money to police reform charities, while Julian Richer, the founder of the Richer Sounds home entertainment chain, went so far as to turn his business into an employee-owned trust in 2019.

But if an organisation is either unwilling or unable to make real changes to how it operates or spends money, then nine times out of ten that is a sign that it will gain very little, and add very little, by speaking out.

The second lesson concerns how organisations should respond to the widespread adoption of generative AI. Just as NaNoWriMo can’t stop me asking Google Gemini to write a roman-à-clef about a dashingly handsome columnist who solves crimes, employers can’t reliably stop someone from writing their cover letter by the same method. That doesn’t mean they should necessarily embrace it, but it does mean that some forms of assessment have, inevitably, become a test of your ability to work well with generative AI as much as of your ability to write or research independently. Hiring, already one of the most difficult things any organisation does, is becoming harder still, and probation periods will become more important as a result.

Both lessons have something in common: they are a reminder that organisations shouldn’t sweat the stuff outside of their control. Part of writing a good novel is choosing the right words in the right places at the right time. So too is knowing when it is time for an organisation to speak — and when it should stay silent.


Posting != Endorsing the writer's views.

[–] Humanius 22 points 2 months ago* (last edited 2 months ago) (1 children)

People who have a more in-the-middle opinion generally don't talk about AI a lot. People with the most extreme opinions on something tend to be the most vocal about them.

Personally I think it's a neat technology, and there probably exist use-cases where it will work decently well. I don't think it'll be able to do everything and anything that the AI companies are promising right now, but there are certainly some tasks where an AI tool could help increase efficiency.
There are also issues with how the companies behind the large language models source their training data, but that is not an inherent issue of the technology. It's more an issue of the material not being properly licensed.

I'm just curious to see where it all goes.

[–] [email protected] 3 points 2 months ago (1 children)

It can do some neat stuff, but to me it has been pretty disappointing. Some things it has explained with real clarity and made really simple to understand, even from viewpoints I hadn't considered before. But when I asked precise questions about things I know a lot about, it just confidently lied to me and told me I was wrong, even though it had moments before shared the hard facts it was now contradicting. That kinda broke the spell and made me question everything it returns, to the point that it's hard to use it for anything serious.

It does good summaries though, and can concisely explain simple stuff that doesn't need to be verified. It shows promise, but as a serious research tool, wrangling it into admitting when it's wrong and getting it to see that it's contradicting itself is just more work than doing the research myself.

[–] kennebel 2 points 2 months ago

I tried Bing Chat (part of the work license), asked it some random questions, asked for more accurate information, and pointed out the flawed answers it gave. It told me I was being rude and ended the session. (smh)