this post was submitted on 08 Aug 2023
658 points (96.1% liked)

Source: https://front-end.social/@fox/110846484782705013

Text in the screenshot from Grammarly says:

We develop data sets to train our algorithms so that we can improve the services we provide to customers like you. We have devoted significant time and resources to developing methods to ensure that these data sets are anonymized and de-identified.

To develop these data sets, we sample snippets of text at random, disassociate them from a user's account, and then use a variety of different methods to strip the text of identifying information (such as identifiers, contact details, addresses, etc.). Only then do we use the snippets to train our algorithms, and the original text is deleted. In other words, we don't store any text in a manner that can be associated with your account or used to identify you or anyone else.

We currently offer a feature that permits customers to opt out of this use for Grammarly Business teams of 500 users or more. Please let me know if you might be interested in a license of this size, and I'll forward your request to the corresponding team.

[–] brygphilomena 28 points 1 year ago (4 children)

Let's ignore the ethical implications of this for a moment.

Grammarly is training its AI on the poorly written grammar of its users, the very text it already has to correct?

It seems like this would be a flawed set of training data. It's training either on text it already produced or on text written by someone who may not have used proper grammar in the first place.

Am I to expect this AI will improve over time?

[–] [email protected] 15 points 1 year ago

Depending on how they're training it, they're likely looking at whether Grammarly's corrections were accepted or rejected, and the context around that. That's what I'd be using from the dataset, anyhow.
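A minimal sketch of the accept/reject signal this comment describes. Everything here is hypothetical: the field names (`original`, `suggestion`, `accepted`) and the pairing scheme are illustrative assumptions, not Grammarly's actual schema or pipeline.

```python
# Hypothetical sketch: turning correction events into (source, target)
# training pairs based on whether the user accepted the suggestion.
# Field names are assumptions for illustration only.

def build_training_pairs(events):
    """Convert accept/reject events into (input, target) pairs."""
    pairs = []
    for e in events:
        if e["accepted"]:
            # An accepted suggestion is treated as a confirmed correction.
            pairs.append((e["original"], e["suggestion"]))
        else:
            # A rejected suggestion implies the original should be kept
            # as-is (or at least that the suggestion was unwanted).
            pairs.append((e["original"], e["original"]))
    return pairs

events = [
    {"original": "their going home", "suggestion": "they're going home", "accepted": True},
    {"original": "the data are clear", "suggestion": "the data is clear", "accepted": False},
]
print(build_training_pairs(events))
```

The point of keeping rejected suggestions as identity pairs is that a rejection is still signal: it tells the model when *not* to intervene.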

[–] [email protected] 6 points 1 year ago

Remember that people have said GPT-4 is getting dumber because of interacting with humans.

[–] [email protected] 2 points 1 year ago

On average, people's grammar is correct, kind of by definition.

[–] funktion 1 points 1 year ago* (last edited 1 year ago)

I can actually tell you a little about how this worked. The training data went through a team of Grammarly writing experts before being fed to the AI. Writing experts would get short snippets out of context, often from publicly available text (e.g., Reddit comments, classic novels, poems, scientific papers), and correct them for both grammar and clarity; those corrections would then become training data. Later on, the team would do the same for content generated by the AI.
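The human-in-the-loop pipeline described above can be sketched roughly as follows. This is an illustrative assumption of how such labeled snippets might be represented, not Grammarly's real tooling; all names (`LabeledSnippet`, `source`, `raw`, `corrected`) are made up.

```python
# Hypothetical sketch: raw snippets (public text or, later, model output)
# go to an expert, whose corrected version becomes the training target.
from dataclasses import dataclass

@dataclass
class LabeledSnippet:
    source: str     # where the snippet came from: "public_text" or "model_output"
    raw: str        # the short, out-of-context snippet shown to the expert
    corrected: str  # the expert's fix for grammar and clarity

def to_training_example(s: LabeledSnippet) -> tuple[str, str]:
    # Both pipeline stages produce the same (input, target) shape,
    # so the model trains identically on either source of data.
    return (s.raw, s.corrected)

snippet = LabeledSnippet("public_text", "he dont like it", "he doesn't like it")
print(to_training_example(snippet))
```

Note how the second stage (correcting the AI's own output) reuses the exact same record shape; only the `source` field changes.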

Source: Was one of the writing experts. For a couple of weeks I was correcting snippets that were very very obviously from r/squaredcircle. Very weird reading about Dave Batista's giant dong at work.