this post was submitted on 29 Jun 2023
236 points (96.8% liked)
Technology
Scraping social media and Reddit posts doesn't sound like stealing; they're public posts.
I doubt it's only about some Reddit posts. The scraping was done across the whole web, capturing everything it could. So besides stealing data and presenting it as its own, it seems to have collected some even more problematic data that wasn't properly protected.
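For what it's worth, web-scale scraping of this kind is mechanically simple, which is exactly why it sweeps up whatever a page happens to expose. A toy sketch for illustration only (not OpenAI's actual tooling; example.com, the 100-page cap, and the store_text placeholder are all made up):

```python
# Toy crawler: follows links and keeps a copy of every page it reaches.
# Illustration only -- names and limits here are placeholder assumptions.
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

seen, queue = set(), ["https://example.com/"]

while queue and len(seen) < 100:        # small cap to keep the toy bounded
    url = queue.pop()
    if url in seen:
        continue
    seen.add(url)
    try:
        html = requests.get(url, timeout=10).text
    except requests.RequestException:
        continue
    # store_text(html) would go here: the scraper keeps whatever the page
    # exposed, whether or not it should have been public
    for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
        queue.append(urljoin(url, a["href"]))
```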
But that really isn't OpenAI's fault. Whoever was in charge of securing the patients' data really fucked up.
Leaving your front door open isn't prudent but doesn't grant permission to others to enter and take/copy your belongings or data.
The security teams may have royally screwed up, but OpenAI has a legal obligation to respect copyright and laws regarding data ownership.
Likewise, they could have scraped pages that included terms of use, copyright, disclaimers, etc., and failed to honor them.
All parties can be in the wrong for different reasons.
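As an aside on the "failed to honor them" point above: honoring a site's wishes can start as simply as consulting its robots.txt before fetching. A minimal sketch, assuming a hypothetical crawler name and target URL (real terms-of-use compliance goes well beyond this):

```python
# Minimal robots.txt check before fetching a page -- illustration only,
# with a made-up crawler name and target URL.
from urllib import robotparser

import requests

AGENT = "example-research-bot"               # hypothetical crawler name
page = "https://example.com/blog/post"       # hypothetical target page

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()                                    # fetch and parse robots.txt

if rp.can_fetch(AGENT, page):
    html = requests.get(page, headers={"User-Agent": AGENT}, timeout=10).text
    # only now would the text be handed to indexing or training code
else:
    print("robots.txt disallows this page for", AGENT)
```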
That’s like saying you didn’t lock your front door so whoever robs you is innocent.
But does leaving your front door open allow someone to legally take a picture of the inside from across the street? I'd say scraping is more akin to that than to theft. Nothing is removed in scraping, just copied.
Bad analogy. This is like leaving your couch out on the sidewalk, then complaining when someone takes a picture of it.
I think it's a little closer to being mad that the Google street car drove by and snapped a picture of the front of your house, tbh.
Except PII and SPI are protected under law, just like your possessions.
It's more like leaving an important letter out in the open for everyone to read. It's certainly your fault for leaving it out like that.
Yeah, but what were all these people whose data was scraped wearing?
It’s certainly their fault that they used it, though.
If they cared, they could have ensured they weren’t using sensitive or otherwise highly problematic information, but they chose not to. That’s on them.
It's called "disrupting" the established norms. You wouldn't get it because you're not on the bleeding edge of a revolutionary platform that's seeing scalable vertical growth due to its paradigm shift.
You forgot to mention something about blockchain
I can't see AI as anything but the next crypto. It seems incredibly overhyped to me
My sarcasm detector is making strange noises. We may have a false positive here!
They certainly fucked up, but it might well be OpenAI's fault too.
If it was unsecured, it's basically public. Whoever put that data on a publicly accessible server is at fault.
That's not necessarily true. Even if a company makes the mistake of not securing data correctly, those that make use of this data can still be at fault.
If a company leaves a server wide open, you still can't legally steal information from it.
That's kind of a grey area - digitally copying something that's public domain isn't stealing.
> If a company leaves a server wide open, you still can't legally steal information from it.
I don't see how this is any different than if Google search included text from a page that shouldn't be public.
Just because something is posted online doesn't mean it can be taken and resold. Copyright law prevents that. Of course, how copyright law applies to generative AI is new and a gray area.
@Hick I have one problem with that when it comes to generative AI. It's similar to when Microsoft trained Copilot on GitHub data. Of course it was open source code and it was on Microsoft's servers, but before this AI revolution you couldn't have expected that someone would be able to build such a tool. I mean, we randomly leave our DNA in all kinds of places, but does that mean we agreed to be cloned once the technology that makes it possible arrives?
@L4s
Here it's not just scraping, though; it's also using that data to create other content and potentially re-publishing it (we have no way of knowing whether ChatGPT will spit out any of that data, nor where it took what it's spitting out from).
The expectation that social media data will be read by anybody is fair, but the fact is that the data was written to be read, not to be resold and republished elsewhere.
It's similar for blog articles. My blog is public and anybody can read it, but that data is not there to be repackaged and sold. The fact that something is public does not mean anyone can do whatever they want with it.
I could read your blog post and write my own blog post, using yours as inspiration. I could quote your post, add a link back to your blog post, and even add affiliate links to my own post. I could be hired to do something like that all day.
ChatGPT doesn't get inspired; the process is different, and it could very well spit out the content verbatim. You can do all the rest (depending on the license) without issues, but once again that's not what ChatGPT does, as it doesn't provide attribution.
It's exactly the same with software, in fact.