this post was submitted on 08 Oct 2023
507 points (97.0% liked)
Technology
59291 readers
4752 users here now
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related content.
- Be excellent to each another!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below, to ask if your bot can be added please contact us.
- Check for duplicates before posting, duplicates may be removed
Approved Bots
founded 1 year ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
view the rest of the comments
I’d rather have ChatGPT know about news content than not. I appreciate the convenience. The news shouldn’t have barriers.
So they have automated Fox then.
You just described news
More data fixes that flaw, not less.
It is not "a flaw", it is how large language models work. They try to replicate how humans write by predicting text from a language model. An LLM has no knowledge of what is a fact and what isn't, which is why using one to do research, or as a search engine, is both stupid and dangerous.
How would it hallucinate information from an article you gave it? I haven't seen it make up information when summarizing text yet. I have seen it happen when I ask it random questions.
It does not hallucinate, it guesses based on the model to make you think the text could be written by a human. Personal experience: when I ask it to summarize a text, the summary has errors in it, and sometimes it adds stuff. Same if you, for instance, ask it to make an alphabetical list of X items. It may add random items.
I've had it make things up if I ask it for a list of, say, 5 things but there are only 4 things worth listing. I haven't seen it stray from summarizing something I've fed it, though. If it's given text, it's been pretty accurate. It only gets funky when you ask it things where information isn't available. Then it goes with what you probably want.
Yes. The LLM doesn't know what year it currently is, it needs to get that info from a service and then answer.
It's a Large Language Model. Not an actual sentient being.
It's not an excuse, relax, it's just how it works and I don't see where I'm endorsing it to get your news.
It's not more data; the underlying architecture isn't designed for handling facts.
Who gets their news from ChatGPT lol
A disturbing number of people.
You don't get your news from it, but building tools with it can be useful. Scraping news websites to run articles through things like sentiment analysis, or to identify media tricks that manipulate readers, is a fun practice. You can use an LLM to identify propaganda much more easily. I can see why media would be scared that regular people can run these tools on their propaganda machine so easily.
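A rough sketch of the kind of pipeline described above, assuming the OpenAI chat completions API (any chat-style LLM would do). The prompt wording and the `analyze_article` helper are hypothetical, not an actual tool:

```python
def build_analysis_prompt(article_text: str) -> str:
    """Wrap an article in instructions asking the model to flag
    loaded language, missing context, and appeals to emotion."""
    return (
        "Identify any propaganda techniques in the article below "
        "(loaded language, missing context, appeals to emotion), "
        "quoting each example you find.\n\n---\n" + article_text
    )

def analyze_article(client, article_text: str, model: str = "gpt-4o-mini") -> str:
    # client is an openai.OpenAI() instance; the model name is an assumption.
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_analysis_prompt(article_text)}],
    )
    return response.choices[0].message.content
```

You'd feed it article text pulled with any scraper; the LLM call is just the last step.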
I do
Why?
It’s funny seeing Apollo and spez_ fighting on a topic regarding ChatGPT.
Natural enemies must fight
Because ChatGPT doesn't do clickbait headlines, auto-play video ads, auto-play video news that follows me if I try to scroll past it, or a house ad that tries to convince me to stop reading the news and instead read a puff piece about how to clean my water bottle. Which, I'd bet fifty bucks, will result in me seeing ads for new water bottles every day for the next month. No thanks.
With the "Web Browsing" plugin, which essentially does a Bing search then summarises the result, ChatGPT is a far better experience if you want to find out what's going on in Israel today for example.
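A toy version of that "search, then summarise" loop. The `search` callable and the prompt wording are assumptions for illustration, not the plugin's actual internals:

```python
def build_summary_prompt(query: str, snippets: list[str], max_chars: int = 4000) -> str:
    """Concatenate search-result snippets (truncated to fit a context
    window) and ask the model to summarise them for the query."""
    body = "\n\n".join(snippets)[:max_chars]
    return f"Summarise what these search results say about: {query}\n\n{body}"

def news_briefing(client, search, query: str) -> str:
    # search() stands in for a web search call (e.g. the Bing Web Search API);
    # client is an openai.OpenAI() instance.
    snippets = search(query)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": build_summary_prompt(query, snippets)}],
    )
    return response.choices[0].message.content
```

The truncation matters: raw search results easily overflow a model's context window, so the plugin has to cut them down before summarising.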
Neither does Lemmy. Here (and on other instances) there are plenty of communities for news, with better control of misinformation.
Reuters is pretty good. No autoplay vids, only 1-2 quiet ads an article, and is mainly cut-and-dry news.
No news source is 100% reliable, but I can easily see AI picking up bad information or misinterpreting human text. Nothing wrong with AI news by itself, but it's a good habit to verify any source by yourself.
Regardless, I recommend uBlock for any device or browser. Ads are over the line nowadays, so I don't feel bad blocking them when possible.
The pure ChatGPT output would probably be garbage. The dataset will be full of all manner of sources (together with their inherent biases) together with spin, untruths and outright parody and it’s not apparent that there is any kind of curation or quality assurance on the dataset (please correct me if I’m wrong).
I don’t think it’s a good tool for extracting factual information from. It does seem to be good at synthesising prose and helping with writing ideas.
I am quite interested in things like this where the output from a “knowledge engine” is paired with something like ChatGPT - but it would be for eg writing a science paper rather than news.
I don't think it's generating news. It sounds like people are using it to reformat articles that are already written to remove all the bullshit propaganda from the news. Like taking a Fox News article and just pulling out the key information.