this post was submitted on 18 Oct 2023
24 points (92.9% liked)
Chinese language 中文 漢語
Discussions and resources for studying or learning about Chinese languages (Mandarin, Cantonese, Hokkien, Hakka, Classical Chinese, etc.).
founded 1 year ago
All hallucinating digital parrots are upsetting precisely because people are using these artificial bullshit generators to answer questions instead of doing a bit of legwork.
The problem with these massively obfuscated madlibs machines is that their drivel is constructed in (relatively) good sentences, so ignorant people come out of sessions with them more ignorant, not less, and their spewings tend to start displacing actual knowledge and information.
The code they generate is shit.
The information they supply is shit.
The writing they perform is shit.
There is literally nothing good about them beyond "I can be lazy and look smaarrtt (sic)!"
The current limitations of AI are those of the operators using it.
You don't look smarter because you post something that is wrong; you just show that you are clueless about the subject. And other clueless people will remain as clueless after as before.
Ignorant people will come out of any session more ignorant because that's what ignorant people do. It's nothing worse than the echo chambers that already exist, and at least AI is relatively neutral compared to some cults.
Most people are clueless about many subjects. Everyone is an expert at something, but especially online there is no way of knowing for sure whether something is fact or not besides being an expert yourself and knowing. I have long argued that the ease with which people accept or dismiss knowledge is a fundamental issue within our society. This starts in school, where teachers may have outdated knowledge or are sometimes just plain wrong, as all humans can be. It continues in adult life, where how people look, how they are dressed, even how tall they are, and their social class all play a factor in whether someone will dismiss their words or take them at face value.
AI, being a parrot as you say, is a mirror of this problem; it is proof of our own bias and stupidity. It is my hope that instead of making the issue worse, it will drive the coming generation to become far more aware of the dangers of misinformation and of the need to learn and understand information for yourself in order to trust it.
ChatGPT can write amazing code, but that is through a process of back-and-forth code generation, and because I know how to code. 9 times out of 10 its output goes straight to the trash bin; 1 time out of 10 a specific section of its output saves me a day's worth of work.
Same with writing: I am really bad at writing emails, and ChatGPT is good at writing but not at speaking.
I usually draft an email and let ChatGPT rewrite it. Then I puzzle my draft and the output together into a third, new form, and then I ask ChatGPT to check grammar and spelling and to provide any further insight on how people may interpret the message.
It is by all means my email; ChatGPT is a tool in the process, akin to a vast dictionary of sentences and text strings. Its job is to provide the most relevant ones, and the human's job is to puzzle them together coherently.
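For what it's worth, that draft → rewrite → merge → check loop could look roughly like the sketch below, assuming the OpenAI Python SDK with an API key in the environment; the model name, prompts, and the toy draft are made up for illustration, not what I actually typed.

```python
# Minimal sketch of the draft -> rewrite -> merge -> check loop described above.
# Assumes the OpenAI Python SDK (openai >= 1.0) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment


def ask(instruction: str, text: str) -> str:
    """Send one instruction + text pair and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; any chat model works here
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content


draft = "hi bob, the report is late again, can we push the meeting"

# Step 1: let the model rewrite the rough draft.
rewrite = ask("Rewrite this email so it is clear and polite.", draft)
print(rewrite)

# Step 2: the human merges draft and rewrite by hand; that part is not automated.
merged = input("Paste your merged version of the draft and the rewrite:\n")

# Step 3: ask for a grammar/spelling check plus a read on how the tone lands.
feedback = ask(
    "Check grammar and spelling, and note how a reader might interpret the tone.",
    merged,
)
print(feedback)
```

The point of keeping step 2 manual is the whole argument: the model supplies candidate sentences, the human decides what actually goes in the email.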
That's not what I did here, though; the vision feature is new, so I just thought of trying it out. I also specifically mentioned that it was ChatGPT as a warning to take everything it says with a grain of rice.
So your hot take is hallucinating digital parrot bullshit generator madlibs machines are a good thing because they're full of shit.
Wow.
That's gotta win some "dumbest thing ever said" award somewhere!
Way to misquote what I said. Maybe I should win a "dumbest thing ever said" award, but instead for attempting to reason with you while you have been acting emotional and dismissive from the start.
I am sharing some of my real-world professional experience working with top-tier AI. What you should have realized by reading my comment is that I am very untrusting of the so-called expertise of internet strangers (you), instead trusting in my own real-world, provable knowledge and results. You should do the same, be it with humans or AI. You are dealing with absolute strangers online; people can lie all they want whenever they want.
The bullshit mirror (which is really only relevant when a non-expert, ignorant person MIS-uses this kind of tech) might help force people to reflect on how complacent they have become towards any kind of information they hear and read online. Similar to smiling at yourself in the mirror and the reflection showing you that your teeth are dirty. That's "an" optimistic perspective I hold, because the idea of reserving AI for the scientific elite only seems rather shitty. I hold some pessimistic perspectives as well, because exploring multi-faceted perspectives aids me in my research.
I will give you 1 point because of your own inability to properly read my comment (most specifically, for reading "it is my hope" as "are a good thing"). If you cannot even understand this level of nuance, I can understand why smart-looking unreliable text may pose a difficulty for you. In which case you should probably ask a more cognitively mature, trusted family member or friend to help you fact-check the stuff you read.