Technology
This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.
Ask in a DM before posting product reviews or ads; otherwise, such posts are subject to removal.
Rules:
1: All Lemmy rules apply
2: Do not make low-effort posts
3: NEVER post naziped*gore stuff
4: Always post article URLs or their archived-version URLs as sources, NOT screenshots. Help blind users.
5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)
6: No advertisement posts unless verified as legitimate and non-exploitative/non-consumerist
7: Crypto-related posts, unless essential, are disallowed
Never having heard the term "AI panic" makes this kind of meaningless to me. But I guess AI panic is evil, since it's promoted by the typically more evil companies?
How is OpenAI evil?
They published a deliberately harmful tool against the advice of civil society, experts, and competitors. They are not only reckless but have been tasked since their foundation with the mission to create chaos. Don't forget that the original idea behind OpenAI was to erode the advantage Google and Facebook had in AI by releasing machine-learning technology as open source. They definitely did that, and now they are expanding their goals. They are not in it for the money (ChatGPT will never be profitable); they are playing a bigger game.
Pushing the AI panic is not just a marketing strategy but a way to build power: the more dangerous they are considered, the more regulations will be passed that impact the whole sector. https://fortune.com/2023/05/30/sam-altman-ai-risk-of-extinction-pandemics-nuclear-warfare/
What's this about OpenAI having a mission to create chaos? That sounds like "AI panic" or conspiratorial thinking, on the surface at least.
A deliberately harmful tool???
I am using it, and yes, it can be inaccurate sometimes, but deliberately harmful?
The link you gave is not about this AI but about the potential danger of some future AGI, which would have to be more powerful than this one.
This paper explains a taxonomy of harms created by LLMs: https://dl.acm.org/doi/pdf/10.1145/3531146.3533088
OpenAI released ChatGPT without systems to prevent or compensate for these harms, and they were fully aware of the consequences, since this kind of research has been going on for several years. In the meantime they've put paper-thin countermeasures on some of these problems, but they are still pretty much a shit-show in terms of accountability. Most likely they will get sued into oblivion before regulators outlaw LLMs with dialogical interfaces. This won't do much about the harm that open-source LLMs will create, but at least it will limit large-scale harm to the general population.
I can only imagine what would happen if these authors were to write about the internet.
There are entire fields of research on that. Or do you believe the internet (a technology developed for military purposes, an infrastructure that supports most of the economy, the medium through which billions of people experience most of reality and build connections) is free from ideology and propaganda?
That's my point: nearly everything in life has good and bad sides; you have to use it accordingly. Would you believe me if I said that a banal kitchen knife can be used to murder people? Those kitchen-knife manufacturers released a product which is a harmful tool! And they knew it!
It's answered in other comments.
You might have heard of the singularity, sentient AI, an AI uprising, or job losses due to automation. That's all propaganda that sits under the concept of AI panic.
Oh yeah, this has never happened. Brb, gonna go tell all my fellow assembly-line workers this concept is total propaganda.
Automation never reduces jobs. It fragments them, reduces their quality, and increases deskilling and replaceability. We are not going to work less; we have never worked less thanks to automation. If we want to work less, we need unionization, not machines.
But how are Microsoft and other LLM companies marketing on AI Panic?
I honestly don't understand what this graph means. I don't get what the four sectors mean, how the author decided to distribute companies among them, or why the four sectors are divided into two equivalent circles.
Neither do I. Not a very good diagram.
All I can figure out is the pink side is pure evil and the blue side are our saviors. Given the color scheme, perhaps this is yet another failed gender reveal?
Microsoft invested billions in OpenAI. The AI panic pushed by Sam Altman is sanctioned by Microsoft.
It's ridiculous to call ideas that have existed for half a century propaganda now that we're actually approaching those things...
I've never heard the singularity referred to as AI panic. It's usually talked about as a good thing: the point when technology becomes infinitely self-improving. If someone were a promoter of the singularity, that would mean they're trying to achieve or advocate for it, not prevent it.