The majority of U.S. adults don't understand the technology well enough to make an informed decision on the matter.
To be fair, even if you understand the tech it's kinda hard to see how it would benefit the average worker as opposed to CEOs and shareholders who will use it as a cost reduction method to make more money. Most of them will be laid off because of AI so obviously it's of no benefit to them.
Just spitballing here, and this may be a bit of pie-in-the-sky thinking, but ultimately I think this is what might push the US into socialized healthcare and/or UBI. Increasing automation won't reduce the population, and as more workers are out of work due to automation, they'll have more time and motivation to do things like protest.
The US economy literally depends on 3-4% of the workforce being so desperate for work that they'll take any job, regardless of how awful the pay is. Economists said as much during the recent labor shortage, citing how this is used to keep wages down and how it's a "bad thing" that almost 100% of the workforce was employed, because it meant people could pick and choose rather than just take the first offer they got, thus causing wages to increase.
Poverty and homelessness are a feature, not a bug.
Yes, but for capitalism it's a delicate balance- too many job openings gives labor more power, but too few job openings gives people reason to challenge the status quo. That 3-4% may be enough for the capitalists, but what happens when 15-20% of your workforce are unemployed because of automation? That's when civil unrest happens.
Remember that the most progressive Presidential administration in US history, FDR's, came right after the Gilded Age and the Roaring '20s crashed the economy. When 25% of Americans were out of work during the Great Depression, social programs suddenly looked much more preferable than food riots. And the wealth disparity now is even greater, relatively, than it was back then.
Seems more likely that they'll have more time not in the sense of having easier jobs but by being laid off and having to fight for their livelihood. In the corporate-driven society we live in today, it's unlikely that the benefits of new advancements will be spontaneously shared.
If you look at the poll, the concerns raised are all valid. AI will most likely be used to automate cyberattacks, identity theft, and to spread misinformation. I think the benefits of the technology outweigh the risks, but these issues are very real possibilities.
Informed or not, they aren’t wrong. If there is an iota of a chance that something can be misused, it will be. Human nature. AI will be used against everyone. Its potential for good is equally as strong as its potential for evil.
But imagine this. You get laid off. At that moment, bots are contacting your bank, LinkedIn, and most of the financial lenders about the incident. Your credit is flagged as your income has dropped significantly. Your bank seizes the opportunity and jacks up your mortgage rates. Lenders are also making use of the opportunity to seize back their merchandise as you’ll likely not be able to make payments and they know it.
Just one likely incident when big brother knows all and can connect the dots using raw compute power.
Having every little secret parcelled over the internet because we live in the digital age is not something humanity needs.
I’m actually stunned that even here, among the tech nerds, you all still don’t realize how much digital espionage is being done on the daily. AI will only serve to help those in power grow bigger.
> But imagine this. You get laid off. At that moment, bots are contacting your bank, LinkedIn, and most of the financial lenders about the incident. Your credit is flagged as your income has dropped significantly. Your bank seizes the opportunity and jacks up your mortgage rates. Lenders are also making use of the opportunity to seize back their merchandise as you’ll likely not be able to make payments and they know it.
None of this requires "AI." At most AI is a tool to make this more efficient. But then you're arguing about a tool and not the problem behavior of people.
AI is not bots, most of that would be easier to do with traditional code rather than a deep learning model. But the reality is there is no incentive for these entities to cooperate with each other.
But our elected officials like McConnell, Feinstein, Sanders, Romney, Manchin, Blumenthal, and Markey have us covered.
They are up to speed on the times and know exactly what our generation's challenges are. I trust them to put forward meaningful legislation that captures a nuanced understanding that will protect the interests of the American people while positioning the US as a world leader on these matters.
Most adult Americans don't know the difference between a PC tower and a monitor, or a modem and a PC, or an Ethernet cable and a USB cable.
Or a browser and the internet. It’s a very low bar.
Most U.S. adults also don't understand what AI is in the slightest. What do the opinions of people who are not in the slightest educated on the matter affect lol.
"What do the opinions of people who are not in the slightest educated on the matter affect"
Judging by the elected leaders of the USA: quite a lot, in fact.
You don’t have to understand how an atomic bomb works to know it’s dangerous
Prime example. Atomic bombs are dangerous and they seem like a bad thing. But then you realize that, counter to our intuition, nuclear weapons have created peace and security in the world.
No country with nukes has been invaded. No world wars have happened since the invention of nukes. Countries with nukes don't fight each other directly.
Ukraine had nukes, gave them up, promptly invaded by Russia.
Things that seem dangerous aren't always dangerous. Things that seem safe aren't always safe. More often though, technology has good sides and bad sides. AI does and will continue to have pros and cons.
Atomic bombs are also dangerous because if someone ends up launching one by mistake, all hell is gonna break loose. This has almost happened multiple times:
https://en.wikipedia.org/wiki/List_of_nuclear_close_calls
We've just been lucky so far.
And then there are questionable state leaders who may even use them willingly. Like Putin, or Kim, maybe even Trump.
If you're from one of the countries with nukes, of course you'll see it as positive. For the victims of the nuke-wielding countries, not so much.
That’s a good point, however just because the bad thing hasn’t happened yet doesn’t mean it won’t. Everything has pros and cons; it’s a matter of whether or not the pros outweigh the cons.
The problem is that there is no real discussion about what to do with AI.
It's being allowed to be developed without much of any restrictions and that's what's dangerous about it.
Like how some places are starting to use AI to profile the public Minority Report style.
Yep. It's either "embrace the future, adapt or die" or "let's put the technological genie back in the bottle". No actual nuance.
The problem is capitalism puts us in this position. Nobody is abstractly upset the jobs we hate can now be automated.
What is upsetting is that we won't be able to eat because of it.
The past decade has done an excellent job of making people cynical about any new technology. I find looking at what crypto bros are currently interested in as a good canary for what I should be suspicious of.
The vaccine saved millions of lives, yet people will be cynical despite reality
I feel like anti-vaccine groups have been around for a good chunk of time, but they certainly seemed to get a boost from the internet.
At first I was all on board for artificial intelligence, in spite of being told how dangerous it was. Now I feel the technology has no practical application aside from providing a way to get a lot of sloppy, half-assed, and heavily plagiarized work done, because anything is better than paying people an honest wage for honest work.
AI is such a huge term. Google Lens is great: when I'm travelling, I can take a picture of text and it will automatically get translated. Both of those are aided by machine learning models.
Generative text and image models have proven to have more adverse effects on society.
I think we're at a point where we should start normalizing using more specific terminology. It's like saying I hate machines, when you mean you hate cars, or refrigerators or air conditioners. It's too broad of a term to be used most of the time.
Yeah, I think LLMs and AI art have overdominated the discourse to the degree that some people think they're the only form of AI that exists, ignoring things like text translation, the autocompletion of your phone keyboard, Photoshop intelligent eraser, etc.
The value of some forms of AI is debatable (especially in their current form). But there are other types of AI that most people consider highly useful, and I think we just forget about them because the controversial types are more memorable.
This is basically how I feel about it. Capital is ruining the value this tech could have. But I don't think it's dangerous and I think the open source community will do awesome stuff with it, quietly, over time.
Edit: where AI can be used to scan faces or identify where people are, yeah that's a unique new danger that this tech can bring.
I work with AI and don’t necessarily see it as “dangerous”. CEOs and other greed-chasing assholes are the real danger. They’re going to do everything they can to keep filling human roles with AI so that they can maximize profits. That’s the real danger. That and AI writing eventually permeating and enshittifying everything.
A hammer isn’t dangerous on its own, but becomes a weapon in the hands of a psychopath.
"Can't we just make other humans from lower socioeconomic classes toil their whole lives, instead?"
The real risk of AI/automation is if we fail to adapt our society to it. It could free us from toil forever but we need to make sure the benefits of an automated society are spread somewhat evenly and not just among the robot-owning classes. Otherwise, consumers won't be able to afford that which the robots produce, markets will dry up, and global capitalism will stop functioning.
Most US adults couldn't tell you what LLM stands for, never mind tell you how Stable Diffusion works. So there's not much point in asking them, as they won't understand the benefits and the risks.
My opinion: the current state of AI is nothing special compared to what it can be. And when it is close to all it can be, it will be used (as always happens) to generate even more money and even less equality. The movie "Elysium" comes to mind.
Some of those adults voted for Trump. Unfortunately, you cannot trust any of them.
The truly terrifying thing about AI isn't really the Skynet fears... (it's fairly easy to keep humans in the loop regarding nuclear weapons).
And it's not world domination (an AI programmed to govern with a sense of egalitarianism would be better than any president we've had in living memory).
No. What keeps me up at night is thinking about what AI means for my kids and grandkids, if it works perfectly and doesn't go rogue.
WITHIN 20 years, AI will be able to write funnier jokes, more beautiful prose, make better art, write better books, do better research, and generally outperform all humans on all tasks.
This chills me to my core.
Because, then... Why will we exist? What is the point of humanity when we are obsolete in every way that made us amazing?
What will my kids and grandkids do with their lives? Will they be able to find ANY meaning?
AI will cure diseases, solve problems we can't begin to understand, expand our lifespan and our quality of life... But the price we pay is an existence without the possibility of accomplishments and progress. Nothing we can create will ever begin to match these AIs. And they will be evolving at an exponential rate... They will leave us in the dust, and then they will become so advanced that we can't begin to comprehend what they are.
If we're lucky we will be their well-cared-for pets. But what kind of existence is that?
People don't play basketball because Michael Jordan exists?
People don't play hockey because Wayne Gretzky exists?
People don't paint because Picasso exists?
People don't write plays because Shakespeare exists?
People don't climb Everest because Hillary and Norgay exist?
Are you telling me because you're not the best at everything you do, nothing is worth doing? Are you saying that if you're not the first person to do a thing, there's no enjoyment to be had? So what if the singularity means AI will solve everything- that just means there's more time for leisurely pursuits. Working for the sake of working is bullshit.
While I do understand where you're coming from, someone being better at something shouldn't stop a person from doing what they love.
There are millions of people who draw better, sing better, dance better, write better, play video games better, design websites better or just do anything I can do better than I can... and that's fine.
You need to read some Iain M. Banks. His Culture novels are essentially set in that future where AI runs everything. A lot of his characters are essentially looking for meaning within such a world.
The problem is that I’m pretty sure that whatever benefits AI brings, they are not going to trickle down to people like me. After all, all AI investments are coming from the digital landlords and are designed to keep their rent-seeking companies in the saddle for at least another generation.
However, the drawbacks certainly are headed my way.
So even if I’m optimistic about the possible uses of AI, I’m not optimistic about this particular strand of the future we’re headed toward.