I can see some minor benefits - I use it for the odd bit of mundane writing and some of the image creation stuff is interesting, and I know that a lot of people use it for coding etc. - but mostly it seems to be about making more cash for corporations and stuffing the internet with bots and fake content. Am I missing something here? Are there any genuine benefits?

[–] [email protected] 5 points 10 months ago (1 children)

I pasted your question verbatim into Bing Chat. Here's what it responded with:

Artificial Intelligence (AI) indeed has a wide range of benefits that extend beyond the ones you’ve mentioned. Here are some areas where AI is making a significant impact:

  • Healthcare: AI is used in predicting disease outbreaks, drug discovery, personalized treatment plans, and improving patient care. For example, machine learning models can analyze medical images to detect diseases at early stages.
  • Education: AI can provide personalized learning experiences, identify gaps in learning materials, and automate administrative tasks. It can adapt to individual learning styles, making education more accessible.
  • Environment: AI can help in climate modeling, predicting natural disasters, and monitoring wildlife. It’s also used in optimizing energy usage in buildings and manufacturing processes, contributing to sustainability.
  • Transportation: Autonomous vehicles use AI for navigation, safety, and traffic management. AI can also optimize logistics, leading to reduced costs and environmental impact.
  • Security: AI can enhance cybersecurity by detecting unusual patterns or anomalies in data, helping to prevent cyber attacks. It’s also used in surveillance systems to identify potential threats.
  • Accessibility: AI can help people with disabilities by providing tools that improve their ability to interact with the world. Examples include speech recognition for those unable to use a keyboard, and visual recognition systems that can describe the environment to visually impaired individuals.

While it’s true that AI can be used to generate profits for corporations, it’s important to remember that many of these advancements also lead to societal benefits. However, like any technology, AI can be misused, and it’s crucial to have regulations and ethical guidelines in place to prevent such misuse. The creation of “bots and fake content” you mentioned is one such misuse, and efforts are ongoing to combat these issues.

In conclusion, AI has the potential to greatly benefit society in many ways, but it’s equally important to be aware of and address its challenges.

Seems like a pretty comprehensive list of the things I'm aware of myself. There are also tons of interesting future applications being worked on that, if they pan out, will be hugely beneficial in all sorts of ways. From what I've seen of what the tech is capable of, we're looking at a revolution here.

[–] computergeek125 8 points 10 months ago (3 children)

Seems a bit biased to ask an AI for the benefits of AI...
Not saying anything specific is wrong, just that appearances matter

[–] [email protected] 2 points 10 months ago

Was thinking the same... let's ask Honest Joe the car salesman which is the best means of transport.

[–] [email protected] 1 points 10 months ago (1 children)

I think implying that it has a bias is giving the Advanced Auto Prediction Engine a bit too much credit.

[–] computergeek125 1 points 10 months ago* (last edited 10 months ago) (1 children)

Oh, I am in fact giving the giant autocomplete function little credit. But just like any computer system, an AI can reflect the biases of its creators and its dataset. Similarly, the computer can only give an answer to the question it has been asked.

Dataset-wise, we don't know exactly what the bot was trained on, other than "a lot". I would like to hope its creators acted with good judgement, but as the creators and maintainers of the AI they may have an inherent (even if unintentional) bias towards the creation and adoption of AI. Just like how some speech recognition models have issues with certain dialects, or image recognition has issues with some skin tones - both because of the datasets they ingested.

The question itself invites at least some bias, since it only asks for benefits. I work in IT, and I see this situation all the time in tickets: the question will be "how do I do x", and while x is a perfectly reasonable thing for someone to want to do, it's not really the final answer. As reasoning humans, we can take the context of a question into account and provide additional details, rather than blindly reciting information from the first few lmgtfy results.

(Stop reading here if you don't want a ramble)


AI is growing, yes, and it's getting better, but it's still a very immature field. Many of its beneficial use cases have serious drawbacks that mean it should NOT be "given full control of a starship", so to speak.

  • Driverless cars still need very good markings on the road to stay in lane, but a human has better pattern matching to find lanes - even in a snow drift.
  • Research queries are especially affected, with chatbots hallucinating references that don't exist despite being formatted correctly. To that specifically:
    • Two lawyers have been caught separately using chatbots for research and submitting their work without validating the answer. They were caught because they cited a case which supported their arguments but did not exist.
    • A chatbot trained to operate as a customer support representative invented a refund policy that did not exist. A small claims court decided that the airline had to honor this policy anyway.
    • On an online forum, while trying to determine whether a piece of software had a specific feature, I ran into a user who had copied the question into ChatGPT and pasted the response. It described a command option that was exactly what the poster and I needed, but sadly it did not exist. On further research, a bug report asking for that functionality had been open for a few years and was still not implemented.
    • A coworker asked an LLM whether a specific Windows PowerShell command existed. It responded with very nicely formatted documentation for a command that was exactly what we needed but, alas, did not exist. It had to be told it was wrong four times before it gave us an answer that worked - a quick check like the one sketched after this list would have caught it immediately.
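Just to show how cheap that verification step is, here's a rough sketch (the cmdlet name 'Invoke-MagicFix' is made up - a stand-in for whatever the bot suggests):

```powershell
# Quick sanity check before trusting an LLM-suggested cmdlet.
# 'Invoke-MagicFix' is a hypothetical stand-in for whatever the bot came up with.
$suggested = 'Invoke-MagicFix'
if (Get-Command -Name $suggested -ErrorAction SilentlyContinue) {
    # The command is real - read its actual documentation before running it.
    Get-Help -Name $suggested
} else {
    Write-Warning "'$suggested' does not exist - likely a hallucination."
}
```

Thirty seconds of checking versus four rounds of arguing with the bot.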

While OP's question is about the benefits, I think it's also important to talk about the drawbacks at the same time. All that information could otherwise be inadvertently filtered out. Would you blindly trust the health of your child or significant other to a chatbot that may or may not be hallucinating? Would you want your boss to fire you because the computer determined your recorded task time to resolution was low? What about all those dozens of people you helped in side chats that don't have tickets?

There's a great saying about not letting perfection get in the way of progress, meaning we shouldn't get too caught up on the last 10-20% of completion. But with decision making that can affect people's lives and livelihoods, we need to be damn sure the computer is going to make the right decision every time, or not trust it to have full control at all.

As the future currently stands, we still need humans constantly auditing the decisions of our computers (both standard procedural and AI) for safety's sake. All of the examples above could have been solved by a trained human gating the result; in the PowerShell case, my coworker was that person. If we're going to trust computers with as much decision making as that Bing answer proposes, the AI models need to be MUCH better trained at their jobs than they currently are. Am I saying we should stop using and researching AI? No, but not enough people currently understand that these tools have incredibly rough edges, and the ability for a human to verify answers is absolutely critical.

Lastly, are humans biased? Yes absolutely. You can probably see my own bias in the construction of this answer.

[–] [email protected] 2 points 10 months ago

But with decision making that can affect people's lives and livelihoods, we need to be damn sure the computer is going to make the right decision every time, or not trust it to have full control at all.

👏👏👏

Yes, the dystopia has already arrived and we are all going to suffer. Here are just a few simple examples of blind trust in algorithms ruining people's lives. And more are coming day by day.

Before AI: https://sg.finance.yahoo.com/news/prison-bankruptcy-suicide-software-glitch-080025767.html

After AI: https://news.yahoo.com/man-raped-jail-ai-technology-210846029.html

https://www.washingtonpost.com/technology/2019/10/22/ai-hiring-face-scanning-algorithm-increasingly-decides-whether-you-deserve-job/

[–] [email protected] 0 points 10 months ago* (last edited 10 months ago)

It was in part a demonstration. I see a huge number of questions posted these days that could be trivially answered by an AI.

Try asking Bing Chat for negative aspects of AI, it'll give you those too.