Technology

59776 readers
4692 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below; to ask if your bot can be added, please contact us.
  9. Check for duplicates before posting; duplicates may be removed.

Approved Bots


founded 2 years ago
Original USA Today story link

Is it a simple error that OpenAI has yet to address, or has someone named David Mayer taken steps to remove his digital footprint?

Social media users have noticed something strange that happens when ChatGPT is prompted to recognize the name "David Mayer."

For some reason, the two-year-old chatbot developed by OpenAI is unable – or unwilling? – to acknowledge the name at all.

The quirk was first uncovered by an eagle-eyed Reddit user who entered the name "David Mayer" into ChatGPT and was met with a message stating, "I'm unable to produce a response." The mysterious reply sparked a flurry of additional attempts from users on Reddit to get the artificial intelligence tool to say the name – all to no avail.

It's unclear why ChatGPT fails to recognize the name, but plenty of theories have proliferated online.

Here's what we know:

What is ChatGPT?

ChatGPT is a generative artificial intelligence chatbot developed and launched in 2022 by OpenAI.

As opposed to predictive AI, generative AI is trained on large amounts of data in order to identify patterns and create content of its own, including voices, music, pictures and videos.

ChatGPT allows users to interact with the chatting tool much like they could with another human, with the chatbot generating conversational responses to questions or prompts.

Proponents say ChatGPT could reinvent online search engines and could assist with research, writing, content creation and customer service chatbots. However, the service has at times become controversial, with some critics raising concerns that ChatGPT and similar programs fuel online misinformation and enable students to plagiarize.

ChatGPT is also apparently mystifyingly stubborn about recognizing the name David Mayer.

Since the baffling refusal was discovered, users have been trying to find ways to get the chatbot to say the name or explain who the mystery man is.

A quick Google search of the name leads to results about British adventurer and environmentalist David Mayer de Rothschild, heir to the famed Rothschild family dynasty.

Mystery solved? Not quite.

Others speculated that the name is banned from being mentioned due to its association with a Chechen terrorist who operated under the alias "David Mayer."

But as AI expert Justine Moore pointed out on social media site X, a plausible scenario is that someone named David Mayer has gone out of his way to remove his presence from the internet. In the European Union, for instance, strict privacy laws allow citizens to file "right to be forgotten" requests.

Moore posted about other names that trigger the same response when shared with ChatGPT, including an Italian lawyer who has been public about filing a "right to be forgotten" request.

USA TODAY left a message Monday morning with OpenAI seeking comment.

I would submit the people who are working here don’t actually make $200k and probably burn to a crisp at 6 months, if not sooner. I don’t think there’s a bigger red flag than this guy. Run. Away.

"Anwar’s job, scrounging for discarded electronics in [Nigerian] Ikeja Computer Village, one of the world’s biggest and most hectic marketplaces for used, repaired, and refurbished electronic products.... "

Silicon Valley wants us to believe that their autonomous products are a kind of self-guided magic, but the technology is clearly not there yet. A quick peek behind the curtain has consistently revealed a product base that, at a minimum, is still deeply reliant on human workforces.

cross-posted from: https://futurology.today/post/2910566

Alibaba's Qwen team just released QwQ-32B-Preview, a powerful new open-source AI reasoning model that works step-by-step through challenging problems and directly competes with OpenAI's o1 series across benchmarks.

The details:

  - QwQ features a 32K context window, outperforming o1-mini and competing with o1-preview on key math and reasoning benchmarks.
  - The model was tested across several of the most challenging math and programming benchmarks, showing major advances in deep reasoning.
  - QwQ demonstrates "deep introspection," talking through problems step-by-step and questioning and examining its own answers to reason toward a solution.
  - The Qwen team noted several issues in the Preview model, including getting stuck in reasoning loops, struggling with common sense, and language mixing.

Why it matters: Between QwQ and DeepSeek, open-source reasoning models are here — and Chinese firms are absolutely cooking with new models that nearly match the current top closed leaders. Has OpenAI’s moat dried up, or does the AI leader have something special up its sleeve before the end of the year?

Danish researchers created a private self-harm network on Instagram, including fake profiles of people as young as 13 years old, in which they shared 85 pieces of self-harm-related content gradually increasing in severity, including blood, razor blades and encouragement of self-harm.

The aim of the study was to test Meta’s claim that it had significantly improved its processes for removing harmful content, which it says now uses artificial intelligence (AI). The tech company claims to remove about 99% of harmful content before it is reported.

But Digitalt Ansvar (Digital Accountability), an organisation that promotes responsible digital development, found that in the month-long experiment not a single image was removed.

Rather than attempting to shut down the self-harm network, Instagram's algorithm was actively helping it to expand. The research suggested that 13-year-olds became friends with all members of the self-harm group after they were connected with one of its members.


I feel like every day I come across 15-20 "AI-powered tools" that "analyze" something, and none of them clearly state how they use data. This one seems harmless enough: put a profile in and it will scrape everything about them, all their personal information, their location, every post they ever made... Nothing can possibly go wrong aggregating all that personal info, right? No idea where this data is sent, where it's stored, who it's sold to. Kinda alarming.

Not everyone is happy about it being the default.
