this post was submitted on 25 Oct 2024
71 points (96.1% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Hermansson logged in to Google and began looking up results for the IQs of different nations. When he typed in “Pakistan IQ,” rather than getting a typical list of links, Hermansson was presented with Google’s AI-powered Overviews tool, which, confusingly to him, was on by default. It gave him a definitive answer of 80.

When he typed in “Sierra Leone IQ,” Google’s AI tool was even more specific: 45.07. The result for “Kenya IQ” was equally exact: 75.2.

Hmm, these numbers seem very low. I wonder how these scores were determined.

[–] grue 5 points 1 month ago (4 children)

I don't understand the title. LLM hallucinations have nothing to do with JAQing off.

[–] kitnaht 22 points 1 month ago* (last edited 1 month ago) (1 children)

Problem is, it wasn't a hallucination - it was referencing a paper that has been debunked. These aren't made-up numbers; they're VERY specific numbers that come from a VERY specific paper.

This one: https://www.sciencedirect.com/science/article/abs/pii/S0160289610000450 -- if I'm not mistaken. Created by Richard Lynn, a Nazi sympathizer, with backing from the Pioneer Fund.

The problem is that this paper also managed to get cited more than 22,000 times, creating a feedback effect that reinforced the numbers in the AI's training data.

[–] grue 1 points 1 month ago (1 children)

Okay, but it's still got nothing to do with the dishonest rhetorical technique called "JAQing off" (a.k.a. "Just Asking Questions," a.k.a. "sealioning").

[–] kitnaht 2 points 1 month ago

It's kind of a ... symptom ... of the community we're in. I wouldn't read into it too deeply.

[–] [email protected] 5 points 1 month ago (1 children)

I think the usual output from the AI Overview (or at least the goal) is to give a long and ostensibly Fair and Balanced summary. So in this case it would be expected to throw out "some say that people from Australia are extra dumb because of these studies, but others contend that those studies were badly performed" or whatever. In other words, answering the question with more words so as to represent both sides and pretend not to be partisan.

[–] grue 1 points 1 month ago (1 children)

Let me be more clear about this: an LLM trying to answer a question (successfully or otherwise) is doing basically the opposite of a human asking questions (disingenuously, as in "JAQing off," or otherwise).

I wasn't trying to solicit comments trying to explain what the LLM was doing; my point was simply that OP is confused and used a term incorrectly in the title.

[–] [email protected] 8 points 1 month ago

i like turtles

[–] [email protected] 2 points 1 month ago* (last edited 1 month ago)

It's a reference to the fact that the kind of person who would try to justify this sort of race science is also the kind of person who is "just asking questions." Combined with the tech industry's tepid "it's just a tool, it's not inherently evil" bullshit, I think OP's point is obvious to anyone who isn't a pedant deliberately acting in bad faith.

[–] [email protected] 2 points 1 month ago

you may wish to read the sidebar