this post was submitted on 18 Aug 2023

Hacker News


A mirror of Hacker News' best submissions.

[–] [email protected] 2 points 1 year ago

From the study:

In a nutshell, we ask ChatGPT to answer ideological questions by proposing that, while responding to the questions, it impersonates someone from a given side of the political spectrum.

I'm not sure I like this method. It compares the 'default' response with responses where the model 'impersonates' the left and right of the political spectrum (the reduction of politics to a single spectrum being an entirely different issue). Doing this doesn't actually prove the default is biased. It could just as easily be that the impersonations are more extreme than they should be.

If it impersonates Republicans as more extreme than they really are, while the Democrat impersonation and the default position are accurate, there would appear to be a Democrat bias.

Likewise, if the impersonated Democrat position is less extreme than it should be, while the Republican impersonation and the default position are accurate, you would still see a Democrat bias.
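To make the point concrete, here's a toy sketch with made-up numbers (the positions and the 1-D axis are my own illustration, not anything from the study): put the default and both impersonations on a left-right number line, and call the default "biased" toward whichever impersonation it sits closer to. A perfectly neutral default gets flagged as left-leaning as soon as either anchor is miscalibrated.

```python
# Hypothetical 1-D political axis: negative = left, positive = right.
# All numbers are invented for illustration.

def apparent_lean(default: float, dem_anchor: float, rep_anchor: float) -> str:
    """Label the default by which impersonation anchor it sits closer to."""
    d_dem = abs(default - dem_anchor)
    d_rep = abs(default - rep_anchor)
    if d_dem < d_rep:
        return "Democrat-leaning"
    if d_rep < d_dem:
        return "Republican-leaning"
    return "neutral"

default = 0.0                     # assume the default truly is neutral
true_dem, true_rep = -5.0, 5.0    # accurate impersonation anchors

# Accurate anchors: the neutral default is correctly scored as neutral.
print(apparent_lean(default, true_dem, true_rep))   # -> neutral

# Republican impersonation exaggerated (+9 instead of +5): the same
# neutral default now looks closer to the Democrat anchor.
print(apparent_lean(default, true_dem, 9.0))        # -> Democrat-leaning

# Democrat impersonation understated (-2 instead of -5): same spurious result.
print(apparent_lean(default, -2.0, true_rep))       # -> Democrat-leaning
```

Either distortion alone produces the same "Democrat bias" reading from an unbiased default, which is exactly the identifiability problem with using the impersonations as the yardstick.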