This post was submitted on 14 Jan 2025

Danger Dust

A community for those occupationally exposed to dusts, toxins, pollutants, hazardous materials or noxious environments


Academic journals, archives, and repositories are seeing an increasing number of questionable research papers clearly produced using generative AI. They are often created with widely available, general-purpose AI applications, most likely ChatGPT, and mimic scientific writing. Google Scholar readily retrieves these questionable papers and lists them alongside reputable, quality-controlled research.

Research note summary

  • A sample of scientific papers with signs of GPT use found on Google Scholar was retrieved, downloaded, and analyzed using a combination of qualitative coding and descriptive statistics. All papers contained at least one of two common phrases returned by conversational agents built on large language models (LLMs), such as OpenAI’s ChatGPT. Google Search was then used to determine the extent to which copies of questionable, GPT-fabricated papers were available in various repositories, archives, citation databases, and social media platforms.

  • Roughly two-thirds of the retrieved papers were found to have been produced, at least in part, through undisclosed, potentially deceptive use of GPT. The majority (57%) of these questionable papers dealt with policy-relevant subjects (i.e., environment, health, computing) that are susceptible to influence operations. Most were available in several copies across different domains (e.g., social media, archives, and repositories).

  • Two main risks arise from the increasingly common use of GPT to (mass-)produce fake scientific publications. First, the abundance of fabricated “studies” seeping into all areas of the research infrastructure threatens to overwhelm the scholarly communication system and jeopardize the integrity of the scientific record. Second, there is an increased possibility that convincingly scientific-looking content was in fact deceitfully created with AI tools and optimized to be retrieved by publicly available academic search engines, particularly Google Scholar. However small, this possibility, and public awareness of it, risk undermining the basis for trust in scientific knowledge and pose serious societal risks.
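The screening step described in the first bullet, flagging papers that contain telltale chatbot boilerplate, can be sketched in a few lines of Python. The phrases below are illustrative assumptions for demonstration, not necessarily the exact strings the study searched for:

```python
# Minimal sketch of phrase-based screening for undisclosed LLM use.
# The phrase list is an assumption for illustration; the study matched
# two specific chatbot responses not reproduced verbatim here.

TELLTALE_PHRASES = [
    "as of my last knowledge update",          # assumed example phrase
    "i don't have access to real-time data",   # assumed example phrase
]

def flag_gpt_phrases(text: str) -> list[str]:
    """Return the telltale phrases found in `text` (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]

sample = ("As of my last knowledge update, no peer-reviewed study "
          "has examined this exposure pathway.")
print(flag_gpt_phrases(sample))  # ['as of my last knowledge update']
```

A real pipeline would apply a check like this to full text extracted from PDFs, then follow up with manual qualitative coding, since phrase matching alone produces both false positives (e.g., papers quoting chatbot output deliberately) and false negatives.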

1 comment
[–] Sidhean 2 points 3 days ago

This GPT stuff feels like a solvent in the hands of capitalists. You just pour it over the scholarly quadrant and poof! It's nearly impossible to use because all possible points are supported. Instant irrelevancy.