Google has filed an application with the US Patent and Trademark Office for a tool that would use machine learning (ML, a subset of AI) to detect whatever Google decides to treat as “misinformation” on social media.

Google already uses elements of AI in algorithms programmed to automate censorship on its massive platforms, and this document indicates one specific path the company intends to take going forward.

The patent’s general purpose is to identify information operations (IO); the system is then supposed to “predict” whether the content they produce contains “misinformation.”

Judging by the explanation Google attached to the filing, it at first looks as if Google blames its own existence for the proliferation of “misinformation”: the text states that information operations campaigns are cheap and widely used because it is easy to make their messaging go viral thanks to “amplification incentivized by social media platforms.”

But it seems that Google is developing the tool with other platforms in mind.

The tech giant specifically states that others (mentioning X, Facebook, and LinkedIn by name in the filing) could use the system to train their own “different prediction models.”
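
To make that concrete, here is a minimal Python sketch of what training “different prediction models” per platform could look like. Everything in it (the toy posts, the labels, and the choice of a simple scikit-learn classifier) is an illustrative assumption; the filing does not publish code or model details.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled posts per platform: 1 = information operation, 0 = benign.
platform_data = {
    "x":        [("coordinated slogan repeated verbatim", 1), ("photo of my lunch", 0)],
    "facebook": [("copy-pasted political chain post", 1), ("local bake sale announcement", 0)],
    "linkedin": [("astroturfed product praise", 1), ("job opening at our firm", 0)],
}

# Each platform trains its own independent model on its own data.
models = {}
for platform, examples in platform_data.items():
    texts, labels = zip(*examples)
    pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
    pipeline.fit(texts, labels)
    models[platform] = pipeline

# Score a new post with the platform-specific model.
print(models["x"].predict_proba(["another coordinated slogan"]))
```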

Machine learning itself depends on algorithms being fed large amounts of data, and it comes in two broad types, “supervised” and “unsupervised”: the former learns from examples humans have already labeled, while the latter is given huge unlabeled datasets (such as images or, in this case, language) and asked to “learn” on its own to identify what it is “looking” at.

(Reinforcement learning can also be part of the process: in essence, the algorithm is trained to become increasingly efficient at detecting whatever those who create the system are looking for.)
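
For readers who want to see the distinction in practice, the toy sketch below trains a supervised classifier on hand-labeled posts and then runs an unsupervised clustering step on the same texts with no labels at all. The example data, labels, and library choices are invented for illustration and are not drawn from Google's system.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = [
    "breaking: shocking claim about candidate",
    "urgent: the same shocking claim again",
    "recipe for a decent lasagna",
    "photos from the weekend hike",
]
features = TfidfVectorizer().fit_transform(texts)

# Supervised: learn from labels a human provided in advance.
labels = [1, 1, 0, 0]  # 1 = flagged, 0 = benign (made-up labels)
supervised = MultinomialNB().fit(features, labels)

# Unsupervised: no labels; the algorithm groups similar texts on its own.
unsupervised = KMeans(n_clusters=2, n_init=10).fit(features)

print(supervised.predict(features[:1]))  # predicted label for the first post
print(unsupervised.labels_)              # cluster assigned to each post
```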

The ultimate goal here is most likely for Google to make its “misinformation detection” (i.e., censorship) more efficient while targeting a specific type of data.

The patent indeed states that the tool uses neural-network language models (neural networks being the “infrastructure” of ML).

Google’s tool will classify data as IO or benign, and further aims to label it as coming from an individual, an organization, or a country.

And then the model predicts the likelihood of that content being a “disinformation campaign” by assigning it a score.
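
As a rough illustration of a model with those three outputs (an IO-or-benign classification, a source attribution, and a likelihood score), here is a hypothetical PyTorch sketch. The architecture, layer sizes, and names are assumptions; the patent describes outputs, not an implementation.

```python
import torch
import torch.nn as nn

class IOClassifier(nn.Module):
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        # Stand-in for the neural-network language model that encodes a post.
        self.encoder = nn.Sequential(nn.Linear(embed_dim, 128), nn.ReLU())
        # Head 1: IO vs. benign.
        self.io_head = nn.Linear(128, 2)
        # Head 2: attributed source - individual, organization, or country.
        self.source_head = nn.Linear(128, 3)
        # Head 3: likelihood score that the content is a disinformation campaign.
        self.score_head = nn.Linear(128, 1)

    def forward(self, text_embedding: torch.Tensor):
        h = self.encoder(text_embedding)
        io_logits = self.io_head(h)
        source_logits = self.source_head(h)
        score = torch.sigmoid(self.score_head(h))  # score in 0..1
        return io_logits, source_logits, score

model = IOClassifier()
io, source, score = model(torch.randn(1, 256))  # dummy post embedding
```

A real system would presumably replace the stand-in encoder with a large pretrained language model, but the three-headed structure is enough to show how a single encoding of a post could feed all three predictions the patent describes.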

[email protected] 6 points 11 months ago

Well, that's an asshole move. First of all, it would be stupid to allow such a system to be patented, as it's kinda generic.

Second, if they get it, they can effectively obstruct attempts by other actors to limit misinformation on different platforms.

skygirl 1 point 11 months ago

You can trust us! Only our platform is able to dynamically remove misinformation.

Yep, seems fine to me. What could go wrong?