this post was submitted on 21 Jun 2024
400 points (98.3% liked)

Privacy

[–] [email protected] -4 points 3 months ago (4 children)

This isn't entirely true. AI is usually trained on public data such as Wikipedia.

AI is a tool. How you use it is what matters.

[–] [email protected] 9 points 3 months ago (1 children)

OpenAI's and DALL-E's lawyers would like to use you as a witness at their 87 upcoming court hearings

[–] [email protected] -2 points 3 months ago

I self host so I don't care

[–] StaySquared 4 points 3 months ago (1 children)

Like cracking passwords / encryption and injecting itself into anything and everything that connects to the internet?

[–] [email protected] -1 points 3 months ago (1 children)
[–] StaySquared -4 points 3 months ago (2 children)

You can train AI to crack passwords/encryption lol. You do realize AI is being utilized for exactly that right at this moment, right? Simply put, the very first step is to eliminate its boundaries/guard rails, then proceed from there.

[–] [email protected] 2 points 3 months ago (1 children)

You can train AI to crack encryption

Oh do provide details.

[–] StaySquared 3 points 3 months ago (2 children)
[–] elias_griffin 3 points 3 months ago

Very interesting tip, preciate that.

@PassGAN

Instead of relying on manual password analysis, PassGAN uses a Generative Adversarial Network (GAN) to autonomously learn the distribution of real passwords from actual password leaks, and to generate high-quality password guesses. Our experiments show that this approach is very promising.
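The idea the PassGAN quote describes, learning the statistical shape of real passwords from leaks and sampling new guesses from it, can be sketched with a much simpler stand-in than a GAN: a character-bigram chain. This is a toy illustration only (the "leak" list is made up), not PassGAN's actual model.

```python
import random
from collections import defaultdict

# Toy stand-in for PassGAN's premise: instead of hand-written mangling
# rules, learn structure from leaked passwords and sample guesses from it.
# A real GAN is far more capable; this bigram chain just makes the idea
# concrete. LEAK is fabricated example data.
LEAK = ["password", "password1", "letmein", "dragon", "sunshine", "iloveyou"]

def train_bigrams(passwords):
    counts = defaultdict(lambda: defaultdict(int))
    for pw in passwords:
        chars = ["^"] + list(pw) + ["$"]  # start/end markers
        for a, b in zip(chars, chars[1:]):
            counts[a][b] += 1
    return counts

def sample(counts, rng, max_len=16):
    out, cur = [], "^"
    while len(out) < max_len:
        nxt = rng.choices(list(counts[cur]), weights=counts[cur].values())[0]
        if nxt == "$":
            break
        out.append(nxt)
        cur = nxt
    return "".join(out)

model = train_bigrams(LEAK)
rng = random.Random(0)
guesses = [sample(model, rng) for _ in range(5)]
print(guesses)  # guesses biased toward patterns seen in the leak
```

A real attack would rank millions of such guesses by model probability and feed them to a cracker like hashcat; the learning step only improves guess ordering, nothing more.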

[–] [email protected] 0 points 3 months ago (1 children)
[–] StaySquared -5 points 3 months ago* (last edited 3 months ago) (2 children)

It requires Deep Learning.

Deep Learning could be used to attempt breaking encryption, but the effectiveness depends on various factors such as the strength of the encryption algorithm and key length. Deep learning, a subset of machine learning, involves training artificial neural networks to learn and make decisions.

AI algorithms, such as machine learning and deep learning, have the potential to automate cryptanalysis and make it more effective, thereby compromising the security of cryptographic systems.

[–] [email protected] 2 points 3 months ago (1 children)
[–] [email protected] 2 points 3 months ago

You know how to tell that it wasn't?

It's using careful hedging language — "could be used to attempt", "have the potential to", "more effective".

AI would just plow through that shit, hallucinating facts like there is no tomorrow.

[–] [email protected] 2 points 3 months ago (1 children)

This is nonsense. Passwords might have an interesting distribution; the key space is flat. There is nothing to learn.

And I hope you didn't mean letting an LLM loose on, say, the AES circuit, and expecting it will figure something out.
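The distinction drawn here can be made concrete by comparing empirical Shannon entropy: human-chosen password characters are heavily skewed (something to learn), while freshly generated key bytes are uniform by design (nothing to learn). A rough sketch, with made-up stand-in data for the password side:

```python
import math
import secrets
from collections import Counter

def shannon_entropy(samples):
    # Empirical entropy in bits per symbol
    counts = Counter(samples)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# Fabricated stand-in for first characters of leaked passwords (skewed)
first_chars = list("ppplsaim1dps")
# Bytes of freshly generated random key material (uniform by design)
key_bytes = list(secrets.token_bytes(100_000))

print(shannon_entropy(first_chars))  # well below the alphabet's maximum
print(shannon_entropy(key_bytes))    # close to the 8-bit maximum per byte
```

A model can exploit the gap between password entropy and the maximum; against a uniform key space there is no gap to exploit.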

[–] StaySquared -3 points 3 months ago* (last edited 3 months ago) (1 children)

I believe that if AI is trainable, you could train it to spread through a network. If it spread through the internet and every device connected to it, it could then be commanded to retrieve all information, or specific information. You could train it not only to spread but also to circumvent security by any means necessary (any and all tools that exist now or later). If that happens...

Enter the all-seeing eye: Skynet.

For now, it's just a conspiracy theory. Every so often I have a moment to think about it and add to it to make it more plausible.

On a pseudo-religious note, AI could potentially be the Antichrist. But that's something for the religious folk.

[–] [email protected] 1 points 3 months ago

So now I'm totally confused. What do you mean by expand?

[–] [email protected] 0 points 3 months ago

No you can't, at least not in the way you think. You crack passwords by trying combinations. AI and machine learning are bad at raw attempts.
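The "trying combinations" point can be illustrated with a minimal guess-and-check loop: hash each candidate and compare against the target digest. The target and alphabet here are hypothetical; ML could at best reorder the candidates, it cannot shortcut the hashing.

```python
import hashlib
from itertools import product

# Hypothetical leaked SHA-256 digest of a weak password
target = hashlib.sha256(b"abc1").hexdigest()

def brute_force(target_hex, alphabet="abc123", max_len=4):
    # Exhaustively try every combination, shortest first
    for length in range(1, max_len + 1):
        for combo in product(alphabet, repeat=length):
            candidate = "".join(combo)
            if hashlib.sha256(candidate.encode()).hexdigest() == target_hex:
                return candidate
    return None

print(brute_force(target))  # prints "abc1"
```

With a realistic alphabet and length the loop count explodes exponentially, which is why guess ordering (wordlists, rules, or learned models) matters and raw compute does the rest.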

[–] [email protected] 3 points 3 months ago* (last edited 3 months ago) (1 children)

It's also trained on data people reasonably expected would be private (private GitHub repos, Adobe Creative Cloud, etc.). Even if it were just public data, it could still be dangerous. E.g., it could be possible to give an LLM a prompt like "give me a list of climate activists, their addresses, and their employers" if it was trained on this data or was good at "browsing" on its own. That's currently not possible due to the guardrails on most models, and I'm guessing they try to avoid training on personal data that's public, but a government agency could make an LLM without these guardrails. That data could be public, but would take a person quite a bit of work to track down compared to the ease and efficiency of just asking an LLM.

[–] [email protected] 2 points 3 months ago

What you are describing is highly specific to a particular AI model.

[–] [email protected] 2 points 3 months ago* (last edited 3 months ago) (1 children)

Wikipedia requires attribution, which AI scrapers never give.

It is "public" work, but under a license.

[–] [email protected] -3 points 3 months ago

Still public data