this post was submitted on 28 Oct 2023
258 points (94.2% liked)

Technology

IBM researchers said a ChatGPT-generated phishing email was almost as effective at fooling people as a human-written version.

all 34 comments
[–] MysticKetchup 140 points 1 year ago (2 children)

IBM researchers said a ChatGPT-generated phishing email was almost as effective at fooling people as a human-written version.

So it's less effective than a regular phishing email?

[–] [email protected] 45 points 1 year ago (2 children)

Yes, but being about the same means ChatGPT could be used to create massive amounts of personalized phishing emails at low cost and in very little time through automation. Basically doing what they do now, but even faster.

[–] dyathinkhesaurus 5 points 1 year ago (2 children)

And with better spelling and punctuation.

[–] [email protected] 8 points 1 year ago (1 children)

No, those 'mistakes' are part of the phishing tactic. They weed out anyone who is paying too much attention to the details.

[–] elbarto777 1 points 1 year ago

Better spelling and punctuation is a bug, not a feature.

Bad spelling = the people who don't notice it are likely easy to fool.

[–] afraid_of_zombies 1 points 1 year ago

I wonder how that would work. The last one I did some checking on had a Bitcoin address, and (I really don't understand Bitcoin well) it looked like the person moved the fake money from account to account over and over again.

[–] [email protected] 27 points 1 year ago (2 children)

And crafting a carefully targeted phishing email took a human team around 16 hours, they wrote, while ChatGPT took just minutes

This is significant because any person with the desire to scam can use ChatGPT from the comfort of their own home over lunch instead of hiring professionals for a few days.

[–] dack 8 points 1 year ago

No, it's significant because attackers can pump out way more emails while also making them customized to their targets and constantly changing to help avoid detectors.

[–] [email protected] 3 points 1 year ago

You don’t need a professional to write a scam email. You just need common sense, to be honest.

[–] [email protected] 40 points 1 year ago* (last edited 1 year ago) (4 children)

And crafting a carefully targeted phishing email took a human team around 16 hours

Ummm what? Back in college, I used to budget 30-45 minutes a page for essays. What the hell are they writing that took a team of people 16 fucking hours for a few paragraphs of text?

[–] [email protected] 28 points 1 year ago (2 children)

A targeted phishing email is usually pretty sophisticated and requires days or weeks of research. For example, you might send an email pretending to be from someone's IT department regarding a hardware audit, and ask a user to report back with the barcode sticker on their laptop, providing them with a photo of an example tag in a similar format. You'll pretend to be a specific individual at the company, or a contractor the company actually uses, show knowledge of the internal software and hardware, and refer to other real employees by name/email to establish trust.

Most of this data will be scraped from publicly available sources like LinkedIn profiles, job listings, and photos shared on social media by employees. This process is called OSINT (open-source intelligence), and it's a fascinating rabbit hole to read about. Targeted phishing attempts are much, much more sophisticated than the ones you'll see in spam email.

[–] IphtashuFitz 6 points 1 year ago

This is pretty much what happened at the company I work for. The assistant to the CEO received an email that appeared like it came from the CEO requesting confidential financial information. The email contained mannerisms of the CEO, was sent when the CEO was out of the office, etc. The assistant almost fell for it… She would have if our mail system didn’t clearly flag external emails so that it’s obvious they weren’t sent internally.
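
That kind of external-email flag is usually a simple rule at the mail gateway that rewrites the subject or injects a banner before delivery. Below is a minimal sketch of the idea in Python; the internal domain (example.com) and the standalone demo are assumptions for illustration, and a real deployment would live in the gateway itself (e.g. a milter or transport rule), not in client code.

```python
# Minimal sketch of an "external sender" flag, the kind of gateway rule
# described above. The internal domain (example.com) is an assumption;
# real systems apply this at the mail gateway, not in client-side code.
from email.message import EmailMessage
from email.utils import parseaddr

INTERNAL_DOMAINS = {"example.com"}  # assumed internal domain(s)

def flag_external(msg: EmailMessage) -> EmailMessage:
    """Prepend a warning tag to the subject if the sender is not internal."""
    _, sender = parseaddr(msg.get("From", ""))
    domain = sender.rsplit("@", 1)[-1].lower() if "@" in sender else ""
    if domain not in INTERNAL_DOMAINS:
        subject = msg.get("Subject", "")
        if "Subject" in msg:
            del msg["Subject"]
        msg["Subject"] = "[EXTERNAL] " + subject
    return msg

if __name__ == "__main__":
    msg = EmailMessage()
    msg["From"] = "Jane Doe <jane.doe@freemail.example.net>"  # spoofed "CEO"
    msg["Subject"] = "Urgent: need the Q3 financials"
    msg.set_content("Please send the confidential report today.")
    print(flag_external(msg)["Subject"])  # [EXTERNAL] Urgent: need the Q3 financials
```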

[–] afraid_of_zombies 1 points 1 year ago

My old employer would get a call every few months from someone pretending to be our client, informing us that we should change the banking information. No one could figure out how they knew there was a business relationship between the two companies, let alone who the financial person at my job was.

[–] monk 20 points 1 year ago

How many people clicked the phishing links in your college papers?

[–] [email protected] 7 points 1 year ago

I guess they mean person-hours, since they're referring to a team. An initial brainstorming session, a review session or two, and 16 hours are quickly gone.

[–] cybersandwich 2 points 1 year ago

What the hell are they writing that took a team of people 16 fucking hours for a few paragraphs of text?

An invoice full of billable hours.

[–] [email protected] 25 points 1 year ago (1 children)

To be honest, phishing emails are so bad that I don't see how any generative AI could fail to do better. Just making fewer than two typos per sentence would be enough.

Someone explained to me that it may be intentional that phishing emails are so bad, as it acts as a pre-filter: you then only spend time and resources dealing with presumably very gullible people.

[–] [email protected] 4 points 1 year ago (1 children)

The typos are intentional. They filter out intelligent recipients who wouldn't fall for the scam.

[–] [email protected] 4 points 1 year ago

The typos have been theorized to be intentional (for that reason), but that isn’t the only theory, and afaik those theories aren’t based off conversations with the people crafting those emails.

It’s also been theorized that phishing emails frequently have typos (intentionally) to lower people’s resistance to well-crafted phishing emails, particularly spear phishing.

There’s also the fact that many phishing emails are crafted by people for whom English is not their first language, and even given that, phishing emails are still better written than spam emails, so it’s quite likely that in many cases it isn’t intentional at all.

[–] Brendan 14 points 1 year ago (2 children)

Looking forward to the day when I use Darktrace’s AI threat detection to stop ChatGPT’s AI generated threats…

What a world we’ve built!

[–] [email protected] 10 points 1 year ago (1 children)

"I call it the phishing buster buster buster"

[–] atrielienz 1 points 1 year ago

It kills me that nobody I know has seen "The Big Hit" and yet everyone knows about the trace buster buster buster.

[–] afraid_of_zombies 1 points 1 year ago

Ok we should just go back to dudes on horseback yelling stuff for money.

Hear ye hear ye, timesheets are due on Friday

[–] [email protected] 12 points 1 year ago (1 children)

Why haven't people learned yet to simply never click a link in an email? Even if it's not malicious, it's still trying to track you.

[–] [email protected] 9 points 1 year ago (1 children)

Images in emails also track you fwiw, as your browser or email client has to send a request to load them. Disable loading images by default.
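
For anyone curious how that works: the tracking usually rides on a tiny remote image whose URL embeds a per-recipient token, so the moment your client fetches it, the sender knows the message was opened. Blocking remote loads is exactly what breaks that callback. Here is a minimal sketch in Python, using made-up markup and a hypothetical tracker URL, that just lists the remote images a client would otherwise fetch on open.

```python
# Sketch of the tracking vector: a 1x1 remote image whose URL embeds a
# per-recipient token. The markup and tracker URL below are made up; the
# point is that any remote <img> load tells the sender you opened the mail.
from html.parser import HTMLParser

class RemoteImageFinder(HTMLParser):
    """Collect remote image URLs that a mail client would fetch on open."""
    def __init__(self):
        super().__init__()
        self.remote_images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src", "")
            if src.startswith(("http://", "https://")):
                self.remote_images.append(src)

html_body = """
<p>Your invoice is attached.</p>
<img src="https://tracker.example.net/open?rcpt=user-12345" width="1" height="1">
"""

finder = RemoteImageFinder()
finder.feed(html_body)
for src in finder.remote_images:
    print("would be fetched on open:", src)
```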

[–] [email protected] 4 points 1 year ago

This is the best summary I could come up with:


(tldr: 2 sentences skipped)

Case in point, IBM researchers posted an internal study that details how they unleashed a ChatGPT-generated phishing email on a real healthcare company to see if it could fool people as effectively as a human-penned one.

(tldr: 2 sentences skipped)

"Humans may have narrowly won this match, but AI is constantly improving," said IBM hacker Stephanie Carruthers wrote of the work.

"As technology advances, we can only expect AI to become more sophisticated and potentially even outperform humans one day."

Given these results and AI chatbots rapidly improving, what can individuals do against this inbox onslaught?

IBM's suggestions ranged from common sense, like calling the purported sender if something looks suspicious, to anemic, like looking out for "longer emails," which they said are "often a hallmark of AI-generated text."

The bottom line, though, is just to use your common sense — and to prepare yourself for an internet that looks set to be rapidly overrun with AI-generated content, malicious or otherwise.


The original article contains 250 words, the summary contains 163 words. Saved 35%. I'm a bot and I'm open source!

[–] Rhoeri 1 points 1 year ago (2 children)

The simple fact that people still fall for phishing scams is a great indicator that we’ve always been going nowhere.

[–] RGB3x3 6 points 1 year ago

Phishing scams are getting really good these days. It's no longer the Nigerian prince-type obvious scams.

They make emails nearly identical to real ones, they're able to fake sender names, they actually use real English.

If you think you wouldn't fall for a phishing email, you're kidding yourself. All it takes is one lapse of judgement while you're too busy to realize an email is fake.

[–] afraid_of_zombies 4 points 1 year ago

Oh please, you can't be 100% mistrustful all the time. Eventually you are going to slip up and assume good faith. This is why it's important to stop the people doing this instead of blaming the victims.

Also, who knows how many people who do fall for these things are mentally disabled.

[–] [email protected] 0 points 11 months ago

Does that show how far ChatGPT has developed, when it can write like a real person?