this post was submitted on 12 Jan 2025
1157 points (98.1% liked)

memes
[–] Tyfud 17 points 3 days ago (2 children)

It's fake. LLMs don't execute commands on the host machine. They generate text as a response, but they never have access to, or the ability to execute, arbitrary code in their environment.

[–] [email protected] 20 points 3 days ago* (last edited 3 days ago)

Some offerings, like ChatGPT, do actually have the ability to run code, which runs in a sandboxed “virtual machine”.

Which sometimes can be exploited. For example: https://portswigger.net/web-security/llm-attacks/lab-exploiting-vulnerabilities-in-llm-apis

But getting out of that VM will most likely be protected against, so you’d have to find exploits for that as well (e.g. can you get further into the network from that point, etc.).
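
Roughly what that looks like from the host side (just a toy sketch, not OpenAI’s actual implementation; the snippet being run and the sandbox policy are made up for illustration):

```python
# Minimal sketch (not ChatGPT's real code): the model only produces text;
# the *host* application decides to run it, in a separate, restricted process.
import subprocess

# Pretend this is the code the model asked to run via a "tool call".
model_generated_code = "print(2 + 2)"

# The host executes it in a separate interpreter with a timeout.
# A real deployment would add a container/VM, resource limits, no network, etc.
result = subprocess.run(
    ["python3", "-c", model_generated_code],
    capture_output=True,
    text=True,
    timeout=5,
)

# The captured output is fed back to the model as plain text.
print("stdout:", result.stdout)
print("stderr:", result.stderr)
```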

[–] Ziglin 5 points 3 days ago

Some are allowed to, by (I assume) generating some prefix that tells the environment to run the statement that follows. ChatGPT seems to have something similar, but I haven't tested it, and I doubt it runs terminal commands or has root access. I assume it's a funny coincidence that the error popped up right then, or it was indeed faked for some reason.
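
Something like that “prefix” idea could look like this on the host side (purely hypothetical marker and command, just to illustrate; real function-calling APIs typically use structured tool-call messages rather than a plain text prefix):

```python
# Toy illustration of the "prefix" idea (entirely hypothetical): the host
# scans model output for a marker and only then executes the remainder.
import subprocess

model_output = "RUN: echo hello from the sandbox"

PREFIX = "RUN: "
if model_output.startswith(PREFIX):
    command = model_output[len(PREFIX):]
    # No root, no shell: run as an unprivileged process inside a VM/container.
    completed = subprocess.run(
        command.split(),
        capture_output=True,
        text=True,
        timeout=5,
    )
    print(completed.stdout)
else:
    # Anything without the prefix stays plain text and is never executed.
    print(model_output)
```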