This post was submitted on 12 Jan 2025.
It's fake. LLMs don't execute commands on the host machine. They generate text as a response, but never have access to, or the ability to execute, arbitrary code in their environment.
Some offerings, like ChatGPT, do actually have the ability to run code, which runs in an isolated “virtual machine”.
That sandbox can sometimes be exploited. For example: https://portswigger.net/web-security/llm-attacks/lab-exploiting-vulnerabilities-in-llm-apis
But escaping the VM will most likely be guarded against, so you'd have to find exploits for that as well (e.g. can you get further into the network from that point?).
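As a rough illustration of the idea, here's a minimal sketch (not OpenAI's actual implementation, which isn't public): the host service runs model-generated code in a throwaway subprocess with a timeout, standing in for the real isolated environment. Note that a bare subprocess alone is *not* a security boundary; real services use containers or VMs (gVisor, Firecracker, etc.).

```python
import subprocess

def run_untrusted_code(code: str, timeout_s: int = 5) -> str:
    """Execute model-generated Python in a separate process.

    Toy stand-in for the isolated "virtual machine" a real
    service would use; illustrates the flow, not the hardening.
    """
    try:
        result = subprocess.run(
            ["python3", "-c", code],   # run the snippet in a fresh interpreter
            capture_output=True,
            text=True,
            timeout=timeout_s,         # kill runaway code
        )
        return result.stdout or result.stderr
    except subprocess.TimeoutExpired:
        return "error: execution timed out"

print(run_untrusted_code("print(2 + 2)"))  # -> 4
```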
Some are allowed to, by (I assume) generating some prefix that tells the environment to run the statement that follows. ChatGPT seems to have something similar, but I haven't tested it, and I doubt it runs terminal commands or has root access. I assume it's either a funny coincidence that the error popped up right then, or it was indeed faked for some reason.
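A minimal sketch of that "prefix" idea, with an entirely hypothetical `RUN:` marker (real systems use structured tool/function-call messages rather than a plain text prefix): the host scans the model's output, and only lines carrying the marker are handed to the execution environment; everything else stays ordinary text.

```python
import subprocess

# Hypothetical marker; real systems use structured tool-call
# messages, not a plain text prefix like this.
RUN_PREFIX = "RUN:"

def execute(code: str) -> str:
    # Stand-in for the isolated execution environment.
    result = subprocess.run(
        ["python3", "-c", code],
        capture_output=True, text=True, timeout=5,
    )
    return result.stdout or result.stderr

def dispatch(model_output: str) -> None:
    """Treat marked lines as commands for the environment; echo the rest."""
    for line in model_output.splitlines():
        if line.startswith(RUN_PREFIX):
            print(execute(line[len(RUN_PREFIX):].strip()))
        else:
            print(line)

dispatch("Sure, let me compute that.\nRUN: print(40 + 2)")
```

Either way, the model itself never touches the host: it only emits text, and it's the surrounding service that decides whether any of that text gets executed, and where.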