this post was submitted on 23 Jun 2023
Sysadmin
A community dedicated to the profession of IT Systems Administration.
@adhdplantdev The risk as I understand it is compromised confidentiality. Most AI apps, including ChatGPT, use your prompts in their training sets, which creates a risk of a random end user pasting confidential information into a prompt. I could even argue that under certain compliance/regulatory requirements this would count as an accidental disclosure and require a notice to be sent out.
I suppose it comes down to how much you trust your users. I do think it's going to be very difficult to block out all AI solutions, especially now that there are open-source GPT models. It's a good point about accidentally using confidential information in a prompt, or having the AI recommend code that may be under a toxic license. A massive company probably faces a much higher risk than a smaller one, and it also depends on your company culture. Either way, if you try to block it, I'd expect a fight to unblock it.