"Employees enter classified correspondences or use the bot to optimize proprietary code. Given that ChatGPT's standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials."
As I recall, the current advice from pretty much everyone is: don't give ChatGPT or any other LLM sensitive information, ever. Even without stolen credentials, these models like to regurgitate information they've seen before, e.g. the "grandma reading you Windows keys to fall asleep" jailbreak.