Wow, that lasted for what, just under a year? Impressive; it took them less than a year to lose their morals.
I remember a few years ago when Google or AWS staff rebelled because the Pentagon was going to use their (hosting?) services. I guess those types of people at OpenAI have either been let go or beaten into submission. I understand the feeling of futility when, no matter how much effort you put into something, someone above you with little to no understanding of what they're doing won't listen to your advice or recommendations.
Hit the nail on the head. It doesn't matter what the software/tech people say; business will do whatever the fuck it wants. Have a moral objection? Guess what: in the lucky case you're moved to an irrelevant project and your career trajectory immediately stalls. Otherwise it's insubordination, failure to perform your job duties, there's the door.
For a lot of us software people (myself included), I've made the decision in the past of: "Well, they're going to build it anyway, I might as well try to enforce what I can from my level here." I know 100% I've said things weren't technically possible at key junctures when they started crossing moral lines. "Sorry, I just don't know of a way to make that happen technically." They can think I'm stupid; I don't care.
"Mo-rals"? 404 not found.
The canary is dead.
When I see this sort of thing, I immediately remember something that I learned from discourse analysis: look at what is said and what is not said.
OpenAI knows that military and warfare work is profitable and unpopular. So how do you profit from it without getting the associated bad rep ("OpenAI has blood on its hands!")? Do it as quietly as possible, and cover it with an explanation that the new policy is "clearer" for you.
I viewed OpenAI as merely a disturbance and an annoyance until now.
You realize, of course, this means war.