this post was submitted on 19 Jul 2024
632 points (98.5% liked)

Technology


IT administrators are struggling to deal with the ongoing fallout from the faulty CrowdStrike update. One spoke to The Register to share what it is like at the coalface.

Speaking on condition of anonymity, the administrator, who is responsible for a fleet of devices, many of which are used within warehouses, told us: "It is very disturbing that a single AV update can take down more machines than a global denial of service attack. I know some businesses that have hundreds of machines down. For me, it was about 25 percent of our PCs and 10 percent of servers."

He isn't alone. One administrator on Reddit reported that 40 percent of servers were affected, and that 70 percent of client computers, approximately 1,000 endpoints, were stuck in a boot loop.

Sadly, for our administrator, things are less than ideal.

Another Redditor posted: "They sent us a patch but it required we boot into safe mode.

"We can't boot into safe mode because our BitLocker keys are stored inside of a service that we can't log in to because our AD is down."

[–] [email protected] 18 points 4 months ago (3 children)

It might be CrowdStrike's fault, but maybe this will motivate companies to adopt better workflows, with an actual preproduction stage where these sorts of updates are tested before they go live on the rest of the systems.
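A minimal sketch of what that could look like: ring-based promotion, where an update soaks on expendable canaries before touching the wider fleet. The ring names, hostnames, `deploy()`, and `healthy()` below are hypothetical stand-ins for whatever fleet-management tooling is actually in place.

```python
import time

# Hypothetical rings: names and hostnames are placeholders.
RINGS = [
    ("canary", ["lab-vm-01", "lab-vm-02"]),               # expendable test boxes
    ("early",  ["warehouse-pc-001", "warehouse-pc-002"]),
    ("broad",  ["warehouse-pc-003"]),                     # rest of the fleet
]

def deploy(update_id: str, hosts: list[str]) -> None:
    """Stub: push the update via whatever management tooling exists."""
    print(f"deploying {update_id} to {len(hosts)} host(s)")

def healthy(hosts: list[str]) -> bool:
    """Stub: True only if every host still boots and checks in."""
    return True

def staged_rollout(update_id: str, soak_minutes: int = 60) -> None:
    for ring, hosts in RINGS:
        deploy(update_id, hosts)
        time.sleep(soak_minutes * 60)   # let the ring soak before promoting
        if not healthy(hosts):
            raise RuntimeError(f"{update_id} broke ring '{ring}'; rollout halted")

staged_rollout("channel-update-2024-07-19")  # example invocation
```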

[–] EnderMB 19 points 4 months ago* (last edited 4 months ago) (3 children)

I know people at big tech companies who work on client engineering, where this downtime has huge implications. Naturally, they've called a sev1, but instead of dedicating resources to fixing these issues, the teams are basically bullied into working insane hours to manually patch machines while clients scream at them. One dude worked 36 hours straight because his manager outright told him "you can sleep when this is fixed", as if he's responsible for CrowdStrike...

Companies won't learn. It's always a calculated risk, and much of the fallout of that risk lies with the workers.

[–] [email protected] 8 points 4 months ago

That dude should not have put up with that.

[–] [email protected] 6 points 4 months ago (1 children)

Sounds so illegal that it would make the labour authority happy.

[–] EnderMB 2 points 4 months ago (1 children)

Is it illegal? I'm not American, so I have no idea whether there are laws in your country capping on-call hours.

[–] [email protected] 3 points 4 months ago
  1. It's not about on-call; they are literally in the office.
  2. See 1.
  3. Not sure about America, but it is very illegal in Russia.
[–] [email protected] 1 points 4 months ago (1 children)

That comment about sleep... that's about where I tell them to go fuck themselves. I'll find a new job; I'm not going to put up with bullshit like that.

[–] Randelung 9 points 4 months ago

Oh sweet summer child.

[–] cheetah_cheetos 8 points 4 months ago

Might be hard to do. CrowdStrike releases several updates per day to its channel files to match changes in adversarial behaviour. In this case, business continuity planning (BCP) and backups are what need to be in place.
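For what it's worth, the widely circulated manual fix was exactly that kind of out-of-band cleanup: boot into safe mode or WinRE and delete the faulty channel file (C-00000291*.sys). A rough sketch, assuming the affected Windows volume is already mounted and writable:

```python
from pathlib import Path

# Assumption: the affected Windows volume is mounted at this path
# (e.g. from WinRE or a rescue OS); adjust to the real mount point.
SYSTEM_ROOT = Path("C:/Windows")
channel_dir = SYSTEM_ROOT / "System32" / "drivers" / "CrowdStrike"

for bad_file in channel_dir.glob("C-00000291*.sys"):
    print(f"deleting {bad_file}")
    bad_file.unlink()
```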