
[–] [email protected] 26 points 2 months ago (10 children)

Assuming these have a fairly impressive 100 MB/s sustained write speed, it's going to take about 93 hours to write the whole contents of the disk - basically four days. That's a long time to replace a failed drive in a RAID array; you'd need multiple disks of redundancy in case another one fails while you're resilvering the first.
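A quick back-of-the-envelope sketch of that arithmetic (the capacity is an assumption inferred from the 93-hour figure, not a spec from the article):

```python
# Rebuild time = drive capacity / sustained write speed.
# CAPACITY_TB is an assumed value implied by the 93-hour figure above,
# not a number taken from the article.
CAPACITY_TB = 33.5       # assumed capacity, decimal terabytes
WRITE_SPEED_MB_S = 100   # assumed sustained write speed, MB/s

seconds = (CAPACITY_TB * 1e12) / (WRITE_SPEED_MB_S * 1e6)
hours = seconds / 3600
print(f"{hours:.0f} hours (~{hours / 24:.1f} days)")  # 93 hours (~3.9 days)
```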

[–] [email protected] 8 points 2 months ago (1 child)

That's a pretty common failure scenario in SANs. If you buy a bunch of drives, they're almost guaranteed to come from the same batch, meaning they're likely to fail around the same time. The extra load of a rebuild can kill drives that are already close to failure.

Which is why SANs have hot spares that can be allocated instantly on failure. You should also use a RAID level with enough redundancy to meet your reliability needs. And RAID is not backup; you should have backups too.
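As a rough illustration of why the redundancy level matters, here's a sketch of the odds of a second failure landing inside a multi-day rebuild window. The failure rate and array size are made-up example numbers, and it assumes independent failures, which the same-batch effect above makes optimistic:

```python
# Probability that at least one more drive fails during the rebuild,
# assuming independent failures at a fixed annualized failure rate (AFR).
# Same-batch drives and rebuild stress correlate failures in practice,
# so treat this as a lower bound. All numbers are illustrative.
AFR = 0.02             # assumed 2% annualized failure rate per drive
REBUILD_DAYS = 4       # roughly the 93-hour rebuild from above
REMAINING_DRIVES = 11  # e.g. a 12-wide group with one drive already dead

# Per-drive chance of failing during the rebuild window.
p_one = 1 - (1 - AFR) ** (REBUILD_DAYS / 365)
# Chance that at least one of the surviving drives fails.
p_any = 1 - (1 - p_one) ** REMAINING_DRIVES
print(f"~{p_any:.2%} chance of a second failure mid-rebuild")
```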

[–] kalleboo 1 point 2 months ago

Also why you need to schedule periodic parity scrubs: then the "extra load of a rebuild" is exercised regularly, so weak drives are found long before a rebuild is actually needed.
