this post was submitted on 24 Oct 2024
85 points (100.0% liked)


With everything that's happening there, I was wondering if it would be possible to archive the Internet Archive itself. Obviously its size is massive, but I'm sure there's a ton of duplicated stuff. Also, some things are more important to preserve than others, and some things are already preserved elsewhere (Anna's Archive, Libgen, and Z-Lib come to mind as places that could preserve books if the IA disappeared).

But how could things actually get archived from the IA (assuming it's possible at all), both on a personal level (i.e., I want to grab a copy of a particular Wayback snapshot) and on a wider, community scale? Are there people already working on it? If not, what would be the best way to get started, at least in theory?
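
For the personal-level case, something like this is what I have in mind: a rough sketch against the Wayback Machine's public CDX API (Python with requests; example.com, the capture limit, and the output file name are just placeholders I picked):

```python
# Rough sketch: list Wayback Machine captures of a URL via the public CDX API,
# then download one capture. "example.com" is only a placeholder.
import requests

CDX_ENDPOINT = "https://web.archive.org/cdx/search/cdx"


def list_captures(url, limit=5):
    """Return (timestamp, original_url) pairs for up to `limit` captures."""
    params = {"url": url, "output": "json", "limit": limit}
    rows = requests.get(CDX_ENDPOINT, params=params, timeout=30).json()
    if not rows:
        return []
    header, entries = rows[0], rows[1:]          # first row holds the field names
    ts, orig = header.index("timestamp"), header.index("original")
    return [(row[ts], row[orig]) for row in entries]


def download_capture(timestamp, original_url, outfile):
    """Save one capture; the `id_` flag asks for the unmodified archived page."""
    snap_url = f"https://web.archive.org/web/{timestamp}id_/{original_url}"
    resp = requests.get(snap_url, timeout=60)
    resp.raise_for_status()
    with open(outfile, "wb") as f:
        f.write(resp.content)


if __name__ == "__main__":
    captures = list_captures("example.com")
    for ts, orig in captures:
        print(ts, orig)
    if captures:
        download_capture(*captures[0], outfile="snapshot.html")
```

For anything bigger than single snapshots, the official internetarchive Python package (which installs the `ia` command-line tool) can download whole items and collections by identifier, which seems like the more realistic route for community-scale mirroring.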

And, in your opinion, what are the most important things to prioritize if the IA were about to disappear and we only had so much time and storage to work with?

JusticeForPorygon 20 points 2 months ago

The Internet Archive is supposedly over 99 petabytes in size. That's an unfathomable amount of data.

[email protected] 12 points 2 months ago

I think it's actually about 150 PB of data, which is then also stored geo-redundantly in the US and the Netherlands. That sounds like a lot, but I think it would be possible to distribute that amount of data.
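
Rough back-of-envelope (my assumed numbers, not anything official from the IA):

```python
# Back-of-envelope: how many volunteers it would take to hold ~150 PB,
# assuming each one pins a slice on commodity drives. All numbers are guesses.
TOTAL_TB = 150 * 1000          # ~150 PB expressed in TB
REPLICAS = 3                   # assumed redundancy factor across the swarm

for per_volunteer_tb in (4, 10, 20):
    per_copy = TOTAL_TB / per_volunteer_tb
    print(f"{per_volunteer_tb:>2} TB each: {per_copy:>8,.0f} volunteers per copy, "
          f"{per_copy * REPLICAS:>9,.0f} for {REPLICAS}x redundancy")
```

So tens of thousands of people each contributing a drive: a big ask, but not obviously out of reach for a coordinated volunteer effort.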