this post was submitted on 11 Jul 2023
182 points (98.9% liked)

Lemmy


Everything about Lemmy; bugs, gripes, praises, and advocacy.

For discussion about the lemmy.ml instance, go to [email protected].


Over time, Lemmy instances are going to keep acquiring more and more data. Even in the best case, where they are not caching remote content and are only storing the data posted to communities local to the server, storage requirements will still grow virtually without limit. Eventually, it may no longer be economically feasible to host all of the infrastructure needed to keep expanding the server's storage. What happens at that point? Will servers begin to periodically purge old content? I have concerns that there will be a permanent horizon beyond which old, and still very useful, data ceases to exist (and as Lemmy becomes more popular, the rate of growth in storage requirements will also increase, bringing that horizon closer). Is there any plan to archive this old data?

[–] [email protected] 74 points 1 year ago* (last edited 1 year ago) (8 children)

Pictrs 0.4 recently added support for object storage. This is fantastic, because object storage is dirt cheap compared to traditional block storage (like a VM filesystem). This helps a lot for image storage, which is a large part of the problem, but it's not the whole problem.

I know Lemmy uses Postgres for everything else, but they should really invest time into moving towards something more sustainable for long term/permanent hosting. Paid Postgres services are obscenely upcharged and prohibitively expensive, so that's not an option.

I'm armchair architecting here, so I'm not sure what that would look like for Lemmy (Cloudflare KV? Redis?).

Still, even my own private instance has been growing at a rate of about 700MB per day, and I don't even subscribe to that many things. I can't imagine what the major instances are dealing with. This isn't sustainable unless we want to start purging old data, which will kill Lemmy long term.


EDIT: Turns out ~90% of my Lemmy data is just for debugging and not needed:

https://github.com/LemmyNet/lemmy/issues/3103#issuecomment-1631643416

[–] [email protected] 16 points 1 year ago* (last edited 1 year ago)

I'm not really sure that a K/V service is a more scalable option than Postgres for storing text posts and the like. If you're not performing complex queries or requiring microsecond latencies then Postgres doesn't require that much compute or memory.

People can get unnecessarily scared of relational databases if they've had bad experiences with databases that were used poorly, but attempting to force relational data into a K/V store can lead to the application layer essentially doing a less efficient job of the same kinds of queries the database would normally handle. Maybe there'll be some future need to offload post and comment bodies into object storage or something, but that seems incredibly premature.
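
To make that concrete, here's a rough sketch with a made-up schema (not Lemmy's actual tables or keys) of what a single relational query turns into when the application has to stitch the same data together out of a K/V store:

```python
# Purely illustrative; table, key, and field names are hypothetical.

# With Postgres, the database handles the join, ordering, and limit in one query:
sql = """
SELECT c.body, u.name
FROM comment AS c
JOIN person AS u ON u.id = c.creator_id
WHERE c.post_id = %s
ORDER BY c.published DESC
LIMIT 50
"""

# With a K/V store, the application re-implements the same work,
# usually with many more round trips:
def comments_for_post(kv, post_id):
    comment_ids = kv.get(f"post:{post_id}:comments") or []        # 1 round trip
    comments = [kv.get(f"comment:{cid}") for cid in comment_ids]  # N round trips
    for c in comments:
        c["author"] = kv.get(f"person:{c['creator_id']}")         # N more round trips
    comments.sort(key=lambda c: c["published"], reverse=True)     # sort/limit in app code
    return comments[:50]
```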

Object storage for pictrs is definitely a fantastic addition, though.

[–] teolan 10 points 1 year ago (3 children)

Is the 700MB just the Postgres data, or everything including the images?

I'm under the impression that text should be very cheap to store inside Postgres.

[–] [email protected] 6 points 1 year ago (1 children)

Keep in mind that you are also storing metadata for each post (e.g. creation time), relations (e.g. which user posted it), and indexes.

Might not be much now but these things really add up over the years.

[–] [email protected] 3 points 1 year ago

On average, 500MB is Postgres, 200MB is Pictrs thumbnails. Postgres is growing faster than Pictrs is.

[–] [email protected] 2 points 1 year ago

My local instance that I run for myself is about a week old. It has 2.5G in pictrs and 609M in Postgres. One of those things that'll vary for every setup.

[–] [email protected] 6 points 1 year ago (1 children)

The largest table holds data that is only needed by Lemmy briefly. There is a scheduled job to clear it... Every 6 months. There are active discussions on how best to handle this.

On my instance I've set a cronjob to delete everything but the most recent 100k rows of that table every hour.
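
In case it helps anyone, here's a minimal sketch of what that kind of hourly trim job could look like, assuming the oversized table is the `activity` table discussed in the issue linked above and that its `id` column only ever increases. This isn't an official Lemmy maintenance script, so adjust names and credentials for your own setup:

```python
# Hypothetical hourly trim job (e.g. run from cron). Assumes the oversized
# table is `activity` with a monotonically increasing `id`, and that the
# psycopg2 driver is installed; not an official Lemmy maintenance script.
import psycopg2

KEEP_ROWS = 100_000

conn = psycopg2.connect("dbname=lemmy user=lemmy")
with conn, conn.cursor() as cur:
    # Delete everything except roughly the most recent KEEP_ROWS rows (by id).
    cur.execute(
        "DELETE FROM activity WHERE id < (SELECT max(id) - %s FROM activity)",
        (KEEP_ROWS,),
    )
conn.close()
```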

[–] [email protected] 3 points 1 year ago (7 children)

I saw that issue, and then I saw people having problems after clearing it, so I'm just going to wait until they figure that out in a stable version. Looking forward to it though!

[–] [email protected] 4 points 1 year ago

It would be even better if it could also leverage IPFS. Then we would have unique identifiers per media object, and hence deduplication across a P2P network, which in my opinion is a better fit for federation. I have been thinking of building such an alternative media backend for a while.

[–] [email protected] 3 points 1 year ago (1 children)

There is a good writeup on how to do the migration here. I went through it myself since I host my tiny Lemmy instance on an AWS EC2 instance. It went pretty smoothly, but obviously larger instances will have to take a longer downtime to perform the migration.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (4 children)

Hey, that's a Vultr guide! I use Vultr, thanks!

By the way, how are your costs on EC2? My understanding is that hosting on EC2 would be cost prohibitive from data transfer costs alone, not to mention their monthly rates for instances are pretty much always above the cost of a comparable VPS.

Now if only someone could do this for the Postgres data. I wonder if S3FS would be able to handle the load of a running database; that would be a nice way to save costs.

[–] [email protected] 3 points 1 year ago

AWS Postgres instances aren't that expensive, and they handle upgrades and backups for you.

That said, I'm interested in distributed storage, and maybe this fall/winter when I get some time off I'll try making a Lemmy fork that's based on a distributed hash table. There are going to be a ton of issues (e.g. data will be immutable), but I have a few ideas on how to mitigate most of the ones I know about.

[–] [email protected] 3 points 1 year ago (2 children)

Isn't it mostly pictures and videos taking up space? Posts and comments that are just text don't take up much.

I would be fine with text being kept forever while pictures and videos get deleted after a while.

[–] WhatASave 6 points 1 year ago

Just think of all those old, helpful forum posts from years past with TinyPic and Photobucket links that are now dead. I agree memes can probably die off over time, but anything informative would be bad to lose, imo.

[–] [email protected] 5 points 1 year ago

For large instances, pictures are probably the bigger consumer of space, but for small instances the database size is the bigger issue because of federation. Also, mass storage for media is cheap; fast storage for databases is not. With my host I can get 1TB of object storage for $5 a month, while attached NVMe storage is $1 per month per 10GB.
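
At those rates, 1TB of attached NVMe works out to about $100 a month versus $5 a month for the same amount of object storage, roughly a 20x difference.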

For my small instance the database is almost 4x as large as pictrs, and growing fast.

[–] [email protected] 2 points 1 year ago

I'm in a similar boat: I'm gaining about 300 MB/day on my small instance, which doesn't yet have any local communities.

[–] [email protected] 27 points 1 year ago (1 children)

One way to approach the geometric storage growth would be to not cache everything everywhere all at once. With 1000+ instances, storing an object on just a few of them would be fine if the others can pull it in on demand. You could use typical caching methodology like access frequency, aging, etc.
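
As a rough illustration of the kind of eviction policy I mean (the field names and thresholds here are made up, not anything Lemmy actually implements):

```python
# Illustrative eviction rule for cached remote objects; field names and
# thresholds are hypothetical, not part of Lemmy's actual schema or code.
from datetime import datetime, timedelta

MAX_AGE = timedelta(days=90)   # how long since the object was last accessed
MIN_HITS = 5                   # how often it has been requested locally

def can_evict(obj, now=None):
    """True if a cached remote object can be dropped and re-fetched on demand."""
    now = now or datetime.utcnow()
    if obj["is_local"]:
        return False  # never purge content that originated on this instance
    stale = now - obj["last_accessed"] > MAX_AGE
    unpopular = obj["hit_count"] < MIN_HITS
    return stale and unpopular
```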

[–] [email protected] 4 points 1 year ago

This is a great idea. Instances will eventually need to agree on common storage areas, even if they don't all allow the same content. The savings would be huge in the long run.

[–] [email protected] 22 points 1 year ago

Premature optimization is not good. Content here is not very storage intensive, so I would not make an issue of it yet. Postgres can handle billions of rows when indexed right.

[–] [email protected] 21 points 1 year ago (5 children)

Sounds like the federated instances should consider opening up for donations and paid features like comment awards and animated shit like Discord or Reddit.

[–] Kalcifer 21 points 1 year ago (5 children)

I worry that these sorts of things would end up turning the site into a popularity contest (or, well, more of a popularity contest than these sorts of sites already are; that being said, I'm quite proud of Lemmy currently, as it appears to be resisting that). Also, I'm not entirely sure how things like paid comment awards would work with everything being federated.

[–] [email protected] 11 points 1 year ago (1 children)

Yeah, no. No to paid features. That's how we ended up with reddit.

[–] [email protected] 3 points 1 year ago

No, we ended up with Reddit because of stupid top-down leadership decisions. I have no problem with Reddit's awards and whatnot; I have a problem with them kicking out third-party apps and limiting the mobile web experience.

If you don't like an instance's profit model, just move to another one or host your own. There are plenty to choose from, so competition will keep away the worst of it.

My preference is for paying subscribers to get access to new features early and to get more of limited features (e.g. if everyone gets X awards to distribute per day, subs get 2X or something), but for features to eventually make their way to everyone else. This allows A/B testing in an interesting way while funding the project. Since the project is FOSS, individual instances can decide to roll them out to everyone or block usage of these optional features from other instances.

If instance admins will be paying hundreds per month for hosting, they should have some way outside of donations to recoup that cost (plus their time spent). I doubt anyone will get rich from it, but hopefully it'll be enough to help admins offset the costs.

[–] livedeified 10 points 1 year ago

I'd be ok with something like a "donor" flair.

[–] [email protected] 3 points 1 year ago

I think we could do some kind of profile badge if you donated. That way it wouldn't influence the way people vote; you'd only see it if you clicked on their profile.

[–] Yoz 15 points 1 year ago

Maybe the lemmy.world admins can answer this? @michelleG

[–] [email protected] 5 points 1 year ago (5 children)

@Kalcifer

The long-term solution is something like IPFS object storage that's read-only for everyone but the author instance. One copy of the data, but all instances can read it, and it's stored forever on a redundant medium with bitrot protection.
