Firstly, I really do apologise!
CompuVerse has been down for several hours now, and I'm really sorry.
I'll be 100% transparent on this: the server ran out of disk space!
This sent it into a sort of "safe mode", which unfortunately took down CompuVerse.
I have migrated all image storage over to an external server with considerably more storage.
However, even with this, just the text content and metadata for CompuVerse are still rather large.
I'm going to investigate solutions, but I'm averse to running a DB server over mounted storage, as that is very susceptible to latency and throughput problems.
~~By my estimates we have around 2 months' worth of disk space remaining now that Pictrs is removed. In the meantime I'm looking for solutions that balance performance and cost.~~
Edit: The PostgreSQL database has been moved. Please read https://compuverse.uk/post/277751 for more info.
I've seen others mention how much storage Lemmy uses. Frankly their numbers seem staggering to me. Gigabytes per day? Of mostly text? That seems an unsustainable rate, ripe for optimization. I'm really hoping there will be something done to minimize that in the next couple of months.
If you end up needing donations, don't be afraid to ask. I can't speak for others, but I have no problem paying for services I like to use.
Hi Steve, many thanks for this.
The storage required for Lemmy is indeed quite significant!
I have no doubt further optimisations could be done there, but at present it's an unfortunate truth that Lemmy likes its storage!
So far it's eaten 21GB of database space in less than 2 months, and again, that's all text!
I'm a software developer, and the largest production database I've ever managed was only around 100GB after some 15 years of use.
Posts and messages from hundreds of thousands of users globally just take a lot of space, I guess, haha! (Though it would be nice if Lemmy only cached remote posts for, say, a day or a week, then wiped the content from its local cache and called out to the remote server thereafter.)
I'm very averse to asking for donations! I don't want anyone to have to pay to access CompuVerse, and donations, whilst greatly appreciated, would feel wrong to take. Plus there are further complications regarding how to receive said donations, any taxes involved, etc., which I quite frankly can't be bothered to deal with haha!
I really do want to avoid taking donations if at all possible :)
So every instance of Lemmy has a copy of every other instance, used for caching and faster access? That's a surprising design choice.
Does the data get copied to your server when one of the CompuVerse users views it, or does everything from all of Lemmy get copied without any interaction?
From what I gather, the way it works is that once a user here has interacted with a remote community and subscribed to it, all activity that happens in that community is automatically pulled across and stored on this instance as well.
So if I subscribed to the "technology" community on "lemmy.world", every post, comment, edit, and deletion, and I believe even votes, made afterwards will be synchronised across to this instance.
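For anyone curious, you can get a rough view of which remote communities this instance follows straight from the database. This is just a sketch; the table and column names (`community`, `community_follower`, `local`, `actor_id`) assume the stock Lemmy schema and may vary between versions:

```sql
-- List remote communities that at least one local user follows.
-- Table/column names assume the stock Lemmy schema.
SELECT DISTINCT c.name, c.actor_id
FROM community c
JOIN community_follower cf ON cf.community_id = c.id
WHERE c.local = false
ORDER BY c.name;
```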
Checking the database, however, it actually seems that the post and comment data etc. isn't too large.
The majority of the storage is taken by the "activity" table.
From what I can see, this table basically stores a log of everything the server has been told about. The actual contents are extracted into other tables. (Which are only a couple of hundred MB, rather than 20GB!)
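If anyone wants to check this on their own instance, PostgreSQL's built-in catalogue functions will show where the space is going; something like:

```sql
-- Show the ten largest tables (including indexes and TOAST data).
SELECT relname AS table_name,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 10;
```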
Lemmy does have an automatic cleanup of this table, but it only removes content older than 6 months.
Since we only started in June, we've got an entire 4 months' more data to come, and the way things are going, that's liable to total potentially hundreds of gigabytes.
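In theory the table could be trimmed more aggressively by hand. Something like the sketch below, assuming the stock schema's `published` timestamp column on the `activity` table; I haven't tested what side effects this might have, so take a backup first:

```sql
-- Hypothetical aggressive trim: keep only the last 30 days of activity
-- instead of Lemmy's built-in 6-month retention.
-- Assumes the stock "published" timestamp column; back up first!
DELETE FROM activity
WHERE published < now() - interval '30 days';
```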
The activity table had more data in it from the last 2 weeks than it did for the entire month and a half preceding! (12 million activity records in 2 weeks, compared to only 11 million between the start of June and 2 weeks ago.)
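(Those figures came from a quick count; a query like this should reproduce them, again assuming the stock `published` column:)

```sql
-- Count activity rows per week to see how the growth rate is trending.
SELECT date_trunc('week', published) AS week,
       count(*) AS activity_rows
FROM activity
GROUP BY week
ORDER BY week;
```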