It looks like the lack of persistent storage for the federated activity queue is leading to instances running out of memory in a matter of hours. See my comment for more details.

Furthermore, this leads to data loss, since there is no other consistency mechanism. I think this might be a high-priority issue, given the current momentum behind Lemmy's growth...
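
To make the failure mode concrete, here's a minimal sketch of one way the queue could be persisted, using a plain append-only log. Everything here is illustrative (the names, the format, the lack of compaction); it is not Lemmy's actual code, just the shape of the idea:

```rust
// Hypothetical sketch: a write-ahead log for the outgoing activity queue,
// so queued activities survive a restart instead of living only in memory.
// All names here are illustrative; this is not Lemmy's actual code.
use std::collections::VecDeque;
use std::fs::{File, OpenOptions};
use std::io::{BufRead, BufReader, Write};

struct PersistentQueue {
    log: File,                  // append-only log on disk
    pending: VecDeque<String>,  // in-memory view of not-yet-delivered activities
}

impl PersistentQueue {
    fn open(path: &str) -> std::io::Result<Self> {
        // Replay any activities that were queued before the last shutdown/crash.
        let pending = match File::open(path) {
            Ok(f) => BufReader::new(f).lines().collect::<Result<VecDeque<_>, _>>()?,
            Err(_) => VecDeque::new(),
        };
        let log = OpenOptions::new().create(true).append(true).open(path)?;
        Ok(Self { log, pending })
    }

    fn push(&mut self, activity_json: &str) -> std::io::Result<()> {
        // Durably record the activity before acknowledging it.
        writeln!(self.log, "{activity_json}")?;
        self.log.sync_data()?;
        self.pending.push_back(activity_json.to_owned());
        Ok(())
    }

    fn pop(&mut self) -> Option<String> {
        // Delivery tracking and log compaction are omitted for brevity.
        self.pending.pop_front()
    }
}

fn main() -> std::io::Result<()> {
    let mut q = PersistentQueue::open("activity_queue.log")?;
    q.push(r#"{"type":"Create","object":"comment"}"#)?;
    while let Some(a) = q.pop() {
        println!("deliver: {a}");
    }
    Ok(())
}
```

The log would need periodic compaction once activities are confirmed delivered, but even this naive version survives a restart without losing the queue.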

[–] phiresky 4 points 2 years ago (1 children)

A large part of the queue going unbounded is due to the retry queue and missing checks for whether the receiving servers are actually available. The quick fix is disabling the retry queue, which is currently keeping it from growing unbounded on lemmy.world.

Storing the queue persistently is somewhat of a separate issue, since that doesn't much affect whether or not it can be processed in time.

Also, a ton of the memory use was (and is) due to inefficient SQL queries.
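
Rough sketch of the two fixes combined: a capped retry queue plus a liveness check before retrying. All names are illustrative, not the actual lemmy_server code:

```rust
// Hypothetical sketch: cap the retry queue and skip retries to instances
// that look dead, so it can't grow unbounded. Names are illustrative.
use std::collections::VecDeque;
use std::time::{Duration, Instant};

const MAX_RETRIES: u32 = 5;
const MAX_QUEUE_LEN: usize = 10_000;

struct Retry {
    activity: String,
    target: String,
    attempts: u32,
    next_try: Instant,
}

struct RetryQueue {
    items: VecDeque<Retry>,
}

impl RetryQueue {
    fn schedule(&mut self, mut r: Retry) {
        r.attempts += 1;
        if r.attempts > MAX_RETRIES || self.items.len() >= MAX_QUEUE_LEN {
            // Bounded: drop instead of growing without limit.
            eprintln!("dropping activity for {}", r.target);
            return;
        }
        // Exponential backoff between attempts.
        r.next_try = Instant::now() + Duration::from_secs(60) * 2u32.pow(r.attempts);
        self.items.push_back(r);
    }

    fn run_due(&mut self, is_alive: impl Fn(&str) -> bool, deliver: impl Fn(&Retry) -> bool) {
        let now = Instant::now();
        for _ in 0..self.items.len() {
            let Some(r) = self.items.pop_front() else { break };
            if r.next_try > now {
                self.items.push_back(r); // not due yet
            } else if !is_alive(&r.target) || !deliver(&r) {
                // Unreachable or failed again: reschedule with backoff.
                self.schedule(r);
            }
            // Delivered items are simply dropped from the queue.
        }
    }
}

fn main() {
    let mut q = RetryQueue { items: VecDeque::new() };
    q.schedule(Retry {
        activity: "{}".into(),
        target: "other.instance".into(),
        attempts: 0,
        next_try: Instant::now(),
    });
    q.run_due(|_| false, |_| false); // not due yet: stays queued for its backoff window
}
```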

[–] DreadTowel 2 points 2 years ago* (last edited 2 years ago)

I guess that works as an emergency measure. Persistent storage doesn't affect whether the updates are processed in time, but it would act as a sort of swap to keep the memory usage manageable.

For scalability, perhaps you could run Dijkstra's algorithm and route the updates along the shortest path to each federated node, in a multicast sort of way? That would make the updates scale as O(log N), provided that activity isn't too centralised. It would also be great to run periodic "deep scrubs" between instances to sync up each other's activities and provide actual eventual consistency. I guess that's a fairly liberal interpretation of ActivityPub, but I think it's the only way to ensure real scalability.
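
Here's a toy sketch of that routing idea: run Dijkstra once from the publishing instance, invert the resulting shortest-path tree, and have each instance forward an activity only to its children. It assumes global knowledge of the instance graph, with made-up latencies, so it's only meant to illustrate the fan-out:

```rust
// Hypothetical sketch: compute a shortest-path tree with Dijkstra's
// algorithm, then have each instance forward an activity only to its
// children in the tree instead of the origin contacting every instance.
use std::cmp::Reverse;
use std::collections::{BinaryHeap, HashMap};

/// Dijkstra from `src`; returns each node's parent in the shortest-path tree.
fn shortest_path_tree(adj: &[Vec<(usize, u64)>], src: usize) -> Vec<Option<usize>> {
    let n = adj.len();
    let mut dist = vec![u64::MAX; n];
    let mut parent = vec![None; n];
    let mut heap = BinaryHeap::new();
    dist[src] = 0;
    heap.push(Reverse((0u64, src)));
    while let Some(Reverse((d, u))) = heap.pop() {
        if d > dist[u] {
            continue; // stale heap entry
        }
        for &(v, w) in &adj[u] {
            if d + w < dist[v] {
                dist[v] = d + w;
                parent[v] = Some(u);
                heap.push(Reverse((dist[v], v)));
            }
        }
    }
    parent
}

fn main() {
    // 5 instances; edges are (neighbor, latency). Weights are made up.
    let adj = vec![
        vec![(1, 1), (2, 4)],         // 0
        vec![(0, 1), (2, 1), (3, 5)], // 1
        vec![(0, 4), (1, 1), (4, 1)], // 2
        vec![(1, 5)],                 // 3
        vec![(2, 1)],                 // 4
    ];
    let parent = shortest_path_tree(&adj, 0);
    // Invert parent pointers into per-node forwarding lists.
    let mut children: HashMap<usize, Vec<usize>> = HashMap::new();
    for (v, p) in parent.iter().enumerate() {
        if let Some(p) = p {
            children.entry(*p).or_default().push(v);
        }
    }
    // Instance 0 publishes; each hop only forwards to its own children.
    println!("forwarding tree from 0: {children:?}");
}
```

In a reasonably balanced tree each instance only contacts its few children per activity, and the delivery depth in hops is O(log N), which is where the scaling claim comes from.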

[–] Gashole711 3 points 2 years ago (1 children)

Using Couchbase as your eventual consistency database is perfect for this scenario. It's designed for this type of thing. Even if systems are offline for a few days, they will queue up changes and replicate when they come back online. Cruise ships use it for this very reason.

[–] [email protected] 2 points 2 years ago (2 children)

Can you provide a link? The only thing I see about Couchbase is that it's a NoSQL database.

[–] Gashole711 1 points 2 years ago (2 children)

It is a NoSQL database, but its SQL syntax is ANSI SQL compliant. If you moved the queues to Couchbase and let it handle the replication and consistency, you wouldn't have to code for it yourself.

[–] [email protected] 2 points 1 year ago (2 children)

Thanks for checking.
We don't need full database replication; we are only replicating activities that other servers are subscribed to. So in the Comments table, only some of the rows would be "replicated". Not sure if/how Couchbase handles this.
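
For illustration, this is the kind of filter I mean (hypothetical types, not Lemmy's actual schema): given the remote instance's subscriptions, only a subset of the Comments rows would ever be sent:

```rust
// Hypothetical sketch of "partial" replication: from the Comments table,
// send a given instance only rows in communities it follows.
// Types and names are illustrative, not Lemmy's schema.
use std::collections::HashSet;

struct Comment {
    id: u64,
    community_id: u64,
    body: String,
}

/// Select only the comments a remote instance is subscribed to.
fn rows_to_replicate<'a>(
    comments: &'a [Comment],
    subscriptions: &HashSet<u64>, // community ids the remote instance follows
) -> Vec<&'a Comment> {
    comments
        .iter()
        .filter(|c| subscriptions.contains(&c.community_id))
        .collect()
}

fn main() {
    let comments = vec![
        Comment { id: 1, community_id: 10, body: "hello".into() },
        Comment { id: 2, community_id: 20, body: "world".into() },
    ];
    let subs: HashSet<u64> = [10].into_iter().collect();
    for c in rows_to_replicate(&comments, &subs) {
        println!("replicate comment {}", c.id); // only comment 1 is sent
    }
}
```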

[–] Gashole711 1 points 1 year ago

Buckets are roughly the equivalent of databases in a traditional relational system. Here's the doc on how to filter which documents get replicated.

https://docs.couchdb.org/en/stable/replication/intro.html#controlling-which-documents-to-replicate

[–] Gashole711 1 points 1 year ago

Normally with XDCR you can specify which documents to replicate out of a bucket; it doesn't have to be the entire bucket. So if you had certain document types (comments, upvotes, etc.), only those would sync when the target comes online.

I did check into Apache CouchDB, the open-source upstream, and replication is there. We use Couchbase Enterprise at work and it's a dream, but there are some tools I use that only work with Apache CouchDB (Inkdrop, for example). It's worth looking into.

[–] Gashole711 1 points 2 years ago

Actually, XDCR is not available in the Community Edition, so that option is gone. Sorry about that.