
Lemmy Server Performance

lemmy_server uses the Diesel ORM, which automatically generates SQL statements. There are serious performance problems in June and July 2023 preventing Lemmy from scaling. Topics include caching, PostgreSQL extensions for troubleshooting, client/server code, SQL data, server operator apps, and server operator APIs (performance and storage monitoring), etc.

Federation likes (votes) are wildly different from server to server as it stands now. And unless something is way off on my particular server, ~~0.4 seconds is what PostgreSQL is reporting as the mean (average) time for a single comment vote INSERT~~, and a post vote INSERT is similar. (NOTE: my server runs on classic hard drives, benchmarked at 100 MB/sec, not an SSD.)
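
For reference, here is a sketch of how such timings can be pulled out of pg_stat_statements through Diesel. It assumes the pg_stat_statements extension is enabled and uses a synchronous Diesel 2.x connection for brevity (lemmy_server itself goes through an async wrapper):

```rust
use diesel::pg::PgConnection;
use diesel::prelude::*;
use diesel::sql_types::{Double, Text};

// Row shape for the pg_stat_statements query below.
#[derive(QueryableByName)]
struct StatementStats {
    #[diesel(sql_type = Text)]
    query: String,
    #[diesel(sql_type = Double)]
    mean_exec_time: f64, // milliseconds per call
}

// Print the mean execution time of the vote INSERT statements.
fn report_vote_insert_times(conn: &mut PgConnection) -> QueryResult<()> {
    let rows: Vec<StatementStats> = diesel::sql_query(
        "SELECT query, mean_exec_time FROM pg_stat_statements \
         WHERE query ILIKE 'INSERT INTO comment_like%' \
            OR query ILIKE 'INSERT INTO post_like%'",
    )
    .load(conn)?;
    for row in rows {
        println!("{:8.1} ms  {}", row.mean_exec_time, row.query);
    }
    Ok(())
}
```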

Discussion of the SQL statement for a single comment vote insert: https://lemmy.ml/post/1446775

Every single VOTE is both an HTTP transaction from the remote server and a SQL transaction. I am looking into PostgreSQL support for batching inserts so that all the index constraints are not checked at each single insert: https://www.postgresql.org/docs/current/sql-set-constraints.html
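
As a sketch of what deferring constraints through Diesel might look like, with the big caveat that SET CONSTRAINTS only affects constraints declared DEFERRABLE, so the schema would need altering first (the comment_like columns here are simplified):

```rust
use diesel::pg::PgConnection;
use diesel::prelude::*;
use diesel::sql_types::{Integer, SmallInt};

// Insert a batch of comment votes inside one transaction, deferring
// constraint checks to COMMIT time. NOTE: this changes nothing unless
// the relevant constraints are declared DEFERRABLE in the schema.
fn insert_vote_batch(
    conn: &mut PgConnection,
    votes: &[(i32, i32, i16)], // (comment_id, person_id, score)
) -> QueryResult<usize> {
    conn.transaction(|conn| {
        diesel::sql_query("SET CONSTRAINTS ALL DEFERRED").execute(conn)?;
        let mut inserted = 0;
        for &(comment_id, person_id, score) in votes {
            inserted += diesel::sql_query(
                "INSERT INTO comment_like (comment_id, person_id, score) \
                 VALUES ($1, $2, $3)",
            )
            .bind::<Integer, _>(comment_id)
            .bind::<Integer, _>(person_id)
            .bind::<SmallInt, _>(score)
            .execute(conn)?;
        }
        Ok(inserted)
    })
}
```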

Could the Rust code for inserts from federation reasonably be modified to BEGIN a TRANSACTION, accumulate (say) ten comment_like INSERTs, and then COMMIT all of them at one time? Possibly paired with a timer, so that if, say, 15 seconds pass with no new like entries from remote servers, a COMMIT flushes the batch on timeout.
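
Roughly what I have in mind, as a sketch; the CommentLike shape and the flush helper are hypothetical, not Lemmy's actual federation code:

```rust
use std::time::Duration;
use tokio::sync::mpsc::Receiver;
use tokio::time::timeout;

// A vote arriving over federation (hypothetical shape).
struct CommentLike {
    comment_id: i32,
    person_id: i32,
    score: i16,
}

const BATCH_SIZE: usize = 10;
const IDLE_FLUSH: Duration = Duration::from_secs(15);

// Buffer incoming likes, committing them as one transaction either
// when ten have accumulated or when 15 seconds pass with no arrivals.
async fn batch_votes(mut rx: Receiver<CommentLike>) {
    let mut buf: Vec<CommentLike> = Vec::with_capacity(BATCH_SIZE);
    loop {
        match timeout(IDLE_FLUSH, rx.recv()).await {
            Ok(Some(like)) => {
                buf.push(like);
                if buf.len() >= BATCH_SIZE {
                    flush(&mut buf).await; // one BEGIN ... COMMIT
                }
            }
            // Channel closed: flush the stragglers and stop.
            Ok(None) => {
                flush(&mut buf).await;
                break;
            }
            // Idle timeout hit: flush whatever has accumulated.
            Err(_) => flush(&mut buf).await,
        }
    }
}

async fn flush(buf: &mut Vec<CommentLike>) {
    if buf.is_empty() {
        return;
    }
    // Placeholder: run one transaction here, e.g. the deferred-batch
    // insert from the sketch above, then clear the buffer.
    buf.clear();
}
```

The timer resets on every received vote, so "15 seconds with no new like entries" is exactly the case the Err arm handles.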

Storage I/O from writing votes alone is pretty large...

[–] [email protected] 0 points 1 year ago (1 children)

Well, I'm pulling out "like/dislike" (votes) because I consider them less of a priority. The actual comments and posts are taking over 1.0 second to INSERT, and those are the bulk of the site's purpose - sharing actual content. If likes lag by 15 seconds, is that such a big deal?

[–] [email protected] 3 points 1 year ago

The way I'd imagine this working is that the buffered data gets supplemented (augmented) into the query results.

Basically, there'd be a new cache / injection layer that sits between the application and the database. Instead of the application working directly with the existing ORM to talk to the DB (I'm not actually sure how Rust does this, so I'm just speaking in broader terms), the application would work against this layer, which maintains the buffer and interfaces with the ORM. Then, on write actions, it fills the buffer until the buffer is full or some time has passed before bulk-performing the write; on read actions, it goes through the ORM and then weaves the buffered data into the response back to the application.
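
A rough sketch of what I mean, using running vote totals as the example; all the names here are hypothetical, nothing tied to Lemmy's actual types:

```rust
use std::collections::HashMap;

// Hypothetical buffer layer sitting between the application and the
// ORM: writes accumulate in memory, and reads get the pending deltas
// woven back into whatever the database returned.
#[derive(Default)]
struct VoteBuffer {
    // comment_id -> sum of vote scores not yet written to the DB
    pending: HashMap<i32, i64>,
}

impl VoteBuffer {
    // Write path: record the vote in the buffer instead of INSERTing.
    fn add_vote(&mut self, comment_id: i32, score: i16) {
        *self.pending.entry(comment_id).or_insert(0) += i64::from(score);
    }

    // Read path: augment a score loaded through the ORM with whatever
    // is still sitting in the buffer.
    fn weave_score(&self, comment_id: i32, db_score: i64) -> i64 {
        db_score + self.pending.get(&comment_id).copied().unwrap_or(0)
    }

    // Flush path: drain everything for one bulk write (elided here).
    fn drain_pending(&mut self) -> Vec<(i32, i64)> {
        self.pending.drain().collect()
    }
}
```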

Thus, from the user's perspective, nothing should change, and they'd be none the wiser.