this post was submitted on 05 Jul 2023
22 points (100.0% liked)

submitted 1 year ago* (last edited 1 year ago) by antik to c/belgium
 

Noticeable improvements here on our home instance at Lemmy.world!

cross-posted from: https://lemmy.world/post/1061471

Another day, another update.

More troubleshooting was done today. What did we do:

  • Yesterday evening @[email protected] did some SQL troubleshooting with some of the lemmy.world admins. After that, phiresky submitted some PRs to GitHub.
  • @[email protected] created a docker image containing 3PR's: Disable retry queue, Get follower Inbox Fix, Admin Index Fix
  • We started using this image, and saw a big drop in CPU usage and disk load.
  • We saw thousands of errors per minute in the nginx log from old clients trying to access the websockets (which were removed in 0.18), so we added a return 404 in the nginx config for /api/v3/ws (see the sketch after this list).
  • We updated lemmy-ui from RC7 to RC10, which fixed a lot, including the issue with replying to DMs.
  • We found that the many 502 errors were caused by an issue somewhere in Lemmy/markdown-it.actix, causing nginx to temporarily mark an upstream as dead. As a workaround we can either 1) use only one container, or 2) set proxy_next_upstream timeout; in nginx.

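For the curious, here's a minimal sketch of that websocket 404 rule. The surrounding server block is a generic assumption, not our actual config; only the /api/v3/ws rule itself comes from the change described above:

```nginx
# Minimal sketch, not the real lemmy.world config: the server block
# details are assumptions; only the /api/v3/ws rule is from the post.
server {
    listen 443 ssl;
    server_name lemmy.world;

    # Pre-0.18 clients still poll the removed websocket endpoint;
    # answering 404 here keeps those requests off the Lemmy backend
    # and the error noise out of the logs.
    location /api/v3/ws {
        return 404;
    }

    # ... the usual proxy_pass location for everything else ...
}
```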
Currently we're running with a single Lemmy container, so the 502 errors are completely gone so far, and because of the fixes in the Lemmy code everything seems to be running smoothly. If needed, we could spin up a second Lemmy container using the proxy_next_upstream timeout; workaround (sketched below), but for now one container holds up fine.
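For reference, a rough sketch of what that two-container workaround could look like; the upstream name, container hostnames, and port are illustrative assumptions, and only the proxy_next_upstream directive itself comes from the post:

```nginx
# Hypothetical two-container setup; names and port are made up.
upstream lemmy-backend {
    server lemmy-1:8536;
    server lemmy-2:8536;
}

server {
    listen 443 ssl;
    server_name lemmy.world;

    location / {
        proxy_pass http://lemmy-backend;
        # Fail over to the next container only on timeouts, so a broken
        # response from the markdown-it.actix issue doesn't get a
        # container marked dead (which was causing the 502s).
        proxy_next_upstream timeout;
    }
}
```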

Thanks to @[email protected], @[email protected], @[email protected], @[email protected], @[email protected], @[email protected] for their help!

And not to forget, thanks to @[email protected] and @[email protected] for their continuing hard work on Lemmy!

And thank you all for your patience, we'll keep working on it!

Oh, and as a bonus, an image (thanks Phiresky!) of the change in bandwidth after implementing the new Lemmy Docker image with the PRs.

top 4 comments
[–] shreknel 4 points 1 year ago

Feels much better, well done to all involved!

[–] antik 3 points 1 year ago

Just FYI, I'm still on holiday so I wasn't involved in any way, but I wanted to highlight the immense work that went on behind the scenes to get this instance stable. The difference is night and day for me, such a smooth experience now. This is a crosspost from @[email protected]'s thread, so feel free to go and show some appreciation there 😁

[–] DV8 3 points 1 year ago* (last edited 1 year ago) (1 children)

As someone who has to generate and use resource-usage stats on a weekly basis, those graphs are lovely. And maddening, in a sense, to see how it was before the fixes, if you could blame a developer for it. (I can, because when it happens to me it's colleagues who develop on their own PC and think that because they ran a unit test with a couple of requests it's fine, without considering scale.)

Edit: It would probably also help to put up a post telling people to update their app client if they're getting errors. Connect was going crazy with errors today, and it took seeing just the title of this post for me to put things together and manually make my phone check for a Connect update. That update fixed my issues with Connect too.

[–] antik 3 points 1 year ago

Well, the Lemmy devs never had an instance as big as lemmy.world to test this on, so I guess a lot of issues stayed under their radar. There are also only two devs on the project, more than likely with limited monitoring. In the lemmy.world admin channel there were experts from different fields jumping in to monitor, evaluate, and fix the issues; it was really cool to see. These changes are also being pushed upstream, so every Lemmy instance will now benefit from this work.

The graph is very beautiful; that drop in bandwidth is immense.