this post was submitted on 04 Jul 2023
2343 points (99.1% liked)

Lemmy.World Announcements

29156 readers
5 users here now

This Community is intended for posts about the Lemmy.world server by the admins.

Follow us for server news 🐘

Outages 🔥

https://status.lemmy.world

For support with issues at Lemmy.world, go to the Lemmy.world Support community.

Support e-mail

Support requests are best sent to the [email protected] e-mail address.

Report contact

Donations 💗

If you would like to make a donation to support the cost of running this platform, please do so at the following donation URLs.

If you can, please use or switch to Ko-Fi; it has the lowest fees for us.

Ko-Fi (Donate)

Bunq (Donate)

Open Collective backers and sponsors

Patreon

Join the team

founded 2 years ago
MODERATORS
submitted 1 year ago* (last edited 1 year ago) by ruud to c/lemmyworld
 

Status update July 4th

Just wanted to let you know where we are with Lemmy.world.

Issues

As you might have noticed, things still don't work as desired. We see several issues:

Performance

  • Loading is mostly OK, but sometimes things take forever
  • We (and you) see many 502 errors, resulting in empty pages etc.
  • System load: The server is roughly at 60% cpu usage and around 25GB RAM usage. (That is, if we restart Lemmy every 30 minutes. Else memory will go to 100%)

Bugs

  • Replying to a DM doesn't seem to work. When hitting reply, you get a box with the original message which you can edit and save (which does nothing)
  • 2FA seems to be a problem for many people. It doesn't always work as expected.

Troubleshooting

We have many people helping us with (site) moderation, sysadmin work, troubleshooting, advice, etc. There are currently 25 people in our Discord, including admins of other servers. The sysadmin channel has 8 people. We run troubleshooting sessions with them, and sometimes with others. One of the Lemmy devs, @[email protected], is also helping with the current issues.

So, not everything is running as smoothly as we hoped, but with all this help we'll surely get there! Also, thank you all for the donations; they help cover the hardware and tools needed to keep Lemmy.world running!

top 50 comments
[–] czarrie 384 points 1 year ago (3 children)

I'm just excited to be back in the Wild West again -- all of the big players had bumps, at least this one is working to fix them.

[–] Today 120 points 1 year ago (1 children)
[–] G_Wash1776 31 points 1 year ago

I’d rather have to deal with hiccups and bumps along the way, because the community only grows more each time.

[–] TurnItOff_OnAgain 42 points 1 year ago (1 children)

I still remember early reddit days of 502 it went through, 504 try once more.

[–] Kalcifer 160 points 1 year ago* (last edited 1 year ago) (14 children)

That is, if we restart Lemmy every 30 minutes. Else memory will go to 100%

Lemmy has a memory leak? Or, should I say, a "lemmory leak"?

[–] [email protected] 65 points 1 year ago (1 children)

A pretty bad one at that...

[–] [email protected] 58 points 1 year ago (8 children)
[–] ObviouslyNotBanana 94 points 1 year ago (1 children)

Rust makes holes and that's how leaks happen

[–] donalonzo 34 points 1 year ago

Rust protects you from segfaulting and trying to access deallocated memory, but doesn't protect you from just deciding to keep everything in memory. That's a design choice. The original developers probably didn't expect such a deluge of users.
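The "design choice" point is easy to see in code. As an illustrative toy (not Lemmy's actual code), here is a hypothetical cache with no eviction policy: every line is memory-safe Rust, yet the process grows without bound, which is exactly the kind of behavior that looks like a leak from the outside.

```rust
use std::collections::HashMap;

// Illustrative toy, not Lemmy's actual code: a cache that never
// evicts. Every entry is retained for the life of the process, so
// perfectly memory-safe code still grows without bound.
fn cached_after(requests: u64) -> usize {
    let mut cache: HashMap<u64, String> = HashMap::new();
    for id in 0..requests {
        // Nothing is ever removed; the map only grows.
        cache.insert(id, format!("response body for request {id}"));
    }
    cache.len()
}

fn main() {
    // The borrow checker is satisfied; the allocator keeps paying.
    println!("entries held: {}", cached_after(1_000));
}
```

The usual fix is a bounded cache (LRU, TTL, or size-capped), which trades recomputation for a predictable memory ceiling.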

[–] Eczpurt 138 points 1 year ago

Really appreciate all the time and effort you all put in especially while Lemmy is growing so fast. Couldn't happen without you!

[–] Shartacus 107 points 1 year ago (3 children)

I want this to succeed so badly. I truly feel like it's going to be sink or swim and will reflect how all enshittification efforts will play out.

Band together now and people see there’s a chance. Fail and we are doomed to corporate greed in every facet of our lives.

[–] SnowFoxx 98 points 1 year ago (1 children)

Thank you so much for your hard work and for fixing everything tirelessly, so that we can waste some time with posting beans and stuff lol.

Seriously, you're doing a great job <3

[–] cristalcommons 95 points 1 year ago (1 children)

i just wanted to thank you for doing your best to fix lemmy.world as soon as possible.

but please, don't feel forced to overwork yourselves. i understand you want to do it soon so more people can move from Reddit, but i wouldn't like the Lemmy software and community developers to overwork themselves and feel miserable, as those things are some of the very reasons you escaped from Reddit in the first place.

in my opinion, it would be nice if we users understood this situation and, if we want lemmy so badly, actively helped with it.

this applies to all lemmy instances and communities, ofc. have a nice day you all! ^^

[–] Cinner 37 points 1 year ago (1 children)

Plus, slow steady growth means eventual success. Burnout is very real if you never take a break.

[–] cartmansbellybuttom 81 points 1 year ago (1 children)

As a game dev for bigwigs I know all too well about memory leaks, and so very much appreciate your patch notes, updates, and transparency. You're doing great with such fast exponential growth

💙 Thanks for your hard work!

[–] Clbull 80 points 1 year ago (2 children)

As somebody who flocked to Voat during the height of the Ellen Pao controversy and remembers the site being rendered unusable for whole days at a time by the Reddit Hug of Death, I'm pleasantly surprised at how well Lemmy.world has held up. I thought the fediverse would have truly crumbled under this exodus.

[–] [email protected] 31 points 1 year ago (1 children)

I remember when Voat came out and the slight exodus that brought. I made an account and everything, but it never properly took off. I checked on it two or three years later and it was just filled with alt-right/racist/transphobic garbage. Sad it never took off as a reddit alternative, since reddit likely would have greatly benefited from a proper alternative; not sad it closed down after I saw what it ended up as.

So far the fediverse feels really different tho, very explicitly anti that type of shit. I'm sure it will pop up, they always do, but maybe now people know how to deal with it: block it, defederate, deplatform.

[–] itboss 28 points 1 year ago (4 children)

FYI, it has popped up; explodingheads is a great example, but many servers, including lemmy.world, were proactive in defederating from that instance.

[–] Frostwolf 74 points 1 year ago (1 children)

This is the level of transparency that most companies should strive for. Ironic that in terms of fixing things, volunteer and passion projects seem to be more on top of issues compared to big companies with hundreds of employees.

[–] Candelestine 71 points 1 year ago (12 children)

What was that? We're going to need more and better hardware soon, and you have a Patreon and a PayPal on the sidebar?

Yeah, that sounds pretty reasonable, we can work with that.

[–] Cshock159 67 points 1 year ago

Could I get a Discord invite? I’m an ex-sysadmin with a lot of free time.

[–] [email protected] 60 points 1 year ago (1 children)

@ruud > That is, if we restart Lemmy every 30 minutes. Else memory will go to 100%

Hmm, makes me curious whether there is a Lemmy memory leak, or simply that the load wants to stabilize above the RAM you have. I hope contributions can help you with another 32 GB of RAM? Thank you for your work! 🍻

[–] ruud 65 points 1 year ago (4 children)

We have 128GB of RAM. It just skyrockets after a while!

[–] [email protected] 28 points 1 year ago

@ruud Oh damn. This spontaneously sounds crazy but I’m admittedly a novice at servers on this scale.

[–] Gingerlegs 55 points 1 year ago (1 children)

It’s a ton better this afternoon! Thank you!!

[–] [email protected] 54 points 1 year ago (1 children)

Thanks for all of your effort. Even though we are on different instances, it’s important for the Fediverse community that you succeed. You are doing valuable work, and I appreciate it.

[–] Sausage_Mahoney 53 points 1 year ago (2 children)

The work you're doing is greatly appreciated! It's like you invited half the internet into your house. I feel like I should've brought a cake or something

[–] [email protected] 48 points 1 year ago* (last edited 1 year ago) (1 children)

Huge respect for what you've built here, but it might be worth reaching out to the lemm.ee admin. I only know enough DevOps and cloud hosting to be dangerous, not helpful. But his instance seems stable and scalable. He might be able to offer some insight into the issues here

[–] ruud 67 points 1 year ago

Yes he's one of the other admins in our Discord, he's very helpful!

[–] Glunkbor 46 points 1 year ago

Of course these performance issues are a bit annoying, but I gotta say that I love these updates and explanations here. Great communication, keep it up, please!

[–] LeHappStick 40 points 1 year ago* (last edited 1 year ago)

.world is definitely running smoother than when I joined 3 days ago; back then it was impossible to comment and the lag was immense. Right now I just have to occasionally reload the page, but that's nothing in comparison.

You guys are doing amazing work! I'm broke, so here are some ~~coins 🪙🪙🪙🪙~~ beans 🫘🫘🫘🫘

[–] [email protected] 40 points 1 year ago (2 children)

Keep up the good work!

I created an account on lemm.ee until the issues are fixed. Then I will happily go back to my lemmy.world account.

[–] ruud 56 points 1 year ago

Lemm.ee is also a good choice!

[–] AlmightySnoo 39 points 1 year ago* (last edited 1 year ago) (2 children)

That is, if we restart Lemmy every 30 minutes. Else memory will go to 100%

who'd have thought memory leaks would be possible in Rust 🤯

(sorry not sorry Rust devs)
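The joke has a kernel of truth: leaks are explicitly outside Rust's safety guarantees. As a minimal illustration (nothing to do with Lemmy's actual codebase), two `Rc` values that point at each other keep their reference counts above zero forever, so neither is ever freed, in 100% safe code:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A node that can hold a strong reference to another node.
struct Node {
    other: RefCell<Option<Rc<Node>>>,
}

// Link two nodes into a reference cycle. Each Rc keeps the other
// alive, so neither destructor ever runs: a leak in safe Rust.
fn make_cycle() -> (Rc<Node>, Rc<Node>) {
    let a = Rc::new(Node { other: RefCell::new(None) });
    let b = Rc::new(Node { other: RefCell::new(None) });
    *a.other.borrow_mut() = Some(Rc::clone(&b));
    *b.other.borrow_mut() = Some(Rc::clone(&a));
    (a, b)
}

fn main() {
    let (a, b) = make_cycle();
    // Each node is owned both by our binding and by its partner.
    assert_eq!(Rc::strong_count(&a), 2);
    assert_eq!(Rc::strong_count(&b), 2);
    // When (a, b) drop here, the counts fall to 1, never 0:
    // the pair of allocations is unreachable but never freed.
    println!("cycle built; these allocations will never be freed");
}
```

The standard remedy is to break the cycle with `std::rc::Weak` for one direction of the link, which does not contribute to the strong count.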

[–] TomFrost 38 points 1 year ago (3 children)

Cloud architect here — I'm sure someone's probably already brought it up, but I'm curious whether any cloud-native services have been considered to take the place of what I'm sure are wildly expensive server machines. E.g. serve frontends from CloudFront, host the read-side API on Lambda@Edge so you can aggressively and regionally cache API responses, use anything other than SQL for the database — model it in DynamoDB for dirt-cheap wicked speed, or Neptune for a graph database that's more expensive but more featureful. Drop sync jobs for federated connections into SQS, have a Lambda process that too, and it will scale as horizontally as you need to clear the queue in reasonable time.

It’s not quite as simple to develop and deploy as Docker containers you can throw anywhere, but the massive scale you can achieve that way, for a fraction of the cost of servers or Fargate with that much RAM, is pretty great.

Or maybe you already tried/modeled this and discovered it’s terrible for your use case, in which case ignore me ;-)

[–] Olap 34 points 1 year ago (4 children)

You were so close until you mentioned ditching SQL. Lemmy is 100% tied to it, and trying to replicate what it does without ACID and joins would require a massive rewrite. More importantly, Lemmy's docs suggest a docker-compose stack, not even k8s for now; it's trying really hard not to tie itself to a single cloud provider and to avoid maintaining three cloud deployment scripts. That rules out SQS, Lambda, and CloudFront in the short term. Quick question: are there any STOMP-compliant equivalents of SQS and Lambda from other vendors yet?

Also, the growth lemmy.world has seen is far outside what any team could handle, ime. Most products would have closed signups to handle the current load and scale. Well done to all involved!

[–] HiddenTower 36 points 1 year ago

Please keep working on it, thank you for your effort.

[–] nielsn 29 points 1 year ago (1 children)

Thank you for your effort!

[–] ruud 26 points 1 year ago (1 children)

You're welcome! (testing comments now... )

[–] FlyingSquid 27 points 1 year ago (1 children)

I am very forgiving of the bugs I encounter on Lemmy instances because Lemmy is still growing and it's essentially still in beta. I am totally unforgiving of Reddit crashing virtually every day after almost two decades.

[–] RomanRoy 26 points 1 year ago (5 children)

System load: The server is roughly at 60% cpu usage and around 25GB RAM usage. (That is, if we restart Lemmy every 30 minutes. Else memory will go to 100%)

Shouldn't we be discussing closing registrations?

[–] tincansandtwine 67 points 1 year ago (9 children)

There's a lot of momentum to move away from reddit right now, and closing registrations would be a wet blanket. Personally, I'll take the performance issues and transparency in the process over closing registrations.

[–] unknown_name 48 points 1 year ago

This. Don't stop the train. People need to be able to come over freely.

[–] [email protected] 25 points 1 year ago (2 children)

The need to restart the server every so often to avoid excessive RAM usage is very interesting to me. It sounds like a memory-management issue: not necessarily a leak, but maybe something like the server keeping unnecessary references so objects can never be dropped.

Anyway, in my experience Rust developers love debugging this kind of problem. Are the Lemmy devs aware of this issue? And do you publish server usage logs somewhere so people can look deeper into it?
