Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around self-hosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues with the community? Report them using the report flag.
Questions? DM the mods!
Basically the limit would be the speed of the database and the drive it runs on. If you connect a SATA SSD via USB 3 it shouldn't be too bad. I can't give you exact figures, but a few hundred users is probably fine if you don't expect the site to be super responsive.
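If you want a rough feel for what that SSD-over-USB setup can actually do, a quick sketch like this (the path and sizes are just placeholders) times random 4 KiB reads, which is closer to how a database hits the disk than sequential throughput:

```python
import os, random, time

# Very rough random-read check for the drive the database would live on.
# PATH is a placeholder; point it at the USB-attached SSD. If the test file
# fits in RAM, the kernel page cache will serve most reads, so use a file
# larger than memory (or drop caches first) for numbers that reflect the
# actual disk.
PATH = "/mnt/ssd/iotest.bin"
SIZE = 1 * 1024 ** 3          # 1 GiB test file
BLOCK = 4096                  # 4 KiB, roughly one database page
CHUNK = 4 * 1024 * 1024       # write the test file in 4 MiB chunks

if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        for _ in range(SIZE // CHUNK):
            f.write(os.urandom(CHUNK))

reads = 5000
fd = os.open(PATH, os.O_RDONLY)
start = time.monotonic()
for _ in range(reads):
    os.lseek(fd, random.randrange(0, SIZE - BLOCK), os.SEEK_SET)
    os.read(fd, BLOCK)
elapsed = time.monotonic() - start
os.close(fd)
print(f"~{reads / elapsed:.0f} random {BLOCK}-byte reads/s")
```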
Well, "ish".
My experience with databases in general (granted, more with the big ones than with Postgres or MySQL) is that most of what matters for performance is held in memory: they tend to keep the most frequently fetched data cached, along with the most-used indexes. So I suspect the bigger Pi devices (with 4 GB or 8 GB of RAM) might have enough memory to handle a good number of people doing common things (say, checking All in Active mode).
With a really big database and a uniformly random access pattern (i.e. any piece of data is just as likely to be fetched as any other), it makes sense for the DB to be I/O bound on a Pi. But my impression (or maybe it's just me ;)) is that Lemmy's data access is heavily concentrated on just a few things, which do change over time, but the DB engine will naturally adjust its memory cache contents for that kind of change.
From the little that I know about the structure of the Lemmy software, I expect it's the image server that'll have problems with slow I/O rather than the database.
Of course, all this is just conjecture: while I've worked in high-performance computing, it wasn't exactly done on Raspberry Pi devices ;)
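If someone does try this on a Pi, a rough way to see whether the working set really is being served from memory would be to check Postgres's buffer-cache hit ratio. A minimal sketch, assuming a local Lemmy database and placeholder connection details:

```python
# Rough check of how often Postgres serves Lemmy's reads from memory rather
# than from disk (dbname/user/host are placeholders for your own setup).
import psycopg2

conn = psycopg2.connect(dbname="lemmy", user="lemmy", host="localhost")
with conn.cursor() as cur:
    cur.execute(
        """
        SELECT blks_hit,
               blks_read,
               round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 1)
        FROM pg_stat_database
        WHERE datname = current_database();
        """
    )
    hit, read, ratio = cur.fetchone()
    print(f"buffer cache hit ratio: {ratio}% ({hit} hits vs {read} disk reads)")
conn.close()
```

A ratio in the high 90s would back up the idea that most of the hot data fits in the Pi's RAM; anything much lower suggests the drive is doing the heavy lifting.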
Thanks. It might be useful to have a table outlining different hardware configs and the user loads they can handle, as more people consider creating instances.
It's difficult because different users have different usage patterns.
For example, two users who never post and are never online at the same time take essentially no resources from each other; they are effectively "one" user.
One user who posts 10 GB of content a day and is online constantly would be equivalent to hundreds of "normal" users.
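Just to illustrate why the raw user count is a poor predictor, here's a back-of-envelope sketch (all figures are made-up placeholders):

```python
# Back-of-envelope load estimate: what matters is concurrent requests,
# not the number of registered accounts. All numbers are placeholders.
registered_users = 300
active_fraction = 0.10        # share of users browsing at any given moment
requests_per_min = 6          # page/API requests per active user per minute

concurrent = registered_users * active_fraction
req_per_sec = concurrent * requests_per_min / 60
print(f"~{concurrent:.0f} concurrent users, ~{req_per_sec:.1f} requests/s")
```

Change any of those assumptions (a few heavy posters, everyone online at the same hour) and the same 300 accounts produce a very different load.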
Yes, sure, didn't want to complicate the question by adding that :)