this post was submitted on 31 Jul 2023
36 points (90.9% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


Hi y'all. I've got an Intel NUC 10 here. I want to run a few apps on it, like BitWarden, PiHole, NextCloud, Wireguard, and maybe more, just for my own use, inside my home.

Is there a way to gauge whether the hardware is up to the task in advance? Like, I'd love to be able to plan this by saying, "this container will use X MB of RAM and 5% of the CPU," and so on.

I want to run everything on this one PC since that's all I have right now.

EDITED TO ADD: Thank you all! Great info. :thumbsup:

all 23 comments
[–] OrNotToBee 23 points 1 year ago

Torque it until you hear the crack, then back a quarter turn

[–] [email protected] 14 points 1 year ago (2 children)

I don’t have an answer for you, but I will tell you from my experience, you can probably run a lot more on that thing than you might think.

I run all of my services on docker and I think I have 30+ services up at all times. What you should remember is that even under your most demanding workload, you’re probably only hitting like 5 services at a time while the rest sit idle. And if you are picking good, efficient apps (I really like the linuxserver.io apps), they’re not pulling much under load and certainly not while idling.

Your NUC sounds much more capable than my BeeLink, and mine doesn't break a sweat. The other commenter had it right: just keep adding stuff until you see a degradation in performance. I've yet to hit one.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago)

I agree. Run everything you want, and when you see performance degradation you'll know the limits of your hardware for your workloads.

You already have the NUC, so why not push its limits? The alternative is to try to guesstimate your workload needs and buy matching hardware... which is very difficult.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (1 children)

To add to this with another example: my server runs

  • jellyfin
  • Nextcloud
  • gitea
  • Monica (a CRM, look it up on awesome-selfhosted)
  • vaultwarden (Rust implementation of Bitwarden)
  • code-server
  • qBitTorrent-nox
  • authelia (2FA)
  • pihole
  • smbd
  • sshd
  • Caddy

In total, I’m using about 1.5GB out of 6GB of RAM (with another 1GB out of 16GB of swap being used), and the idle CPU usage is only 1%-ish (i5-3470 with the BIOS-settings set to power saving).

Even on very old and low-powered hardware, you can still run a lot of services without any problems.

[–] mordred 1 points 1 year ago

Basically this for me on a raspberry 4GB

[–] ippokratis 8 points 1 year ago* (last edited 1 year ago) (2 children)

CPU-wise: monitor the load average as you add services. If it stays below the number of cores, you're fine. That being said, the NUC 10 has a 6-core CPU; it's more than OK for a bare-bones setup. For reference, I'm running smooth on a Raspberry Pi 400 with 4 GB of RAM:

  • Vaultwarden
  • nginx webdav
  • Photoprism
  • Librephotos
  • Owntracks
  • Traccar
  • Monocker
  • Brave go-sync
  • Mozilla sync
  • Wallabag
  • radicale
  • Baikal
  • Ncfpm
  • Wireguard
  • Jellyfin
  • Rsstt
  • Joplin webview

Just fine.

So you'll be fine.
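The load-average check suggested above takes two commands on Linux; a minimal sketch (the threshold logic is just the rule of thumb from the comment):

```shell
# How many cores do we have to work with?
nproc

# 1-, 5- and 15-minute load averages; fine while these stay below `nproc`
cat /proc/loadavg

# Quick comparison (integer-truncates the 1-minute average)
load=$(cut -d. -f1 /proc/loadavg)
cores=$(nproc)
[ "$load" -lt "$cores" ] && echo "load OK" || echo "load HIGH"
```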

[–] [email protected] 3 points 1 year ago (1 children)

Thank you, kind alien being. (Assuming this, based on your user name)

[–] ippokratis 1 points 1 year ago

Ah, the name tag. Yeah, I like exotic fonts.

[–] [email protected] -1 points 1 year ago (1 children)

Are you allergic to punctuation?

[–] ippokratis 4 points 1 year ago

There you go. Formatted the comment, for your eyes only.

[–] [email protected] 4 points 1 year ago

RAM is really the limiting factor for most servers.

If you're gonna have fewer than 5 users on the services, they're probably not all going to be used at the same time, so CPU usage will depend on which ones are being hit at the moment.

None of the services you've listed are particularly heavy, so you'll be good for those and a bunch more, no problem.

[–] rambos 3 points 1 year ago

I should add more RAM soon, because I'm running 30 services on 8 GB atm and it looks like I'm about to hit the wall. Services I run atm are PiHole, Nextcloud, a Wireguard server, the arr stack, Jellyfin, Home Assistant and more.

[–] Anonymouse 2 points 1 year ago

This is tangential to your question, but I've been playing with Kubernetes and its ability to ration resources like CPU and RAM. I'm guessing that Docker has a similar facility. Doing this, I hope, will allow me to have Plex transcode videos in the background without affecting the responsiveness of a web app I'm using or will kill and restart that one app I wrote that has a memory leak that I can't find.
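Docker does have a similar facility: per-service memory and CPU caps. A Compose fragment as a sketch (the service name, image, and limit values here are illustrative, not a recommendation):

```yaml
# docker-compose.yml fragment: cap a transcoding-heavy container so it
# can't starve the lighter web apps (limits are illustrative)
services:
  plex:
    image: lscr.io/linuxserver/plex
    mem_limit: 2g      # container is OOM-killed past 2 GB -- handy for leaky apps
    cpus: 1.5          # throttled to 1.5 cores' worth of CPU time
    restart: unless-stopped   # bring it back automatically after an OOM kill
```

Combined with `restart: unless-stopped`, the memory cap effectively gives you the "kill and restart the leaky app" behavior mentioned above.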

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

BitWarden+PiHole+NextCloud+Wireguard combined will add up to maybe 100MB of RAM or so.

Where it gets tricky, especially with something like NextCloud, is the performance you see from NextCloud will depend tremendously on what kind of hard drives you have and how much of it can be cached by the OS. If you have 4GB of RAM, then like 3.5GB-ish of that can be used as cache for NextCloud (and whatever else you have that uses considerable storage). If you have tiny NextCloud storage (like 3.5GB or less), then your OS can keep the entire storage in cache, and you'll see lightning-fast performance. If you have larger storage (and are actually accessing a lot of different files), then NextCloud will actually have to touch disk, and if you're using a mechanical (spinning rust) hard drive, you will definitely see the 1-second lag here and there for when that happens.

And then if you have something like Immich on top of that....

And then if you have transmission on top of that....

Anything that is using considerable filesystem space will be fighting over your OS's filesystem cache. So it's impossible to say how much RAM would be enough. 512MB could be more than enough. 1TB could be not enough. It depends on how you're using it and how tolerant you are of cache misses.
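The cache pressure described above is easy to watch on Linux; a quick sketch:

```shell
# "buff/cache" is RAM the kernel is using as filesystem cache;
# "available" is what's actually free for new services
free -h

# Same numbers straight from the source
grep -E '^(MemAvailable|Cached)' /proc/meminfo
```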

Mostly you won't have to think about CPU. Most things (like NextCloud) would be using like <0.1% CPU. But there are some exceptions.

Notably, Wireguard (or anything that requires encryption, like an HTTPS server) will have CPU usage that depends on your throughput. Wireguard, in particular, has historically been a heavy CPU user once you get up to like 1Gbit/s. I don't have any recent benchmarks, but if you're expecting to use Wireguard beyond 1Gbit/s, you may need to look at your CPU.

[–] [email protected] 2 points 1 year ago (1 children)

It's very hard to say anything definitive, because many of those can generate different load depending on how much traffic/activity it gets (and how it correlates with other service usage at the same time). Could be from minimal load (all services for personal use, so single user, low traffic) to very busy system (family and friends instance, high traffic) and hardware requirement estimates would change accordingly.

As you already have a machine - just put them all there and monitor resource utilization. If it fits - it fits; if it doesn't - you'll need to replace (if you're CPU-bound; I believe CPUs are not upgradeable on those?) or upgrade (if you're RAM-bound) your NUC. Either way, you won't have to install everything twice.

[–] vegetaaaaaaa 1 points 1 year ago

This is the only real answer - it is not possible to do proper capacity planning without trying the same workload on similar hardware [1].

Some projects give an estimation of resource usage depending on a number of factors (simultaneous/total users...) but most don't, and even the estimations may be far from actual usage during peak load, with many concurrent services, etc.

The only real answer is close monitoring of resource usage and response times (possibly with alerting), and start adding resources or cutting down on resource-hungry features/programs if resource usage goes over a certain threshold (~80% is when you should start paying attention) and/or performance starts to degrade.
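As a sketch, the ~80% threshold above can be turned into a trivial check (the threshold and warning text are placeholders; in practice cron or a monitoring agent would run this and alert):

```shell
#!/bin/sh
# Warn when memory in use crosses ~80% of total (Linux /proc/meminfo)
total=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
avail=$(awk '/^MemAvailable/ {print $2}' /proc/meminfo)
used_pct=$(( (total - avail) * 100 / total ))

echo "RAM in use: ${used_pct}%"
if [ "$used_pct" -ge 80 ]; then
    echo "WARNING: over 80% -- start trimming services or plan a RAM upgrade"
fi
```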

My general advice is to max out installed RAM from the start, virtualize your hosts (which make it easier to add/remove resources or migrate a hungry VM on more powerful hardware later), and watch out for disk I/O on certain workloads (databases... having db engines running off SSDs helps greatly).

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

It sounds like it could easily run these. You could probably get away with a newer Raspberry Pi for them, so the NUC should have no issues.

For reference, the heaviest thing for me has been Home Assistant OS, which needs dedicated RAM and cores for its VM. I've had no issues running almost a dozen services on a 4790k-based system alongside HA, including: Immich, Plex, radarr/sonarr/prowlarr/etc, usually a dedicated game server for Valheim or Minecraft or something, and some other lighter services.

I think RAM (16 GB) is going to be the limiting factor in my case, but I haven't hit that limit yet.

[–] [email protected] 1 points 1 year ago

I just slowly add more services and watch my RAM and CPU.

For example, my setup is an older laptop for processing and I have a NAS for storage. The laptop has a 5th gen i5 with 8GB of RAM with a Linux OS. It's currently running 19 containers.

Just monitor it and play around. You'll get a feel of what your equipment can handle.
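A quick way to take that kind of snapshot on the host itself (the Docker variant is shown commented out, since it only applies if you run containers):

```shell
# One-shot snapshot of the biggest RAM consumers on the host
ps aux --sort=-%mem | head -n 10

# Per-container equivalent (requires a running Docker daemon)
# docker stats --no-stream
```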

[–] [email protected] 0 points 1 year ago

I just use Docker Desktop for that; it shows memory usage.

For example, I tried Immich and saw that it takes 3 GB of RAM to host a gallery with a single image.