I'm pretty old school, but as I only have 1 server, I just use `ssh`, `df`, `du` and `top`.
Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Not even htop? That is old school.
Prometheus.
It's open source, it's easy to set up, its exporters ("agents") are available for nearly anything including OpenWrt, and it can serve the simplest "is it down?" use case as well as much more complicated ones that stem from its ability to collect data over time.
Personally I'm monitoring:
- Is it up?
- Is the storage array healthy?
- Are the services I care about running?
I used to run it ephemerally, wiping data on restart. Recently I started persisting its data so I can see trends over the longer run.
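For reference, the "is it up?" case above boils down to a scrape config like this (a minimal sketch, not the commenter's actual config; the target host/port are placeholders, 9100 being node_exporter's default):

```yaml
# Minimal prometheus.yml sketch: scrape a local node_exporter.
# Persistence is just pointing --storage.tsdb.path at durable storage
# instead of wiping it on restart.
global:
  scrape_interval: 30s

scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['localhost:9100']   # node_exporter's default port
```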
What do you use to see the data? Prometheus itself is easy to set up, but getting to the data seemed complicated.
The Prometheus built-in web UI. I find it pretty simple.
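To give an idea of what you'd type into that UI, here are the kinds of queries that cover the checks mentioned above. The metric names assume node_exporter (and, for the last one, its systemd collector); the service name is just an example:

```promql
# "Is it up?" — 0 means the scrape target is down
up == 0

# Free space per filesystem, as a fraction
node_filesystem_avail_bytes / node_filesystem_size_bytes

# Is a given systemd unit active? (requires node_exporter's systemd collector)
node_systemd_unit_state{name="nginx.service", state="active"}
```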
You can use grafana to visualise the data.
Grafana isn't too hard to use.
Prometheus for metrics
Loki for logs
Grafana for dashboards.
I use node exporter for host metrics (Proxmox/VMs/SFFs/RaspPis/Router) and a number of other *exporters:
- exportarr
- plex-exporter
- unifi-exporter
- bitcoin node exporter
I use the OpenTelemetry collector to collect some of the above metrics, rather than Prometheus itself, as well as docker logs and other log files before shipping them to Prometheus/Loki.
Oh, I also scrape metrics from my Traefik containers using OTEL as well.
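A hedged sketch of what a collector pipeline like that can look like — this is not the commenter's config; endpoints and paths are placeholders, and the components shown (`prometheus` and `filelog` receivers, `prometheusremotewrite` and `loki` exporters) are from the OpenTelemetry Collector contrib distribution, where the Loki exporter has been deprecated in newer releases in favour of OTLP ingestion:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: node
          static_configs:
            - targets: ['localhost:9100']
  filelog:
    include: [/var/lib/docker/containers/*/*.log]

exporters:
  prometheusremotewrite:
    endpoint: http://prometheus:9090/api/v1/write
  loki:
    endpoint: http://loki:3100/loki/api/v1/push

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheusremotewrite]
    logs:
      receivers: [filelog]
      exporters: [loki]
```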
What does having OpenTelemetry improve? I have a setup similar to yours but data goes from Prometheus to Grafana and I never thought I would need anything else.
Not a whole lot, to be honest. But I work with OpenTelemetry every day at my day job, so it was a little exercise for me.
Though OTEL does have some advantages: it's a vendor-agnostic collection tool, letting you use multiple different collection methods and switch out your backend easily if you wish.
Have you tried the proxmox exporter? I have tried it briefly for a grafana lab and it seemed pretty good.
I haven't, but it looks like I've got another exporter to install and dashboard to create 😁
If you want to run the exporter without docker (like I did) and you get problems with installing the exporter try using this guide: https://github.com/prometheus-pve/prometheus-pve-exporter/wiki/PVE-Exporter-on-Proxmox-VE-Node-in-a-venv
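However you install it, prometheus-pve-exporter reads a small config file (commonly `pve.yml`) with the Proxmox API credentials. A sketch of what that looks like, with placeholder values — an API token is generally preferable to a password:

```yaml
default:
  user: prometheus@pve          # placeholder PVE user
  token_name: monitoring        # placeholder API token name
  token_value: "REPLACE-ME"     # placeholder token value
  verify_ssl: false             # typical for self-signed PVE certs
```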
I'm a network guy, so everything in my labs use SNMP because it works with everything. Things that don't support SNMP are usually replaced and yeeted off the nearest bridge.
For that I use librenms. Simple, open source, and I find it easy to use, for the most part. I put it on a different system than what I'm monitoring because if it shares fate with everything else, it's not going to be very useful or give me any alerts if there's a full outage of my main homelab cluster.
Of course, access to it from the internet is forbidden, and any SNMP is filtered by my firewall. Nothing really gets through to it, so I'm unconcerned about it becoming a target. For the rest of my systems, security mostly relies on a small set of reverse proxies and firewall rules.
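That SNMP filtering can be expressed as a firewall rule along these lines (an illustrative nftables sketch, not the commenter's ruleset; 192.0.2.10 is a placeholder for the LibreNMS host):

```nft
table inet filter {
  chain input {
    type filter hook input priority 0; policy accept;
    # Only the monitoring host may talk SNMP (UDP 161) to this device
    ip saddr 192.0.2.10 udp dport 161 accept
    udp dport 161 drop
  }
}
```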
I use a couple of VPN systems to access the servers remotely, all running on odd ports (if they need port forwards at all). I have multiple to provide redundancy to my remote access, so if one VPN isn't working due to a crash or something, I have others that should get me some measure of access.
Netdata (agent only/not the cloud-based features), and a bunch of scanners running from cron/systemd timers, rsyslog for logs (and graylog for larger setups)
My base ansible role for monitoring.
Since your question is also related to securing your setup, inspect and harden the configuration of all running services and the OS itself. Here is my common ansible role for the basic stuff. Find (preferably official) hardening guides for your distribution and implement hardening guidelines such as DISA STIG, CIS benchmarks, ANSSI guides, etc.
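To make the kind of thing those guides prescribe concrete, here is an illustrative subset of CIS-benchmark-style kernel settings (not a full baseline, and not taken from the role linked above), e.g. dropped into `/etc/sysctl.d/99-hardening.conf`:

```conf
# Ignore ICMP redirects (mitigates some MITM tricks)
net.ipv4.conf.all.accept_redirects = 0
# Don't act as a router by sending redirects
net.ipv4.conf.all.send_redirects = 0
# SYN cookies mitigate SYN-flood attacks
net.ipv4.tcp_syncookies = 1
# Hide kernel pointers from unprivileged users
kernel.kptr_restrict = 2
```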
I'm running checkmk for monitoring but that won't help you with detection of unwanted logins. For security I'm running crowded.
What's crowded? I am having trouble searching for it because of its name
crowdsec, pretty sure what's meant
Ah thank you
I used zabbix at some point, but I never looked at the data, so I stopped. Zabbix shows all kinds of stuff.
I have cockpit on my bare metal, which has some stats, and netdata on my firewall. I don't track any of my VMs (except vnstat, which runs on every device).
I run Prometheus on a separate cluster, so I plug my servers with node_exporter and scrape metrics. I then alert with grafana. To be honest, the setup is heavier (resource usage-wise) than I would like for my use case, but it's what I am used to, and scales well to multiple machines.
UptimeKuma is great. I use it for the simple "are my services up?" check, and it's what I pay most attention to.
I still use zabbix for finer grained monitors though like checking raid status, smartctl, disk space, temperatures, etc.
I've been trying out librenms with more custom snmp checks too and am considering going that route instead of zabbix in the future
Prometheus, Loki and Grafana.
Golden! We use the same :)
I've dabbled with some monitoring tools in the past, but never really stuck with anything proper for very long. I usually notice issues myself. I self-host my own custom new-tab page that I use across all my devices; between that, the Nextcloud clients, and my home-assistant reverse proxy on the same VPS, when I do have unexpected downtime I usually notice within a few minutes.
Other than that I run fail2ban, and have my vps configured to send me a text message/notification whenever someone successfully logs in to a shell via ssh, just in case.
Based on the logs over the years, most bots that try to log in use usernames like admin or root. I have root login disabled for ssh, and the one account that can be used over ssh has a non-obvious username that would also have to be guessed before an attacker could even try passwords. fail2ban does a good job of blocking IPs that fail after a few tries.
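You can see that username pattern for yourself with a quick pipeline over the auth log. The sample lines below are made up for illustration; on a real box, point the grep at `/var/log/auth.log` (Debian/Ubuntu) or `/var/log/secure` (RHEL) instead:

```shell
# Fabricated sample of sshd "Invalid user" lines (IPs are documentation ranges)
cat <<'EOF' > /tmp/auth_sample.log
Jan 10 03:14:07 vps sshd[1001]: Invalid user admin from 203.0.113.5 port 41234
Jan 10 03:14:09 vps sshd[1002]: Invalid user oracle from 203.0.113.5 port 41236
Jan 10 03:15:11 vps sshd[1003]: Invalid user admin from 198.51.100.7 port 55221
EOF
# Count attempts per username: field 8 of these log lines is the username.
# This is the same class of line fail2ban's sshd jail matches on.
grep 'Invalid user' /tmp/auth_sample.log | awk '{print $8}' | sort | uniq -c | sort -rn
```

With the sample above, the most-tried username ("admin") comes out on top, which matches the observation that bots mostly guess obvious account names.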
If I used containers, I would probably want a way to monitor them, but I personally dislike containers (for myself, I'm not here to "yuck" anyone's "yum") and deliberately avoid them.
Icinga2 works reasonably well for us. It is easy to write new checks as small shell scripts (or any other binary that can print output and set an exit status code).
It cannot notify you, so you have to check it manually, but: I use DaRemote on my phone to periodically check my bare metal.
I used to pass all the data through to Home Assistant and show it on some dashboards, but I decided to move over to Zabbix.
Works well but is quite full-featured, maybe more so than necessary for a self-hoster. I made a media type integration for my annunciator system so I hear about issues happening with the servers, as well as updates on things, so I don't really need to check manually. Also a custom SMART template that populates the disk's physical location/bay (as the built-in one only reports SMART data).
It's notified me of a few hardware issues that would have gone unnoticed on my previous system, and helped with diagnosing others. A lot of the sensors may seem useless, but trust me, once they flag up you should 100% check on your hardware. Hard drives losing power during high activity because of loose connections, and a CPU fan failure to name two.
It has a really steep learning curve though, so I'm not sure how much I can recommend it over something like Grafana+Prometheus (a combo I haven't used, but it looks equally comprehensive, as long as you check your dashboard regularly).
Just wish there were more Android apps.
PRTG has a community edition. Elastiflow (for NetFlow) has a free/community edition. Grafana and InfluxDB are open source.
Uptime Kuma and ntfy.
I don't do much in the way of monitoring. I guess I should do that.
Uptime-Kuma
Reduce your threat profile:
- Run sslh, so 443 handles both SSL and ssh.
- Adjust your host-based firewall to allow just 443.
- Attack yourself on that port and identify the logs.
- Add the new profiles to fail2ban.
- Enable fail2ban email. If you don't like email, use a service that translates email to notifications; ntfy.sh is free.

Or: use something like tailscale and don't offer a remote login to the general Internet.
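The sslh part of that setup is a short config file. A hedged sketch of `/etc/sslh.cfg` (backend hosts and ports are placeholders; sslh probes the first bytes of each connection and forwards by protocol):

```conf
# Listen once on 443 for everything
listen: ( { host: "0.0.0.0"; port: "443"; } );

protocols:
(
  { name: "ssh"; host: "localhost"; port: "22"; },    # OpenSSH clients
  { name: "tls"; host: "localhost"; port: "8443"; }   # real HTTPS backend
);
```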
I submitted your post to GPT; here's what it thought.