CPU/RAM/disk/network metrics get written to InfluxDB via Telegraf and visualized with Grafana.
Logs and errors go to a Graylog stack (MongoDB, OpenSearch, Graylog).
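For anyone wanting to replicate this, a minimal telegraf.conf for that kind of setup looks roughly like the sketch below. The InfluxDB URL, org, bucket, and token are placeholders, and the docker input assumes Telegraf can reach the Docker socket.

```toml
# Minimal Telegraf config sketch: host + container metrics -> InfluxDB 2.x
# (URL, token, org, and bucket are placeholders)

[[inputs.cpu]]
  percpu = true
  totalcpu = true

[[inputs.mem]]

[[inputs.disk]]
  ignore_fs = ["tmpfs", "devtmpfs", "overlay"]

[[inputs.net]]

# Per-container CPU/memory/network stats via the Docker socket
[[inputs.docker]]
  endpoint = "unix:///var/run/docker.sock"

[[outputs.influxdb_v2]]
  urls = ["http://influxdb.example.lan:8086"]
  token = "$INFLUX_TOKEN"
  organization = "home"
  bucket = "telegraf"
```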
> CPU/RAM/disk/network metrics get written to InfluxDB via Telegraf and visualized with Grafana.
This is my exact setup and I love it (especially compared to the Prometheus stack I have to use at work). Telegraf is so easy to use and extend. I have my own custom metrics for things like the speed of my home internet connection (it runs a speed test every hour), listener counts on my custom/private internet radio stream, and even the daily food-truck rotation at a nearby lunch site (InfluxDB can handle text data!).
I don't do anything for logging, but I also like Uptime Kuma to keep an eye on the things I run. Grafana has alerting, but I find it pretty horrible.
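Uptime Kuma itself is just a single container if anyone wants to try it; roughly the following (the volume name is an example, and the web UI ends up on port 3001):

```sh
docker run -d --name uptime-kuma --restart=always \
  -p 3001:3001 \
  -v uptime-kuma:/app/data \
  louislam/uptime-kuma:1
```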
> Telegraf is so easy to use and extend
Definitely... you can write custom scripts that Telegraf will run, and it writes their output to Influx. For instance, I have one that pulls the gateway status information from pfSense so I can track and graph any internet downtime.
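To sketch how those scripts work (the paths, measurement and tag names, and the ping check standing in for an actual pfSense query are all made up for illustration): Telegraf's exec input runs a command on its collection interval and parses whatever it prints, so the script only has to emit InfluxDB line protocol on stdout.

```python
#!/usr/bin/env python3
"""Hypothetical Telegraf exec script: prints one metric in InfluxDB line protocol.

Wired up with something like:
  [[inputs.exec]]
    commands = ["/usr/local/bin/gateway_status.py"]
    data_format = "influx"
"""
import subprocess


def gateway_is_up(host: str = "192.0.2.1") -> bool:
    # Illustrative check: a single ping to the gateway (host is a placeholder).
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", host],
        capture_output=True,
    )
    return result.returncode == 0


if __name__ == "__main__":
    up = 1 if gateway_is_up() else 0
    # measurement,tag=value field=value -- Telegraf adds the timestamp itself.
    print(f"gateway_status,gateway=wan up={up}i")
```

The trailing `i` marks the field as an integer, and `data_format = "influx"` is what tells Telegraf to parse the script's stdout as line protocol.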
For general system stats, I like btop. It runs in the terminal, so you can monitor remotely over SSH. It's also much more readable than some of the older top-style process monitors.
Docker Desktop is a nice GUI for local Docker container management; Portainer if you want something more enterprise-grade.
For container management I use Portainer CE, and for the rest I use Checkmk.
+1 for Portainer CE. Works like a charm.
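For anyone setting it up fresh, Portainer CE runs as a container itself; the install is roughly the following (9443 is its HTTPS UI port; the edge agent port and other options from the docs are omitted here):

```sh
docker volume create portainer_data

docker run -d --name portainer --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```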
I discovered Netdata yesterday and it's looking pretty good.
I was going to say Netdata as well, but I'm not sure whether it shows per-container stats.
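It does: per-container CPU, memory, and I/O come from its cgroups collector, and giving it read-only access to the Docker socket lets it label the charts with container names. A stripped-down run (the official docs add more mounts and capabilities) looks something like:

```sh
docker run -d --name netdata \
  -p 19999:19999 \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /proc:/host/proc:ro \
  -v /sys:/host/sys:ro \
  --restart unless-stopped \
  netdata/netdata
```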
Dockge for Docker Compose stacks. Glances for system resource usage, because it has a Homepage widget.
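On the Glances side, as far as I know the Homepage widget talks to Glances' built-in web/REST API mode, which is just a flag:

```sh
# Start Glances as a web server / REST API on its default port 61208
glances -w
```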
Cockpit has a nice integration for Podman containers.
OpenTelemetry
How about Telegraf -> InfluxDB -> Grafana?
For monitoring from my phone (iOS) I use ServerCat, which I really like. It can show the status of multiple servers at a glance and provides detailed information and an SSH terminal if needed.
The TIG stack is what I use as well. I'll take a look at ServerCat too; thanks for mentioning it.
Portainer for Docker stuff; CasaOS for RAM, CPU usage, disk monitoring, and Samba shares.
CasaOS also lets you access the file system, so when Jellyfin fails to clear its transcodes I can go in through the web interface and delete the folder.
Naemon and Graylog.
It's a lot for a homelab, but I love Zabbix.
I like to use my eyes, an SSH connection, and my shell.
tmux and TUI monitoring tools are also often handy.
This is the way.
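For what it's worth, that approach mostly comes down to a handful of one-liners; the host, container, and unit names below are placeholders:

```sh
# Live per-container CPU / memory / network / block I/O
docker stats

# One-off snapshot over SSH
ssh homeserver 'docker stats --no-stream'

# Follow a container's logs
docker logs -f --tail 100 jellyfin

# Full-screen system view in a TUI (works fine inside tmux over SSH)
ssh -t homeserver btop

# Follow a service's journal
journalctl -u docker.service -f
```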