vegetaaaaaaa

joined 2 years ago
[–] vegetaaaaaaa 2 points 4 days ago* (last edited 4 days ago) (1 children)

Hi, maintainer here. The markdown file is the historical/legacy format of this list. The preferred way to read the list is to use https://awesome-selfhosted.net/ (hosted in Europe).

We have mirrors of the git repo in place on other forges, in case things go bad on GitHub (we already considered moving when Microsoft bought GH). But for now GitHub allows us to keep a sane, large contributor base.

[–] vegetaaaaaaa 2 points 1 week ago* (last edited 1 week ago)

upgrades:

  • distribution packages: unattended-upgrades
  • third party software: subscribe to the releases RSS feed (in tt-rss or rss2email), read the release notes, bump the version number in my ansible playbook, run the playbook, done (rough sketch of this loop below).
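
A rough sketch of this loop, assuming rss2email and a hypothetical role/variable layout (`gitea_version`, `site.yml` are made-up names, adapt to your own playbook):

```
# subscribe to a project's release feed (one-time setup)
r2e new admin@example.org
r2e add gitea https://github.com/go-gitea/gitea/releases.atom
r2e run    # emails you new releases and their notes

# after reading the release notes: bump the pinned version, apply the playbook
sed -i 's/^gitea_version:.*/gitea_version: "1.22.3"/' group_vars/all.yml
ansible-playbook -i inventory site.yml --tags gitea
```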

vulnerabilities:

  • debsecan for distribution packages
  • trivy for third-party applications/libraries/OCI images (example invocations below)
  • wazuh for larger (work) setups
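
Hedged examples of what those scans look like (image/path names are made up):

```
debsecan --suite bookworm --only-fixed   # CVEs in installed Debian packages that already have a fix
trivy image nextcloud:29                 # scan an OCI image
trivy fs /srv/myapp                      # scan an application directory for vulnerable dependencies
```
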
[–] vegetaaaaaaa 1 points 1 week ago (1 children)

Sometimes you need to understand the basics first. The points I listed are sysadmin 101. If you don't understand these very basic concepts, there is no chance you will be able to keep any kind of server running, understand how it works, debug certificate problems and so on. Once you're comfortable with that? Sure, use something "simpler" (a.k.a. another abstraction layer), Caddy is nice. The same point was made in the past about Apache ("just use nginx, it's simpler"). Meanwhile I still use Apache, and if needed I'm able to configure any kind of web server, because I taught myself the fundamentals.

At some point we have to resist the temptation to go the "easy" way when working with complex systems - IT and networking are complex. Just try the hard way first, read the docs, and only if it's too complex/overwhelming/time-consuming go for a more "noob-friendly" solution (I mean, we're on c/selfhosted - why not just buy a commercial NAS or use a hosted service instead? It's easier). I use firewalld, but I learned the basics of iptables a while ago. I don't build Apache from source when I need to upgrade, but I would know how to get 75% of the way there - the docs would teach me the rest.

[–] vegetaaaaaaa 6 points 1 week ago* (last edited 1 week ago) (3 children)

By default nginx will serve the contents of the /var/www/html directory (a.k.a. the documentroot) regardless of what domain is used to access it. So you could build your static site using the tool of your choice (hugo, sphinx, jekyll, ...), put your index.html and all other files directly under that directory, and access your server at http://ip_address to have your static site served like that.
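
A quick way to convince yourself, assuming a stock Debian/Ubuntu nginx install:

```
echo 'hello from the documentroot' | sudo tee /var/www/html/index.html
curl http://127.0.0.1/    # served regardless of the Host header
```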

Step 2 is to automate the process of rebuilding your site and placing the files under the correct directory with the correct ownership and permissions. A basic shell script will do it.
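
A minimal sketch of such a script, with hypothetical paths and hugo standing in for whatever generator you use:

```
#!/bin/sh
# rebuild the site and deploy it to the nginx documentroot (run as root, or via sudo)
set -eu
cd /home/me/mysite
hugo --minify                              # build the site into ./public
rsync -a --delete public/ /var/www/html/   # deploy to the documentroot
chown -R www-data:www-data /var/www/html   # nginx's user on Debian/Ubuntu
chmod -R u=rwX,go=rX /var/www/html         # world-readable, owner-writable
```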

Step 3 is to point your domain (DNS record) at your server's public IP address and to forward public port 80 to your server's port 80. From there you will be able to access the site from the internet at http://mydomain.org.
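
Before moving on, verify the record and the port forward actually work (hypothetical domain):

```
dig +short mydomain.org A     # should print your public IP
curl -I http://mydomain.org/  # should return headers from your nginx
```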

Step 4 is to configure nginx for proper virtualhost handling: direct requests for mydomain.org to your site under the /var/www/html/ directory, and all other requests (like http://public_ip) to a default, blank virtualhost. You may as well use an empty /var/www/html for the default site, and move your static site to a dedicated directory. This is not a strict requirement, but it will help in case you need to host multiple sites, it is the best practice, and it is a requirement for the following step.
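
A minimal sketch of the two server blocks on a Debian-style nginx (hypothetical paths; assumes the stock default site was removed from sites-enabled):

```
# as root
cat > /etc/nginx/sites-available/mydomain.org <<'EOF'
# catch-all: requests by IP or unknown Host get an empty default site
server {
    listen 80 default_server;
    root /var/www/html;
}
# the actual site
server {
    listen 80;
    server_name mydomain.org;
    root /var/www/mydomain.org;
}
EOF
ln -s /etc/nginx/sites-available/mydomain.org /etc/nginx/sites-enabled/
nginx -t && systemctl reload nginx
```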

Step 5 is to set up SSL/TLS certificates to serve your site at https://my_domain (HTTPS). Nowadays this is mostly done using an automatic certificate generation service such as Let's Encrypt or any other ACME provider. certbot is the most well-known tool to do this (but not necessarily the simplest).
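
With the nginx plugin, certbot can obtain the certificate and update the server block for you (Debian package names assumed):

```
apt install certbot python3-certbot-nginx
certbot --nginx -d mydomain.org
systemctl list-timers | grep certbot   # the package ships a renewal timer - check it's active
```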

Step 6 is what you should have done at step 1: harden your server - set up a firewall, fail2ban, SSH keys and anything you can find to make it harder for an attacker to gain write access to your server, or read access to places they shouldn't be able to read.
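
A non-exhaustive starting point, assuming Debian/Ubuntu with ufw and working key-based SSH:

```
apt install fail2ban    # bans repeated auth failures; the sshd jail is enabled by default
ufw default deny incoming
ufw allow 22/tcp
ufw allow 80,443/tcp
ufw enable
# once your SSH key works, disable password authentication:
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl reload ssh
```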

Step 7 is to destroy everything and do it again from scratch. You've documented or scripted all the steps, right?

As for the question "how do I actually implement all this? Which config files and what do I put in them?", the answer is the same old one: RTFM. Yes, even the boring nginx docs, manpages and 1990s Linux stuff. Each step will bring its own challenges and teach you a few concepts, one at a time. Reading guides can still be a good start for a quick and dirty setup, and will at least show you what can be done. The first time you do this, it can take a few days/weeks. After a few months of practice you will be able to do all that in less than 10 minutes.

[–] vegetaaaaaaa 2 points 3 weeks ago

I wrote my own, using plain HTML/CSS. Actually the final .html file gets templated by ansible depending on what's installed on the server, but you can easily pick just the parts you need from the j2 template.

[–] vegetaaaaaaa 2 points 1 month ago (1 children)
  1. You can very well share bind mounts between containers.
  2. Named volumes are actually directories too, you know? They live under /var/lib/docker/volumes/ by default.

Still, use bind mounts. Named or anonymous volumes are only good for temporary junk.
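
For illustration (hypothetical image and paths), the difference is just the left-hand side of -v:

```
# bind mount: explicit host path, survives pruning
docker run -d -v /srv/myapp/data:/var/lib/myapp myapp:latest
# named volume: docker manages it under /var/lib/docker/volumes/myapp-data/_data
docker run -d -v myapp-data:/var/lib/myapp myapp:latest
```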

[–] vegetaaaaaaa 18 points 1 month ago (4 children)
  • step 1: use named volumes
  • step 2: stop your containers or just wait for them to crash/stop unnoticed for some reason
  • step 3: run docker system prune --all --volumes as one should do periodically to clean up the garbage docker leaves on your system. Lose all your data (with --volumes, this deletes even named volumes if they are not in use by a running container)
  • step 4: never use named or anonymous volumes again, use bind mounts

The fact that you absolutely need to run docker system prune regularly to get rid of GBs of unused layers, test containers, etc., combined with the fact that adding --volumes to it deletes even explicitly named volumes, makes them too unsafe for my taste. Just use bind mounts.
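
If you do keep named volumes around, a less destructive habit is to prune each object type separately and never pass --volumes:

```
docker container prune    # stopped containers only
docker image prune --all  # unused images only - never touches volumes
```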

[–] vegetaaaaaaa 1 points 1 month ago* (last edited 1 month ago)

One has a total powered-on time of 51534 hours, the other 49499 hours.
As for their actual age (manufacturing date), the only way to know is to look at the sticker on the drive or find the invoice; I can't check right now.

[–] vegetaaaaaaa 4 points 1 month ago (3 children)
```
$ for i in /dev/disk/by-id/ata-WD*; do sudo smartctl --all "$i" | grep Power_On_Hours; done
  9 Power_On_Hours          0x0032   030   030   000    Old_age   Always       -       51534
  9 Power_On_Hours          0x0032   033   033   000    Old_age   Always       -       49499
```
[–] vegetaaaaaaa 3 points 1 month ago

Follow the official documentation, nothing else comes close.

I have automated this process in my nextcloud ansible role

[–] vegetaaaaaaa 1 points 1 month ago
  • simple: rsyslog: all local logs to a central syslog file (using the imfile module), all syslogs from all servers to a central rsyslog server (over TCP/SSL, example here). Use lnav or something similar to consume the logs (minimal imfile sketch below)
  • more complex, resource-heavy: Graylog Open as a replacement for the central rsyslog server, set up pipelines/alerts/whatever... Currently considering replacing my Graylog instance with Wazuh, but I don't know yet if it will be able to replace it completely for me
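
A minimal sketch of the imfile + TCP forwarding setup, with hypothetical paths and hostnames (plain TCP shown; TLS needs the additional gtls/ossl driver configuration):

```
# as root - watch an application log file and forward everything over TCP
cat > /etc/rsyslog.d/10-myapp.conf <<'EOF'
module(load="imfile")
input(type="imfile"
      File="/var/log/myapp/app.log"
      Tag="myapp:"
      Severity="info")
# @@host:port = TCP forwarding (a single @ would be UDP)
*.* @@logs.example.org:514
EOF
systemctl restart rsyslog
```
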
[–] vegetaaaaaaa 1 points 2 months ago* (last edited 2 months ago)

security

with containers, software maintainers also need to keep their images up-to-date with the latest security fixes (most of them don't), whereas in a VM these are usually handled by unattended-upgrades or similar. They must then put out a new release and expect users to upgrade ASAP. Or rebuild and encourage redeploying the latest image every day or so, which is bad for other reasons (no warning for breaking changes, and the software must be tested thoroughly after every commit to master).

In short this places the burden of proper OS/image maintenance on developers, something usually handled by distro maintainers.

trivy is helpful in assessing the maintenance/vulnerability level of OCI images.
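
For example (hypothetical image name):

```
trivy image --severity HIGH,CRITICAL ghcr.io/example/app:latest
```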

 

Old article I found in my bookmarks. Although I didn't have a use for it, I thought it was interesting.

 

Synapse and Dendrite relicensed to AGPLv3

 

Hi c/selfhosted,

I just wanted to let you know that I have added a frequently requested feature to https://awesome-selfhosted.net - the ability to filter the list by programming language or deployment platform.

You can navigate between platforms/languages by clicking the relevant link in each software project's metadata. There is no main list of platforms yet, but if someone creates an issue for it, it can be looked into (please provide details on where/how you expect the platforms list to show up).

A quick update on project news since the new website was released (https://lemmy.world/post/3622280): a lot of curation work has been done, some incorrect data has been fixed, a few additions and some general improvements have been made. A deb platform has been added for those who prefer to deploy software through their distribution's package management system, and we're working on a Manufacturing tag for software related to 3D printing, CNC machines and other physical manufacturing tools.

awesome-selfhosted is a list of Free Software network services and web applications which can be hosted on your own server(s).

The "old", markdown-formatted list remains available at https://github.com/awesome-selfhosted/awesome-selfhosted and will keep being updated automatically.

The project is maintained by volunteers under the Creative Commons BY-SA 3.0 license, at https://github.com/awesome-selfhosted/awesome-selfhosted-data.

Thanks again to all contributors.

11 points · submitted 2 years ago* (last edited 2 years ago) by vegetaaaaaaa to c/selfhosted
 

Blog post about TLS certificate lifetimes

 

This is a new, improved version of https://github.com/awesome-selfhosted/awesome-selfhosted/

Please check the release announcement for more details.

Maintainer here, happy to answer questions.
