[–] vegetaaaaaaa 2 points 9 months ago* (last edited 9 months ago)

Syslog over TCP with TLS (don't want those sweet packets containing sensitive data leaving your box unencrypted). Bonus points for mutual authentication between the server/clients (just got it working and it's 👌 - my implementation here)
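If you're doing this with rsyslog (just one way to do it - the paths, hostname and port below are placeholders, and the server needs a matching imtcp/TLS input), the client side looks roughly like this:

```
# /etc/rsyslog.d/forward-tls.conf (client side) - placeholder paths/names
global(
    DefaultNetstreamDriver="gtls"
    DefaultNetstreamDriverCAFile="/etc/rsyslog/ca.pem"
    DefaultNetstreamDriverCertFile="/etc/rsyslog/client-cert.pem"
    DefaultNetstreamDriverKeyFile="/etc/rsyslog/client-key.pem"
)
# Forward everything over TCP+TLS; x509/name makes both ends verify
# the peer certificate against the expected name (mutual auth)
action(type="omfwd" target="logs.example.org" port="6514" protocol="tcp"
    StreamDriver="gtls" StreamDriverMode="1"
    StreamDriverAuthMode="x509/name"
    StreamDriverPermittedPeers="logs.example.org")
```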

It solves the aggregation part but doesn't solve the viewing/analysis part. I usually use lnav on simple setups (gotty as a poor man's web interface for lnav when needed), and graylog on larger ones (definitely costly in terms of RAM and storage though)
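The gotty part is a one-liner - something like this (log path and port are just examples) serves an interactive lnav session in the browser:

```
# Serve an interactive lnav session on http://<host>:8080
# (-w lets keystrokes from the browser reach lnav; put it behind
# authentication/TLS, it's effectively a shell in a browser)
gotty -w --port 8080 lnav /var/log/syslog
```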

[–] vegetaaaaaaa 1 points 9 months ago

Obfuscation can be helpful in not disclosing which services or naming schemes you use

The "obfuscation" benefits of wildcard certificates are very limited (public DNS records can still easily be found with tools such as sublist3r), and they're definitely a security liability (get the private key of the cert stolen from a single server -> TLS potentially compromised on all your servers using the wildcard cert)

[–] vegetaaaaaaa 3 points 9 months ago* (last edited 9 months ago)

VMs have a lot of additional overhead.

The overhead is minimal: KVM VMs have near-native performance (type 1 hypervisor). There is some memory overhead as each VM runs its own kernel, but a lot of this is offset by KSM [1], a memory de-duplication mechanism.
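If you're curious how much KSM is actually de-duplicating on a host, the kernel exposes counters in sysfs:

```
# Is KSM enabled? (1 = actively merging)
cat /sys/kernel/mm/ksm/run
# How many pages are currently de-duplicated across VMs
grep . /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing
```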

Each VM runs its own system services (think systemd, logging, etc) so there is some memory/disk usage overhead there - but it would be the same with Incus/LXC as they do the same thing (they only share the same kernel).

https://serverfault.com/questions/225719/so-really-what-is-the-overhead-of-virtualization-and-when-should-i-be-concerned

I usually go for bare-metal > multiple VMs on top, separated by context (think "tenant", production/testing, public/confidential/secret, etc.) > applications running inside the VMs (containerized or not). VMs provide strong isolation which containers do not - at the very minimum it's good to have separate VMs for "serious business" and "lab" contexts. Service/application isolation through namespaces/systemd has come a long way (see man systemd-analyze security and the example below) - for me the benefit of containerization is mostly ease of deployment and... ahem, running inscrutable binary images with out-of-date dependencies made by strangers on the Internet.
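To see how locked-down a service already is, something like this (unit name is just an example) prints its sandboxing exposure score:

```
# Exposure score + list of hardening options a unit does/doesn't use
systemd-analyze security myapp.service
# Or an overview table for all running services
systemd-analyze security
```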

If you go for a containerization solution on top of your VMs, I suggest looking into podman as a replacement for Docker (fewer bugs, less attack surface, no single point of failure in the form of a 1-million-lines-of-code daemon running as root, more unix-y, better integration with systemd [2]). But be aware of the maintenance overhead caused by containerization: if you're serious about it you will probably end up maintaining your own images.
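As an illustration of the systemd integration, with recent podman (4.4+) you can declare a container as a Quadlet unit and let systemd manage it - a rough sketch, image name and ports are placeholders:

```
# /etc/containers/systemd/myapp.container - placeholder image/ports
[Unit]
Description=My containerized app

[Container]
Image=docker.io/library/nginx:stable
PublishPort=8080:80

[Install]
WantedBy=multi-user.target
```

systemd generates a regular myapp.service from it (systemctl daemon-reload, then systemctl start myapp) - no root daemon, no docker.sock.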

[–] vegetaaaaaaa 1 points 9 months ago* (last edited 9 months ago) (1 children)

“buggy as fuck” because there’s a bug that makes it so you can’t easily run it if your locale is different than English?

It sends a pretty bad signal when it crashes on the first lxd init (sure, there are workarounds - switch locales, create the bridge manually - but that doesn't help make it look like a better solution than Proxmox). Whatever you call it, it's a bad-looking bug, and the fact that it was not patched in Debian stable or backports makes me think there might be further hacks needed down the road for other stupid bugs like this one. So for now, hard pass on the Debian package (might file a bug on the BTS later).

About the link, Proxmox kernel is based on Ubuntu, not Debian…

Thanks for the link, mate. Proxmox kernels are based on Ubuntu's, which are in turn based on Debian's - not arguing about that. But I was specifically referring to this comment:

having to wait months for fixes already available upstream or so they would fix their own shit

Any example/link to bug reports for such fixes not being applied to Proxmox kernels? Asking so I can raise an orange flag before it gets adopted without due consideration.

[–] vegetaaaaaaa 3 points 9 months ago

i was just worried that the libraries in the container image are outdated

They actually are: trivy scan on authelia/authelia:latest https://pastebin.com/raw/czCYq9BF
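If you want to reproduce the scan, trivy works straight against the published image:

```
# Scan a published image for known CVEs in OS packages and
# application dependencies
trivy image authelia/authelia:latest
```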

[–] vegetaaaaaaa 1 points 9 months ago* (last edited 9 months ago) (3 children)

DO NOT migrate / upgrade anything to the snap package

It was already in place when I came in (made me roll my eyes), and it's a mess. As you said, there's no proper upgrade path to anything else. So anyway...

you should migrate into LXD LTS from Debian 12 repositories

The LXD version in Debian 12 is buggy as fuck - this fix has not even been backported (https://github.com/canonical/lxd/issues/11902) and 5.0.2-5 is still affected. It was a dealbreaker in my previous tests, and it doesn't inspire confidence in the bug testing and patching process for this particular package. On top of that, it will be hard to convince the other guys that we should ditch Ubuntu and its shenanigans and migrate to good old Debian (especially if the lxd package is in such a state). Some parts of the job are cool, but I'm starting to see there's strong resistance to change, so as I said, path of least resistance.

Do you have any links/info about the way in which Proxmox kernels/packages differ from Debian stable?

[–] vegetaaaaaaa 2 points 9 months ago* (last edited 9 months ago)

clustering != HA

The "clustering" in libvirt is limited to remote controlling multiple nodes, and migrating hosts between them. To get the High Availability part you need to set it up through other means, e.g. pacemaker and a bunch of scripts.

[–] vegetaaaaaaa 1 points 9 months ago

but more like playing a video game and it drops down to 15fps

Likely not a server-side problem (check CPU usage on the server to be sure). If the server was struggling to transcode, I think it would result in playback pausing and resuming as the encoder catches up; network/bandwidth problems would result in buffering. This looks like a client-side playback performance problem - what client are you using? Try with multiple clients (use the web interface in a browser as a baseline) and see if it makes any difference.

[–] vegetaaaaaaa 1 points 9 months ago* (last edited 9 months ago) (1 children)

Why not self host vaultwarden?

How does that work when your vaultwarden instance goes down for some reason? Lose access to passwords? Or does the browser extension still have access to a cached copy of the db?

[–] vegetaaaaaaa 2 points 9 months ago* (last edited 9 months ago) (5 children)

The migration is bound to happen in the next few months, and I can't recommend moving to Incus yet since it's not in the stable/LTS repositories for Debian/Ubuntu, and I really don't want to encourage adding third-party repositories to the mix - they are already widespread in the setup I inherited (new gig), and part of the major clusterfuck that is upgrade management (or the lack thereof). I really want to standardize on official distro repositories. On the other hand, the current LXD packages are provided by snap (...) so that would still be an improvement, I guess.

Management is already sold on the idea of Proxmox (not by me), so I think I'll take the path of least resistance. I've had mostly good experiences with it in the past, even if I found their custom kernels a bit strange to start with... Do you have any links/info about the way in which Proxmox kernels/packages differ from Debian stable? I'd still like to put in a word of caution about that.

[–] vegetaaaaaaa 1 points 9 months ago (1 children)

I should RTFM again... https://manpages.debian.org/bookworm/libvirt-clients/virsh.1.en.html has options for virsh migrate such as --copy-storage-all... Not sure how it would work for actual live migrations but I will definitely check it out. Thanks for the hint
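For the record, a sketch of what I expect that to look like (guest/host names are placeholders):

```
# Live-migrate a guest between hosts without shared storage,
# copying its disks inline - placeholder guest/host names
virsh migrate --live --persistent --copy-storage-all \
    myvm qemu+ssh://dest-host/system
```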

[–] vegetaaaaaaa -1 points 9 months ago* (last edited 9 months ago) (2 children)

Did you read? I specifically said it didn't, at least not out-of-the-box.
