sudneo

joined 2 years ago
[–] sudneo 4 points 2 years ago

I do this, but on B2 I upload an encrypted restic repo. A password manager backup is one of those instances where it's totally worth having historical copies (for example, one weekly copy for the last 6 months), as they consume very little space while saving the day in case of an accidental overwrite or deletion.
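To make the retention idea concrete, here is a minimal sketch using restic's native B2 backend (bucket name, paths and numbers are just examples):

    export B2_ACCOUNT_ID="..." B2_ACCOUNT_KEY="..."
    export RESTIC_REPOSITORY="b2:my-bucket:password-manager"
    export RESTIC_PASSWORD_FILE="$HOME/.restic-pass"

    restic backup ~/exports/vault.kdbx         # snapshot the exported vault
    restic forget --keep-weekly 26 --prune     # keep roughly 6 months of weekly copies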

[–] sudneo 14 points 2 years ago (6 children)

Yep, I think it's part of the beauty of federation, which at least in part stems from free software, open protocols, and platforms not run for profit. With the current system of incentives, nobody has a target of getting as many users as possible, nor of fencing them into a specific platform. As long as platforms are interoperable, I am happy if more come up.

[–] sudneo 5 points 2 years ago (3 children)

I see the blackout as a nudge to overcome addiction. A few days or weeks without content, and people start looking around. Then the network effect (in reverse) will do the rest.

To be clear, I have no interest in the whole Reddit userbase moving to Lemmy; an initial influx of people who care will be enough to help it reach critical mass. After that, Reddit can even reopen fully; at that point it won't matter.

[–] sudneo 1 points 2 years ago

Yeah, borg is very, very similar, at least in the contexts where I use it! I agree with the praise for restic, a very solid tool. It's always possible to use rsync... but to sync restic repos!
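In case it helps, the rsync part really is as boring as it sounds; a sketch with made-up paths:

    # a restic repo is just a directory of content-addressed files,
    # so a plain copy is a valid off-site replica
    rsync -av --delete /srv/backups/restic-repo/ user@offsite:/backups/restic-repo/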

[–] sudneo 6 points 2 years ago (2 children)

I use restic/borg (depending on the server) and push to a bunch of S3 buckets on Backblaze. This covers my desktop, my NAS, and in general my non-Kubernetes data.

For Kubernetes I wrote a small tool that... well, does the same for PVCs. It packs up the data with restic (soon I hope to migrate to rustic, once the library gets polished) and pushes it to Backblaze.

To give an idea of the pricing: for 730 GB, backed up at least daily, I pay approximately $5 a month.
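For reference, pointing restic at Backblaze's S3-compatible endpoint looks roughly like this (region and bucket name are made up):

    export AWS_ACCESS_KEY_ID="..." AWS_SECRET_ACCESS_KEY="..."
    export RESTIC_REPOSITORY="s3:https://s3.us-west-000.backblazeb2.com/my-backups"
    restic backup /srv/data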

[–] sudneo 5 points 2 years ago

Very interesting! I will try to give it a go on my cluster; I was just wondering whether anyone was doing this!

[–] sudneo 1 points 2 years ago

Not a podman user, so please take this with a whole bag of salt: that looks like a namespace issue to me. Does podman use user namespaces by default? If so, it's normal that UIDs are remapped inside the container's namespace, and UID 1000 inside it corresponds to something else (maybe 100999?) outside.

One way to check could be cat /proc/PROC_INSIDE_CONTAINER/status | grep Uid or cat /proc/PROC_INSIDE_CONTAINER/uid_map.
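Again not speaking from podman experience, but something like this should make the remapping visible (container name is hypothetical):

    podman top mycontainer user huser       # container UID next to the UID seen on the host
    podman unshare cat /proc/self/uid_map   # how the user namespace maps IDs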

This https://stackoverflow.com/questions/70770437/mapping-of-user-ids also seems somewhat relevant to your scenario.

[–] sudneo 8 points 2 years ago

Fairphone 3+ with /e/OS. It has been 3 years now, it's still working fine, and no major problems really. It is expensive for what you get, but if it can help reduce e-waste and spare me the burden of buying a phone every 2 years, I think it's worth it.

[–] sudneo 1 points 2 years ago (1 children)

Oh, I am there with you on that. I got used to it in my previous job, where everything was done with Ansible, but I still find myself copy-pasting and tweaking things most of the time. I actually much prefer a declarative, Terraform-style approach.

Overall, though, there is a lot of community material, and once the playbooks are written it works quite well!

[–] sudneo 1 points 2 years ago (3 children)

I have a bunch of different stuff: a dedicated server with Debian, plus 4 Raspberry Pis and 1 micro computer that acts as LB/router/DHCP/DNS for the Pis.

In general I would say that my logic is as follows:

  • Every OS change is done through Ansible. This is sometimes a pain: you want to just apt install X and instead you might need to create a new playbook for it, but in the long term it has paid off multiple times. I do have a default playbook that does basic config (user, SSH key provisioning, some default packages) and hardening (SSH config, iptables).
  • I then try to keep the OS logic to a minimum and do everything else as code. On my older dedicated server I mostly run docker-compose under systemd, with templated docker-compose files dropped by Ansible. The Pis instead run Kubernetes with Flux, and all my applications are either managed directly by Flux or have Helm in between. This means I can destroy a cluster, create another one, point it at my Flux repository, and be pretty much back where I started (see the sketch below).
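As a sketch of that last point (owner and repository names are placeholders), rebuilding really is close to a single bootstrap:

    # point a fresh cluster at the existing Flux git repository; Flux then
    # reconciles every application defined there, directly or via Helm
    flux bootstrap github --owner=me --repository=homelab --path=clusters/pi
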
[–] sudneo 2 points 2 years ago

Absolutely, the idea wasn't primarily about "reducing the risk", but rather about getting different functionality (which might not be necessary). For example: being able to download or freely access the backups, having smaller backups (presumably the current backups are dumps of the whole disk?), having flexibility in managing pruning (e.g. keeping 1 monthly backup, 1 weekly, 3 daily, etc.), and also being able to extract individual items from a backup (maybe that's already possible now).

I also don't know how easy it is today to test the restore procedure (do you need to get another VPS?), but with restic and similar tools you can test and script the procedure locally.
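For example, a local restore test could be as simple as this (repo and paths are made up):

    restic -r /path/to/repo restore latest --target /tmp/restore-test
    diff -r /tmp/restore-test/srv/data /srv/data    # sanity-check the restored files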

From a "my data is safe" point of view, I don't think there are any problems at the moment; everything seems to be covered!

[–] sudneo 4 points 2 years ago* (last edited 2 years ago) (2 children)

The database dump alone is probably not enough to get a directly working copy, but it's certainly the most important part.

I'd say there are several approaches:

  • pg_dump every so often
  • pgbarman with a replication slot, if you want full PITR (Point in Time Recovery). Honestly, let me say this upfront: for a project like this I think it's absolutely acceptable to have, say, 12 or even 24 hours of potential data loss if you ever need to restore from backup. So pg_dump is more than fine in my view.
  • Probably a million other ways

Once you have the DB dump, you take it together with the other useful directories and throw everything into a restic/borg repository, which is encrypted and deduplicated. Then you take the repository and sync it to https://www.borgbase.com/ (or Backblaze, or any other machine; SSH is enough).
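A rough sketch of the whole pipeline, with made-up database and path names:

    pg_dump -Fc mydb > /srv/backup/db/mydb.dump    # custom-format dump, restorable with pg_restore
    restic -r sftp:user@host:/backups/repo backup /srv/backup/db /srv/app/data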

Clearly this is one way, not The way, but if @[email protected] is interested in having a chat about it, I'd be happy to help, hands-on too.
