butitsnotme

joined 1 year ago
[–] butitsnotme 2 points 1 week ago

For no. 1, that shouldn’t be DinD (Docker-in-Docker); the container would be controlling the host’s Docker daemon, wouldn’t it?

If so, keep in mind that this is the same as giving root SSH access to the host machine.

As far as security goes, anything that allows GitHub to cause your server to download (pull) and run an arbitrary set of Docker images with arbitrary configuration is remote code execution. It doesn’t really matter what you do to secure access to the machine if someone compromises your GitHub account.

I would probably set up SSH with a key dedicated to GitHub, specifically for deploying. If SSH is configured to only allow key-based access, it’s not much of a security risk to open it up to the internet. I would then configure that key so it can only run a single command: a very simple bash script which runs git fetch and then git verify-commit origin/main (or whatever branch you deploy), before checking out the latest commit on that branch.

You can sign commits fairly easily using SSH keys now, which, combined with the above, lets you store your data on GitHub without having to trust them with RCE on your host.
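A rough sketch of what I mean (the paths, repo location, and deploy user here are illustrative, not a drop-in config):

# /home/deploy/.ssh/authorized_keys on the server: lock the GitHub-dedicated key
# to this one command and nothing else
command="/home/deploy/deploy.sh",no-port-forwarding,no-agent-forwarding,no-pty <github deploy public key>

# /home/deploy/deploy.sh
#!/bin/bash -ue
cd /srv/myapp
git fetch origin
# Refuse to deploy anything that isn't signed by a trusted key; for SSH-signed
# commits this needs gpg.format=ssh and gpg.ssh.allowedSignersFile configured
git verify-commit origin/main
git checkout --force --detach origin/main

That way, even if the GitHub account is compromised, the most an attacker can do over that SSH connection is trigger a deploy of a commit that passes signature verification.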

[–] butitsnotme 4 points 1 week ago

The DMA doesn’t seem to have ever been about consumer choice; it’s about other competitors getting access to Apple’s customers without having to play by Apple’s rules. Just look at who was pushing for sideloading on iOS: I mostly saw Meta and Epic Games at the forefront. Why should Apple compromise my device’s integrity so that Meta can spy on me? I have no good answer to that.

[–] butitsnotme 3 points 1 week ago* (last edited 1 week ago)

My recommendation would be to use LVM. Set up a PV on the new drive and create an LV filling the drive (with a filesystem on it), then move all the data off of one old drive onto this new drive. Reformat that old drive as a second PV in the volume group and expand the LV onto it. Repeat the process for the second old drive, but instead of extending the LV again, set the parity option on the LV to 1. You can add further disks later, increasing the LV size or adding parity or mirroring as needed. This also gives you the advantage that you can (once you have some free space) create another LV with different mirroring or parity requirements.
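Roughly, the command sequence would look like this (device names, the VG/LV names, and the filesystem are placeholders; the final conversion to parity may also need an interim raid1 step depending on your LVM version):

# New drive becomes the first PV, with an LV (and filesystem) filling it
pvcreate /dev/sdc
vgcreate data /dev/sdc
lvcreate --extents 100%FREE --name storage data
mkfs.ext4 /dev/data/storage

# ...copy everything from the first old drive onto /dev/data/storage...

# First old drive joins the VG, then grow the LV and filesystem into it
pvcreate /dev/sda
vgextend data /dev/sda
lvextend --extents +100%FREE /dev/data/storage
resize2fs /dev/data/storage

# ...copy the second old drive's data, then add it as a third PV...
pvcreate /dev/sdb
vgextend data /dev/sdb

# Instead of extending again, add one disk's worth of parity
lvconvert --type raid5 --stripes 2 data/storage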

[–] butitsnotme 2 points 3 weeks ago

I use the first option, but with the addition of an LVM snapshot to guarantee that the database (or anything else in the backup) isn’t changed while the backup is being taken.
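The core of it is only a few commands; a minimal sketch, assuming the data lives on an LV called data/apps (names, sizes, and the repo path are placeholders):

# Freeze a point-in-time view of the volume holding the database
lvcreate --size 10G --snapshot --name apps-snapshot /dev/data/apps
mkdir -p /mnt/snapshot
mount /dev/data/apps-snapshot /mnt/snapshot

# Back up from the snapshot instead of the live filesystem
borg create --stats /path/to/repo::$(date '+%Y-%m-%d') /mnt/snapshot

# Clean up
umount /mnt/snapshot
lvremove --yes /dev/data/apps-snapshot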

[–] butitsnotme 1 points 3 weeks ago (1 children)

I see an option to allow time sensitive notifications for apps not in my list of allowed apps, it shows up once the “allow notifications from” option is selected. I have no idea if it actually works though, as I’ve never used it.

[–] butitsnotme 1 points 1 month ago

Getting a domain name may not be enough; if you don’t have a static IP you’ll still need a DDNS service.

What do you get for the paid no-ip service? Is it just a nice subdomain? You can get a custom domain and use a CNAME record to point one or more subdomains to a free DDNS subdomain.
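For example (all the names here are made up): if your domain is example.com and your free DDNS name is myhost.ddns.net, a couple of records like these give you friendly subdomains that follow your changing IP:

; in the example.com zone
vpn.example.com.     IN CNAME  myhost.ddns.net.
photos.example.com.  IN CNAME  myhost.ddns.net.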

[–] butitsnotme 2 points 1 month ago

See my other reply here.

[–] butitsnotme 1 points 1 month ago

See my other reply here.

[–] butitsnotme 2 points 1 month ago

See my other reply here.

[–] butitsnotme 2 points 1 month ago* (last edited 1 month ago)

I followed the guide found here, with a few modifications.

Notably, I did not encrypt the borg repository, and I heavily modified the backup script:

#!/bin/bash -ue

# The udev rule is not terribly accurate and may trigger our service before
# the kernel has finished probing partitions. Sleep for a bit to ensure
# the kernel is done.
#
# This can be avoided by using a more precise udev rule, e.g. matching
# a specific hardware path and partition.
sleep 5

#
# Script configuration
#

# The backup partition is mounted there
MOUNTPOINT=/mnt/external

# This is the location of the Borg repository
TARGET=$MOUNTPOINT/backups/backups.borg

# Archive name schema
DATE=$(date '+%Y-%m-%d-%H-%M-%S')-$(hostname)

# This is the file that will later contain UUIDs of registered backup drives
DISKS=/etc/backups/backup.disk

# Find whether the connected block device is a backup drive
for uuid in $(lsblk --noheadings --list --output uuid)
do
        if grep --quiet --fixed-strings $uuid $DISKS; then
                break
        fi
        uuid=
done

if [ -z "$uuid" ]; then
        echo "No backup disk found, exiting"
        exit 0
fi

echo "Disk $uuid is a backup disk"
partition_path=/dev/disk/by-uuid/$uuid
# Mount file system if not already done. This assumes that if something is already
# mounted at $MOUNTPOINT, it is the backup drive. It won't find the drive if
# it was mounted somewhere else.
(mount | grep $MOUNTPOINT) || mount $partition_path $MOUNTPOINT
drive=$(lsblk --inverse --noheadings --list --paths --output name $partition_path | head --lines 1)
echo "Drive path: $drive"

# Log Borg version
borg --version

echo "Starting backup for $DATE"

# Make sure all data is written before creating the snapshot
sync


# Options for borg create
BORG_OPTS="--stats --one-file-system --compression lz4 --checkpoint-interval 86400"

# No one can answer if Borg asks these questions, it is better to just fail quickly
# instead of hanging.
export BORG_RELOCATED_REPO_ACCESS_IS_OK=no
export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=no


#
# Create backups
#

function backup () {
  local DISK="$1"
  local LABEL="$2"  # currently unused; snapshot and archive names are derived from $DISK
  shift 2

  local SNAPSHOT="$DISK-snapshot"
  local SNAPSHOT_DIR="/mnt/snapshot/$DISK"

  local DIRS=""
  while (( "$#" )); do
    DIRS="$DIRS $SNAPSHOT_DIR/$1"
    shift
  done

  # Make and mount the snapshot volume
  mkdir -p $SNAPSHOT_DIR
  lvcreate --size 50G --snapshot --name $SNAPSHOT /dev/data/$DISK
  mount /dev/data/$SNAPSHOT $SNAPSHOT_DIR

  # Create the backup
  borg create $BORG_OPTS $TARGET::$DATE-$DISK $DIRS


  # Check the snapshot usage before removing it
  lvs
  umount $SNAPSHOT_DIR
  lvremove --yes /dev/data/$SNAPSHOT
}

# usage: backup <lvm volume> <label (currently unused)> <folders to back up, relative to the volume root>
backup photos immich immich
# Other backups listed here

echo "Completed backup for $DATE"

# Just to be completely paranoid
sync

if [ -f /etc/backups/autoeject ]; then
        umount $MOUNTPOINT
        udisksctl power-off -b $drive
fi

# Send a notification
curl -H 'Title: Backup Complete' -d "Server backup for $DATE finished" 'http://10.30.0.1:28080/backups'

Most of my services are stored on individual LVM volumes, all mounted under /mnt, so immich is completely self-contained under /mnt/photos/immich/. The last line of my script sends a notification to my phone using ntfy.
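For reference, the script gets triggered automatically when the backup drive is plugged in, via a udev rule that starts a systemd service (the guide covers this part; the rule and unit below are an illustrative sketch with placeholder paths, using the more precise per-UUID match mentioned in the script’s comments):

# /etc/udev/rules.d/99-backup.rules
ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_UUID}=="<uuid of backup partition>", TAG+="systemd", ENV{SYSTEMD_WANTS}="automatic-backup.service"

# /etc/systemd/system/automatic-backup.service
[Unit]
Description=Automatic backup to external drive

[Service]
Type=oneshot
ExecStart=/etc/backups/run.sh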

[–] butitsnotme 2 points 1 month ago (1 children)

Here are a few more details of my setup:

Components:

  • server
  • clients (phone/laptop)
  • domain name (we'll call it custom.domain)
  • home router
  • dynamic DNS provider

The home router has the WireGuard port forwarded to the server, with no re-mapping (I’m using the default 51820). It’s also providing DHCP services to my home network, using the 192.168.1.0/24 network.

The server is running the dynamic DNS client (keeping the dynamic domain name updated to my public IP), and I have a CNAME record on vpn.custom.domain pointing to the dynamic DNS name (which is an awful random string of characters). I also have server.custom.domain with an A record pointing to 10.30.0.1. All my DNS records are in public DNS, so there’s no need to change the DNS settings on the computer or phone, or to use DNS overrides with WireGuard.

Immich config:

version: "3.8"

services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:release
    entrypoint: ["/bin/sh", "./start-server.sh"]
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
    env_file:
      - .env
    ports:
      - target: 3001
        published: 2283
        host_ip: 10.30.0.1
    depends_on:
      - redis
      - database
    restart: always
    networks:
      - immich

WireGuard is configured using wg-quick (/etc/wireguard/wg0.conf):

[Interface]
Address = 10.30.0.1/16
PrivateKey = <server-private-key>
ListenPort = 51820

[Peer]
PublicKey = <phone-public-key>
AllowedIPs = 10.30.0.12/32

[Peer]
PublicKey = <laptop-public-key>
AllowedIPs = 10.30.0.11/32

Start WireGuard with systemctl enable --now wg-quick@wg0.

Phone WireGuard configuration (iOS):

[Interface]
Name = vpn.custom.domain

Private Key = <phone private key>
Public Key = <phone public key>

Addresses = 10.30.0.12/32
Listen port = <blank>
MTU = <blank>
DNS servers = <blank>

[Peer]
Public Key = <server public key>
Pre-shared key = <blank>
Endpoint = vpn.custom.domain:51820
Allowed IPs = 10.30.0.0/16
Persistent Keepalive = 25

[On Demand Activation]
Cellular = On
Wi-Fi = On
SSIDs = Any SSID

This connection is then left always enabled, and comes on whenever my phone has any kind of network connection.

My laptop (running Linux) is also using wg-quick (/etc/wireguard/wg0.conf):

[Interface]
Address = 10.30.0.11/32
PrivateKey = <laptop private key>

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.custom.domain:51820
AllowedIPs = 10.30.0.0/16

My wife’s Windows laptop is configured using the official WireGuard Windows app, with similar settings.

No matter where we are (at home, on a WiFi hotspot, or using cellular data), we access Immich over the VPN: http://server.custom.domain:2283/.

Let me know if you have any further questions.

[–] butitsnotme 1 points 1 month ago (2 children)

You can still download the previous major version, which is free.
