pyrosis

joined 9 months ago
[–] pyrosis 3 points 6 months ago (2 children)

This takes a degree of understanding of what you are doing and why it fails.

I've done some research on this myself and the answer is the USB controller. Specifically, the way the USB controller "shares" bandwidth between devices, which is not how a SATA controller or a PCIe lane handles it.

ZFS expects direct control of the disk to operate correctly and anything that gets in between the file system and the disk is a problem.

In the case of USB, let's say you have two USB-to-NVMe adapters plugged into the same system in a basic ZFS mirror. ZFS will expect to mirror operations between these devices but will be interrupted by the USB controller constantly switching bandwidth between them.

A better, but still bad, solution would be something like a USB-to-SATA enclosure. In this situation, if you installed a couple of disks in a mirror in the enclosure, they would be using a single USB port and the controller would at least keep the data on one lane instead of constantly switching.

Regardless, if you want to dive deeper you will need to do some reading on USB controllers and bandwidth sharing.

If you want a stable system, give ZFS direct access to your disks; if you do not, accept that ZFS operations will degrade over time.
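
For reference, "direct access" in practice means handing ZFS the raw devices, ideally by stable ID. A minimal sketch, assuming hypothetical device names under /dev/disk/by-id:

```shell
# Build a mirror from whole disks addressed by stable IDs
# (device names below are placeholders; list yours with: ls /dev/disk/by-id)
zpool create tank mirror \
  /dev/disk/by-id/nvme-ExampleVendor_SN_A \
  /dev/disk/by-id/nvme-ExampleVendor_SN_B

# Confirm ZFS sees the raw devices rather than a USB translation layer
zpool status tank
```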

[–] pyrosis 4 points 6 months ago

Have a look at Stirling PDF. It's a self-hosted alternative to most, if not all, of the Adobe functions she might care about. It can be set up with Docker.

https://github.com/Stirling-Tools/Stirling-PDF

[–] pyrosis 1 points 7 months ago

I thought it would. If it still requires sudo to run, it is probably just Docker wanting your user account added to the docker group. If the docker group doesn't exist, you can safely create it.

You will likely need to log out and log back in for the system to recognize the new group permissions.
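
For completeness, the group fix itself is usually just a couple of commands:

```shell
# Create the docker group if missing (-f: no error if it already exists),
# then add the current user to it
sudo groupadd -f docker
sudo usermod -aG docker "$USER"

# After logging back in, confirm the membership
id -nG "$USER"
```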

[–] pyrosis 1 points 7 months ago (3 children)

That doesn't make any sense to me. It can be installed directly from pacman. It may be something silly like adding your user to the docker group. Have you done something like the below for Docker?

1. Update the package index:

sudo pacman -Syu

2. Install Docker:

sudo pacman -S docker

3. Enable and start the Docker service:

sudo systemctl enable docker.service
sudo systemctl start docker.service

4. Add your user to the docker group so you can run Docker commands without sudo:

sudo usermod -aG docker $USER

5. Log out and log back in for the group changes to take effect.

6. Verify that Docker is installed correctly by running:

docker --version

If you get the above working, docker compose is just:

sudo pacman -S docker-compose

[–] pyrosis 2 points 7 months ago* (last edited 7 months ago) (5 children)

What computer and OS do you have that can't run Docker? You can run a full stack of services on a random Windows laptop as easily as on a dedicated server.

Edit

Autocorrect messing with OS.

[–] pyrosis 8 points 7 months ago (7 children)

Honestly, at this point that is Docker and Docker Compose.

As to what to run it on, that very much depends on preference. I use a Proxmox server, but it could just as easily be pure Debian. A basic web UI like Cockpit can make system management operations a bit simpler.

[–] pyrosis 2 points 7 months ago

My favorite is using the native ZFS sync capabilities (snapshot plus send/receive), though that requires ZFS and snapshots configured properly.
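
A minimal sketch of that sync flow, with hypothetical pool, dataset, and host names:

```shell
# Initial replication: snapshot, then send the full stream to the target
zfs snapshot tank/data@backup-1
zfs send tank/data@backup-1 | ssh backuphost zfs receive backup/data

# Subsequent syncs send only the incremental delta between snapshots
zfs snapshot tank/data@backup-2
zfs send -i tank/data@backup-1 tank/data@backup-2 | \
  ssh backuphost zfs receive backup/data
```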

[–] pyrosis 2 points 7 months ago

At its core, Cockpit is like a modern-day Webmin that allows full system management. So yes, it can help with creating RAID devices and even LVMs. It can help with mount points and encryption as well.

I do know it can help share whatever you like over SMB and NFS. Just have a look at the plugins.

As for Proxmox, it's just using Debian underneath. That Debian already happens to be optimized for virtualization and has native ZFS support baked in.

https://cockpit-project.org/applications

[–] pyrosis 2 points 7 months ago

I noticed some updates on live video streaming. I do wonder if that will help with how Jellyfin interprets commercial breaks.

Let's say I have an m3u8 playlist with a bunch of video streams. I've noticed in Jellyfin that when they go to something like a commercial, the stream freaks out. It made me wonder if the player just couldn't understand the ad insertion.

Anyway wonderful update regardless and huge improvement.

[–] pyrosis 2 points 7 months ago (1 children)

Another thing to keep in mind with ZFS is that underlying VM disks will perform better if the ZFS pool is a mirror or a stripe of mirrors. RAIDZ1/RAIDZ2-type pools are better for media and files. VM disk IO will improve dramatically on the mirror-style layouts. Just passing on what I've learned over time in optimizing systems.
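
A stripe of mirrors for VM storage would look roughly like this, with placeholder device names:

```shell
# Two mirrored pairs striped together (RAID10-style); random IO scales
# with the number of mirror vdevs, which is what VM disks benefit from
zpool create vmpool \
  mirror /dev/disk/by-id/ata-ExampleDisk_A /dev/disk/by-id/ata-ExampleDisk_B \
  mirror /dev/disk/by-id/ata-ExampleDisk_C /dev/disk/by-id/ata-ExampleDisk_D
```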

[–] pyrosis 2 points 7 months ago (1 children)

Bookmark this if you utilize zfs at all. It will serve you well.

https://jrs-s.net/2018/08/17/zfs-tuning-cheat-sheet/

You will be amazed at the ZFS performance possible in Proxmox thanks to all the tuning that is available. If this is going to be an existing ZFS pool, keep in mind it's easier to just install Proxmox with the ZFS option and let it create a ZFS rpool during setup. For the rpool, tweak a couple of options: make sure ashift is at least 12 during the install, or 13 if you are using some crazy-fast SSD as the Proxmox disk for the rpool.

It needs to be 12 if it's a modern-day spinner and is probably a good setting for most SSDs. Do not go over 12 if it's a spinning disk.
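
For a pool created by hand, the same setting looks like this (pool and device names are placeholders); note that ashift is fixed at creation time:

```shell
# ashift=12 -> 2^12 = 4096-byte sectors, matching modern 4K disks
zpool create -o ashift=12 mediapool mirror \
  /dev/disk/by-id/ata-ExampleDisk_A \
  /dev/disk/by-id/ata-ExampleDisk_B

# ashift cannot be changed later, so verify it now
zpool get ashift mediapool
```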

Now beyond that, assuming you have an existing ZFS pool, you can import it directly into Proxmox with a single import command.
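
The import itself, with a hypothetical pool name:

```shell
# With no arguments, list pools the system can see but hasn't imported
zpool import

# Import by name; add -f only if the pool was last used on another host
zpool import mediapool
```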

In this scenario, ZFS would be fully managing disk operations for both the rpool and a media pool.

You should consider tweaking a couple of things to really improve performance via the guide I linked.

Proxmox VMs/zvols live in their own dataset. Before you start going too crazy creating VMs, make sure you are taking advantage of all the performance tweaks you can. By default, Proxmox sets the record size for all datasets to 128k. qcow2, raw, and even zvols will benefit from a record size of 64k because it tends to improve the performance of underlying filesystems like ext4, XFS, and even UFS. IMO it's silly to create VM filesystems like btrfs if your VM is sitting on top of a CoW filesystem.
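
The record size change is a single property on the dataset holding the VM disks; the dataset name below is just an example, and the change only affects newly written blocks:

```shell
zfs set recordsize=64k rpool/data
zfs get recordsize rpool/data
```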

Another huge improvement is tweaking the compression algorithm. lz4 is blazing fast and should be your default go-to for ZFS. The newer zstd is pretty good but can slow things down a bit for active operations like live VM disks. So make sure your default compression is lz4 for datasets with VM disks. Honestly, it's just a good default to specify for the entire pool. You can select other compression algorithms for datasets with more static data.
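
A sketch of setting those defaults, with hypothetical dataset names and assuming zstd for the static-data case:

```shell
# lz4 set at the pool root is inherited by child datasets
zfs set compression=lz4 rpool

# A dataset of mostly static files can afford a stronger algorithm
zfs set compression=zstd rpool/archive

zfs get compression rpool rpool/archive
```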

If you have a media dataset full of files like music, videos, and pictures, setting a record size of 1M will heavily improve disk IO operations.

In Proxmox, ZFS will default to grabbing half of your memory for ARC. Make sure you change that after install. It's a module option file that defines arc_max as a byte value. Set the max to something more reasonable if you have 64 GB of memory. You can also define arc_min.
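
The value is given in plain bytes, so it helps to compute it first; 16 GiB below is just an example target, and the file path follows the usual Proxmox convention:

```shell
# Compute 16 GiB in bytes for zfs_arc_max (adjust to your system)
arc_max_bytes=$((16 * 1024 * 1024 * 1024))
echo "$arc_max_bytes"   # 17179869184

# Then put this line in /etc/modprobe.d/zfs.conf and run
# `update-initramfs -u` so the limit applies at boot:
#   options zfs zfs_arc_max=17179869184
```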

Some other huge improvements? If you are using an SSD for your Proxmox install, I highly recommend installing log2ram on your hypervisor. It will stop all those constant log writes to your SSD, syncing them to disk on a timer and at shutdown/reboot. It's also a huge performance and SSD-lifespan improvement to migrate /tmp and /var/tmp to tmpfs.
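
The /tmp and /var/tmp move is a pair of fstab entries; the size= values here are assumptions to tune for your workload:

```shell
# Add to /etc/fstab, then `mount -a` (or reboot) to activate:
#   tmpfs  /tmp      tmpfs  defaults,noatime,mode=1777,size=2G  0 0
#   tmpfs  /var/tmp  tmpfs  defaults,noatime,mode=1777,size=1G  0 0
```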

So many knobs to turn. I hope you have fun playing with this.

[–] pyrosis 1 points 7 months ago

I agree with this. The only VM I have with multiple interfaces is an OPNsense router VM heavily optimized for KVM to reach 10Gb speeds.

One of the interfaces, beyond WAN and LAN, links to a Proxmox services bridge. It's a Proxmox bridge I gave to a container, and it is just a gateway in OPNsense. It points traffic destined for services directly at the container IP, keeping the service traffic on the bridge instead of having to hit the physical network.
