I have TrueNAS running in a VM on my Proxmox server. I'm getting 40 MB/s reads and writes to my main pool (tested with dd on the TrueNAS VM). All the drives in my pool are passed through directly, and I think it could be a slow HBA or maybe I wired the JBODs up wrong. Any help fixing this would be appreciated.
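For reference, the dd test I'm running looks roughly like this (tank/ddtest is just a throwaway dataset I create for the test, and the flags assume TrueNAS SCALE's GNU dd; compression and data caching are turned off on the test dataset so lz4 and the ARC don't inflate the numbers):

# throwaway dataset so lz4 and ARC caching don't skew the result
zfs create -o compression=off -o primarycache=metadata tank/ddtest

# sequential write: 16 GiB, flushed to disk before dd reports a speed
dd if=/dev/zero of=/mnt/tank/ddtest/testfile bs=1M count=16384 conv=fdatasync status=progress

# sequential read back
dd if=/mnt/tank/ddtest/testfile of=/dev/null bs=1M status=progress

# clean up
zfs destroy tank/ddtest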
System specs:
Server: HP DL380p Gen8 (2x Xeon 2620 CPUs)
The TrueNAS VM has 8 cores, 200 GB RAM, and a 43 GB boot disk allocated.
HBA: HP H221 660087-001, an LSI SAS 9207-8e (PCIe 2.0, ~600 MB/s transfers; could this be why it's slow? See the link-speed check just below these specs.)
HP D2700 managed through Proxmox with 25x 1.2 TB drives (24 in a draid2:10:2), used for main VM/LXC storage.
I've connected one port on the HBA to each JBOD (changing the wiring soon; maybe this is also part of the issue?)
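To rule the HBA in or out, my plan is to check the negotiated PCIe link on the Proxmox host (the card sits in the host, so this runs there, not inside the VM), something like:

lspci | grep -i lsi                        # find the HBA's PCI address
lspci -s <pci-address> -vv | grep -i lnk   # LnkCap = what the card supports, LnkSta = what was actually negotiated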
Pool configuration:
The pool is on a NetApp DS4246 JBOD with one connection to the HBA.
ashift=12, compression=lz4, encryption managed through TrueNAS
2x 6-wide RAIDZ2 vdevs (plus 1 hot spare) of 8 TB HGST drives with 4K sectors. The datasheet says ~200 MB/s streaming transfers.
2 mirrored ZeusRAM 8 GB log devices
1x 512 GB NVMe L2ARC cache (the performance issues preceded adding the L2ARC)
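For completeness, this is roughly how I'm confirming the layout and properties above (pool name tank, same as in the iostat further down):

zpool status tank                                # vdev layout, spare, log and cache devices
zpool list -v tank                               # per-vdev ALLOC/FREE
zpool get ashift tank
zfs get recordsize,compression,sync,atime tank   # dataset-level settings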
CPU usage is low, usually less than 10% during transfers. Over the network I'm only getting 8 MB/s, but as I said above the slow read/write speeds show up on the TrueNAS VM itself, so I'll deal with any network issues after I've sorted out the pool.
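When I do get to the network side, the plan is to separate raw network throughput from disk throughput with iperf3, roughly:

iperf3 -s                   # on the TrueNAS VM
iperf3 -c <truenas-vm-ip>   # from a client; measures TCP throughput with no disks involved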
The vdevs also aren't being used evenly, if that's helpful: raidz2-0 has 10.5 TB of data on it and raidz2-1 has 8.40 TB. Watching "zpool iostat -v tank 1" I can see data being read consistently from both vdevs, it's just happening slowly.
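In case it helps narrow things down, I can also watch per-device latency while a transfer runs, to see whether one disk is dragging its whole vdev down:

zpool iostat -vly tank 5    # -l adds latency columns, -y skips the since-boot summary
zpool iostat -w tank        # per-device latency histograms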
IDrive is the cheapest and it's what I use. They allow Linux backups and it's a third of the price of Backblaze buckets. They also give you the same amount of space again as regular cloud storage. Encryption is done locally before uploading. I think I got 5 TB for $70? That's all I need for critical data; the rest of my 100 TB is Linux ISOs.