So, last month, my Kubernetes cluster decided to eat shit while I was away at a work conference.
When I returned, I decided to try something a tad different and rolled out Proxmox across all of my servers.
Well, I am a huge fan of hyper-converged, clustered architectures for my home network/lab, so I decided to give Ceph another try.
I have used it in the past with relative success under Kubernetes (via Rook/Ceph), and I currently leverage Longhorn.
Cluster Details
- Kube01 - Optiplex SFF
- i7-8700 / 32G DDR4
- 1T Samsung 980 NVMe
- 128G KIOXIA NVMe (Boot disk)
- 512G SATA SSD
- 10G via ConnectX-3
- Kube02 - R730XD
- 2x E5-2697a v4 (32c / 64t)
- 256G DDR4
- 128T of spinning disk.
- 2x 1T 970 evo
- 2x 1T 970 evo plus
- A few more NVMe and SATA drives
- Nvidia Tesla P4 GPU.
- 2x Google Coral TPU
- 10G Intel networking
- Kube05 - HP z240
- i5-6500 / 28G RAM
- 2T Samsung 970 Evo plus NVMe
- 512G Samsung boot NVMe
- 10G via ConnectX-3
- Kube06 - Optiplex Micro
- i7-6700 / 16G DDR4
- Liteon 256G SATA SSD (boot)
- 1T Samsung 980
Attempt number one.
I installed and configured Ceph using Kube01 and Kube05.
I used a mixture of five 970 Evo / 970 Evo Plus / 980 NVMe drives and expected it to work pretty decently.
It didn't. The I/O was so bad it was causing my servers to crash.
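For what it's worth, this kind of pain shows up in Ceph's own health output long before a host falls over; a quick check with the standard CLI:

```bash
# Cluster-wide status; slow ops and degraded PGs show up in the health section.
ceph -s

# Detail on which OSDs are reporting slow or blocked operations.
ceph health detail | grep -i slow
```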
I ended up removing Ceph and using LVM/ZFS for the time being.
Here are some benchmarks I found online:
https://docs.google.com/spreadsheets/d/1E9-eXjzsKboiCCX-0u0r5fAjjufLKayaut_FOPxYZjc/edit#gid=0
https://www.proxmox.com/images/download/pve/docs/Proxmox-VE_Ceph-Benchmark-202009-rev2.pdf
The TL;DR after lots of research: don't use consumer SSDs. Only use enterprise SSDs. Ceph does everything with synchronous writes, and consumer drives without power-loss protection can't safely cache those, so their performance falls off a cliff.
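If you want to sanity-check a drive before trusting it with OSDs, the usual test is a single-threaded sync write with fio (roughly what the Proxmox benchmark paper above does; the device path is a placeholder, and this writes directly to the disk):

```bash
# Single-job, queue-depth-1 sync writes: the pattern Ceph journaling produces.
# WARNING: writes to the raw device; only run against an empty disk.
fio --name=ceph-sync-test --filename=/dev/sdX --direct=1 --sync=1 \
    --rw=write --bs=4k --numjobs=1 --iodepth=1 \
    --runtime=60 --time_based --group_reporting
```

A drive with power-loss protection will hold tens of thousands of IOPS here; a consumer drive will often collapse into the hundreds.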
Attempt / Experiment Number 2.
I ended up ordering 5x 1T Samsung PM863a enterprise SATA drives.
After reinstalling Ceph, I put three of the drives into Kube05 and one more into Kube01 (no ports/power for more than a single extra SATA disk...).
And then I put the cluster together. At first, performance wasn't great... (but still 10x the performance of the first attempt!). After updating the CRUSH map to set the failure domain to OSD rather than host, though, performance picked up quite dramatically.
This is due to the current imbalance of storage per host: Kube05 has 3T of drives, Kube01 has 1T, and there is no storage elsewhere. With the failure domain set to host, Ceph could only spread replicas across two hosts; per-OSD placement lets replicas land on separate drives instead, even within one host.
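For reference, a minimal sketch of that change via a new CRUSH rule (the rule and pool names here are hypothetical):

```bash
# New replicated rule rooted at "default" with the OSD as the failure domain.
ceph osd crush rule create-replicated replicated-osd default osd

# Switch an existing pool over to the new rule.
ceph osd pool set mypool crush_rule replicated-osd
```

The obvious trade-off: with an OSD failure domain, multiple replicas can land on the same host, so losing a host can mean losing data.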
BUT... since this was a very successful test, and it was able to deliver enough IOPS to run my I/O-heavy Kubernetes workloads... I decided to take it up another step.
A few notes:
Can you guess which drive is the consumer Samsung 980, and which drives are the enterprise SATA SSDs? (Look at the latency column.)
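If you want to pull the same numbers from your own cluster, Ceph exposes per-OSD latency directly (standard Ceph CLI):

```bash
# Per-OSD commit/apply latency in milliseconds; a struggling consumer
# drive stands out here under sustained load.
ceph osd perf

# Map OSD IDs back to hosts and device classes to identify the drive.
ceph osd tree
```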
Future - Attempt #3
The next goal is to properly distribute OSDs.
Since I am maxed out on the number of 2.5" SATA drives I can deploy... I picked up some NVMe:
5x 1T Samsung PM963 M.2 NVMe.
I picked up a pair of dual-slot, half-height bifurcation cards for Kube02. These will let me place four of the drives into it, each with dedicated PCIe bandwidth to the CPU.
The remaining drive will go into Kube01 to replace the 1T Samsung 980 NVMe (a rough sketch of that swap is below).
This should give me a decent distribution of data, and with all enterprise drives, it should deliver pretty acceptable performance.
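The swap itself is a standard OSD replacement; a minimal sketch, assuming the 980 is osd.4 (hypothetical ID) and Proxmox's tooling:

```bash
# Drain the old OSD and let Ceph rebalance; wait for all PGs active+clean.
ceph osd out 4

# On the OSD's host: stop the daemon, then remove the OSD entirely.
systemctl stop ceph-osd@4
ceph osd purge 4 --yes-i-really-mean-it

# Create the replacement OSD on the new NVMe (device path is a placeholder).
pveceph osd create /dev/nvme1n1
```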
More to come....
Oh, I know the performance is drastically better doing that. I did play with it, and it works for the most part. But I have peace of mind knowing that if a host just magically craps itself, the data is already there and the machine has already fired up on the new host without any issues.
Also, there is something fun about tossing over 6 million IOPS worth of SSDs into my cluster, just to barely squeeze 50k IOPS out of Ceph!
I have five more "enterprise" NVMe drives arriving Tuesday, which will complete my Ceph cluster.
Currently, I have four of the enterprise SATA SSDs in place, and a single 980 as a placeholder.
Nothing at all to write home about. BUT, I do think the lack of distributed drives is making an impact; my most powerful host doesn't have any OSDs yet, as I am still waiting on the NVMe to arrive.
During heavy benchmarking, the limitations of the consumer 980 became pretty apparent when its latency spiked to the moon.
The addition of the five new NVMe drives should make a pretty dramatic difference. If I can squeeze out 100k IOPS, I will be happy. (Despite... over 6 million IOPS worth of SSDs...)
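For anyone who wants to run comparable numbers, Ceph ships a built-in benchmark; a minimal sketch against a hypothetical throwaway pool named bench:

```bash
# Throwaway pool for benchmarking (name and PG count are arbitrary).
ceph osd pool create bench 64

# 60 seconds of 4K writes across 16 threads; keep objects for the read test.
rados bench -p bench 60 write -b 4096 -t 16 --no-cleanup

# Sequential reads against the objects written above.
rados bench -p bench 60 seq -t 16

# Remove the benchmark objects when done.
rados -p bench cleanup
```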