Hello Lemmings! I've been thinking about testing Ceph in my homelab, but to do it right I kinda want to build a cluster of systems, preferably using SBCs that each handle one Ceph storage drive. Specifically, a single SATA disk per node would be preferred.
A while back I came across the ODROID HC1, which was perfect, but I wasn't ready to pull the trigger at the time; the only thing I'd want above and beyond what the HC1 was capable of is PoE, to simplify power delivery. Unfortunately the HC1 is discontinued (and rather dated at this point), and I have yet to come across anything remotely similar. There are other boards along the same lines, like the ODROID HC4, and others (often involving adding a SATA HAT to the SBC), but I'm not keen on that approach.
Essentially, I just want one drive per SBC, build them into external-drive-like enclosures with a single HDD each (3.5" is most likely), and have a fleet of them. The idea would be to have a pair of more robust "gateway" systems that can pull from Ceph and present that data as CIFS, NFS, iSCSI, or whatever. Each SBC wouldn't need more than a 1Gbps link, but the gateway systems would likely be 10G-linked off the same switch to take advantage of the aggregate bandwidth of the cluster.
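For the NFS side of that gateway idea, Ceph is commonly re-exported through NFS-Ganesha using its CephFS FSAL. A minimal export block might look like the sketch below — the `Export_ID`, `Pseudo` path, and the `ganesha` CephX user are placeholders, and it assumes a CephFS filesystem and a matching keyring already exist:

```
EXPORT {
    Export_ID = 1;           # unique ID for this export (placeholder)
    Path = "/";              # path within the CephFS filesystem to export
    Pseudo = "/cephfs";      # path NFS clients actually mount (placeholder)
    Access_Type = RW;

    FSAL {
        Name = CEPH;         # serve this export from CephFS
        User_Id = "ganesha"; # CephX user (placeholder; needs a keyring on the gateway)
    }
}
```

CIFS would work similarly by having Samba share a CephFS mount on the gateway, and iSCSI is typically done with ceph-iscsi on top of RBD images rather than CephFS.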
Does anyone know of an SBC that's newer and similar in design to the HC1? Something newer/faster would be important, and something with PoE to power itself and the drive would be a nice-to-have (otherwise I'll rig up a high-amperage DC rail for all the nodes so I can use a single "PSU" for the lot). If someone knows a better community for this question, let me know... still getting used to Lemmy.
Dell or Lenovo micro form factor PCs are very popular as a low power homelab server.
Agreed. I have a few already, but I was hoping for something cheaper, since even used tiny/mini/micro systems can be $100-200 on the low end, while SBCs can cost far less than that, new.
If I only wanted to put 2 or 3 together for this, that wouldn't be a problem; at scale it becomes a financial one. Additionally, those mini systems tend to only support 2.5" drives, and I want the extra capacity of 3.5" drives.
Seems that there aren't many options still.
The SFF sizes sometimes have space for a 3.5" drive, and tend to be cheaper too. I often see 7th gen systems for $70 or so.
That presents a space issue, but it's not a terrible idea.
I may look into doing it with the Raspberry Pi CM4 blade that's being developed, which can use M.2 NVMe drives, but I was hoping for a 3.5"-compatible option for larger capacities on a smaller budget.
I'm torn about it. I may just experiment with some relatively small VMs to test the configuration and see if it's viable at all.