this post was submitted on 08 Aug 2023
248 points (95.6% liked)

Selfhosted


I can't say for sure, but there is a good chance I might have a problem.

The main picture attached to this post is a pair of dual bifurcation cards, each with a pair of Samsung PM963 1TB enterprise NVMes.

They are going into my R730XD, which... is getting pretty full. This will fill up the last empty PCIe slots.

But, knock on wood, my R730XD supports bifurcation! LOTS of bifurcation.

As a result, it now has more HDDs and NVMes than I can count.

What's the problem, you ask? Well, that is just one of the many servers I have lying around here, all completely filled with NVMe and SATA SSDs...

Figured I would share. Seeing a bunch of SSDs is always a pretty sight.

And, as of two hours ago, my particular Lemmy instance was migrated to these new NVMes, completely transparently too.

[–] [email protected] 95 points 1 year ago* (last edited 1 year ago) (2 children)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters  More Letters
NVMe           Non-Volatile Memory Express interface for mass storage
PCIe           Peripheral Component Interconnect Express
SATA           Serial AT Attachment interface for mass storage
SSD            Solid State Drive mass storage

4 acronyms in this thread; the most compressed thread commented on today has 3 acronyms.

[Thread #13 for this sub, first seen 8th Aug 2023, 21:55] [FAQ] [Full list] [Contact] [Source code]

[–] [email protected] 10 points 1 year ago

Fantastic bot, honestly.

[–] [email protected] 23 points 1 year ago (1 children)

I don't see any issues!

/me hides his 16x 4TB 12G SAS drives...

[–] brygphilomena 7 points 1 year ago (2 children)

I think I'm at 7x 18TB drives. I'm slowly replacing all the smaller 8TB disks in my server. Only 5 more to go. After that, it's a new server with more bays and/or a JBOD shelf.

[–] [email protected] 1 points 1 year ago

The SAS drives are all SSDs. I also have 8x 12TB in spinning rust, and an LTO robot, though it's not currently in service.

[–] [email protected] 1 points 1 year ago

That's my next step. I have 8x 8TB drives I need to start swapping, 2x 512GB NVMes for system/app cache, and 1x 2TB NVMe for media cache.

[–] [email protected] 12 points 1 year ago (1 children)

If that’s a problem then I don’t want to be solved.

[–] [email protected] 3 points 1 year ago (2 children)

It's only a problem when you get the electric bill! (Or the wife finds your eBay receipts.)

[–] [email protected] 4 points 1 year ago (1 children)

I doubt these use much power compared to their spinning rust antecedents.

[–] [email protected] 4 points 1 year ago

I meant my general electric bill. My server room averages 500-700 watts.

[–] [email protected] 2 points 1 year ago (1 children)

Curious how many watts this machine pulls? Also, if you ever filled it with spinning disks, would flash be less power-hungry?

[–] [email protected] 2 points 1 year ago

This one averages around 220-250 watts.

It's completely full of spinning disks. Flash would use less power, but would cost drastically more.
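
For anyone curious what draws like these translate to on the bill, here's a rough back-of-the-envelope sketch. The $0.15/kWh rate is just an assumed example, not a figure from the thread:

```python
# Rough monthly-cost estimate for a steady average power draw.
# The electricity rate is an assumed example; substitute your own.

def monthly_cost(avg_watts: float, rate_per_kwh: float = 0.15) -> float:
    """Approximate monthly electricity cost in dollars for a constant load."""
    kwh_per_month = avg_watts / 1000 * 24 * 30  # watts -> kWh over ~30 days
    return kwh_per_month * rate_per_kwh

for watts in (220, 250, 500, 700):
    print(f"{watts} W is roughly ${monthly_cost(watts):.2f}/month at $0.15/kWh")
```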

[–] [email protected] 9 points 1 year ago (2 children)

I dream of this kind of storage. I just added a second M.2 with a couple of TB on it, and the space is lovely, but I can already see I'll fill it sooner than I'd like.

[–] [email protected] 6 points 1 year ago (2 children)

I will say, it's nice not having to nickel and dime my storage.

But, the way I have things configured, redundancy takes up a huge chunk of the overall storage.

I have around 10x 1TB NVMe and SATA SSDs in a Ceph cluster. 60% storage overhead there.

Four of those 8TB disks are in a ZFS striped mirror / RAID 10. 50% storage overhead.

The 4x 970 EVO / EVO Plus drives are also in a striped-mirror ZFS pool. 50% overhead.

But, still PLENTY of usable storage, and highly available at that!
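
For anyone wondering where those overhead figures come from, here's a minimal sketch of the math. The pool settings are assumptions (size=3 replication for Ceph and 2-way mirrors for ZFS; the exact configuration isn't spelled out above). Note that 3x replication works out to ~67%, in the same ballpark as the quoted 60%:

```python
# Sketch of the redundancy-overhead math above. Pool settings are
# assumptions: size=3 replication for Ceph, 2-way mirrors for ZFS.

def replication_overhead(replicas: int) -> float:
    """Fraction of raw capacity lost to N-way replication (Ceph-style)."""
    return 1 - 1 / replicas

def mirror_overhead(mirror_width: int) -> float:
    """Fraction of raw capacity lost in a striped-mirror (RAID 10) pool."""
    return 1 - 1 / mirror_width

print(f"Ceph, 3x replication: {replication_overhead(3):.0%} overhead")  # ~67%
print(f"ZFS striped mirror:   {mirror_overhead(2):.0%} overhead")       # 50%
```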

[–] [email protected] 2 points 1 year ago

Cripes, I was stoked I managed to upgrade from 4x 2TB to 4x 4TB recently.

[–] [email protected] 7 points 1 year ago (1 children)

Is your problem that you are bragging about your drives?

[–] [email protected] 2 points 1 year ago (1 children)

I'm out of room to add more drives!

Every one of my servers is basically completely full of disks. I need more servers.

[–] [email protected] 2 points 1 year ago

I need some drives

[–] [email protected] 5 points 1 year ago

I'll gladly take those problems off your hands, for free.

[–] Heggico 5 points 1 year ago (1 children)

I'm confused. Why do those cards have a heatsink? I needed a card like that because my motherboard did not support bifurcation, so I had to use a splitting card. The cards I know of that require bifurcation don't even need a controller or heatsink; they are wired pretty much directly to the PCIe bus.

[–] [email protected] 3 points 1 year ago (3 children)

I actually looked up the chip numbers, and it's a "splitter".

I don't know WHY there is a splitter, as one shouldn't be needed; these cards are advertised to work only on motherboards supporting bifurcation. However, there is indeed a splitter.

The documentation is also REALLY horribly translated:

Note: Without pcie splitter function in this host adapter (ASM1182E chip), so motherboard must support PCIe Bifurcation. Otherwise, only one M.2 PCIe SSD will be recognized. If you are not sure PCIe Bifurcation of your motherboard, please consult motherboard munufacture or contact us via amazon message

Here is the documentation for the chip itself: https://www.asmedia.com.tw/product/213yQcasx8gNAzS4/b7FyQBCxz2URbzg0

I am not 100% certain how, where, or why it fits in there. Perhaps it's for link power management? Or something.

But, I can confirm, these cards DO require bifurcation to be enabled. Without bifurcation, you only see the first drive.
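
Related: a quick way to double-check that every drive behind one of these cards actually enumerated is to count NVMe controllers in sysfs. A minimal Linux-only sketch (standard sysfs paths, nothing specific to these cards):

```python
# List every NVMe controller the kernel enumerated (Linux only).
# If bifurcation is disabled, a multi-drive card like this will
# typically show only its first drive here.
from pathlib import Path

def list_nvme_controllers():
    """Yield (PCI address, model string) for each NVMe controller."""
    for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
        pci_addr = (ctrl / "device").resolve().name  # e.g. 0000:3b:00.0
        model = (ctrl / "model").read_text().strip()
        yield pci_addr, model

for addr, model in list_nvme_controllers():
    print(addr, model)
```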

[–] [email protected] 4 points 1 year ago

This does seem like an issue; I can help you free up some PCIe slots if you'd like.

[–] [email protected] 4 points 1 year ago (1 children)

Love this. Apart from hosting an instance, what are you using it for? Self-cloud?

[–] [email protected] 5 points 1 year ago (2 children)

I host a few handfuls of websites and some Discord bots.

I hoard Linux ISOs. I use it for general-purpose learning and experimentation.

There is also Kubernetes running, source control, and a bit of everything else.

[–] [email protected] 3 points 1 year ago (1 children)

The only problem I see is using x8 slots instead of x16 slots, which would give double the storage.

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago) (2 children)

What's the problem?

Each NVMe uses four lanes. Each of these x8 slots holds two NVMes, for a total of eight lanes.

The x16 slot already has 4x NVMe in it, lol. The other x16 slot has a GPU, which is located in that particular slot due to the lovely 3D-printed fan shroud.

One of the other full-height x8 slots also has a PLX switch, and is loaded with four more NVMes.
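
In other words, the lane math works out. A trivial sketch of the budget (x4 per drive is the standard NVMe link width):

```python
# PCIe lane budget from the comment above: each NVMe gets a x4 link,
# so an x8 slot feeds two drives and a bifurcated x16 slot feeds four.
LANES_PER_NVME = 4

def drives_per_slot(slot_lanes: int) -> int:
    """How many x4 NVMe drives fit in a slot of the given width."""
    return slot_lanes // LANES_PER_NVME

print(drives_per_slot(8))   # 2 drives per x8 slot
print(drives_per_slot(16))  # 4 drives per x16 slot (4x4x4x4 bifurcation)
```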

[–] [email protected] 3 points 1 year ago (1 children)

Well, this seems like a good problem to have, hahah. If you need to get rid of some of those SSDs, count me in.

[–] [email protected] 5 points 1 year ago (2 children)

eBay! You can pick up these "used" enterprise NVMes and SSDs for CHEAP. All 10 arrived with less than 5% wear.

[–] [email protected] 1 points 1 year ago

Good to know. I will take a look, thank you.

[–] [email protected] 3 points 1 year ago (1 children)

Having a large flash pool really makes your life so much better.

Until you fill up all your space and have to buy more :p

[–] [email protected] 2 points 1 year ago

Hopefully that doesn't happen soon! I don't have too much room for more flash, lol.

But I have quite a bit of available space, so there shouldn't be any concerns. Also, tomorrow, after a few adapters arrive, I'll be adding another 2x 1TB flash drives to my OptiPlex 5060 SFF.

[–] Vake 2 points 1 year ago (1 children)

Wondering what software you're running to manage all the storage, with your containers and things on top? Is it all on the R730XD?

[–] [email protected] 4 points 1 year ago

The picture of the GUI at the end is Proxmox.
Proxmox is really powerful and great for a few servers.

[–] [email protected] 2 points 1 year ago (1 children)

Do you happen to have a link to those cards?

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (3 children)

Dual Slot Bifurcation Card: those are the ones I just picked up.

If you have an x16 slot, can fit a full-height card, and can use 4x4x4x4 bifurcation, the ASUS Hyper M.2 is really good.

[–] [email protected] 2 points 1 year ago (1 children)

Sweet!

I've got a Gen3 Hyper M.2, but I was looking for something for the x8 slots in one of my servers without needing full-height cards.

[–] [email protected] 2 points 1 year ago

That's the exact use case I got these for

[–] Molecular0079 1 points 1 year ago

Hell yeah. I personally just switched over from a QNAP HDD-based NAS to a DIY NVMe-based NAS running Arch and ZFS, and I couldn't be happier with the performance. It is wild seeing ZFS scrubs happen at 1.3 GB/s. With NVMe prices coming down, I really think that if you can afford the slightly higher cost per GB, it is so worth it in the long run.

I am using a Konyead 3001K, which supports 4 NVMe slots on one card, but unfortunately my mobo only supports x8x4x4, so I can only use 3 of them. Luckily my mobo has two on-board slots, so I just used one of them to create a 4-drive raidz1.
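
As a rough guide, raidz1 keeps one drive's worth of parity, so usable space is about (n-1)/n of raw capacity. A tiny sketch; the 2TB drive size is just an example, since the comment doesn't say what size the drives are:

```python
# Approximate raidz1 usable capacity: one drive's worth of parity,
# ignoring ZFS metadata and padding overhead.
def raidz1_usable_tb(drives: int, size_tb: float) -> float:
    return (drives - 1) * size_tb

print(raidz1_usable_tb(4, 2.0), "TB usable from 4x 2TB (example size)")
```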
