matthewc

joined 1 year ago
[–] [email protected] 11 points 1 year ago

Exactly what you said. It has always been about control.

[–] [email protected] 3 points 1 year ago

I spin up a lot of Docker containers with large data sets locally.

[–] [email protected] 7 points 1 year ago (4 children)

Developer here. Completely depends on your workflow.

I went base model and the only thing I regret is not getting more RAM.

Speeds have been phenomenal when the binaries are native. Speeds have been good when the binaries are running through Rosetta.

The specs you’re wavering between are extremely workflow specific. You know if your workflow requires the 16 extra GPU cores. You know if your workflow requires another 64 GB of RAM.

[–] [email protected] 4 points 1 year ago (2 children)

Raising the standard enables new uses of technology.

[–] [email protected] 1 points 1 year ago

In my experience restarts are infrequent. DSM runs plenty fast.

When I have a container that performs frequent small reads/writes, e.g. lemmy and pictrs, I put those directories on a USB-connected SSD. That greatly increased the performance of the containers I moved to that setup.
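Relocating a container's data is just a matter of pointing its bind mount at the SSD. A minimal docker-compose sketch; the `/volumeUSB1/usbshare` path is an assumption for where DSM mounts the USB drive, and `/mnt` is assumed to be the container's data directory — adjust both to your setup:

```yaml
services:
  pictrs:
    image: asonix/pictrs
    volumes:
      # Bind-mount the write-heavy data directory onto the
      # USB-connected SSD instead of the HDD volume.
      - /volumeUSB1/usbshare/pictrs:/mnt
```

After changing the path, copy the existing data over before recreating the container so nothing is lost.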

My other biggest performance boost was caching my main volume with two NVME SSDs.

[–] [email protected] 1 points 1 year ago (2 children)

You’re welcome.

I’m not sure if you’ll get a speed benefit or not since there is no way to prioritize the SSD.

[–] [email protected] 2 points 1 year ago (4 children)
[–] [email protected] 3 points 1 year ago

Serious. I installed VSCodium today.

[–] [email protected] 1 points 1 year ago (2 children)

I didn’t realize VS Code is open source. Good to know!

[–] [email protected] 3 points 1 year ago (1 children)

Use two providers on different networks. They can fill in the gaps for each other.

[–] [email protected] 2 points 1 year ago (1 children)

Not just performance; I can’t imagine it would be good for five drives of a volume to go missing if a single cable fails.

I’m wondering if I can move my four current drives into the DX517 and save the volume. Can I just move the drives around without consequence?

I’m starting to think a second NAS full of SSDs would be best to host home lab applications off of instead of trying to make it work with my current NAS and an expansion unit.


I have a DS920+ full of 16TB HDDs. I am considering adding a DX517. Is it possible to move my HDDs into the expansion unit and maintain the existing RAID while installing SSDs into the original four bays? Does it make sense to put the HDDs in the expansion unit, given that it connects over a single eSATA cable? Does it make any sense to try to optimize like that, considering all of the NICs are 1 GbE?
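The network-bottleneck question can be sanity-checked with quick arithmetic. A sketch, assuming for illustration a 3 Gb/s (SATA II-class) eSATA link — check your unit's actual spec:

```python
def mb_per_s(gigabits):
    # Convert a nominal Gb/s line rate to approximate MB/s (decimal units,
    # ignoring protocol overhead).
    return gigabits * 1000 / 8

nic = mb_per_s(1)    # 1 GbE NIC            -> 125.0 MB/s
esata = mb_per_s(3)  # assumed eSATA link   -> 375.0 MB/s
print(nic, esata)
```

Even under that conservative assumption, the eSATA link outruns a single 1 GbE NIC, so for network clients the NIC, not the expansion cable, is the ceiling.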

[–] [email protected] 2 points 1 year ago (1 children)

I added 16 GB to mine. It was recognized without me doing anything special. I run about 15 Docker containers in addition to the normal Synology suite. I only end up using about 3 GB of RAM, but I don’t mind having 17 GB available for paging.
