TechAdmin

joined 1 year ago
[–] TechAdmin 12 points 1 year ago* (last edited 1 year ago)

Software config optimizations help a little bit, but my biggest improvement was moving the DB to an SSD. Spinning disks are great for capacity but not for DB performance: database workloads are heavy on random I/O, and spinning media drops off in performance fast for that access pattern because every random read means a physical seek.
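You can get a rough feel for the sequential-vs-random gap with a small script like this (a minimal sketch; note that on a freshly written file the OS page cache will hide most of the difference, so a realistic test needs a file larger than RAM, or direct I/O):

```python
import os
import random
import tempfile
import time

BLOCK = 4096                   # typical DB page size
FILE_SIZE = 16 * 1024 * 1024   # small illustrative test file
N_READS = 1024

# Create a scratch file filled with random data.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(os.urandom(FILE_SIZE))

def timed_reads(offsets):
    """Read BLOCK bytes at each offset and return elapsed seconds."""
    rfd = os.open(path, os.O_RDONLY)
    start = time.perf_counter()
    for off in offsets:
        os.pread(rfd, BLOCK, off)
    os.close(rfd)
    return time.perf_counter() - start

n_blocks = FILE_SIZE // BLOCK
seq_offsets = [i * BLOCK for i in range(N_READS)]
rand_offsets = [random.randrange(n_blocks) * BLOCK for _ in range(N_READS)]

seq_t = timed_reads(seq_offsets)
rand_t = timed_reads(rand_offsets)
print(f"sequential: {seq_t:.4f}s  random: {rand_t:.4f}s")

os.remove(path)
```

On an HDD with the cache out of the picture, the random run is typically an order of magnitude or more slower; on an SSD the two are much closer, which is why the DB move helps so much.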

I started out using ownCloud and later switched to Nextcloud once that fork was stable. For all my uses it has always needed beefy hardware to run well, though admittedly I have way more junk files in synced folders than I should and rarely clean things up.

[–] TechAdmin 2 points 1 year ago

That varies depending on the computer/motherboard manufacturer and model. The manual may reference that feature; if not, you can go into the BIOS/UEFI setup menu and browse through it to see whether there is any option to enable it. Also, I've only used it with built-in NICs, so I'm not sure whether it's an option with add-on NICs.

[–] TechAdmin 1 points 1 year ago

When old monitors break or act weird, a lot of the time it's capacitors going bad and popping. I love looking around at the insides of tech that has stopped working right to try to see why and maybe fix it, so I'm just curious what could be causing it :)

[–] TechAdmin 2 points 1 year ago (2 children)

IMO, management interfaces like iDRAC are a very nice extra to have when using enterprise servers for a homelab.

The base iDRAC allows you to control power state, monitor and configure hardware, and view the hardware system event log. The remote console and media features cost extra as part of iDRAC Enterprise. Remote console lets you access the server just as if you were physically in front of it. Remote media lets you mount disk images over the network and boot from bootable ones too.

It has in-band and out-of-band connectivity methods, but I only have experience with out-of-band.

[–] TechAdmin 3 points 1 year ago

VMs in ESXi show the same behavior when an iSCSI connection is lost and later restored. Windows with iSCSI drive mounts behaves the same way in that scenario too.

A UPS would be a great addition no matter which option you choose.

[–] TechAdmin 6 points 1 year ago (7 children)

Unfortunately I can't help with boot speed. Cold boot on enterprise servers tends to be on the slower side even for the latest servers at my work, across all major vendors. Rebooting is faster on the newer ones, but the older ones (around the same age as the R620) are slow to boot no matter what.

For the firmware, that system is past its end of support life, so once you're caught up to the latest releases you're done, just an FYI. Do you have a single Dell server or multiple?

I don't have much experience with single-server environments, so I'd recommend researching and verifying everything before attempting to install any firmware. Dell OpenManage Server Administrator looks like it could be helpful. Failing that, you can use the iDRAC web interface for some of the firmware installs; you'll need to research which ones can be installed there and the proper order to apply them. If your iDRAC has the fancy remote console and media features available, you could use those to handle the rest of the firmware updates as well as install any OS you want on it. If it doesn't and you have some budget available, I'd say look on eBay (or equivalent) for an iDRAC Enterprise card and license if needed.

If you have multiple Dell servers, I'd recommend using the OpenManage Enterprise virtual appliance they make. It's free and makes firmware updates on Dell servers quick and easy, and it can handle installing firmware in the correct order when necessary. It will need network access to each server's iDRAC interface.

[–] TechAdmin 2 points 1 year ago* (last edited 1 year ago) (1 children)

I recommend reading up on LXC within Proxmox. They're containers, so they share the host's kernel instead of virtualizing hardware, but you interact with them much like a normal VM. There are prebuilt templates for a few different distros available for download too.

My current test Proxmox setup is an Intel 10th-gen quad-core i5 NUC with 32GB RAM, 2 × 2TB NVMe, and a 1TB SATA SSD. I have a few different LXCs for things like an NVR, ZeroTier, Tailscale, and a general Docker one where I run Plex, Emby, Jellyfin, and supporting apps. Every LXC that needs it has been configured with access to the iGPU, and the host retains access as well.

[–] TechAdmin 2 points 1 year ago

Unfortunately I don't have any of those servers to test with anymore, and power draw was never a major concern at the time. It's also a different use case: I've always used IMMs to remotely set up and troubleshoot servers that I expected to be up 24/7.

[–] TechAdmin 6 points 1 year ago (5 children)

I suggest reading up on the way Wake-on-LAN works; it's pretty neat. It works by sending a "magic packet" to a local broadcast address. That generally can't be routed over the internet, so you need some device on the target network (or reachable over a VPN connection) to send the packet from.
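The magic packet itself is dead simple: 6 bytes of 0xFF followed by the target's MAC address repeated 16 times, sent as a UDP broadcast. A minimal sketch in Python (standard library only; the MAC here is a placeholder):

```python
import socket

def make_magic_packet(mac: str) -> bytes:
    """Build a WOL magic packet: 6x 0xFF then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    # WOL goes out as a UDP broadcast on the local segment; routers won't
    # normally forward it, which is why you need a sender (or VPN endpoint)
    # sitting on the target LAN.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(make_magic_packet(mac), (broadcast, port))

# Example (placeholder MAC): send_wol("AA:BB:CC:DD:EE:FF")
```

Run that from any always-on box on the same LAN (a Pi, a router that runs scripts, etc.) and the sleeping machine's NIC will wake it, assuming WOL is enabled in the BIOS/UEFI and on the NIC.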

For the KVM part, that model of server should have some form of remote management. I think they called it the Integrated Management Module (IMM) on those machines. The IMM runs as long as the server has power; it's a tiny independent system. There are various licenses/feature sets, but at minimum it should give you a web interface to see the server's status and power it on and off. It may also have remote console and media options, but those are add-on costs, so not everybody buys them. The default login information should be somewhere on the chassis unless it was removed or lost. The old defaults were username 'USERID' (all uppercase) with password exactly 'PASSW0RD' (with a zero instead of the letter O). I don't recall when they changed to newer schemes, but it's worth a try.

[–] TechAdmin 1 points 1 year ago

I don't recall Windows ever touching the Linux bootloader, though I imagine it could if you had it scan and repair potential boot problems. Installing any OS can change the bootloader, so I've always installed Windows first and then Linux, especially with only a single drive. When dual booting with a dedicated drive for each OS, I install in the same order but change the drive boot order in the BIOS between installs. After both installs are done, I leave the drive boot order so the Linux bootloader is the default; I can switch the drive boot order back if I ever need the Windows bootloader.
