[–] [email protected] 1 points 1 day ago

Short-sightedly.

[–] [email protected] -2 points 3 days ago (2 children)

Of course that’s how sanctions work... against nations. Linux isn’t a country, and it isn’t an American asset. They could have resisted; Linus chose not to.

[–] [email protected] 1 points 1 week ago

It’s on sa, so ok.

[–] [email protected] 3 points 1 week ago (1 children)

This is what I’m thinking. The file originally overwrote an older one: I muxed and synced TrueHD audio into the original, then ended up copying it back after forgetting a subtitle track. It definitely went back and forth under the same name a few times. It’s probably something with the Unix ACLs. Still concerning that it crashes the SMB daemon.
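
In case anyone wants to poke at the ACL angle, this is roughly what I’d compare between the bad remux and a file that copies fine; a minimal stdlib-only Python sketch, and both paths are just placeholders:

```python
#!/usr/bin/env python3
"""Compare mode, ownership, and extended attributes (where POSIX ACLs and
Samba's DOS attributes usually live) between the remux that hangs the copy
and a file that copies fine. Both paths below are placeholders."""
import os
import stat

def dump(path):
    st = os.stat(path)
    print(path)
    print(f"  mode: {stat.filemode(st.st_mode)}  uid: {st.st_uid}  gid: {st.st_gid}")
    print(f"  size: {st.st_size} bytes")
    # POSIX ACLs show up as 'system.posix_acl_access'; Samba may add
    # 'user.DOSATTRIB' on the server side.
    for name in os.listxattr(path):
        try:
            value = os.getxattr(path, name)
            print(f"  xattr: {name} = {value[:32]!r}")
        except OSError as exc:
            print(f"  xattr: {name} (unreadable: {exc})")

for p in ("/mnt/media/bad-remux.mkv", "/mnt/media/known-good.mkv"):  # placeholder paths
    dump(p)
```

If the ACL theory holds, the two files should differ in the `system.posix_acl_access` entry (or in which xattrs exist at all).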

16
submitted 1 week ago* (last edited 1 week ago) by [email protected] to c/selfhosted
 

So I recently muxed a Blu-ray with mkvmerge and proceeded to copy it to my NAS: Proxmox with a Cockpit SMB server, with a bind mount passed through from the host’s zraid.

I have 500 files on here and have had a similar issue once before with a different movie mux (I ignored it back then). I’m on uBlue Aurora (Fedora). Shares are SMB-mounted locally.

What happens is this: I copy the file, it gets to 100% and hangs. The Proxmox smbd spawns a ton of PIDs, and the unprivileged LXC won’t shut down or stop until the host is rebooted (which takes 5 minutes of slow waiting). All 4 cores mapped to the LXC get pegged to 100%, and adding 2 more for a total of 6 pegs those too.

After rebooting, copying the file again causes the issue every time. I have another Cockpit SMB NAS in a privileged container on entirely different hardware with a different setup, and it does the same exact thing when I copy this remux file. The file copies over fine to an SMB QNAP share, and it copies fine from a Windows box to the Proxmox NAS, so it’s isolated to the Dolphin file copy (or the mkvmerge mux output) going from Fedora to a Proxmox LXC SMB share.
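
To take Dolphin out of the equation, I’m thinking of reproducing the copy by hand with something like this; a rough Python sketch, paths are placeholders, and the hash check at the end would also tell me whether the bytes even arrive intact:

```python
#!/usr/bin/env python3
"""Copy the remux to the SMB mount in 1 MiB chunks and hash both ends, to see
whether the hang happens during the writes or only at the final flush/close,
and whether the bytes arrive intact. Paths are placeholders."""
import hashlib
import shutil

SRC = "/home/me/bad-remux.mkv"            # placeholder: local copy of the mux
DST = "/run/media/me/nas/bad-remux.mkv"   # placeholder: SMB mount of the share

def sha256_of(path, chunk=1024 * 1024):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

print("hashing source...")
src_hash = sha256_of(SRC)

print("copying...")
with open(SRC, "rb") as src, open(DST, "wb") as dst:
    shutil.copyfileobj(src, dst, length=1024 * 1024)
    dst.flush()   # if it stalls here rather than mid-copy, it's the final flush/close
print("copy finished and file closed")

print("hashing destination...")
dst_hash = sha256_of(DST)
print("hashes match" if src_hash == dst_hash else f"MISMATCH: {src_hash} vs {dst_hash}")
```

If smbd still piles up PIDs with this, the client app is ruled out and it comes down to the file, the mount, or the server side.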

I assume it’s something with Fedora and this file, but I don’t know what. The machine froze afterwards once, so I’m wondering if it’s potentially bad RAM (128GB of off-brand ECC).

Anyone experienced something similar before?

[–] [email protected] 5 points 1 week ago (5 children)

Mindless Self Indulgence

[–] [email protected] 11 points 3 weeks ago (4 children)

A blue whale would be impressive.

[–] [email protected] 4 points 1 month ago

Exactly. All the hype and excitement over a locked-down ARM ecosystem with evaporating battery-life advantages. No thank you. Development efforts are better served elsewhere. I would prefer the Linux community ignore it rather than support it over RISC-V.

[–] [email protected] 7 points 1 month ago (1 children)

Let’s whip out the spoons and replace our excavators!

If you want convicts to have jobs, fix how society views them so that they’re not pariahs.

[–] [email protected] 4 points 1 month ago

Tron: Legacy. The vibe is captivating.

[–] [email protected] 4 points 1 month ago

Steam Deck has been cited by a number of articles.

[–] [email protected] 4 points 1 month ago (1 children)

Redox looks like it’s up-and-coming; hopefully something usable pans out from it once COSMIC is rolled out of alpha.

A microkernel is an uptime and security benefit on modern hardware.

 

I’m running OPNsense on Proxmox with some LXC containers and Docker hosts.

I’ve never done internal DNS routing, just a simple DMZ with Cloudflare proxies and static entries for some external services. I want to simplify things and stop using IPs from memory internally.

For example, I have the ports on my Docker hosts memorized for the services I use and only a couple of mapped hosts in OPNsense, but nothing centralized.

What is the best way to handle internal DNS name resolution for both Docker and the LXC containers? Internal CA certs? An externally unroutable domain (for security)?
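
For reference, the end state I’m after is roughly this; a throwaway Python check with made-up hostnames and ports standing in for whatever host overrides end up in OPNsense:

```python
#!/usr/bin/env python3
"""Check that internal names resolve through the local resolver and that the
mapped service ports answer. Hostnames and ports here are made up; they stand
in for whatever host overrides end up in OPNsense."""
import socket

SERVICES = {
    "jellyfin.home.lan": 8096,   # placeholder Docker service
    "proxmox.home.lan":  8006,   # placeholder hypervisor host
    "cockpit.home.lan":  9090,   # placeholder NAS management UI
}

for host, port in SERVICES.items():
    try:
        addr = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)[0][4][0]
    except socket.gaierror as exc:
        print(f"{host}: does not resolve ({exc})")
        continue
    try:
        with socket.create_connection((addr, port), timeout=2):
            print(f"{host} -> {addr}:{port} reachable")
    except OSError as exc:
        print(f"{host} -> {addr}:{port} not reachable ({exc})")
```

Once every service in that table passes, I can stop carrying IP:port pairs around in my head.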

Any tips and setups appreciated.

 
 
 

If you don’t mind Chinese vendors on AliExpress, it’s probably the best deal you’re going to find.

 
