submitted 1 month ago by [email protected] to c/[email protected]
[-] [email protected] 35 points 1 month ago

Just because it has a CVE number doesn't mean it's exploitable. Of the 800 CVEs, which ones are in the KEV catalogue? What are the attack vectors? What mitigations are available?
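The kind of triage I mean can be sketched in a few lines: keep only the CVEs that are in the KEV catalogue or score above a CVSS threshold. The records below are made up for illustration; a real pipeline would pull them from the NVD and CISA KEV JSON feeds.

```python
# Hypothetical triage sketch: a CVE is "urgent" if it is known to be
# exploited in the wild (KEV) or has a high CVSS base score.

# Made-up sample records, not real feed data.
cves = [
    {"id": "CVE-2024-0001", "cvss": 9.8, "vector": "network"},
    {"id": "CVE-2024-0002", "cvss": 3.1, "vector": "local"},
    {"id": "CVE-2024-0003", "cvss": 5.5, "vector": "local"},
]
kev = {"CVE-2024-0003"}  # IDs from the KEV catalogue

def triage(cves, kev, threshold=7.0):
    """Return CVEs worth immediate attention: in KEV, or high CVSS."""
    return [c for c in cves if c["id"] in kev or c["cvss"] >= threshold]

urgent = triage(cves, kev)
```

Everything below the threshold and outside KEV can then be handled on the normal patch cycle instead of as an emergency.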

[-] [email protected] 25 points 1 month ago

The idea that it is somehow possible to determine that for each and every bug is a fantasy held by people who don't want to update to the latest version.

[-] [email protected] -1 points 4 weeks ago

The fact that you think it's not possible means that you're not familiar with CVSS scores, which every CVE includes and which are widely used in regulated fields.

And if you think that always updating to the latest version keeps you safe then you've forgotten about the recent xz backdoor.
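For reference, the CVSS score everyone cites comes with a base vector string, which is just slash-separated metrics; parsing one is trivial. A sketch, not a validator:

```python
def parse_cvss_vector(vector):
    """Parse a CVSS v3.x vector string into a metric dict (no validation)."""
    prefix, _, metrics = vector.partition("/")
    if not prefix.startswith("CVSS:"):
        raise ValueError("not a CVSS vector")
    return dict(m.split(":", 1) for m in metrics.split("/"))

v = parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
# v["AV"] == "N" means the attack vector is "network"
```

The point being: the per-CVE metadata is machine-readable, so filtering on it doesn't require manual effort per bug.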

[-] [email protected] 3 points 4 weeks ago

I am familiar with CVSS and its upsides and downsides. I am talking about the amount of resources required to determine that kind of information for every single bug, resources that far exceed the resources required to fix the bug.

New bugs are introduced in backports as well; think of that Debian OpenSSL issue where generated keys were flawed for years because of a distro-specific change. The idea that any version - whether the one you have been using, the latest one, or a backported one - will not gain new exploits or newly known bugs is not something that holds up in practice.

[-] [email protected] 1 points 4 weeks ago

I don't know where you got the idea that I'm arguing that old versions don't get new vulnerabilities. I'm saying that just because a CVE exists it does not necessarily make a system immediately vulnerable, because many CVEs rely on theoretical scenarios or specific attack vectors that are not exploitable in a hardened system or that have limited impact.

[-] [email protected] 2 points 4 weeks ago

And I am saying that the information you are referring to is unknown for any given CVE unless it is unlocked by an investment of effort that usually far exceeds the effort to actually fix it. We already don't have enough resources to fix all the bugs, much less assess the impact of every one.

Assessing the impact, on the other hand, is only really useful for two things:

  • a risk / impact assessment of an update to decide if you want to update or not
  • determining if you were theoretically vulnerable in the past

You could add prioritizing fixes to that list, but as mentioned, impact assessments are usually more work than the fixes themselves, and spending more effort prioritizing than fixing makes no sense.

[-] lightnegative 6 points 4 weeks ago

If I had a dollar for every BS CVE submitted by security hopefuls trying to pad their resumes...

[-] [email protected] 27 points 1 month ago

Great reason to push more code out of the kernel and into userland.

[-] [email protected] 26 points 1 month ago* (last edited 1 month ago)
[-] [email protected] 11 points 1 month ago

I dunno, Stallman, it's been 30 years, you got something for us?

[-] lightnegative 1 points 4 weeks ago

I'd just like to interject for a moment. What you're referring to as Linux, is in fact, GNU/Linux, or as I've recently taken to calling it, GNU plus Linux. Linux is not an operating system unto itself, but rather another free component of a fully functioning GNU system made useful by the GNU corelibs, shell utilities and vital system components comprising a full OS as defined by POSIX.

[-] [email protected] 5 points 1 month ago

I think we should just resurrect Plan 9 instead.

[-] [email protected] 3 points 1 month ago

Plan 9 is also monolithic, according to Wikipedia. For BSD it depends.

[-] [email protected] 3 points 4 weeks ago

I mean, you're right but I still want to see a modernized plan 9, I just think it would be neat.

[-] [email protected] 2 points 4 weeks ago
[-] [email protected] 1 points 4 weeks ago

Latest release was 9 years ago, not exactly what I'm looking for. 9front is probably closer to what I want than inferno.

[-] [email protected] 2 points 4 weeks ago

L4. HURD never panned out, and L4 is where the microkernel research settled: memory protection, scheduling, and IPC in the kernel, the rest outside. There are also important insights as to the APIs to do that with; in particular, the IPC mechanism is opaque - the kernel doesn't actually read the messages - which was the main innovation over Mach.

Literally billions of devices run OKL4, and seL4 systems are also in mass production. Think baseband processors, automotive, that kind of stuff.

The kernel being watertight doesn't mean that your system is, though; you generally don't need kernel privileges to exfiltrate data or otherwise mess around, root suffices.

If you want to see this happening -- I guess port AMDGPU to an L4?

[-] [email protected] 1 points 3 weeks ago

seL4 is the world’s only hypervisor with a sound worst-case execution-time (WCET) analysis, and as such the only one that can give you actual real-time guarantees, no matter what others may be claiming. (If someone else tells you they can make such guarantees, ask them to make them in public so Gernot can call out their bullshit.)

That bit on their FAQ is amusing.

[-] Rustmilian 5 points 1 month ago* (last edited 1 month ago)

eBPF is looking great.

[-] [email protected] 2 points 1 month ago

So what you are saying is "Mach was right"?

[-] [email protected] 1 points 3 weeks ago

Everybody knows it was. Even Linus said a microkernel architecture was better. He just wanted something working “now” for his hobby project, and microkernel research was still ongoing then.

[-] [email protected] 20 points 1 month ago

Best way I found is running this command:

rm -rf /

Then do a reboot just to be sure.

Good luck compromising my system after that.

FYI: This is a joke. Don't actually run this command :)

[-] qaz 7 points 4 weeks ago

It won't work without --no-preserve-root

[-] [email protected] 7 points 4 weeks ago* (last edited 4 weeks ago)

sudo apt-get remove systemd (don't actually run this)

[-] [email protected] 3 points 4 weeks ago

I ran it and followed the documentation to install Void Linux and now it runs so much smoother!

[-] lordnikon 2 points 4 weeks ago

good thing that command won't do anything anymore

[-] BigTrout75 17 points 4 weeks ago

Article for the sake of having an article.

[-] [email protected] 16 points 1 month ago
load more comments (4 replies)
[-] [email protected] 15 points 1 month ago

Step one: stop listening to anything from Ziff-Davis.

[-] bhamlin 10 points 1 month ago

I mean, this isn't any different for Windows or macOS. The difference is the culture around the kernel.

With Linux there are easily orders of magnitude more eyeballs on it than the others combined. And fixes are something anyone with a desire to do so can apply. You don't have to wait for a fix to be packaged and delivered.

[-] [email protected] 8 points 4 weeks ago

Security is not a binary variable; it's managed in terms of risk. Update your stuff, don't expose it to the open Internet if it doesn't need to be, and so on. If it's a server, it should probably have unattended upgrades.

[-] qaz 5 points 4 weeks ago* (last edited 4 weeks ago)

If it's a server, it should probably have unattended upgrades.

Interesting opinion, I've always heard that unattended upgrades were a terrible option for servers because it might randomly break your system or reboot when an important service is running.

[-] [email protected] 3 points 4 weeks ago

There are two schools of thought here. The "never risk anything that could potentially break something" school and the "make stuff robust enough that it will deal with broken states". Usually the former doesn't work so well once something actually breaks.

[-] [email protected] 2 points 3 weeks ago

That only applies to unstable distros. Stable distros, like Debian, maintain their own versions of packages.

Debian in particular only includes security patches and bug fixes in their packages - no new features at all.* This means the risk of breakage and incompatibility is very low, basically nil.

*except for certain packages which aren't viable to maintain that way, like Firefox and other browsers.

[-] [email protected] 2 points 2 weeks ago

Not having automated updates can quickly lead to not doing updates at all. Same goes for backups.

Whenever possible, one should automate tedious stuff.

[-] qaz 1 points 2 weeks ago

Thanks for the reminder to check my backups

[-] [email protected] 2 points 4 weeks ago* (last edited 4 weeks ago)

Both my Debian 12 servers run with unattended upgrades, and I don't think I've ever had anything break from the updated packages. I tend to use Docker, and on one of them (Proxmox) even LXC containers, but the LXC containers also have unattended upgrades running.

Do you just update your stuff manually, or do you not update at all? I'm subscribed to the Debian security mailing list, and they frequently find something that means people should upgrade - recently something in glibc.

Debian especially is focused on being very stable, so updating should never break anything that wasn't broken before. Sometimes Docker containers refuse to restart, but that usually means I did something stupid.

[-] qaz 1 points 4 weeks ago* (last edited 4 weeks ago)

I used to check the cockpit web interface every once in a while, but I've tried to enable unattended updates today. It doesn't actually seem to work, but I planned on switching to Nix anyway.

[-] [email protected] 1 points 4 weeks ago

I don't use Cockpit, I just followed the Debian wiki guide to enabling unattended upgrades. As far as I remember, you have to apt install something and change a few lines in the config file.

It's also good to have SMTP set up, so your server will notify you when something happens; you can configure what exactly.
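For anyone finding this later, the short version from the Debian wiki is: apt install unattended-upgrades, then dpkg-reconfigure -plow unattended-upgrades, which enables roughly this config (the mail address is a placeholder you'd set yourself):

```
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

// /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
Unattended-Upgrade::Mail "admin@example.com";    // placeholder address
Unattended-Upgrade::Automatic-Reboot "false";
```

The 50unattended-upgrades file has a lot more knobs (which origins to pull from, reboot windows, package blacklists); check the Debian wiki for the current details.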

[-] [email protected] 8 points 4 weeks ago

pacman -Syu

Rhetorical question?

[-] [email protected] 5 points 1 month ago

Install all the patches immediately.

[-] [email protected] 3 points 1 month ago
[-] [email protected] 2 points 4 weeks ago* (last edited 4 weeks ago)

Honestly, it is a valid option for critical systems. It is a bad idea to connect water treatment plants to the internet, for example.

[-] [email protected] 1 points 1 month ago

Some air gaps are better than others.

[-] [email protected] 3 points 4 weeks ago
[-] [email protected] 1 points 4 weeks ago
[-] [email protected] 1 points 4 weeks ago

the other one

( ))💨

[-] someacnt_ 1 points 3 weeks ago

The penguin pic is so cute

this post was submitted on 23 Apr 2024
107 points (87.4% liked)

