This implies the entire build still takes a few minutes on that beefier machine, which is in the "check back later" category of tasks. Rebuilds need to be seconds, and going from 10s to 5s (or even 30s) isn't worth a separate machine.
If my builds took that long, I'd seriously reconsider how the project is structured to dramatically cut that down. A fresh build taking forever is fine (you can kick that off at the end of the day or whatever), but edit/reload should be very fast.
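(In Rust terms, the usual move is splitting the project into workspace crates, so an incremental rebuild only recompiles what actually changed. A minimal sketch, with hypothetical crate names:)

```toml
# Cargo.toml -- hypothetical workspace layout; an edit in `reports`
# no longer forces a rebuild of `simulation`
[workspace]
resolver = "2"
members = ["core", "simulation", "reports", "cli"]
```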
That belongs at the system architecture level IMO. A dev machine shouldn't be that interesting to an attacker since a dev only needs:
My access to all of the source material is behind a login, so IT can easily disable my access and cut an attacker out entirely (and we're required to refresh credentials fairly frequently). The biggest loss is IP theft, which only requires read permissions to my home directory, and most competitors won't touch that type of IP anyway (and my internal docs are dev-level, not strategic). Most of my cached info is stale, since I tend to only work in a particular area at a given time (i.e. if I'm working on reports, I don't need the latest simulation code). I also don't have any access to production, and I've even told our devOPs team about things I was able to access but shouldn't have been. I don't need or even want prod access.
The main defense here is frequent updates, and I'm 100% fine with having an automated system package monitor; if IT really wants it, I can configure `sudo` to send an email every time I use it.
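(A minimal sketch of that, assuming the machine has a working local mailer; the address is a placeholder:)

```
# /etc/sudoers -- edit via visudo; assumes a configured MTA on the host.
# Send mail on every sudo invocation, not just failed attempts.
Defaults mailto="it-alerts@example.com"
Defaults mail_always
```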
I tend to run updates weekly, though sometimes I'll wait two weeks if I'm really involved in a project.

And this, right here, is my problem with a lot of C-suite-level IT policy: it's often more about CYA and less about actual security. If there were another 9/11, the airlines would point to the TSA and say, "not my problem," even when the attack very likely came through their own supply chain. "I was just following orders" isn't a great defense when the actor should have known better. Or, on the IT side specifically: if my machine was compromised because IT was late rolling out an update, my machine was still compromised, so it doesn't really matter whose shoulders the blame lands on.
The focus should be less on preventing an attack (still important) and more on limiting the impact of an attack. My machine getting compromised means leaked source code, some dev docs, and having to roll back/recreate test environments. Prod keeps on going, and any commits an attacker makes in my name can be specifically audited. It would take maybe a day to assess the damage, and that's it. And if I'm regularly sending system monitoring packets, an automated system should be able to detect unusual activity pretty quickly (this has happened with our monitoring SW, and a quick "yeah, that was me" message to IT was enough).
My machine is quite unlikely to be compromised in the first place, though. I run frequent updates, I have a high-quality password, and I use a password manager (with an even better password) that locks itself after a couple of hours to access everything else. A casual drive-by attacker won't get much beyond whatever is cached on my system, and compromising root wouldn't get much more.
For your average office worker who only needs office software and a browser, sure, lock that sucker down. But when you're talking about a development team that may need to do system-level tweaks to debug/optimize, do regular training or something so they can be trusted to protect their system.
Sure, but when I need them, I need them urgently. Maybe there's a super high-priority bug on production that I need to track down, and waiting 2 days isn't acceptable, because we need same-day turnaround. Yeah, I could escalate and get someone over pretty quickly, but things happen when critical people are on leave, and IT can review things afterward. That's pretty rare, and if I have time, I definitely run changes like that through our IT pros (i.e. "hey, I want to install X to do Y, any concerns?").
Then maybe we'd be a better fit than I thought. If, during the interview process, I discovered that IT didn't use MS or Google for their cloud stuff, I might actually be okay with a locked-down machine, because that IT team is absolutely based. I'd probably ask a lot of follow-up questions, and maybe you'd address my concerns.
But when shopping around for a new job, I steer clear of any red flags, and "even devs use standard IT images" or "we're an MS shop" completely kills my interest. My current company is an MS shop, but they said our team has its own infra, and we use Macs specifically to avoid the standard, locked-down IT images.
On my personal machines, I use Firefox, openSUSE (due to openQA, YaST, etc.; Tumbleweed on desktop, Leap on NAS and VPS), and full-disk encryption. I'm considering moving to MicroOS as well, for even better security and ease of maintenance. I expose internal services through a WireGuard tunnel, and each of those services runs in a Docker container (planning to switch to Podman). I follow cybersecurity news, and I'm usually fully patched at home before we're patched at work. Cybersecurity is absolutely something I'm passionate about, and I raise concerns a few times a year, which our OPs team almost always acts on.
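(The WireGuard part is genuinely low-effort; a minimal client-side sketch, where the keys, IPs, and endpoint are placeholders:)

```ini
# /etc/wireguard/wg0.conf -- client config for wg-quick; all values are placeholders
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = vps.example.net:51820
AllowedIPs = 10.8.0.0/24   # only route the internal-services subnet through the tunnel
PersistentKeepalive = 25
```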
All of that said, I absolutely don't expect the keys to the kingdom, and I actually encourage our OPs team to restrict my access to resources I don't technically need. However, I do expect admin access on my work machine, because I do sometimes need to get stuff done quickly.
Remediation after an attack happens is part of the security posture; how the company recovers and continues to operate is a vital part of security-incident planning. The CYA aspect comes from the legal side of that planning. You can follow every best practice ever, but if something happens anyway, what does the company do if it doesn't have an insurance fallback or other protections? Even a minor data breach can cause all sorts of legal trouble, even ignoring a litigious user base. Having the policies satisfied keeps those protections in place and keeps the company operating, even when an honest mistake causes a significant problem. Unfortunately, it's a necessary evil.
On a company computer? That's presumably on a company network? Able to talk and communicate with all the company infrastructure? You seem to be specifically narrowing the scope to just your machine, when a compromised machine talks to way more than just the shit on the local machine. With a root jump-host on a network, I can get a lot more than just what's cached on your system.
We don't use Google at all if it's at all possible to get away with it... We do have disposable Docker images that can be spun up in the VDI interface to do things like test the web side of the program in a Chrome browser (and Brave, Chromium, Edge, Vivaldi, etc.). We do use MS for email (and by extension other office-suite stuff, since it's in the license; Teams... as much as I fucking hate what they do to the GUI/app every other fucking month... is useful for communicating with other companies, as we often have to get on calls with their API teams), but that's it, and Nextcloud/LibreOffice is the actual company storage for "cloud"-like functions... and there's backup local mail-host infrastructure lying in wait for the day MS inevitably fucks up their product more than I'm willing to deal with, as far as O365 mail goes.
I'm pushing for a rewrite out of an archaic 80s language (probably why compile times suck for us in general) into Rust, running it on Alpine to get rid of the need for Windows Server altogether in our infrastructure... and for the low-maintenance value of a tiny Linux distro. I'm not particularly on the SUSE boat... just because it's never come up. I float more on the Arch side of Linux personally, and Debian for production stuff typically. Most of our standalone products/infrastructure are already on Debian/Alpine containers. Every year I've been here, I've pushed hard to get rid of more and more, and it's been huge for the company's stability and security overall.
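(If the rewrite lands, the deploy story gets pleasantly boring; a minimal multi-stage Dockerfile sketch, where the binary name `svc` is a placeholder:)

```dockerfile
# Build stage: rust:alpine links against musl, so the release binary is static
FROM rust:alpine AS builder
RUN apk add --no-cache musl-dev
WORKDIR /src
COPY . .
RUN cargo build --release

# Runtime stage: nothing but the binary on a bare Alpine base
FROM alpine:3.20
COPY --from=builder /src/target/release/svc /usr/local/bin/svc
ENTRYPOINT ["/usr/local/bin/svc"]
```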
No, it's "even devs meet SCA". Not necessarily a standard image. I pointed it out, but only in passing. I can spawn an SCA for many different linux os's that enforce/prove a minimum security posture for the company overall. I honestly wouldn't care what you did with the system outside of not having root and meeting the SCA personally. Most of our policy is effectively that but in nicer terms for auditing people. The root restriction is simply so that you can't disable the tools that prove the audit, and by extension that I know as the guy ultimately in charge of the security posture, that we've done everything reasonable to keep security above industry standard.
The SCA checks for configuration hardening in most cases. That same Debian example I posted above runs checks along these lines:
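(A rough, Wazuh-style illustration of the format; the IDs, titles, and rules below are hypothetical stand-ins, not the actual policy:)

```yaml
checks:
  - id: 18101
    title: "Ensure SSH root login is disabled"
    condition: all
    rules:
      - 'f:/etc/ssh/sshd_config -> r:^PermitRootLogin no'
  - id: 18102
    title: "Ensure the nftables service is enabled"
    condition: all
    rules:
      - 'c:systemctl is-enabled nftables -> r:enabled'
```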
No, we have hard limits on what people can access. I can't access prod infra, full stop. I can't even do a prod deployment w/o OPs spinning up the deploy environment (our Sr. Support Eng. can do it as well if OPs aren't available).
We have three (main) VPNs:
I can't be on two at the same time, and each requires MFA. The IT-supported machines auto-connect to the corporate VPN, whereas as a dev I need the corporate VPN maybe once a year, if that, so I'm almost never connected. Joe over in accounting can't see our test infra, and I can't see theirs. If I were in charge of IT, I'd have more segmentation like this across the org, so a compromise in accounting can't compromise R&D, for example.
None of this has anything to do with root on my machine though. Worst case scenario, I guess I infect everyone that happens to be on the VPN at the time and has a similar, unpatched vulnerability, which means a few days of everyone reinstalling stuff. That's annoying, but we're talking a week or so of productivity loss, and that's about it. Having IT handle updates may reduce the chances of a successful attack, but it won't do much to contain a successful attack.
If one machine is compromised, you have to assume all devices that machine can talk to are also compromised, so the best course of action is to reduce interaction between devices. Instead of IT spending their time validating and rolling out updates, I'd rather they spend it reducing the potential impact of a single point of failure. Our VPN currently has no client isolation (I can access ports my coworkers open if I know their internal IP), and I'd rather they fix that than care about whether I have root access. There's almost no reason I'd ever need to connect directly to a peer's machine, so that should be a special, time-limited request; but I may need to grab a switch and bridge my machine's network if I need to test some IoT crap on a separate net (and I need root for that).
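(Client isolation is nearly a one-rule fix on the VPN host; a sketch in nftables, assuming the tunnel interface is named `wg0`:)

```
# nftables fragment -- drop peer-to-peer forwarding between VPN clients;
# the interface name is an assumption
table inet vpn {
  chain forward {
    type filter hook forward priority 0; policy accept;
    iifname "wg0" oifname "wg0" drop
  }
}
```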
Nice, we use Google Drive (dev test data) and whatever MS calls their drive (Teams recordings, most shared docs, etc). The first is managed by our internal IT group and is mostly used w/ external teams (we have two groups), and the second is managed by our corporate IT group. I hate both, but it works I guess. We use Slack for internal team communication, and Teams for corporate stuff.
That's not going to help the compile times. :)
I don't use Rust at work (wish I did), but I do use it for personal projects (I'm building a P2P Lemmy alternative), and I've been able to keep build times reasonable. We'll see what happens when SLOC increases, but I'm keeping an eye on projects like Cranelift.
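(Cranelift is already usable as a debug-build backend on nightly; a sketch of `.cargo/config.toml`, assuming `rustup component add rustc-codegen-cranelift-preview` has been run:)

```toml
# .cargo/config.toml -- Cranelift for faster debug codegen (nightly only)
[unstable]
codegen-backend = true

[profile.dev]
codegen-backend = "cranelift"
```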
That's fair. I used Arch for a few years, but got tired of manually intervening when updates went sideways, especially Nvidia driver updates. openSUSE Tumbleweed's openQA seemed to cut that down a bit, which is why I switched, and `snapper` made rollbacks painless when the odd Nvidia update borked stuff. I'm now on AMD GPUs, so update breakage has been pretty much non-existent. With some orchestration, Arch can be a solid server distro; I just personally want my desktop and servers to run the same family, and openSUSE was the only option that had a rolling desktop and stable servers.

For servers, I used to use Debian, and all our infra uses either Debian or Ubuntu. If I were in charge, I'd probably migrate Ubuntu to MicroOS, since we only need a container host anyway. I'm comfortable w/ apt, pacman, and zypper, and I've done my share of dpkg shenanigans as well (we did unattended Debian upgrades for an IoT project).
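(For anyone who hasn't tried it, the `snapper` recovery flow really is that short on Btrfs; the snapshot number below is a placeholder:)

```sh
sudo snapper list           # find the last known-good snapshot number
sudo snapper rollback 42    # make that snapshot the new default subvolume
sudo reboot                 # boot back into the working system
```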
SCA is for payment services, no? I'm in the US, and this seems to be an EU thing I'm not very familiar with. Regardless, we don't touch e-commerce at all; we're B2B, and all payments go through invoices.
If you're worried someone will disable your tools, why would you hire them in the first place? Also, that should be painfully obvious because you wouldn't get reporting updates, no?
We do auditing, and our devOPs team gets a weekly report from IT about any devices that aren't updated yet or aren't reporting. They also do a manual check every quarter or so to verify serials, version numbers, and whatnot. I've gotten one notice from our local devOPs person, and very few of my team show up either. The ones that do tend to be on our UX and Product teams, and honestly, they have more access to interesting info than we devs do (i.e. they have planned features for the next six months; we just have the next month or so). And they need far fewer exceptions to the rules, since UX mostly just needs their design software, and Product just needs office stuff and a browser.
I obviously can't speak for all devs, but in general, devs tend to be more interested in applying updates in a timely manner and keeping things secure. In fact, I think all of my devs already used a password manager and MFA before starting, which absolutely isn't the case for other positions.
But it does. If your machine is compromised and they have root permissions to run whatever they want, it doesn't matter how segmented everything is; you said yourself you jump between networks (though rarely).
No, it's just a term for a defined check that configurations meet a standard. An SCA can be configured to check on any particular configuration change.
Not necessarily? Hard to tell if something is disabled vs just off.
I don't hire people... especially people in other departments.
But while I found this discussion fun, I have to get back to work at this point. Shit just came up with a vendor we used for our old archaic code that might accelerate a Rust rewrite... and, fittingly given the conversation, I might be in the market for some Rust devs.
Sure, but I need MFA to do so. So both my phone and my laptop would need to be compromised to jump between networks, unless we're talking about a long-lived, opportunistic trojan or something, which smells a lot like a targeted attack.
Sounds fun, and stressful. Good luck!