I personally package the files in a scratch or distroless image and use https://github.com/static-web-server/static-web-server, a tiny static server written in Rust. It is very similar to nginx or httpd, but the statically linked binary removes clutter, reduces the attack surface (because you can use smaller base images) and shrinks the image.
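As a rough sketch of what I mean (the image tag, paths and flags are from memory, so double-check them against the static-web-server docs):

```dockerfile
# Final image contains only the static server binary and the site files.
FROM joseluisq/static-web-server:2 AS sws

FROM scratch
COPY --from=sws /static-web-server /static-web-server
# Hypothetical layout: the site lives under ./public on the host.
COPY ./public /public
ENTRYPOINT ["/static-web-server", "--root", "/public", "--port", "8080"]
EXPOSE 8080
```

The resulting image is essentially the size of your site plus a few MB for the server.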
Yep, I know and it's very convenient. I discovered recently that bitwarden also has integration, but requires manually provisioning an API key. Not as convenient but quite nice as well.
I guess a bunch of things, as they are specialized apps:
- proper auth. I think with Firefox you can have a password, but a password manager will offer multiple options for 2FA, including security keys, plus fingerprint unlock on phones, etc. In general, password managers are more resistant to a malicious user gaining access to your device.
- store all kinds of stuff. Not everything happens in the browser, and it's just convenient to have an app dedicated to credentials. Many password managers let you store and autofill credit cards too, for example.
- on-the-fly generation of aliases. Password managers have external integrations. For example, Proton and Bitwarden can integrate with simplelogin.io to generate email aliases when you choose to generate a new username.
- org-like features. Password managers can also be convenient for sharing with family, for example. I manage a Bitwarden organization used by all my immediate family, which means I can share credentials easily with any of them. Besides the sharing, I can also ensure my (not tech-savvy) mom won't lock herself out (emergency break-glass access is configurable) and technically enforce policies on password strength etc.
- as banal as it is, self-managing. I like to run my own services and running my own password manager with my own backups gives me peace of mind.
- another perhaps obvious point. More compatibility? I can use my password manager on whatever device, whatever browser. For some, it might not change anything, but it's a convenient feature.
As a personal addition, I would say that I simply want the cornerstone of my online security to be a product from a company that specializes in exactly that. I have no idea how much effort goes into the password manager from Mozilla, for example.
+1 I just switched from Obsidian and I think Logseq's logic clicks better with my thought process (block notes vs page focus). Awesome application!
As an Italian I don't really know/have never tried this recipe, so I won't comment on it beyond saying that it looks good. One thing I would recommend, though, is to finish cooking the pasta in the pan with the simmering sauce, rather than putting the sauce on top of the cooked pasta. It really helps blend the flavors together and lets the pasta absorb the sauce; you should try it!
The bridge was near Genova, in the North, btw. For good or for bad, Italy is still a member of the G7, and despite the gazillion problems, relative to other countries it is a rich country.
I don't think this is needed to implement censorship. It's Italy; I know better than to think something is done out of malice when it can be the result of incompetence. I completely believe this is some idiotic implementation of what the football and TV economic powers wanted. Either way, this idea that everything bad is because of "old white men" is bs. We don't even need Meloni, we can use the dear Iron Lady as an example... class and economic position count way more than age and gender, ultimately.
This whole thing happened while a young woman is in power. This has to do with submission to economic power, not with gender and age.
I will skip some parts because I think it's not worth repeating.
I think the vast majority of Mutt users don’t get their Mutt binaries from Kevin McCarthy, and having him put a targeted backdoor in the source code would be foolish as it would be likely to be noticed by one of the mutt distributors who builds it before it gets distributed. Since reproducible builds still aren’t ubiquitous, the best place to insert a widely-distributed-but-targeted-in-code backdoor would be at the victim’s distributor’s buildserver.
This was clearly just an example. Any distributor is a single point of failure. You can coerce or compromise it, and it will serve compromised software.
Yes, but unlike the ProtonMail case there is a chance of being caught so it is a much higher risk for the attacker.
No there isn't. Nothing prevents Github from serving you a different file than what regular users get when you query the same URL (for example, keyed by IP). It's trivial to do this with any reverse proxy. The same applies to a signature file, which means you can only notice if you manage to get the file and the signature from someone else and compare the signatures/hashes for the same release. Which is basically the same as saying "I will compare my JS blob de-minified with the one in the OSS repo"; nobody does this either, I agree. Technically, this can happen every time you download something from any website, provided the server is coerced or compromised.
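To make the point concrete, here is a toy model (all names and addresses are hypothetical) of how trivially a coerced or compromised download server could key the payload on the client address; any reverse proxy can express the same logic in a few lines of config:

```python
# Toy model of a download endpoint that serves a backdoored artifact
# only to a targeted client IP. Everyone else, and anyone comparing
# hashes with friends, sees the clean file.
CLEAN = b"clean tarball bytes"
BACKDOORED = b"backdoored tarball bytes"
TARGET_IPS = {"203.0.113.7"}  # hypothetical target

def artifact_for(client_ip: str) -> bytes:
    """Return the bytes the server would send to this client."""
    return BACKDOORED if client_ip in TARGET_IPS else CLEAN

# The detached signature file can be swapped the same way, so
# "verify the signature you downloaded next to the file" does not
# help against the distributor itself.
```

The same two-line branch works for the signature file, which is why signature checking alone doesn't detect a targeted distributor.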
on a spectrum of difficulty to attack
Not really. The spectrum is much narrower than how you present it. I bet 99% of users install software in one of these ways:
- Package manager (linux/Mac).
- Download an installer or the code from the software website (Windows, AppImage, etc.).
- Install through a platform (say, Steam).
Almost all the package managers AFAIK work under the same model (a package, signed with the distributor's key, served via the web), which is susceptible to coercion and compromise. All webservers and platforms can be coerced/compromised into serving different files (installers) to different clients.
Am I missing something? Is there another way to serve software that I am missing?
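A minimal sketch of why that model reduces to one key (using an HMAC as a stand-in for the real asymmetric signature; the key name is illustrative):

```python
import hashlib
import hmac

# The one secret the whole distribution model rests on.
DISTRIBUTOR_KEY = b"distributor-signing-key"

def sign(package: bytes) -> bytes:
    """Stand-in for the distributor signing a release."""
    return hmac.new(DISTRIBUTOR_KEY, package, hashlib.sha256).digest()

def client_verifies(package: bytes, signature: bytes) -> bool:
    # Every client performs the same check against the same key, so a
    # coerced or compromised distributor can sign anything it likes.
    return hmac.compare_digest(sign(package), signature)

clean = b"legitimate package"
backdoored = b"backdoored package"

# Both verify fine, because the signer itself is the point of trust.
assert client_verifies(clean, sign(clean))
assert client_verifies(backdoored, sign(backdoored))
```

Verification only proves "the holder of the distributor's key signed this", not "this is the same file everyone else got".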
numerous single points of failure that can be exploited to attack a specific user
There is one, so far: the provider being compromised. The rest is your speculation, such as
but it’s because you have not been hearing me saying that HTTPS has been circumvented numerous ways over the years and will continue to be
Which is like saying: there are vulnerabilities. Yes, there will be vulnerabilities, but this applies to any software. And if HTTPS is broken to allow MiTM, then that is a risk for any software you download via the web, starting from the Linux ISO, so it's far from a webmail-specific problem.
No. See previous answers for the massive differences.
You list:
- These days, in many/most cases, at least two keys/people are required to compromise them. This isn’t nearly enough but it is better than one.
Nothing, absolutely nothing, tells you that it's enough to compromise one Proton employee to gain access to production and replace the code. Also, you have absolutely no idea of the security practices of the couple of people who handle those keys; they are not accountable in any way, they don't need to be compliant with any standard (for what it's worth), etc. I would say it's much more likely for any of the mirrors/repositories to get compromised than for Proton to be.
In fact, you say:
From your earlier comments I think you’re working from a mental model where an individual employee performing the attack would need to check something in to git or something like that, but, don’t you think anyone with root on, say, one of the caching frontend webservers do this? I suggest that you try to think about how you would design their system to prevent a single person from unilaterally doing it, and then figure out how you can break your design.
I do this for a living. One way is to close off production environments and assign temporary permissions that require multiple people to sign in at the same time and observe while production is accessed. Teleport supports this, for example; nothing I am conjuring out of thin air. Similarly, the CI can implement a million checks to verify the provenance of the software and require multiple sign-offs before things are actually deployed. Break-glass procedures exist (usually for a handful of individuals), but they generate alerts and are audited post-factum, so such an attack would be detected.
- Other than by IP, users aren’t identifying themselves before downloading things
True, but for me being attacked this changes very little. Attackers can just establish a C2, check whether the target is right, and do nothing on other devices. I grant you, this is a difference, but the control here is the fact that more people might spot the issue and I would get to know about it before being compromised. It's possible, but it's a very weak control.
Users can access them from many different mirrors; there isn’t a single server from which to target all users of a given distribution
True, a bigger attack surface, but each individual mirror can be compromised via the same vector (and of course the source can). Also, Proton does not have a single machine that serves everyone; they might have multiple regions, multiple clusters, separation by account, department, etc. In addition, you are talking about a highly targeted attack. Relying on the obscurity of which mirror someone uses is really not something I would consider applicable here.
The biggest difference is the automation with which JS code is "updated". This is what makes the attack potentially slower via the regular supply chain. Nothing I would consider massive for attackers sophisticated enough to exploit this vector. So the massive differences in your opinion are that:
- Attackers are not able to target individuals with the same precision.
- Attackers might need to know more about you to target the distributor of your software.
On the other hand:
- A company with a security department has a smaller chance of being compromised than a random individual.
- A company like Proton at least has to adhere to some standard and security hygiene, which individuals handling package repos/mirrors don't.
If for you these are massive differences, OK. For me they are not.
Finally:
If both users are using the bridge (assuming it is designed how I think it is), they would certainly be better off than if one or both of them is using the webmail e2ee. However, I would never use or recommend using protonmail, even with the bridge, because it is very likely that the people I’m writing to would often not be using the bridge. Also, because ProtonMail e2ee doesn’t interoperate with anything else, and by using it I’d be endorsing it and encouraging others to use it (“it” being ProtonMail, which for most users is this webmail snakeoil).
How is it relevant whether both users are using the bridge? The bridge is literally doing the same thing that, say, mutt does. This has nothing to do with the bridge; what you are saying (I think) is that you wouldn't send an email to someone if you don't trust the software they use, but this is independent from you using the bridge. You can add other people's (non-Proton users') keys to be used, so Bridge -> Mutt is exactly the same as Bridge -> Bridge or Mutt -> Mutt.
because it is very likely that the people I’m writing to would often not be using the bridge
In this case there is no tool that you can use that will "protect" you, if you don't trust the other side.
Also, because ProtonMail e2ee doesn’t interoperate with anything else, and by using it I’d be endorsing it and encouraging others to use it (“it” being ProtonMail, which for most users is this webmail snakeoil).
Which is not a security consideration.
The security model of the bridge is the same as the security model of mutt
, or other CLI tools or anything you might use for PGP. It seems you have absolutely no security argument for why this would be worse.
So, in short:
- Using protonmail you address the security risk you highlighted in the same way as it is addressed by using any other client tool that doesn't run in the browser.
- The fact that you won't use it because of your personal crusade against webmail is irrelevant in terms of security for a non-webmail too.
Hey, until those are money that go from VC leeches to actual workers, I call it a win.
Yeah, i think it is a feature, and a very beneficial one for the people this system was designed for - those who want a lot of privacy-desiring users to settle on using an encryption solution which isn’t too difficult to circumvent.
This you need to prove somehow. Has there been any attack that happened like this? Has any content been leaked this way, or provided to law enforcement? In other words, did they use this "feature" in any way? Because if this is just a design limitation, then it's not a feature; it's a risk, exactly like using someone else's code exposes you to supply-chain risk. Would you say that anybody who uses any external library is actually a snake-oil seller about the properties of their product, because if a supplier (library, dependency, etc.) gets compromised their product could be compromised? I wouldn't say so. I think that intentions matter here.
Note that, throughout this discussion, I’m not really just talking about Proton but rather them and Tuta and Hushmail and anything else that shares this architecture.
Yes, I understand.
Well, they could be honest and inform their users: “to have the convenience of using webmail you must sacrifice the benefit of end-to-end encryption (not needing to trust the server and its operators to refrain from reading your messages).”
But that's not true. End-to-end encryption simply means that the encrypt/decrypt operations happen on the client side. It doesn't mean that it's an unbreakable design. Following this logic, every piece of software that does PGP encryption should say "to have the convenience of not having to rewrite all the code ourselves we use suppliers which might allow third parties to read your messages". Proton content is still end-to-end encrypted, with the code hosted publicly. The fact that vectors exist to invalidate that is not a reason to invalidate the whole thing, exactly as the existence of supply-chain attacks is not a reason to dismiss the validity of e2ee for CLI tools and the like.
Also, I mentioned the potential to use the bridge. That is a fully client-side tool which does not run in the browser, does that satisfy your risk appetite?
Yep. But no matter how tight their processes are, there are still single points of failure that can be coerced to gain access to anyone’s email.
They are a point of failure, not necessarily a single point of failure (as in a single person).
but I don’t have the energy to explain to you why selling something as e2ee while it reduces to (among other things) specifically the security of TLS is dishonest.
But this was not your claim. Your claim was that compromising them and serving backdoored JS was not the only way, and that an attacker in an appropriate network position could achieve the same. I am saying that particular vector does not apply, because your browser will actually refuse to load Proton without a valid certificate, due to HSTS. So an attacker can tamper with the code only at either of the "ends" (either compromising them or compromising your endpoint).
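For reference, HSTS works by the browser caching a policy from a response header (domains can also be on the preload list shipped with the browser); here is a toy parser for that header, just to show what the policy contains:

```python
def parse_hsts(header: str) -> dict:
    """Parse a Strict-Transport-Security header value, e.g.
    'max-age=31536000; includeSubDomains; preload'."""
    policy = {"max_age": 0, "include_subdomains": False, "preload": False}
    for directive in header.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            policy["max_age"] = int(directive.split("=", 1)[1])
        elif directive == "includesubdomains":
            policy["include_subdomains"] = True
        elif directive == "preload":
            policy["preload"] = True
    return policy

# While max_age seconds haven't elapsed, the browser refuses plain-HTTP
# and invalid-certificate connections to the host outright (no
# click-through), which is what blocks the network-position MiTM.
```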
I just checked their site and they still say it’s “for journalists”, and “we can never access your messages”, etc etc.
Just for reference, what I meant is that the people referred to by the statement "and the incorrect perception that ProtonMail’s end-to-end encryption provides meaningful security is undoubtedly preventing some of their customers from using better tools instead." are not those who have that risk model. Journalists and other at-risk people have technical consultants and are (hopefully?) aware of the risks, and can apply additional controls (for example, using Proton to send encrypted content). They are not those who would skip other, more secure, channels than email just because they read Proton's pages.
If what you want is not privacy from adversaries who can compromise your mailserver, but rather just protection from GMail reading your mail, then you don’t need e2ee: you need a provider with a privacy policy you believe they will honor.
e2ee is just a very nice and clear-cut way to enforce the privacy policy. Law enforcement can still get data from a provider; if the data is not collected, the data cannot be handed over. Sure, it's possible that a 3-letter agency will coerce Proton to compromise a user, but a) this has not happened yet (as far as we know?) and b) again, if that's part of your risks, don't use email, or use email only to send encrypted content...
Why would you assume they are when they’re lying about their ability to read your emails?
You seem to be really fixated on this statement, but it's not true. They don't have the "ability" to read emails. They have a setup that - provided the violation of controls that neither of us knows about - could possibly grant them that ability. I really don't understand why you think it's different from any other software. If the NSA goes to https://www.gnupg.org and says "you know what, the next time you serve your software to IP x.x.x.x, you serve this package", you will never know and your encryption is toast. Would you say that the folks behind GnuPG "have the ability to read your emails"? I wouldn't, because they are not backdooring the software, although the possibility for them, contributors and state actors to do that exists.
rather you are just saying that you think it is very unlikely that they would ever abuse that capability and that you assume their procedures make it so that one rogue employee couldn’t do it alone. You do seem to understand that, contrary to what they’ve written in the screenshot above, ProtonMail as a company technically could decide to.
Yeah, you are correct. This is exactly the same as me saying that technically a lot of people in my organization could tamper with payments and violate the integrity of most UK bank transfers. In practice, there are a bazillion controls in place to ensure that this does not happen, and before touching production there are tons of safeguards, but theoretically my company could decide to break compliance, remove procedures and allow a free-for-all on banking transactions before being fined/shut down/sent to the abyss.
I do believe that they have no interest whatsoever in abusing this architectural feature, but I agree that they could be coerced to. However, as I said before, I believe the same to be true for any other software, which is why I don't agree that the risk model is significantly different from many other tools. If anything, the fact that they are under Swiss jurisdiction might help, compared to a lot of (F)OSS entities which are in the US.
But, do you think most of their customers understand that?
No, I think most people don't.
which of these statements do you think is the most likely to be accurate:
I have no idea. I would say 1 or 3 are the most likely. It seems a very unnecessary way (if I were a certain 3-letter agency) to gain access to a small set of data, when I could compromise the whole device and maintain persistence much more conveniently (for example by coercing the ISP to give me access to the router and going from there, or by asking Apple and Microsoft directly, etc.).
If it were revealed that #4 were in fact the case, would you agree that it is snakeoil? If you agree with me that #3 is the most likely scenario, approximately how many times per hour/week/year would they need to be complying with these requests before you would agree that they are, in fact, snakeoil?
I would say that they should disclose that for sure, at least with a warrant canary, since they might not even be allowed to fully disclose it. I am fairly conflicted between the fact that government surveillance sometimes has reason to be exercised, provided a judge has vetted it and proper guarantees are in place (not the US way, to be clear), and the fact that it is routinely abused. I also believe that perfect security does not exist, and it's enough for me to send an encrypted attachment via Proton to mitigate this whole risk.
To answer your question, I would say that if this is a forced action that happened a handful of times, for extremely high profile cases and severe reasons, then I might still consider their claim legitimate. If it's a routine procedure to satisfy pretty much any request, then I would agree that this becomes more of a feature than an attack.
That said, I also have a couple of final questions for you too:
- Proton Bridge runs on the client and does not use the browser. The code is open source. Since they provide this too, would you consider it on par with using your favorite CLI/plugin for PGP? Would this solve the problem you raise?
- Do you think it's possible that any of the 3-letter agencies could coerce a software author (or some collaborator) into producing a malicious release that is served only to you (for example, by IP, fingerprint or other identifier) or that activates only for you (device ID etc.)? For example, go to Kevin McCarthy and force him to produce a version of Mutt (http://www.mutt.org/download.html) which is backdoored to leak your keys.
- Do you think that, alternatively, Github/Bitbucket could be coerced by said agencies into backdooring the release (and signature) you get for a given piece of code, say https://bitbucket.org/mutt/mutt/downloads/mutt-2.2.12.tar.gz (maybe after graciously "asking" Kevin for his key to sign the software)?
- If you think the above is possible, do you think there is any software distributor that could not be coerced? And how is this vector actually different from Proton being forced to break their own encryption?
- If you agree that the above is possible, would you say that any claim about Mutt using PGP to e2e encrypt/decrypt your emails are snakeoil?
Containers are a perfectly suitable use case for serving static sites. You get isolation and versioning at the absolutely negligible cost of duplicating a binary (the webserver, which in the case of the one I linked in my comment is about 5MB). Also, you get autostart of the server if you use compose, which is equivalent to what you would do with a systemd unit, I suppose.
You can then use a reverse-proxy to simply route to the different containers.
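As a sketch, with hypothetical service names and a generic nginx front (check the static-web-server docs for the exact image tag and environment variables):

```yaml
# docker-compose.yml - one tiny container per static site,
# plus a reverse proxy routing requests to them.
services:
  site-a:
    image: joseluisq/static-web-server:2
    volumes:
      - ./site-a/public:/public:ro
    environment:
      - SERVER_ROOT=/public
    restart: unless-stopped   # the "autostart" mentioned above

  proxy:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro   # routes by hostname to site-a
    depends_on:
      - site-a
    restart: unless-stopped
```

Adding another site is just another few-line service block plus one upstream in the proxy config.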