TCB13

joined 2 years ago
[–] TCB13 2 points 6 hours ago (1 children)

I want to learn about PGP and how to encrypt email. Someone sells that service, great. And it is not like I cannot send normal emails to anyone else.

I don't disagree with you; I believe it as well. PGP as it stands is cumbersome.

The thing is that they could've still implemented an easy-to-use, "just log in and send email" type of web client and abstracted the PGP complexities away from the user while still delivering everything over IMAP/SMTP.

They are using the same standard, not some made-up version of SMTP (when sending to other servers; I assume any email from client A to client B, both being Proton customers, never leaves their servers, so no need for a new protocol).

You assume correctly, but when your mail client sends an email, instead of using SMTP to submit it to their server you're using a proprietary API in a proprietary format - and the same goes for receiving email.

This is well documented, and to prove it further: if you want to configure Proton in a generic mail client like Thunderbird, you're required to install a "bridge", a piece of software that essentially simulates a local IMAP and SMTP server (which Thunderbird communicates with) and then converts those requests into requests their proprietary API understands. There are various issues with this approach. The most obvious one is that it's an extra step; there's also the issue that on iOS, for example, you're forced to use their mail app because you can't run the bridge there.

The bridge is an afterthought to support generic email clients and generic protocols; it only works how and where they say it should work and may be taken away at any point.
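To make this concrete, here's a minimal sketch of what a bridge of this kind does conceptually: accept standard SMTP locally and translate it into calls to a private HTTP API. This uses Python with the aiosmtpd library; the endpoint, payload and missing auth are hypothetical stand-ins, not Proton's actual API.

```python
# Hypothetical bridge sketch: local SMTP in, proprietary HTTP API out.
# Requires: pip install aiosmtpd
import json
import urllib.request

from aiosmtpd.controller import Controller


class BridgeHandler:
    async def handle_DATA(self, server, session, envelope):
        # Re-package the raw RFC 5322 message as a JSON API call.
        payload = json.dumps({
            "from": envelope.mail_from,
            "to": envelope.rcpt_tos,
            "raw": envelope.content.decode("utf-8", errors="replace"),
        }).encode()
        req = urllib.request.Request(
            "https://api.example.test/v1/messages",  # hypothetical endpoint
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)  # a real bridge adds auth and error handling
        return "250 Message accepted for delivery"


# A mail client like Thunderbird would be pointed at localhost:1025
# as its outgoing (SMTP) server.
controller = Controller(BridgeHandler(), hostname="127.0.0.1", port=1025)
controller.start()
input("Bridge running; press Enter to stop\n")
controller.stop()
```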

while being fully open source using open standards

Delivering your data over proprietary APIs doesn't count as "open standards" - sorry.

[–] TCB13 1 points 7 hours ago* (last edited 7 hours ago)

Would it be inaccurate to say that your fear is that Proton pulls an “Embrace, Extend, Extinguish” move?

No, it isn't. But they never "embraced" in the first place, as there was never direct IMAP access to their servers; instead it's a proprietary API serving data in a proprietary format.

I also see how that would make Proton like WhatsApp, which has its own protocol and locks its users in.

The problem isn't that taking down the bridge would make Proton like WhatsApp. It's the other way around: when they decided to build their internals with proprietary protocols and solutions instead of e.g. IMAP+SMTP, they became the WhatsApp. Those things shouldn't be add-ons or an afterthought; they should be built into the core.

This clearly shows that making open solutions ranks very low on their company and engineering priority list. If it were at the top, they would've built things around IMAP instead.

I could download an archive of everything I have on Proton without a hitch.

Yes you can, but the data will come in proprietary formats that are hard to import anywhere else - at least for some of the data. They've improved this situation, but it's still less than ideal. In the beginning they would export contacts and calendars in some JSON format; I see they've moved to vCard and iCal now.
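To illustrate why the move to open formats matters: a vCard export is trivially machine-readable by anything, even a few lines of stdlib Python. A rough sketch (the sample card is made up, and real vCards need a proper parser that handles line folding and escaping):

```python
# Minimal vCard field extraction; illustration only.
sample = """BEGIN:VCARD
VERSION:4.0
FN:Ada Lovelace
EMAIL:ada@example.org
END:VCARD"""

card = {}
for line in sample.splitlines():
    if ":" in line and not line.startswith(("BEGIN", "END")):
        key, _, value = line.partition(":")
        card[key] = value

print(card)  # {'VERSION': '4.0', 'FN': 'Ada Lovelace', 'EMAIL': 'ada@example.org'}
```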

[–] TCB13 2 points 7 hours ago (1 children)

I work in another big4 company, and I have a strong feeling that your claims apply to us as well.

That's sad, but it is the world we live in.

[–] TCB13 2 points 21 hours ago (3 children)

Okay, here are a few thoughts:

  • Companies like to blame someone when things go wrong; if they choose open-source, there isn't anyone to sue;
  • Buying proprietary stuff means you're outsourcing the risks of the product;
  • Corruption pushes for proprietary: they might be buying software made by someone close to the CTO, CEO or another decision maker in the company - an old friend, family, or straight under-the-table corruption;
  • Most non-tech companies use services from consulting companies in order to get their software developed / running. Consulting companies often fall under the previous point, and besides that they have large incentives from companies like Microsoft to push their proprietary services. For example, Microsoft will easily provide all of a consulting company's employees with free Azure services, Office and other discounts if they enter into an exclusivity agreement to sell their tech stack. To make things worse, consulting companies live off cheap developers (like interns), and Microsoft and their platform make it easy for anyone to code and deploy;
  • Microsoft provides a cohesive ecosystem of products that integrate really well with each other and usually don't require much effort to get things going - open-source, however, usually requires custom development and a ton of work to smooth out the "sharp angles" between multiple solutions that aren't related and might not be easily compatible with each other;
  • Open-source requires a level of expertise that more than half of developers and IT professionals simply don't have. This reinforces the previous point even more. Senior open-source experts are more expensive than simply buying proprietary solutions;
  • If we consider the price of a senior open-source expert + software costs (usually free), the cost of open-source is considerably lower than the cost of cheap developers + proprietary solutions. However, consider that we are talking about companies: they will always prefer to hire more, less expensive and less proficient people, because that means they're easier to replace and you'll pay less in taxes;
  • Companies will prefer to hire services from other companies instead of employees, thus making proprietary vendors more compelling. This happens because, from an accounting / investors' perspective, employees are bad and subscriptions are cool (less taxes, no responsibilities, etc.);
  • The companies who build proprietary solutions work really hard to get vendors to sell their software: they provide commissions, support and the promise that if anything goes wrong, they'll be there. This increases the number of proprietary-only vendors, which reinforces everything above. If you're starting to sell software or networking services, there's little incentive to go pure "open-source": fewer companies means less visibility, fewer (and more expensive) professionals, thinner margins, a less positive market image, fewer customers and smaller profits.

Unfortunately things are really rigged against open-source solutions and anyone who tries to push for them. The "experts" who work in consulting companies are part of this, as they usually don't even know how to do things without the proprietary solutions. Let me give you an example: once I had to work with E&Y, one of those big consulting companies, and I realized some awkward things while having conversations with both low-level employees and partners / middle management - most of the time, they weren't aware that alternatives exist. A manager of a digital transformation and cloud solutions team who started his career at E&Y wasn't aware that there are open-source alternatives to Google Workspace and Microsoft 365 for e-mail. I probed a TON around that, and the guy, a software engineer with a university degree, didn't even know what Postfix was or the history of email.

[–] TCB13 2 points 21 hours ago

Yeah it's all about outsourcing the risk to someone.

[–] TCB13 2 points 21 hours ago (2 children)

Sure, you're using a bridge they develop, which they can take away or break at any point. It's not ideal. Why support a company that is actively trying to turn open protocols into more closed stuff? Makes no sense. That type of nonsense is what got us into the situation we have now with WhatsApp and other messengers.

[–] TCB13 1 points 21 hours ago
[–] TCB13 -3 points 1 day ago* (last edited 1 day ago) (3 children)

Any e-mail service that doesn’t provide standard IMAP/SMTP directly to their servers and uses custom protocols is yet another attempt at vendor lock-in and nobody should use it.

What Proton is doing is pushing for vendor lock-in at every possible point so you're stuck with what they deem acceptable, because it's easier for them to build a service this way and makes more sense from a business / customer retention perspective. Proton is doing to e-mail about the same thing that WhatsApp and Messenger did to messaging - instead of just using an open protocol like XMPP, they opted for their own closed thing in order to lock people into their apps. People in this community seem to be okay with this just because they sell the “privacy” Kool-Aid.

People complain when others use Google or Microsoft for e-mail around here, but at least with those providers you can access your e-mail through standard protocols. How ironic it is to see privacy / freedom die-hard fans suddenly going for a company that is far less open than the big providers… just because of marketing. :)

Proton is just a company that wants profits and found out there was a niche of people who would buy into everything that they label as “encryption” and “privacy” no matter what the cost. They’ve learnt how to weaponize “privacy” to push more and more vendor lock-in. Not even Apple does this bullshit.

Now, I can see someone commenting "oh but they have to do it because of security" - no, they don't. That's bullshit.

Any generic IMAP/SMTP provider + Thunderbird + PGP will provide the same level of security that Proton does - that is, assuming they didn't mess up their client-side encryption/decryption or key storage in some way. PGP makes sure all your e-mail content is encrypted, and that's it; it doesn't matter if it's done by Thunderbird and the e-mails are stored in Gmail, or if it's done by the Proton bridge and the e-mails are on their servers - it's the same PGP tech, the only difference is the client. So, no, there isn't a reason to do it the way they do it besides vendor lock-in.
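As a rough illustration of that claim, here's a sketch of client-side PGP over plain SMTP using the python-gnupg library. The addresses, server and credentials are placeholders, and it assumes the recipient's public key is already in the local GnuPG keyring; the provider only ever sees ciphertext.

```python
# Client-side PGP + generic SMTP submission; nothing provider-specific.
# Requires: pip install python-gnupg (plus GnuPG installed locally)
import smtplib
from email.message import EmailMessage

import gnupg

gpg = gnupg.GPG()  # uses the local GnuPG keyring
encrypted = gpg.encrypt("meet at noon", recipients=["alice@example.org"])
assert encrypted.ok, encrypted.status

msg = EmailMessage()
msg["From"] = "bob@example.org"
msg["To"] = "alice@example.org"
msg["Subject"] = "hello"  # note: PGP does not encrypt headers
msg.set_content(str(encrypted))  # ASCII-armored ciphertext only

# Any standards-compliant provider works here.
with smtplib.SMTP("smtp.example.org", 587) as smtp:
    smtp.starttls()
    smtp.login("bob@example.org", "app-password")
    smtp.send_message(msg)
```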

[–] TCB13 0 points 1 day ago (2 children)

And since when did I offend you? Unless... you've been in "grumble quietly until a final straw is added to the stack" mode.

[–] TCB13 0 points 1 day ago (4 children)

What a piss of an excuse that is ahah

[–] TCB13 8 points 1 day ago (1 children)

And why do you need to go back?

 

cross-posted from: https://lemmy.world/post/23071801

Considering a lot of people here are self-hosting both private stuff, like a NAS, and public stuff, like websites and whatnot, how do you approach segmentation in the context of virtual machines versus dedicated machines?

This is generally how I see the community approach this:

Scenario 1: Air-gapped, Fully Isolated Machine for Public Stuff

Two servers: one for the internal stuff (NAS) and another for the public stuff (websites, email, etc.), totally isolated from your LAN. Preferably with a public IP that is not the same as your LAN's, and where traffic to that machine doesn't go through your main router - e.g. a switch between the ISP ONT and your router that also has a cable connected to the isolated machine. This way the machine is completely isolated from your network and not dependent on it.

Scenario 2: Single Server with Exposed VM

A single server hosting two VMs, one to host a NAS along with a few internal services running in containers, and another to host publicly exposed websites. Each website could have its own container inside the VM for added isolation, with a reverse proxy container managing traffic.

For networking, I typically see two main options:

  • Option A: Completely isolate the "public-facing" VM from the internal network by using a dedicated NIC in passthrough mode for the VM;
  • Option B: Use a switch to deliver two VLANs to the host—one for the internal network and one for public internet access. In this scenario, the host would have two VLAN-tagged interfaces (e.g., eth0.X) and bridge one of them with the "public" VM’s network interface. Here’s a diagram for reference: https://ibb.co/PTkQVBF

In the second option, a firewall would run inside the "public" VM to drop all inbound traffic except HTTP. The host would simply act as a bridge and would not participate in the network in any way.
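If it helps, here's a rough sketch of Option B's host-side plumbing using the pyroute2 library (run as root): create the VLAN-tagged sub-interface and attach it to a bridge that the "public" VM's tap device also joins. Interface names and the VLAN ID are just examples, not prescriptions.

```python
# Option B host plumbing sketch: VLAN sub-interface + IP-less bridge.
# Requires: pip install pyroute2 (and root privileges)
from pyroute2 import IPRoute

VLAN_ID = 20  # the "public internet" VLAN delivered by the switch

with IPRoute() as ipr:
    eth0 = ipr.link_lookup(ifname="eth0")[0]

    # eth0.20: tagged sub-interface carrying only the public VLAN
    ipr.link("add", ifname=f"eth0.{VLAN_ID}", kind="vlan",
             link=eth0, vlan_id=VLAN_ID)
    vlan = ipr.link_lookup(ifname=f"eth0.{VLAN_ID}")[0]

    # br-public: bridge with no IP address on the host, so the host
    # forwards frames but doesn't participate in that network itself
    ipr.link("add", ifname="br-public", kind="bridge")
    br = ipr.link_lookup(ifname="br-public")[0]
    ipr.link("set", index=vlan, master=br)

    for idx in (vlan, br):
        ipr.link("set", index=idx, state="up")
    # The hypervisor then enslaves the VM's tap interface to br-public;
    # the inbound firewall (allow HTTP only) lives inside the VM.
```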

Scenario 3: Exposed VM on a Windows/Linux Desktop Host

Windows/Linux desktop machine that runs KVM/VirtualBox/VMware to host a VM that is directly exposed to the internet with its own public IP assigned by the ISP. In this setup, a dedicated NIC would be passed through to the VM for isolation.

The host OS would be used as a personal desktop and contain sensitive information.

Scenario 4: Dual-Boot Between Desktop and Server

A dual-boot setup where the user switches between an OS for daily usage and another for hosting stuff when needed (with a public IP assigned by the ISP). The machine would have a single Ethernet interface, and the user would manually switch the network cable between: a) the router (NAT/internal network) when running the "personal" OS, and b) a direct connection to the switch (and ISP) when running the "public/hosting" OS.

For increased security, each OS would be installed on a separate NVMe drive, and the "personal" one would use TPM-backed full disk encryption to protect sensitive data in case the "public/hosting" system were compromised.

The theory here is that, if properly done, the TPM doesn't release the keys to decrypt the "personal" OS disk when the user is booted into the "public/hosting" OS.

People also seem to combine both scenarios with Cloudflare tunnels or reverse proxies on a cheap VPS.


What's your approach / paranoia level? :D

Do you think using separate physical machines is really the only sensible way to go? How likely do you think VM escape attacks and VLAN hopping or other networking-based attacks are?

Let's discuss how secure these setups are, what pitfalls one should watch out for on each one, and what considerations need to be addressed.

19
submitted 1 month ago* (last edited 1 month ago) by TCB13 to c/selfhosted
 


 

cross-posted from: https://lemmy.world/post/21563379

Hello,

I'm looking for a high resolution image of the PAL cover from the Dreamcast (I believe).

There was this website covergalaxy that used to have it in 2382x2382, but all the content seems to be gone. Here's the cache: https://ibb.co/nRMhjgw . The Internet Archive doesn't have it.

Much appreciated!

 


51
So you want privacy? (en.wikipedia.org)
submitted 2 months ago by TCB13 to c/privacy
 

The most severe restrictions to the general public are imposed within a 20-mile (32 km) radius of the Green Bank Observatory.[5] The Observatory polices the area actively for devices emitting excessive electromagnetic radiation, such as microwave ovens, Wi-Fi access points and faulty electrical equipment, and requests that citizens discontinue their usage. It does not have enforcement power[6] (although the FCC can impose a fine of $50 on violators[7]), but will work with residents to find solutions.

5
Enter MacBB :) (lemmy.world)
submitted 5 months ago* (last edited 5 months ago) by TCB13 to c/macapps
 

MacBB is a community of Apple users that has been around for a while. You can find and provide help and apps, and engage in random talk, mostly about the Apple ecosystem.

Registration is open and free for everyone. No ads, no BS.

--->> https://macbb.org/

Enjoy!

3
submitted 5 months ago* (last edited 5 months ago) by TCB13 to c/esp32
4
submitted 5 months ago* (last edited 5 months ago) by TCB13 to c/[email protected]
4
SQLite Database Integration (make.wordpress.org)
submitted 6 months ago by TCB13 to c/wordpress
 

As a middle ground, we could implement a solution for the bottom tier: small to medium sites and blogs. These sites don’t necessarily need a full-fledged MySQL database.

SQLite seems to be the perfect fit:

  • It is the most widely used database worldwide
  • It is cross-platform and can run on any device
  • It is included by default on all PHP installations (unless explicitly disabled)
  • WordPress’s minimum requirements would be a simple PHP server, without the need for a separate database server.
  • SQLite support enables lower hosting costs, decreases energy consumption, and lowers performance costs on lower-end servers.

What would the benefits of SQLite be?

Officially supporting SQLite in WordPress could have many benefits. Some notable ones would include:

  • Increased performance on lower-end servers and environments.
  • Potential for WordPress growth in markets where we did not have access due to the system’s requirements.
  • Potential for growth in the hosting market using installation “scenarios”.
  • Reduced energy consumption – increased sustainability for the WordPress project.
  • Further WordPress’s mission to “democratize publishing” for everyone.
  • Easier to contribute to WordPress – download the files and run the built-in PHP server without any other setup required.
  • Easier to use automated tests suite.
  • Sites can be “portable” and self-contained.
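As a quick outside illustration of the "self-contained" point (shown here with Python's bundled sqlite3 module, analogous to PHP's built-in SQLite driver): there is no server process, and the whole site's data lives in one file.

```python
# SQLite needs zero setup: the database is just a file.
import sqlite3

conn = sqlite3.connect("site.db")  # created on first use
conn.execute("""CREATE TABLE IF NOT EXISTS posts (
    id INTEGER PRIMARY KEY,
    title TEXT NOT NULL,
    content TEXT
)""")
conn.execute("INSERT INTO posts (title, content) VALUES (?, ?)",
             ("Hello world", "First post"))
conn.commit()

for row in conn.execute("SELECT id, title FROM posts"):
    print(row)  # (1, 'Hello world')
conn.close()
# Moving or backing up the "site" is just copying site.db.
```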

Source and other links:

-100
submitted 6 months ago* (last edited 6 months ago) by TCB13 to c/[email protected]
 

New GNOME dialog on the right:

Apple's dialog:

They say GNOME isn't a copy of macOS, but with time it has been getting really close. I don't think this is a bad thing; however, they should just admit it and then put some real effort into cloning macOS instead of the crap they're making right now.

Here's the thing: in Apple's design you'll find that they carefully included an extra margin between the "Don't Save" and "Cancel" buttons. This avoids accidental clicks on the wrong button, so that people don't lose their work when they just want to click "Cancel".

So much for the GNOME vision and their expert usability team :P

 

Hi,

Is there anyone using the Amcrest IP4M-1041B with Home Assistant? I have a few questions about software and integration.

  1. From what I hear, this camera can be set up 100% offline, connected via cable to any computer and using the camera's built-in WebUI - is this true?

  2. It offers pan, tilt and zoom. Does it work really well with HA? Can it be operated without any Amcrest software / internet connection?

  3. The features above allow you to set preset locations; can that be done in HA / the WebUI / without the Amcrest app as well?

  4. Does it really operate all features offline, and is it reliable? E.g. does motion detection work as expected / not miss events?

  5. What's your overall experience with the camera? How does it compare to, let's say, a TP-Link Tapo?

Thank you.

 

cross-posted from: https://lemmy.world/post/14398634

Unfortunately I was proven right about Riley Testut. He's yet another greedy person, barely better than Apple. After bitching at Apple to remove GBA4iOS from the App Store, he's now leveraging Delta to force people into his AltStore.

Delta has finally made its way to the App Store. Additionally, the Delta developer has also published their alternative marketplace, AltStore, in the EU today.

If you're in the EU you'll only be able to get Delta on the AltStore and that requires:

This is complete bullshit; he could've just launched Delta on the App Store in Europe as well, but he decided not to.

Thanks Riley Testut for being a dick to the people that actually forced Apple into allowing alternative app stores in the first place.


Github issue related to this dick move: https://github.com/rileytestut/Delta/issues/292
