IAm_A_Complete_Idiot

joined 1 year ago
[–] [email protected] 3 points 7 months ago

The proper way to handle issues like these is process-level permissions (i.e. capability systems), instead of user-level ones. Linux cgroups, namespaces, etc. are already moving that way, and in effect that's the way Windows is trying to head too. (Windows has its own form of containerization called AppContainers, which UWP apps use. Windows also has its own capability system.)

[–] [email protected] 1 points 7 months ago (2 children)

As a third party, my understanding is that both the implementation and the protocol are really hard, if not next to impossible, to iterate on. Modern hardware doesn't work the way it did when X was designed, and X assumes a lot of things that made sense in the 90s but don't anymore. Despite that, we cram a square peg into the round hole and it mostly works - and as the peg becomes a worse shape we just cram it harder. At this point no one wants to keep working on X.

And I know your point is that it works and we don't need to, but we do need to. New hardware needs to support X - the Asahi folks, for one, found bugs in the X implementation that only show up on their hardware, and no one wants to fix them. Wayland and X are vastly different, because X doesn't make sense in the modern day. It breaks things, and a lot of its old assumptions are no longer true. That sucks, especially for app devs who rely on those assumptions. But keeping X around isn't the solution - iterating on Wayland is. Adding protocols to different parts of the stack with proper permission models, moving different pieces of X to different parts of the stack, etc. is a long-term viable strategy. Even if it is painful.

[–] [email protected] 1 points 7 months ago

manpages aren't guides though - they don't help much in learning new tools, especially complicated ones. They're comprehensive references, some of which can literally span hundreds of pages. Useful when you know what you're doing and what you're looking for, not great for learning new tools.

[–] [email protected] 2 points 7 months ago

In which case the -a isn't needed.

[–] [email protected] 4 points 7 months ago* (last edited 7 months ago) (2 children)

Better not have created any new files tho - git commit -a doesn't catch those without a git add first.
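A quick illustration (hypothetical repo and identity, assuming a POSIX shell with git on PATH):

```shell
# sketch: `git commit -a` only stages modified/deleted *tracked* files
git init demo && cd demo
git config user.email "[email protected]" && git config user.name "demo"
echo one > tracked.txt
git add tracked.txt && git commit -m "initial"
echo two >> tracked.txt      # modified tracked file: -a picks this up
echo new > untracked.txt     # brand-new file: -a does NOT pick this up
git commit -am "update"
git status --short           # untracked.txt still shows up as '??'
```

A git add untracked.txt (or git add -A) before the commit fixes that.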

[–] [email protected] 2 points 7 months ago* (last edited 7 months ago)

This is good for precisely the single-user case - potentially malicious services on your system can't view things they otherwise would be able to, or access resources they don't need, even if they run under the same user.
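systemd can already apply some of this per service today; a minimal sketch (the unit and binary path are made up, the directives are real systemd hardening options):

```ini
# /etc/systemd/system/some-daemon.service (hypothetical service)
[Service]
ExecStart=/usr/bin/some-daemon
ProtectHome=yes        # can't read home directories, even running as your user
PrivateTmp=yes         # gets its own /tmp, isolated from other processes
NoNewPrivileges=yes    # can't escalate privileges via setuid binaries
```

Each directive confines the process itself via namespaces and kernel features, not via which user account it runs as.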

[–] [email protected] 2 points 7 months ago

As a Linux user (and ex arch user btw), I'm deeply offended.

[–] [email protected] 1 points 7 months ago

The data model there is fundamentally different. It would break how git works, because operations that behaved one way before would no longer behave that way. You'd functionally have rewritten git, mapping all the old functionality onto new functionality with subtle differences - but at that point, is it even git? You'd have a wrapper with similar but subtly different commands, and that's it. It's like asking, "instead of reinventing functionality by building both ext4 and btrfs, why don't we just improve ext4?"

The two are practically entirely different.

[–] [email protected] 3 points 7 months ago* (last edited 7 months ago) (2 children)

It being objectively better than SVN or CVS doesn't mean it's the best we can do. Git has all sorts of non-ideal behaviors that other VCS's don't. Pijul's data structure, for instance, is inherently different from git's and can't be retrofitted on top of it. Making tooling only support git effectively kills off any potential competitors that could be superior to git.

One example: pijul specifically lets you get away from the idea that moving commits between branches changes their identity, because pijul builds a tree of patches. If two subtrees of patches are independent, they can always be applied without changing the identity of those patches. This means "cherry-picking" a change and then merging the branch it came from doesn't effectively apply that change twice and produce a merge conflict.

That's one example of how one VCS can be better.

[–] [email protected] 3 points 7 months ago* (last edited 7 months ago) (4 children)

Not OP, but personally, yes. Every code forge supporting only git further entrenches git's monopoly on the VCS space. Git isn't perfect, nor should it be treated as if it were.

The above is probably the reason why so many alternative VCS's have to kludge themselves onto git's file format despite likely being better served by their own.

Interesting new VCS's, all supporting their own native format as well for various reasons:

  • pijul
  • sapling
  • jujutsu

Sapling is developed by Meta, jujutsu by an engineer at Google. Pijul isn't tied to any company and was developed by an academic, iirc. If you're okay with not-new:

  • mercurial
  • fossil
  • darcs

VCS's are still being iterated on, and tooling being super git-centric hurts that.

[–] [email protected] 26 points 7 months ago (3 children)

Second person excited for bcachefs, I'm planning on swapping over as soon as it supports scrubbing.

[–] [email protected] 1 points 7 months ago* (last edited 7 months ago) (1 children)

Right, but squashed commits don't scale to large PRs. You could argue that large PRs should be avoided, but sometimes they make sense. And when you do have a large PR, a commit-by-commit review makes a lot of sense for keeping your history clean.

Large features that are relatively isolated from the rest of the codebase make perfect sense to do on a separate branch before merging in - you don't merge half-broken code. Squashing a large feature into one commit throws away any useful history that branch may have had.
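To make that concrete, a tiny sketch (hypothetical repo and identity; a regular merge keeps every feature commit, where git merge --squash would collapse them into one):

```shell
git init demo3 && cd demo3
git config user.email "[email protected]" && git config user.name "demo"
git commit --allow-empty -m "base"
git switch -c feature
git commit --allow-empty -m "step 1"
git commit --allow-empty -m "step 2"
git switch -
git merge --no-ff --no-edit feature    # merge commit + both steps survive
git log --oneline                      # 4 commits: base, step 1, step 2, merge
```

With the branch history intact, git log and git bisect can still walk through each step of the feature later.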
