Max_P

joined 1 year ago
[–] [email protected] 2 points 2 hours ago* (last edited 2 hours ago) (1 children)

The issue DNS solves is the same one the phone book solves. You could memorize everyone's phone number/IP, but it's a lot easier to memorize a name, or even guess the name. Want the website for Walmart? Walmart.com is a very good guess.

Behind the scenes the computer looks it up using DNS and it finds the IP and connects to it.

The way it started, people were maintaining and sharing hosts files. A new system would come online, and people would take its IP and add it to their hosts file. It was quickly found that this really doesn't scale: you could want to talk to dozens of computers, and you'd have to hunt down an IP for each one! So DNS was developed as a central directory service any computer can query to look things up, with a hierarchy to distribute it and all. And it worked, really well, so well that we still use it extensively today. The desire to delegate directory authority is how the TLD system was born. The hosts file didn't use TLDs, just plain names, as far as I know.
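For the curious, here's roughly what that looks like from a program's point of view today (my own sketch, not something from the thread): the name-to-IP step is a single call to the system resolver, which on most systems still consults the local hosts file first before falling back to DNS.

```python
# Minimal sketch: resolve a name to an IP the way a browser would, via the
# system resolver (which checks /etc/hosts first, then DNS, on most setups).
import socket

print(socket.gethostbyname("walmart.com"))  # prints some IP; the exact address varies
```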

[–] [email protected] 2 points 2 hours ago

There's definitely been a surge in speculation on domain names. That's part of the whole dotcom bubble thing. And it's why I'm glad TLDs are still really hard to obtain, because otherwise they would all be taken.

Unfortunately there's just no other good way to deal with it. If there's a shared namespace, someone will speculate the good names.

Different TLDs can help with that a lot by having their own requirements. For .edu, for example, you have to be a real school to get one. For most ccTLDs, you have to be a citizen or have a company operating in the country. If/when it becomes a problem, I expect to see a shift to new TLDs with stronger requirements to prove you're serious about your plans for the domain.

It's just a really hard problem when millions of people are competing to get a decent, globally recognized short name; you're just bound to run out. I'm kind of impressed at how well it's holding up overall despite the abuse. I feel like it's still relatively easy to get a reasonable domain name, especially if you avoid the big TLDs like com/net/org/info. You can still get a .xyz for dirt cheap, and sometimes there are even free ones, like .tk and .ml were for a while. There are also several free short-ish ones; I used max-p.fr.nf for a while because it was free and still looks like a real domain, a lot like a .co.uk or something.

[–] [email protected] 2 points 3 hours ago* (last edited 3 hours ago) (4 children)

Because if they're not owned, then how do you know who is who? How do we independently conclude that yup, microsoft.com goes to Microsoft, without some central authority managing who's who?

It's first come, first served, which is a bit biased towards early adopters, but I can't think of a better system where you go to google.com and reliably end up at Google. If everyone had a different idea of where that should send you, it would be a nightmare; we'd be back to passing IP addresses on post-it notes to friends to make sure we all end up on the same youtube.com. When you type an address you expect to end up on the site you asked for, and nothing else. You don't want to end up on Comcast YouTube because your ISP decided that's where youtube.com goes; you expect and demand the real one, the same as everyone else.

And there's still the massive server cost of running a directory for literally the entire Internet for all of that to work.

A lot of the time, when asking those kinds of questions, it's useful to think about how you would implement it such that it would work. It usually answers the question.

[–] [email protected] 4 points 3 hours ago

In case you didn't know, domain names form a tree. You have the root ., you have TLDs like com., then usually the customer's domain google.com., then subdomains like www.google.com.. Each level typically hands over the rest of the lookup to another server. So in this example, the root servers tell you to go ask .com at this IP, you ask .com where Google is and it gives you the IP of Google's DNS server, then you query Google's DNS server directly. Any subdomain under Google only involves Google; the public DNS infrastructure isn't involved at that point, which significantly reduces load. Your ISP only needs to resolve Google once, then it knows how to get *.google.com directly from Google.
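If you want to actually watch that delegation happen, here's a rough sketch using the dnspython library (my own illustration; the root server IP is real, but the exact referral contents will vary, and your resolver normally does all of this for you):

```python
# Sketch of walking the delegation chain by hand with dnspython (pip install dnspython).
import dns.flags
import dns.message
import dns.query
import dns.rdatatype

def ask(server_ip, name):
    """Send a non-recursive A query for `name` to the server at `server_ip`."""
    q = dns.message.make_query(name, dns.rdatatype.A)
    q.flags &= ~dns.flags.RD  # we walk the tree ourselves, no recursion
    return dns.query.udp(q, server_ip, timeout=5)

# 1. A root server (a.root-servers.net) doesn't know www.google.com itself,
#    but its authority/additional sections refer us to the com. servers.
reply = ask("198.41.0.4", "www.google.com.")
print(reply.authority)

# 2. Asking one of those com. servers gets a referral to Google's own
#    nameservers (ns1.google.com and friends).
# 3. Asking Google's nameserver finally returns the A record, and everything
#    under google.com stays between you and Google from then on.
```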

You're not just buying a name that by convention ends with a TLD. You're buying a spot in that chain of names, the tree that is used to eventually go query your server and everything under it. The fee to get the domain contributes to the cost of running the TLD.

[–] [email protected] 5 points 3 hours ago (7 children)

Mostly because you need to be able to resolve the TLD. The root DNS servers need to know about every TLD, and it would quickly be a nightmare if they had to store hundreds of thousands of records vs the handful of TLDs we have now. The root servers are hardcoded; they can't easily be scaled or moved or anything. Their job is solely to tell you where .com is, .net is, etc. You're supposed to query those once and then hold on to your cached reply for 2+ days. Those servers have to serve the entire world, so you want as few queries to them as possible.
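You can see that caching window in the published TTLs, too. A quick check with dnspython (my own sketch; this goes through whatever resolver your machine uses, so the number you see may already be counted down):

```python
# The NS records the root hands out for com. carry a two-day TTL (172800 s),
# which is why resolvers so rarely have to go back to the root servers.
import dns.resolver

answer = dns.resolver.resolve("com.", "NS")
print(answer.rrset.ttl)                        # remaining TTL as seen by your resolver
print(sorted(str(ns) for ns in answer.rrset))  # the gtld-servers.net names
```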

Hosting a TLD is a huge commitment, and so requires a lot of capital and a proper legal company to contractually commit to its maintenance and compliance with regulations. Those get a ton of traffic, and users getting their own TLDs would shift the sum of all gTLD traffic onto the root servers, which would be way too much.

With the gTLDs and ccTLDs we have, at least there's a decent amount of decentralization going on: .ca is managed by Canada, for example, and only Canada has jurisdiction over that domain, just like only China can take away your .cn. If everyone got TLDs the namespace would be full already, with all the good names squatted and held to sell for as much as possible, like already happens with the .com and .net TLDs.

There have been attempts at a replacement, but so far they've all been crypto scams and the dotcom bubble all over again, speculating on the cool names to sell to the highest bidder.

That said, if you run your own DNS server and configure your devices to use it, you can use any domain you want. The problem is gonna be getting the public Internet at large to recognize it as real.

[–] [email protected] 13 points 1 day ago

Sometimes it's also: is it really important to know? A lot of things I have complicated opinions on, because things are nuanced and complicated in the real world, so even if you ask me it's not like I can just be for or against Israel or whatever. And I certainly don't feel like going over it again and again and again as people keep asking about random topics.

I swear Americans have this weird thing where everyone needs to have a strong opinion on every topic all the time, and talk about it all the time so they can suss out whether you're leaning Democrat or Republican. It's so weird. I'm not even American, I can't do anything about it! I'll keep my opinions where they belong, in my head, thank you.

It's important to be educated about those topics, but I don't feel the need to make it my entire personality, unlike some people. I have better things to do that actually bring me joy, rather than doom and gloom over things I can't do anything about.

[–] [email protected] 6 points 2 days ago

WireGuard works great for that.

[–] [email protected] 12 points 2 days ago (2 children)

Not sure if Voyager exposes such a setting (probably?), but on Tesseract I'd do something like this:

[Image: example of Tesseract's options to filter posts based on keywords]

[–] [email protected] 67 points 2 days ago (3 children)

"Voyager for Lemmy, at least for me, pushes political content like crazy."

No content is being pushed to anyone, Lemmy's algorithms are very simple. It's just there's a lot of it.

You can unsubscribe from or block the politics and news communities, especially worldnews, and that should get rid of a lot of it. I find the experience is better when you subscribe to the stuff you want rather than remove the stuff you don't want.

[–] [email protected] 18 points 3 days ago (6 children)

This. They even provide the cover image to use. If they don't want embedding they could just block the request.

But they don't want to. They want to sell the cake and eat it too.

[–] [email protected] 7 points 3 days ago

Anyone that's used a custom ROM knows just how shitty your 48MP camera looks without the processing lol. People go out of their way to make GCam work because it's so bad.

It's one of those bougie "nostalgia" apps, isn't it? Like those shitty scamcorders that VWestlife covered not long ago.

[–] [email protected] 2 points 3 days ago

It does need both. Requires= alone will only pull the unit in as a dependency and activate it, but doesn't guarantee it's actually up before this unit starts. You need After= to also declare that the unit must be started after its dependencies have finished starting, not merely been activated. Otherwise they'll start in parallel; Requires= alone just guarantees that both units get activated. There's an even stronger directive, BindsTo=, that ties them together such that if the dependency is stopped, this unit gets deactivated too. If SMB is a hard dependency, that might be preferable. Requires+After still allows the mount to fail but ensures that, if it's mountable, it'll be mounted before Docker, whereas with BindsTo+After, failing the SMB mount would also shut down Docker.
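For illustration, assuming the share is mounted by a unit called mnt-smb.mount (hypothetical name, derived from the mount path), a drop-in for docker.service along these lines would express the softer Requires+After variant, with the stricter BindsTo one left commented out:

```ini
# /etc/systemd/system/docker.service.d/smb.conf  -- hypothetical unit and file names
[Unit]
# Pull the SMB mount in, and only start Docker once the mount has actually finished.
Requires=mnt-smb.mount
After=mnt-smb.mount

# Stricter alternative: additionally stop Docker whenever the mount goes away.
#BindsTo=mnt-smb.mount
#After=mnt-smb.mount
```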

 

Testing, I broke the database so bad my posts were federating out but not saving on my local instance, fun stuff

 

I can't post at all now?

 

I can't post at all now?

 

Tried some database tweaks

 

Tried some database tweaks

 

Neat little thing I just noticed, might be known but I'd never heard of it before: apparently, a Wayland window can vsync to at least 3 monitors with different refresh rates at the same time.

I have 3 monitors, at 60 Hz, 144 Hz, and 60 Hz from left to right. I was using glxgears to test something, and noticed that when I put the window between two monitors, it syncs to a weird refresh rate of about 193 fps. I stretched it to span all 3 monitors, and it locked at about 243 fps. It seems to oscillate between 242.5 and 243.5, gradually back and forth. So apparently it's mixing the vsync signals together, making sure every monitor gets a fresh frame while sharing frames when the vsyncs line up.
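The numbers roughly fit a little back-of-the-envelope model, though that's my guess, not something I've confirmed in the compositor: if a single frame can be shared whenever two vsyncs coincide, then phase-aligned 60 Hz and 144 Hz displays coincide at gcd(60, 144) = 12 Hz, so spanning those two you'd expect about 60 + 144 - 12 = 192 fps, close to the ~193 observed; add the second 60 Hz panel and you land somewhere in the 240s depending on how its phase lines up, which also matches.

```python
# Back-of-the-envelope check (my assumption, not from the post): shared frames
# happen at the rate the two vsyncs coincide, i.e. the gcd of the refresh rates.
from math import gcd

print(60 + 144 - gcd(60, 144))  # 192 fps expected across the 60 Hz + 144 Hz pair
```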

I knew Wayland was big on "every frame is perfect", but I didn't expect that to work even across 3 monitors at once! We've come a long, long way in the graphics stack. I expected it to sync to the 144 Hz monitor and just tear or hiccup on the other ones.

 

All the protections in software, what an amazing idea!

 

It only shows "view all comments", so you can't see the full context of the comment tree.

 

The current behaviour is correct, as the remote instance is the canonical source, but being able to copy/share a link to your home instance would be nice as well.

Use case: maybe the comment is coming from an instance that is down, or one that you don't necessarily want to link to.

If the user has more than one account, being able to select which one would be nice as well, so maybe a submenu, a per-account option, or a global setting.

 

Testing federation stuff after fixing NTP
