I'm not sure if you just didn't read or what. It seems like you understand the history but are insistent on awkward characterizations of the situation.
I mean kibi is the retcon because it made all previous software wrong.
They didn't modify the use of kilo for other units - they used it as an awkward approximation with bytes. No other units were harmed in the making of these units.
And they didn't hijack it - they used the closest approximation and it stuck. Nobody gave a fuck until they bought a 300 GB hard drive with 277 GB of free space.
The difference was a lot smaller when you were dealing with 700-byte files - it was often a rounding error. Also - you needed two sectors (512 bytes each at the time, so 1,024 bytes) to store your 700-byte file, so what did it matter anyway? If you want to get really specific, you actually needed three sectors - because there's metadata for the file too... though that metadata shares space with other files, so does it even count?
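If you want to see where both of those gaps come from, here's a rough sketch of the arithmetic (Python; the 300 GB drive and 512-byte sector size are just illustrative assumptions, not tied to any particular product):

```python
import math

# Marketing "GB" is decimal (10**9 bytes); the OS may report binary GiB (2**30 bytes).
advertised_gb = 300
bytes_total = advertised_gb * 10**9
gib = bytes_total / 2**30
print(f"{advertised_gb} GB (decimal) = {gib:.1f} GiB (binary)")  # ~279.4

# Allocation rounding: a 700-byte file still occupies whole sectors.
sector = 512
file_size = 700
sectors_needed = math.ceil(file_size / sector)
print(f"{file_size}-byte file uses {sectors_needed} sectors = {sectors_needed * sector} bytes on disk")
```

The rest of the drop down to a figure like the 277 GB in the anecdote is typically filesystem overhead, which varies.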
Filesystems are incredibly complex and there's no way they can be explained to a lay person. Storage is and always has been an approximation.
It's even worse with RAM these days - my Mac has 298TB of memory address space currently allocated... but only between 6GB and 7GB of "app memory" in use (literally fluctuating between those two from one second to the next when I'm not even doing anything but watching the memory usage).
Yeah, no, I'm sure I noticed it but I didn't really have the sophistication to get the implication.
Before we got our first Windows machine I had some DOS books. I remember a table in DOS For Dummies talking about kilo/giga/petabytes and internalized it, but CDs were a thing by then.
To me, your attempt at defending it or calling it a retcon is an awkward characterization. Even in your last reply: now you're calling it an approximation. Dividing by 1024 is an approximation? Did computers have trouble dividing by 1000? Did it provide any benefit for the 640KB/384KB split in the conventional memory model? Does it provide any benefit today?
Somehow, every other computer measurement avoids this binary prefix problem. Some, like you, seem to try to defend it as the more practical choice compared to the "standard" choice every other unit uses (e.g., 1.536 Mbps T1 or "54" Mbps 802.11g).
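Those examples really are straight powers of ten - a minimal check (Python; the T1 figure is the usual 24 × 64 kbps DS0 payload math, and the framed line rate would be 1.544 Mbps):

```python
# T1 payload: 24 DS0 channels at 64 kbps each, all decimal prefixes.
t1_payload_bps = 24 * 64_000
print(t1_payload_bps / 1_000_000, "Mbps")   # 1.536 Mbps

# 802.11g nominal rate: decimal mega, not 2**20.
g_rate_bps = 54 * 1_000_000
print(g_rate_bps, "bps")                    # 54000000 bps
```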
The confusion this continues to cause does waste quite a bit of time and money today. Vendors continue to show both units on the same spec sheets (open up a page to buy a computer/server). News still reports the difference as bloat. Customers still complain to customer support, which escalates to management, and back down to project management and development. It would be one thing if this no longer wasted time or caused confusion, but it still does today. It's long past time to move on.
The standard meaning of "kilo" was 1,000 for centuries before computer science existed. Things that genuinely need binary units have the binary prefixes available, but they're rarely needed - even in computer science. Calling kilo/kibi a retcon just seems like a way to defend the 1024 usage today, despite the fact that nearly nothing else (even in computers) uses the prefixes in their binary sense.
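For reference, a quick side-by-side of the decimal (SI) prefixes and the binary (IEC) ones - just the standard definitions, nothing assumed beyond that; note how the gap grows at each step:

```python
# Decimal (SI) vs binary (IEC) prefixes, and how far apart they drift.
si  = {"kB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}
iec = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}

for (s, sv), (i, iv) in zip(si.items(), iec.items()):
    print(f"{s} = {sv:>18,}   {i} = {iv:>18,}   ratio = {iv / sv:.3f}")
# ratios: 1.024, 1.049, 1.074, 1.100
```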
I don't think it's more practical. I think it's what emerged from researchers trying to refer to concepts. I prefer the clarified prefixes.