[–] PixxlMan 3 points 1 year ago (1 children)

To everyone commenting that you have to convert to binary to represent numbers because computers can't deal with decimal representations: this isn't true! Floating-point arithmetic could totally have been implemented with decimal numbers instead of binary. Computers have no problem with decimal numbers (integers exist, after all). Binary-based floating-point numbers are perhaps a bit simpler, but they're not a necessity. It just happens that the floating-point formats in common use are binary.
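
For example, here's a quick sketch using Python's decimal module (one real-world implementation of decimal floating point) next to ordinary binary floats, showing the classic 0.1 + 0.2 case:

```python
from decimal import Decimal

# Binary floating point: 0.1 has no exact base-2 representation,
# so the familiar rounding error shows up.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Decimal floating point: 0.1 is exact in base 10.
print(Decimal("0.1") + Decimal("0.2"))                     # 0.3
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True
```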

[–] [email protected] 1 points 1 year ago (1 children)

What you're talking about isn't floating point, it's fixed point.

[–] PixxlMan 1 points 1 year ago (1 children)

Wrong. It sounds like you think only fixed-point arithmetic can be implemented in decimal. There's nothing about floating point that makes it impossible to implement in decimal; in fact, decimal floating point is a common form of floating point. See the docs for C#'s "decimal" type.

The beginning of the Wikipedia article on floating point also says this: "In practice, most floating-point systems use base two, though base ten (decimal floating point) is also common." (https://en.m.wikipedia.org/wiki/Floating-point_arithmetic) Also check this out: https://en.m.wikipedia.org/wiki/Decimal_floating_point

Everything in my comment applies to floating point, not fixed point.
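
To make the distinction concrete, here's a minimal toy sketch (just an illustration, not any real format) of a base-10 floating-point value, where the exponent floats per value rather than being a fixed scale:

```python
from dataclasses import dataclass

# A toy base-10 floating-point value: coefficient * 10**exponent.
# What makes it "floating" point is that the exponent can differ per
# value; the base (10 here, 2 in IEEE binary formats) is a separate choice.
@dataclass
class DecimalFloat:
    coefficient: int   # e.g. 314159
    exponent: int      # e.g. -5, so the value is 3.14159

    def multiply(self, other: "DecimalFloat") -> "DecimalFloat":
        # Same rule as binary floating point: multiply the coefficients,
        # add the exponents (normalization/rounding omitted for brevity).
        return DecimalFloat(self.coefficient * other.coefficient,
                            self.exponent + other.exponent)

a = DecimalFloat(314159, -5)   # 3.14159
b = DecimalFloat(2, 0)         # 2
print(a.multiply(b))           # DecimalFloat(coefficient=628318, exponent=-5), i.e. 6.28318

# Fixed point, by contrast, pins the exponent (the scale) to one agreed
# value for every number, e.g. "always two decimal places".
```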

[–] [email protected] 1 points 1 year ago

I generally interpret "decimal" to mean "real numbers" in the context of computer science rather than "base 10 numbers". But yes, of course you can implement floating point in base 10: that's what scientific notation is!
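
For a concrete illustration of that last point, Python's decimal module exposes exactly the pieces of scientific notation (sign, base-10 coefficient digits, base-10 exponent) for each value:

```python
from decimal import Decimal

# A Decimal is stored as sign, coefficient digits, and a base-10 exponent.
print(Decimal("6.02e23").as_tuple())
# DecimalTuple(sign=0, digits=(6, 0, 2), exponent=21)   i.e. 602 * 10**21
print(Decimal("-0.5").as_tuple())
# DecimalTuple(sign=1, digits=(5,), exponent=-1)        i.e. -(5 * 10**-1)
```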