this post was submitted on 02 Jan 2025
803 points (99.1% liked)

Programmer Humor


Post funny things about programming here! (Or just rant about your favourite programming language.)


~~Stolen~~ Cross-posted from here: https://fosstodon.org/@foo/113731569632505985

all 43 comments
[–] [email protected] 55 points 3 days ago

Okay. I really laughed out loud at that one

[–] [email protected] 45 points 3 days ago (1 children)

The downvote is from someone who doesn't understand floating point notation

[–] konalt 80 points 3 days ago (2 children)

There are actually 0 downvotes, but the 1 is a rounding error

[–] [email protected] 20 points 3 days ago (1 children)

Which is especially ironic on this post.

[–] [email protected] 10 points 3 days ago* (last edited 2 days ago)

Unless you're an original Intel Pentium trying to divide a number.

[–] [email protected] 3 points 2 days ago

It's up to five now

[–] HStone32 28 points 3 days ago (10 children)

I've wondered why programming languages don't include accurate fractions as part of their standard utils. I don't mind calling dc, but I wish I didn't need to write a bash script to pipe the output of dc into my program.

[–] [email protected] 21 points 3 days ago (1 children)

Because at the end of the day everything gets simplified to a 1 or a 0. You could store a fraction as an “object” but at some point it needs to be turned into a number to work with. That’s where floating points come into play.

[–] Knock_Knock_Lemmy_In 13 points 2 days ago (1 children)

There is already a pair of objects we can use to store fractions. The ratio of two integers.

Irrational numbers are where floating points come into play.

[–] [email protected] 8 points 2 days ago* (last edited 2 days ago) (1 children)

Which they tend to do a lot. Like, the moment a square root or trig function shows up.

Even without those, it's pretty easy to overflow a fraction stored the way you're describing. x=1/(x*x+1) does it in log time. There's really not a lot of situations where exact fractions work, but purely symbolic logic wouldn't. Maybe none, IDK.

Edit: I mean, I guess it's all symbolic. What I'm trying to say is that if you're at a level of detail where you can keep track of the size of denominators, native support for a type that hides them is counterproductive. It's better to lay out your program in such a way that you just use small integers, which is guaranteed to be possible in that case.
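A quick sketch of that blowup in Python, using the stdlib fractions module (the starting value 1/2 is an arbitrary choice):

```python
from fractions import Fraction

# Iterating x = 1/(x*x + 1) with exact fractions: the number of
# digits in the denominator roughly doubles every step, so it
# outgrows any fixed-size integer in logarithmically few steps.
x = Fraction(1, 2)  # arbitrary starting value
for step in range(1, 8):
    x = 1 / (x * x + 1)
    print(step, len(str(x.denominator)))  # digits in the denominator
```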

[–] [email protected] 4 points 2 days ago (1 children)

There’s really not a lot of situations where exact fractions work, but purely symbolic logic wouldn’t. Maybe none, IDK.

Simulations maybe? Like the ones for chaotic systems where even the slightest inaccuracy massively throws the result off, where the tiny difference between an exact fraction and a float can seriously impact the accuracy as small errors build up over time.

[–] [email protected] 2 points 2 days ago

Are you aware of one that takes place completely within fractions of a few select types? Usually they're continuous.

I can think of some that are all integers, but I covered that in the edit.

[–] [email protected] 17 points 3 days ago (1 children)

You can only store rational numbers as a ratio of two numbers, and there are infinitely more irrational numbers than rational ones - as soon as you took (almost any) root or did (most) trigonometry, your accurate ratio would count for nothing. Hardcore maths libraries get around this by keeping the "value in progress" as an expression for as long as possible, but working with expressions is exceptionally slow by computer standards - it takes quite a long time to keep them in their simplest form whenever you manipulate them.

[–] [email protected] 9 points 2 days ago

You could choose a subset of fractions, though, and then round it to the nearest one. Maybe you could use powers of two as the denominator for easy hardware implementation. Oh wait, we've just reinvented floats.
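Python will even show you exactly that power-of-two fraction, via `float.as_integer_ratio()`:

```python
# Every finite float is literally a fraction whose denominator is a
# power of two; as_integer_ratio() exposes the exact pair.
num, den = (0.1).as_integer_ratio()
print(num, den)        # the exact rational value of the float 0.1
print(den == 2**55)    # True: denominator is a power of two
```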

[–] [email protected] 10 points 2 days ago (2 children)

A lot of work has gone into making floating point numbers efficient and they cover 99% of use cases. In the rare case you really need perfect fractional accuracy, it's not that difficult to implement as a pair of integers.

[–] WhiskyTangoFoxtrot 19 points 2 days ago
[–] [email protected] 14 points 2 days ago (3 children)

Many do. Matlab, Julia and Smalltalk are the ones I know

[–] [email protected] 4 points 2 days ago

Factor and Prolog do too, and Python has the fractions module
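For example, a quick look at Python's stdlib fractions module:

```python
from fractions import Fraction

# Exact rational arithmetic: results stay reduced automatically.
print(Fraction(1, 3) + Fraction(1, 6))   # 1/2
print(Fraction(3, 6))                    # reduced to 1/2

# The classic float pitfall disappears:
print(0.1 + 0.2 == 0.3)                                      # False
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```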

[–] [email protected] 7 points 2 days ago

... Perl, Haskell, Lisp, ...

[–] [email protected] 4 points 2 days ago

Scheme and many others. And lots of libraries for C and others.

It's called bignum, or arbitrary-precision arithmetic.

[–] [email protected] 3 points 2 days ago

I think the reason is that most real numbers are gonna be the result of measurement equipment (for example a camera/brightness sensor, or analog audio input). As such, these values are naturally real (analog) values, but they aren't fractions. Think of the vast amount of data in video, image and audio files; they typically make up the largest part of broadband internet usage. As such, their efficient handling is especially important, or you're gonna waste a lot of processing power.

Since these (and other) values are typically real values, they are represented by IEEE-754 floats, instead of fractions.

[–] [email protected] 3 points 2 days ago

A lot do. They call them "rationals".

[–] [email protected] 2 points 2 days ago

There are programming languages with fractions in them, afaik many varieties of Scheme have fractions

[–] [email protected] 4 points 2 days ago* (last edited 2 days ago)

Performance penalty, I would imagine. You would have to do many more steps at the processor level to calculate fractions than floats. The languages more suited toward math do have them, as someone else mentioned, but the others probably can't justify the extra computational expense for the little benefit it would have. Also, I'd bet there are already open source libraries for all the popular languages if you really need a fraction.

[–] [email protected] 3 points 3 days ago* (last edited 2 days ago) (1 children)

I'd assume it's because implementing comparisons can't be done efficiently.

You'd either have to reduce a fraction every time you perform an operation. That would essentially require computing at least one prime decomposition (and then trying to divide the other number by each prime factor), but that's just fucking expensive. And even that would just give you a quick equality check.

For comparing numbers wrt </> you'd then have to actually compute the floating point number with enough precision, or scale the fractions, which could easily lead to overflows (comparing intmax/1 and 1/intmax would amount to comparing intmax^2/intmax to 1/intmax; the encoding length required to store intmax^2 would be twice that of a normal int, plus you'd have to perform that huge multiplication).

But what do you consider enough precision? For two numbers which are essentially the same except for a small epsilon, you'd need infinite precision to determine their order. So would that standard then say they are equal even though they aren't exactly? If so, what would be the minimal precision that makes sense for every conceivable case? If not, would you accept the comparison function having an essentially unbounded running time (wrt a fixed encoding length)? Or would you allow a number to be neither smaller, nor bigger, nor equal to another number?

Edit: apparently some languages still have it: https://pkg.go.dev/math/big#Rat

[–] affiliate 3 points 2 days ago (2 children)

why couldn’t you compute p/q < r/s by checking ps < rq? if you follow the convention that denominators have to be strictly positive then you don’t even have to take signs into account. and you can check equality in the same way. no float conversion necessary. you do still need to eat a big multiplication though, which kind of sucks. the point you bring up of needing to reduce fractions after adding or multiplying also a massive problem. maybe we could solve this by prohibiting the end user from adding or multiplying numbers
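The cross-multiplication trick, sketched in Python (`frac_lt`/`frac_eq` are made-up helper names; assumes denominators are kept strictly positive, as suggested above):

```python
def frac_lt(p, q, r, s):
    """True if p/q < r/s, assuming q > 0 and s > 0."""
    return p * s < r * q

def frac_eq(p, q, r, s):
    """Equality without reducing either fraction first."""
    return p * s == r * q

print(frac_lt(1, 3, 1, 2))    # True:  1/3 < 1/2
print(frac_lt(-1, 2, 1, 3))   # True: -1/2 < 1/3, signs just work
print(frac_eq(2, 4, 1, 2))    # True:  2/4 == 1/2, no reduction needed
```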

[–] [email protected] 3 points 2 days ago

why couldn’t you compute p/q < r/s by checking ps < rq?

That's what I meant by scaling the fractions. Tbh I kind of forgot that was an option, and when I remembered I had already written the part about comparing floats, so I just left it in. But yeah, encoding length might be a killer there.

You could also avoid reducing fractions the same way. Like, I don't necessarily need my fractions to be reduced if I am just doing a few equality comparisons per fraction. Of course I would have to reduce them at some point to avoid exceeding the encoding length in the numerator and denominator when there is a representation with a short enough encoding available.

I think the bigger problem might be the missing use cases. As another user mentioned, this would still only encode rationals perfectly (assuming no limit on encoding length). But I cannot see many use cases where having rationals encoded precisely, but irrationals still with an error, is that useful. Especially considering the cost.

maybe we could solve this by prohibiting the end user from adding or multiplying numbers

I genuinely chuckled, thanks :).

[–] [email protected] 2 points 2 days ago

It's very easy to overflow

[–] johannesvanderwhales 1 points 3 days ago* (last edited 3 days ago)

It would be pretty easy to make a fraction class if you really wanted to. But I doubt it would result in much difference in the precision of calculations since the result would still be limited to a float value (edit: I guess I'm probably wrong on that but reducing a fraction would be less trivial I think?)

[–] [email protected] 8 points 2 days ago (1 children)

IEEE 754 is my favourite IEEE standard. 754 gang

[–] [email protected] 6 points 2 days ago (3 children)

Has anyone ever come across 8 or 16 bit floats? What were they used for?

[–] [email protected] 4 points 2 days ago

Neural net evaluation mainly, but FP16 is used in graphics too.

[–] [email protected] 1 points 2 days ago* (last edited 2 days ago) (2 children)

Actually, you can consider RGB values to be (triplets of) floats, too.

Typically, one pixel takes up to 32 bits of space, encoding Red, Green, Blue, and sometimes Alpha (opacity) values. That makes approximately 8 bits per color channel.

Since each color can be a value between 0.0 (color is off) and 1.0 (color is on), that means every color channel is effectively an 8-bit float.

[–] [email protected] 3 points 1 day ago

Pretty sure what you're describing isn't floating-point numbers, but fixed-point numbers... Which would also work just as well or better in most cases where floats are used.

[–] [email protected] 7 points 2 days ago

Aren't they fractions rather than floating point decimals?

[–] [email protected] 16 points 3 days ago (2 children)

Technically, floating point imitates irrational and whole numbers as well. Not all numbers though; you'd need a more, uhm... elaborate structure to represent complex numbers, surreal numbers, vectors, matrices, and so on.

[–] [email protected] 7 points 3 days ago

It does not even imitate all rationals. For example 1/3.
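A quick illustration in Python: the double nearest to 1/3 necessarily has a power-of-two denominator, so it can't be the exact rational 1/3.

```python
from fractions import Fraction

# Fraction(float) recovers the float's exact rational value.
approx = Fraction(1 / 3)          # exact value of the double 1/3
print(approx.denominator)         # a power of two (2**54), not 3
print(approx == Fraction(1, 3))   # False
```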

[–] [email protected] 6 points 3 days ago (1 children)
[–] Knock_Knock_Lemmy_In 7 points 2 days ago (1 children)
[–] [email protected] 6 points 2 days ago (1 children)

thank you

..i wish math would just calm down, it's a lunatic

[–] Knock_Knock_Lemmy_In 5 points 2 days ago* (last edited 2 days ago)

Mathematicians are just bad at naming things. For example

  • Rings don’t resemble circles

  • Fields aren’t literal open spaces

  • Normal has about 20 different meanings

  • Chaos is not random