Okay. I really laughed out loud at that one
Programmer Humor
Post funny things about programming here! (Or just rant about your favourite programming language.)
Rules:
- Posts must be relevant to programming, programmers, or computer science.
- No NSFW content.
- Jokes must be in good taste. No hate speech, bigotry, etc.
The downvote is from someone who doesn't understand floating point notation
There are actually 0 downvotes, but the 1 is a rounding error
Which is especially ironic on this post.
Unless you're an original Intel Pentium trying to divide a number.
It's up to five now
I've wondered why programming languages don't include accurate fractions as part of their standard utils. I don't mind calling dc, but I wish I didn't need to write a bash script to pipe the output of dc into my program.
Because at the end of the day everything gets simplified to a 1 or a 0. You could store a fraction as an “object” but at some point it needs to be turned into a number to work with. That’s where floating points come into play.
There is already a pair of objects we can use to store fractions. The ratio of two integers.
Irrational numbers are where floating points come into play.
Which they tend to do a lot. Like, the moment a square root or trig function shows up.
Even without that, it's pretty easy to overflow a fraction stored the way you're describing. x = 1/(x*x + 1) does it in log time. There's really not a lot of situations where exact fractions work, but purely symbolic logic wouldn't. Maybe none, IDK.
Edit: I mean, I guess it's all symbolic. What I'm trying to say is that if you're at a level of detail where you can keep track of the size of denominators, native support for a type that hides them is counterproductive. It's better to lay out your program in such a way that you just use small integers, which is guaranteed to be possible in that case.
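If anyone wants to see that blow-up, here's a quick sketch with Python's fractions module (the starting value is arbitrary):

```python
from fractions import Fraction

# Iterate x = 1/(x*x + 1) and watch the exact denominator explode:
# the digit count roughly doubles every step, so you hit any fixed
# size limit after logarithmically many iterations.
x = Fraction(1, 2)
for i in range(8):
    x = 1 / (x * x + 1)
    print(i, "denominator digits:", len(str(x.denominator)))
```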
There’s really not a lot of situations where exact fractions work, but purely symbolic logic wouldn’t. Maybe none, IDK.
Simulations maybe? Like the ones for chaotic systems where even the slightest inaccuracy massively throws the result off, where the tiny difference between an exact fraction and a float can seriously impact the accuracy as small errors build up over time.
Are you aware of one that takes place completely within fractions of a few select types? Usually they're continuous.
I can think of some that are all integers, but I covered that in the edit.
You can only store rational numbers as a ratio of two integers, and there are uncountably many irrational numbers versus only countably many rationals - as soon as you take (almost any) root or do (most) trigonometry, your accurate ratio counts for nothing. Hardcore maths libraries get around this by keeping the "value in progress" as a symbolic expression for as long as possible, but working with expressions is exceptionally slow by computer standards - it takes quite a long time to keep them in their simplest form whenever you manipulate them.
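For the curious, a minimal sketch of that "keep it as an expression" approach using SymPy (assuming you have it installed):

```python
import sympy

root2 = sympy.sqrt(2)           # stored as an exact expression, not 1.4142...
print(root2 * root2)            # 2 -- exact, no rounding error
print(sympy.sin(sympy.pi / 4))  # sqrt(2)/2 -- still an exact expression
print(root2.evalf(30))          # only now do we round to a decimal
```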
You could choose a subset of fractions, though, and then round it to the nearest one. Maybe you could use powers of two as the denominator for easy hardware implementation. Oh wait, we've just reinvented floats.
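You can even ask Python to show you the power-of-two denominator hiding inside a float:

```python
# Every finite float is exactly (some integer) / (a power of two).
num, den = (0.1).as_integer_ratio()
print(num, "/", den)  # 3602879701896396961 / 36028797018963968
print(den == 2**55)   # True -- the denominator really is a power of two
```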
A lot of work has gone into making floating point numbers efficient and they cover 99% of use cases. In the rare case you really need perfect fractional accuracy, it's not that difficult to implement as a pair of integers.
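Something like this (a bare-bones sketch; the class name is made up, and edge cases like zero or negative denominators are left out):

```python
from math import gcd

class Rational:
    """Exact fraction as a pair of ints, reduced on construction."""
    def __init__(self, num, den):
        g = gcd(num, den)
        self.num, self.den = num // g, den // g

    def __add__(self, other):
        return Rational(self.num * other.den + other.num * self.den,
                        self.den * other.den)

    def __mul__(self, other):
        return Rational(self.num * other.num, self.den * other.den)

    def __repr__(self):
        return f"{self.num}/{self.den}"

print(Rational(1, 3) + Rational(1, 6))  # 1/2, exactly
```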
99.000004%
Many do. Matlab, Julia and Smalltalk are the ones I know
Factor, Prolog, and Python has the fractions module
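e.g. in Python (standard library, nothing to install):

```python
from fractions import Fraction

third = Fraction(1, 3)
print(third + Fraction(1, 6))  # 1/2 -- exact and automatically reduced
print(float(third))            # 0.3333333333333333 -- lossy only on the way out
```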
... Perl, Haskell, Lisp, ...
Scheme and many others. And lots of libraries for C and others.
These are usually built on bignums, i.e. arbitrary-precision arithmetic.
I think the reason is that most real numbers are gonna be the result of measurement equipment (for example a camera/brightness sensor, or analog audio input). As such, these values are naturally real (analog) values, but they aren't fractions. Think of the vast amount of data in video, image and audio files; they typically make up the largest part of broadband internet usage, so their efficient handling is especially important, or you're gonna waste a lot of processing power.
Since these (and other) values are typically real values, they are represented by IEEE-754 floats, instead of fractions.
A lot do. They call them "rationals".
There are programming languages with fractions in them, afaik many varieties of Scheme have fractions
Performance penalty, I would imagine. You would have to do many more steps at the processor level to calculate with fractions than with floats. The languages more suited toward math do have them, as someone else mentioned, but the others probably can't justify the extra computational expense for the little benefit it would bring. Also, I'd bet there are already open source libraries for all the popular languages if you really need a fraction.
I'd assume it's because implementing comparisons can't be done efficiently.
You'd either have to reduce a fraction every time you perform an operation. That would essentially require computing at least one prime decomposition (and then trying to divide the other number by each prime factor), but that's just fucking expensive. And even that would only give you a quick equality check.

For comparing numbers wrt </>, you'd then have to actually compute the floating point number with enough precision, or scale the fractions, which could easily lead to overflows (comparing intmax/1 and 1/intmax would amount to comparing intmax^2/intmax to 1/intmax; the encoding length required to store intmax^2 would be twice that of a normal int... plus you'd have to perform that huge multiplication). But what do you consider enough precision? For two numbers which are essentially the same except for a small epsilon, you'd need infinite precision to determine their order. So would the standard then say they are equal even though they aren't exactly? If so, what would be the minimal precision that makes sense for every conceivable case? If not, would you accept the comparison function having an essentially unbounded running time (wrt a fixed encoding length)? Or would you allow a number to be neither smaller, nor bigger, nor equal to another number?
Edit: apparently some languages still have it: https://pkg.go.dev/math/big#Rat
why couldn’t you compute p/q < r/s by checking ps < rq? if you follow the convention that denominators have to be strictly positive then you don’t even have to take signs into account. and you can check equality in the same way. no float conversion necessary. you do still need to eat a big multiplication though, which kind of sucks. the point you bring up about needing to reduce fractions after adding or multiplying is also a massive problem. maybe we could solve this by prohibiting the end user from adding or multiplying numbers
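a sketch of what that looks like (python ints are arbitrary precision, so the big multiplication is merely slow, not an overflow):

```python
def rat_lt(p, q, r, s):
    # p/q < r/s, assuming q > 0 and s > 0 by convention
    return p * s < r * q

def rat_eq(p, q, r, s):
    # works even on unreduced fractions: 1/2 == 2/4
    return p * s == r * q

print(rat_lt(1, 3, 1, 2))  # True: 1/3 < 1/2
print(rat_eq(1, 2, 2, 4))  # True
```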
why couldn’t you compute p/q < r/s by checking ps < rq?
That's what I meant by scaling the fractions. Tbh I kind of forgot that was an option, and when I remembered I had already written the part about comparing floats, so I just left it in. But yeah, encoding length might be a killer there.
You could also avoid reducing fractions the same way. Like, I don't necessarily need my fractions to be reduced if I am just doing a few equality comparisons per fraction. Of course I would have to reduce them at some point, to avoid exceeding the encoding length in the numerator and denominator when a representation with a short enough encoding is available.
I think the bigger problem might be the missing use cases. As another user mentioned, this would still only encode rationals perfectly (assuming no limit on encoding length). But I cannot see many use cases where having rationals encoded precisely, but irrationals still with an error, is that useful. Especially considering the cost.
maybe we could solve this by prohibiting the end user from adding or multiplying numbers
I genuinely chuckled, thanks :).
It's very easy to overflow
It would be pretty easy to make a fraction class if you really wanted to. But I doubt it would result in much difference in the precision of calculations since the result would still be limited to a float value (edit: I guess I'm probably wrong on that but reducing a fraction would be less trivial I think?)
IEEE 754 is my favourite IEEE standard. 754 gang
Has anyone ever come across 8 or 16 bit floats? What were they used for?
Neural net evaluation mainly, but FP16 is used in graphics too.
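If you want to poke at FP16 yourself, NumPy exposes it (assuming numpy is installed); with only 11 significand bits it stops being able to count at 2048:

```python
import numpy as np

x = np.float16(2048)
print(x + np.float16(1) == x)         # True: 2049 isn't representable in FP16
print(int(np.finfo(np.float16).max))  # 65504 -- the largest finite FP16 value
```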
Actually, you can consider RGB values to be (triplets of) floats, too.
Typically, one pixel takes up to 32 bits of space, encoding Red, Green, Blue, and sometimes Alpha (opacity) values. That makes approximately 8 bits per color channel.
Since each color can be a value between 0.0 (color is off) and 1.0 (color is on), that means every color channel is effectively an 8-bit float.
Pretty sure what you're describing isn't floating-point numbers, but fixed-point numbers... Which would also work just as well or better in most cases where floats are used.
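i.e. the integer n in a channel encodes n/255, with evenly spaced steps. A tiny illustration (the helper name is made up):

```python
# 8-bit color channels are fixed-point: integer n encodes n / 255,
# so the 256 representable values are evenly spaced across [0.0, 1.0].
# A float's representable values, by contrast, are densest near zero.
def channel_to_unit(n: int) -> float:
    return n / 255.0

print(channel_to_unit(0), channel_to_unit(128), channel_to_unit(255))
# 0.0 0.5019607843137255 1.0
```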
Aren't they fractions rather than floating point decimals?
Technically, floating point also imitates irrational and whole numbers. Not all numbers though; you'd need a more, uhm... elaborate structure to represent complex numbers, surreal numbers, vectors, matrices, and so on.
It does not even imitate all rationals. For example 1/3.
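You can see exactly which nearby rational the float for 1/3 actually stores:

```python
from fractions import Fraction

# Fraction(float) recovers the exact value the float holds:
print(Fraction(1 / 3))
# 6004799503160661/18014398509481984 -- the denominator is 2**54;
# this is the closest dyadic rational to 1/3 that a double can manage
```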
.... Surreal numbers?!?
thank you
..i wish math would just calm down, it's a lunatic
Mathematicians are just bad at naming things. For example:
- Rings don’t resemble circles
- Fields aren’t literal open spaces
- Normal has about 20 different meanings
- Chaos is not random