this post was submitted on 12 Aug 2023
205 points (98.6% liked)

[–] [email protected] 10 points 1 year ago (3 children)

Symbolically, sure, but then you're not dealing with infinities; you're just representing them.

It's a meme, so it's playing fast and loose with things, but the general gist is that mathematics, to this day, doesn't really care about Gödel/Church/Turing, incompleteness, the halting problem, or whatever angle you want to look at it from. The formalists lost the war, and mathematicians simply went on doing maths as if nothing had happened, as if a system could be simultaneously complete and consistent. There are people out there preaching to the unenlightened masses, but it's an uphill battle.

[–] [email protected] 6 points 1 year ago (1 children)

Math went on because it doesn't matter; nobody cares about incompleteness. If you can prove ZFC is inconsistent, do it, and we'll all move to a new system; most of us wouldn't even notice, since nobody references the axioms outside of set theory and logic anyway. If you can prove it's incomplete, do it, and nobody will care, since the culprit will be an arcane theorem far outside the realm of the non-logic fields of math.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (1 children)

You wouldn't even notice if some proof is wrong because it relies on an inconsistency; that's the issue. And that's on top of not noticing because no one builds anything on axioms, but instead uses fragile foundations made of intuition, hand-waving, and mass psychology.

Incomplete is fine; that's exactly what constructive maths is doing.

[–] [email protected] 1 points 1 year ago

> You wouldn't even notice if some proof is wrong because it relies on an inconsistency; that's the issue.

You wouldn't notice because there's no realistic chance that any meaningful result in the vast majority of math depends strictly on the particular way in which ZFC is hypothetically inconsistent.

> And that's on top of not noticing because no one builds anything on axioms, but instead uses fragile foundations made of intuition, hand-waving, and mass psychology.

This is a ridiculous attitude. Nobody uses the axioms of ZFC directly, because that would be stupid; it's sufficient to know that you could. It makes literally no difference to the vast majority of math which particular axiomatic formalism you decide to use, because all of those results are trivially translatable between them.

[–] [email protected] 2 points 1 year ago (2 children)

We have sort of the same problem with imaginary numbers, and I remember some programmable calculators could process complex numbers using a symbolic representation (which happens to work similarly to Cartesian coordinates, so that's convenient).
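That pair-based representation is easy to sketch. Here's a hypothetical minimal version in Python; the class and its name are made up for illustration, not any particular calculator's actual implementation:

```python
# Minimal sketch: complex numbers as (real, imag) pairs, i.e. Cartesian
# coordinates. Everything reduces to ordinary real arithmetic.

class Cx:
    def __init__(self, re, im=0.0):
        self.re, self.im = re, im

    def __add__(self, other):
        # Addition is componentwise, exactly like 2D Cartesian vectors.
        return Cx(self.re + other.re, self.im + other.im)

    def __mul__(self, other):
        # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
        return Cx(self.re * other.re - self.im * other.im,
                  self.re * other.im + self.im * other.re)

    def __repr__(self):
        return f"{self.re}+{self.im}i"

i = Cx(0, 1)
print(i * i)  # i^2 = -1, computed purely with real arithmetic
```

The point is the same one made about infinities above: the "impossible" object is never materialized, only a finite representation of it is manipulated.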

But from what I remember, infinities bigger than the counting numbers (say, the set of real numbers) cannot be differentiated from each other, so we don't have established rules for them.

To be fair, I last tinkered with infinities in the aughts, and then as a hobbyist. The Grand Hilbert Hotel may have learned to accommodate more compound infinities, and still retain perfect utilization, since the last time I visited.

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

https://en.wikipedia.org/wiki/Continuum_hypothesis

Hmm. Frankly speaking, I always assumed mathematicians had more of an idea about infinities. I mean, why even have indices if you don't have an inductive rule to descr... oh wait, never mind.

That said, the reals aren't countable, yet we have perfectly reasonable ways to deal with them symbolically, even compute with them, and represent ~~every single one of~~^1^ them in finite space; it's just that when you want to compare or output them with infinite precision, you might have to wait for eternity. But who needs infinite precision anyway? Arbitrary precision is plenty.

^1^ On second thought, after diagonalisation: no, we don't. Or we do, because there's some magic going on with included transcendental constants that break through... do I look like a numerologist?
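The arbitrary-precision point can be illustrated with Python's standard `decimal` module. The function below is a made-up sketch, not a full computable-reals library:

```python
from decimal import Decimal, getcontext

def sqrt2(digits):
    """Approximate the irrational number sqrt(2) to any requested precision.

    We can never output sqrt(2) exactly, but the *program* computing it is a
    finite object, which is the sense in which a computable real fits in
    finite space.
    """
    getcontext().prec = digits + 2  # a couple of guard digits for rounding
    return Decimal(2).sqrt()

print(sqrt2(50))
# Comparing two computable reals for exact equality is the step that may
# never terminate; comparing them at any fixed precision always does.
```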

[–] [email protected] 2 points 1 year ago (1 children)

I don't understand what you think the problem is. What do you mean, infinities can't be differentiated from each other? Infinite cardinals are by definition equivalence classes of sets that can be placed in bijection with one another. You can compare one cardinal to another by showing there exists an injection from a representative set of the first into a representative of the other. You can show equality by showing there is an injection both ways (the Cantor-Schröder-Bernstein theorem) or otherwise producing a bijection explicitly. Infinite ordinals may as well be copies of the natural numbers indexed by infinite cardinals, so of course we can distinguish them too.
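As a concrete example of "producing a bijection explicitly", here's a sketch in Python of the standard pairing of the naturals with the integers, which is exactly how one shows |ℤ| = ℵ₀; the function name is made up for illustration:

```python
def nat_to_int(n):
    """Explicit bijection N -> Z: 0, 1, 2, 3, 4, ... -> 0, 1, -1, 2, -2, ...

    Exhibiting such a map is precisely what it means for two infinite sets
    to have the same cardinality.
    """
    return (n + 1) // 2 if n % 2 else -(n // 2)

# The first 11 naturals hit every integer in [-5, 5] exactly once:
print(sorted(nat_to_int(n) for n in range(11)))
# -> [-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5]
```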

[–] [email protected] 1 points 1 year ago (1 children)

So far, AFAIK, we have two kinds of infinity: those that can be accommodated at the Grand Hilbert (e.g. the integers, the fractions, etc.) and those that cannot (the set of irrational numbers, the set of curves, the set of polytopes, etc.). This is why we had to differentiate orders of infinity, e.g. ℵ₀ (the Grand Hilbert set), ℵ₁ (the irrational set, the real set), ℵ₂ (???), ℵ₃ (?????), ℵₙ (??!??????????)

For infinities of higher order than ℵ₀, we can only tell whether they're equal to ℵ₁ or undetermined, which means their size is ℵ₁ or greater but still unknown.

Unless someone did some Nobel-prize-worthy work in mathematics that I haven't heard about, which is quite possible.

[–] [email protected] 1 points 1 year ago (1 children)

No, that's definitely not true. As I said, infinite cardinals (like ℵ₀, the cardinality of the naturals) are defined to be equivalence classes of sets that can be placed in bijection with one another. Whenever you have infinite sets that can't be placed in bijection, they represent different cardinals. By Cantor's theorem, the power set 2^X of any set X is strictly larger than X, and for infinite X the set of functions f : X -> X has that same cardinality 2^(|X|), so e.g. there are more real-valued functions of real numbers than there are real numbers. You can use this technique to get an infinite sequence of distinct cardinals (via Cantor's theorem, which has a simple constructive proof). And once you have all of those, you can take their (infinite) union to get yet another, greater cardinal, and continue that way. There are in fact more cardinalities obtainable this way than could fit into a set: the (infinite) number of infinite cardinals is too big to be an infinite cardinal.

You might be thinking of the generalized continuum hypothesis that says that there are no more cardinal numbers in between the cardinalities of power sets, i.e. that ℵ₁ = 2^(ℵ₀), ℵ₂ = 2^(ℵ₁), and so on.

[–] [email protected] 1 points 1 year ago (1 children)

It's quite possible that what I'm encountering is a momentary failure to understand Cantor's theorem, or rather the mechanism it uses to enumerate the innumerable. So my math may just be old.

[–] [email protected] 1 points 1 year ago

Cantor's theorem says the power set of X has a strictly larger cardinality than X.

When |X| is a natural number, the power set of X has cardinality 2^(|X|), since you can think of an element of the power set as a choice, for each element of X, of "in the subset" vs "not in the subset." Hence the notation 2^X for the power set of X.
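The finite case of that counting argument is easy to check mechanically; here's a quick sketch in Python using only the standard library (the helper name is illustrative):

```python
from itertools import combinations

def power_set(xs):
    """All subsets of xs, built from the in/out choice made per element."""
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

X = ["a", "b", "c"]
subsets = power_set(X)
print(len(subsets))  # 2**3 = 8 subsets, matching the 2^(|X|) count
```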

Cantor's theorem applies to all sets, not just finite ones. You can show this with a simple argument. Let X be a set and suppose there is a bijection f : X -> 2^(X). Let D be the set { x in X : x is not in f(x) }. (The fact that this is well defined is given by the comprehension axiom of ZFC, so we aren't running into a Russell's paradox issue.) Since f is a bijection, there is an element y of X so that f(y) = D. Now either:

  • y is in D. But then by definition y is not in f(y) = D, a contradiction.

  • y is not in D. But then by definition, since y is not in f(y), y is in D.

Thus, there cannot exist such a bijection f, so |2^(X)| != |X|. And since the map sending x to {x} is an injection from X into 2^(X), the inequality only goes one way: |2^(X)| > |X|.
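The same diagonal construction can be run on small finite sets to watch it produce the missed subset; this is an illustrative sketch, not part of the proof:

```python
def diagonal_witness(X, f):
    """Given any map f from X into subsets of X, return the diagonal set
    D = {x in X : x not in f(x)}, which cannot equal f(y) for any y in X."""
    return {x for x in X if x not in f(x)}

X = {0, 1, 2}
# An arbitrary attempt at mapping X onto its power set:
f = {0: {0, 1}, 1: set(), 2: {2}}
D = diagonal_witness(X, lambda x: f[x])
print(D)                 # the subset this particular f provably misses
print(D in f.values())   # False: no element of X maps to D
```

However f is chosen, D differs from f(x) at the element x itself, which is exactly the contradiction in the proof above.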

[–] [email protected] 2 points 1 year ago

Here is an alternative Piped link(s): https://piped.video/watch?v=21qPOReu4FI
