The notions of algorithms and computer science were invented to discuss the behavior of infinite sequences: examining whether there are descriptions of real numbers. This was extended by connecting complicated infinite structures like the hyperreals with decision theory problems.
That descriptions of other infinite sets also correspond to some kind of algorithm seems like a natural progression.
That it happens to be network theory algorithms rather than (e.g.) decision theory algorithms is worth noting — but hardly surprising, particularly because the sets examined arose from graph problems, i.e., a network.
To me this is quite surprising. Distributed systems were not designed to solve measure theory problems.
That you can express the constraints of network colorings (i.e., the measure theory problem) as network algorithms strikes me as a “well duh” claim — at least if you take that Curry-Howard stuff seriously.
Been a long way towards it. \o/
https://www.theregencyballroom.com/events/detail/?event_id=1...
This is enormously off topic but if one person sees this and ends up not missing the show then it was worth mentioning. I think there’s a reasonable crossover between math, discrete math, hacking, and mathcore :)
And Beyond!
One infinitesimal step for the numerical, one giant step for mathkind.
let x = x in x
Completely encapsulates a countable infinity.
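In the same spirit, a lazy self-referential definition packs the whole countable sequence into a few characters. A minimal sketch, assuming GHC Haskell; nats is just an illustrative name:

    nats :: [Integer]
    nats = 0 : map (+ 1) nats   -- 0, 1, 2, ... defined in terms of itself

    -- take 5 nats == [0,1,2,3,4]; the rest is only ever produced on demand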
However, I also am starting to believe that infinity doesn't exist.
Or more specifically, I want to argue that infinity is not a number, it is a process. When you say {1, 2, 3, ... } the "..." represents a process of extending the set without a halting condition.
There is no infinity at the end of a number line. There is a process that says how to extend that number line ever further.
There is no infinity'th prime number. There is a process by which you can show that a bigger prime number must always exist.
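That "process" is essentially Euclid's argument. A sketch in Haskell (biggerPrime is an illustrative name): given any finite list of primes, it produces a prime missing from the list, so the primes are never exhausted.

    -- The smallest divisor (>= 2) of (product ps + 1) is a prime,
    -- and it cannot be any prime in ps, since product ps + 1 leaves remainder 1 mod each of them.
    biggerPrime :: [Integer] -> Integer
    biggerPrime ps = head [ d | d <- [2 ..], (product ps + 1) `mod` d == 0 ]

    -- biggerPrime [2,3,5] == 31; biggerPrime [2,3,5,7,11,13] == 59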
The natural numbers themselves are just the process of repeatedly applying a successor function. For example, S(0) is 1, S(S(0)) is 2, S(S(S(0))) is 3, and so on.
There is no end of a number line. There are lines, and line segments. Only line segments are finite.
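To make the "process, not a number" view concrete, here is a sketch in Haskell (Nat, counting, and toInt are illustrative names):

    data Nat = Z | S Nat            -- a number is zero or the successor of a number

    counting :: [Nat]               -- the "process": keep applying S, with no halting condition
    counting = iterate S Z          -- Z, S Z, S (S Z), ...

    toInt :: Nat -> Integer
    toInt Z     = 0
    toInt (S n) = 1 + toInt n

    -- map toInt (take 4 counting) == [0,1,2,3]; there is no last element, only a "next"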
> There is no infinity'th prime number. There is a process by which you can show that a bigger prime number must always exist.
You misunderstand the concept of infinity. Cantor's diagonal argument proves that a strictly bigger infinity must always exist. "Infinity'th" is not a place on a number line; an infinity is the size of a set, which may be countable or uncountable, depending on what kind of infinity you're working with.
There are infinities with higher cardinality than others. Infinity relates to set theory, and if you try to simply imagine it as a "position" in a line of real numbers, you'll understandably have an inconsistent mental model.
I highly recommend checking out Cantor's diagonal argument. Mathematicians didn't invent infinity as a curiosity; it solves real problems and implies real constraints. https://en.wikipedia.org/wiki/Cantor's_diagonal_argument
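The diagonal construction itself fits in a couple of lines. A sketch in Haskell, modelling an infinite 0/1 sequence as a function from positions to Bool (Seq and diagonal are illustrative names):

    type Seq = Integer -> Bool

    -- Given any claimed enumeration of all sequences, build one that differs
    -- from the n-th sequence at position n, so it cannot be the n-th for any n.
    diagonal :: (Integer -> Seq) -> Seq
    diagonal enumerate = \n -> not (enumerate n n)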
Sure, but ordinal numbers exist and are useful. It's impossible to prove Goodstein's theorem without them.
https://en.wikipedia.org/wiki/Ordinal_number
https://en.wikipedia.org/wiki/Goodstein%27s_theorem
The statement and proof of the theorem are quite accessible and eye-opening. I think the number line with ordinals is way cooler than the one without them.
(Granted, there are other objects that may seem ugly which can only be constructed by reference to infinite things.)
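For anyone curious, the sequences themselves are easy to compute, even though the termination proof is where the ordinals come in. A sketch of the hereditary base-bumping step in Haskell (illustrative names, not a proof):

    -- A number in hereditary base b: a sum of terms c * b^e, with e again in hereditary base b.
    data HB = HB [(Integer, HB)]

    toHB :: Integer -> Integer -> HB
    toHB _ 0 = HB []
    toHB b n = HB [ (c, toHB b e) | (e, c) <- digits n 0, c /= 0 ]
      where digits 0 _ = []
            digits m e = (e, m `mod` b) : digits (m `div` b) (e + 1)

    evalHB :: Integer -> HB -> Integer
    evalHB b (HB terms) = sum [ c * b ^ evalHB b e | (c, e) <- terms ]

    -- Write n in hereditary base b, bump every b to b+1, subtract 1, repeat.
    goodstein :: Integer -> [Integer]
    goodstein = go 2
      where go _ 0 = [0]
            go b n = n : go (b + 1) (evalHB (b + 1) (toHB b n) - 1)

    -- goodstein 3 == [3,3,3,2,1,0]; goodstein 4 also reaches 0, but only after an astronomically long run.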
You can do a lot of things without assuming that the natural numbers keep going, but it is plain awful to work with.
Or in the other direction, if numbers stop at 3, that certainly won't falsify "every number above 7 is the sum of two other numbers". It will make it vacuously true, since there are no numbers above 7 at all. And the extremely strong conjecture immediately proves that the extremely weak one is false.
In a way, the axiom of infinity seems to behave much like other axioms that assert the existence of even larger mathematical "universes": it's worth being aware of what parts of a mathematical development are inherently dependent on it as an assumption, which is ultimately a question of so-called reverse mathematics.
https://en.wikipedia.org/wiki/Finitism
https://en.wikipedia.org/wiki/Constructivism_(philosophy_of_...
I claim the reason is that 5 is prime, while 10 is composite (10 = 5 times 2).
Therefore, 5 and 10, and 2, exist.
In any case, the existence of mathematical objects is a different kind of existence from that of physical objects. We can say a mathematical object exists just by defining it, as long as the definition doesn't lead to contradiction.
What the hell. What about Type Theory?
See this mathoverflow response here https://mathoverflow.net/a/437200/477593
As a matter of fact, ZFC fits CS quite poorly.
In ZFC, everything is a set. The number 2 is a set. A function is a set of ordered pairs. An ordered pair is a set of sets.
In ZFC: It is a valid mathematical question to ask, "Is the number 3 an element of the number 5?" (In the standard definition of ordinals, the answer is yes).
In CS: This is a "type error." A programmer necessarily thinks of an integer as distinct from a string or a list. Asking if an integer is "inside" another integer is nonsense in the context of writing software.
For a computer scientist, Type Theory is a much more natural foundation than Set Theory. Type Theory enforces boundaries between different kinds of objects, just like a compiler does.
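To make the "3 ∈ 5" point concrete, here is a toy encoding of von Neumann ordinals as sets-of-sets, sketched in Haskell (V, ordinal, and isElementOf are illustrative names):

    import qualified Data.Set as Set

    -- A toy "pure set": a set whose elements are themselves such sets.
    newtype V = V (Set.Set V) deriving (Eq, Ord)

    -- von Neumann ordinals: 0 = {}, n+1 = n ∪ {n}, so n = {0, 1, ..., n-1}.
    ordinal :: Int -> V
    ordinal 0 = V Set.empty
    ordinal n = let V prev = ordinal (n - 1) in V (Set.insert (V prev) prev)

    isElementOf :: V -> V -> Bool
    isElementOf x (V s) = Set.member x s

    -- ordinal 3 `isElementOf` ordinal 5  ==>  True: a perfectly well-posed question in this encoding,
    -- and exactly the kind of question a typed language would rather rule out at compile time.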
But, in any case, that ZFC is "typical" is an accident of history, and whether or not it's appropriate at all is debatable.
When I did a PhD in theoretical computer science, type theory felt like one niche topic among many. It was certainly of interest to some subfield, but most people didn't find it particularly relevant to the kind of TCS they were doing.
The reason ZFC is used isn't because it's a particularly pedagogical way of describing any branch of math (whether CS or otherwise), but because the axioms are elegantly minimal and parsimonious.
um... no... computer science is very concerned with the infinite. I'm surprised Quanta published this. I always think highly of their reporting.
I'm really torn about this, because I think they're providing a valuable service. But their general formula doesn't diverge a whole lot from run-of-the-mill pop-science books: a vague, clickbaity title, and then an article that focuses on personalities and the implications of discoveries while glossing over a lot of important details (and not teaching much).
But I have come to think, well actually, "approximate" is doing some heavy lifting there. And I have never used infinity for anything except to say "look, I don't know how high this should go, so go as far as you can go, and double that", which is really saying: you are bound by finite boundaries, you'll have to work within them, and the uncountable thing that I was thinking about is really finite.
Edit: Think of it like this
We know that it's most likely that the universe is infinite, but we can only determine how big it is by how far we can see, which is bounded by the speed of light and by the fact that we can only see matter emitting light. (I'm being careful here: if the big bang theory is right, and I am understanding it correctly, there is a finite amount of matter in the universe, but the universe itself is infinite.)
Also, a small aside: there’s a finite amount of matter in the visible universe. We could have infinite matter in an infinite universe.
> Set theorists use the language of logic, computer scientists the language of algorithms.
Computer science doesn’t use logic? Hello, Booleans.
So lazy, especially when you can ask an AI to tell you if you’re saying something stupid.
It wasn’t just that... it was like that all over the article.
I am getting really strong LLM article vibes.
> …that existed in the world of descriptive set theory — about the number of colors required to color certain infinite graphs in a measurable way.
> To Bernshteyn, it felt like more than a coincidence. It wasn’t just that computer scientists are like librarians too, shelving problems based on how efficiently their algorithms work. It wasn’t just that these problems could also be written in terms of graphs and colorings. Perhaps, he thought, the two bookshelves had more in common than that. Perhaps the connection between these two fields went much, much deeper. Perhaps all the books, and their shelves, were identical, just written in different languages — and in need of a translator.
The reason is simple: numbers are cuts in the continuum, while infinity isn't. It should not even be a symbolic notation for a very large number. This is not to say infinity doesn't exist; it just doesn't exist as a number, and it doesn't mix with numbers.
The limits could have been defined saying "as x increases without bound" instead of "as x approaches infinity". There is no target called infinity to approach.
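Indeed, unpacking the standard definition shows there is no object named "infinity" anywhere in it; the phrase "x approaches infinity" is just shorthand for "x increases without bound":

    \lim_{x \to \infty} f(x) = L
      \iff
    \forall \varepsilon > 0 \;\; \exists M \;\; \forall x > M : \; |f(x) - L| < \varepsilon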
Cantor's stuff can easily be trashed. The very notion of "larger than" belongs to finite numbers. This comparative notion doesn't apply to concepts that cannot be quantified using numbers. Hence one can't say some kind of infinity is larger than the other kinds.
Similarly, his diagonal argument about 1-to-1 mapping cannot be extended to infinities, as there is no 1-to-1 mapping that makes sense for things which are not numbers or uniquely identifiable elements. The mapping is broken. No surprise you get weird results and multiple infinities, or whatever else came to his mind while he was going through stressful personal situations.
This sounds like ranting from someone who doesn't deeply understand the implications of set theory or infinite sets. Cardinality is a real thing, with real consequences, such as Gödel's incompleteness theorems. What "weird results" are tripping you up?
> when he was going through stressful personal situations
Ad hominem. Let's stick to arguing the subject material and not personally attacking Cantor.
https://en.wikipedia.org/wiki/Cantor's_diagonal_argument#Con...
Further, all math is idealist bullshit; but it's useful idealist bullshit because, when you can map representations of physical systems into it in a way that the objects act like the mathematical objects that represent them, then you can achieve useful predictive results in the real world. This holds true for results that require a concept of infinities in some way to fully operationalize: they still make useful predictions when the axiomatic conditions are met.
For the record, I'm not fully against what you're saying. I personally hate the idea of the axiom of choice being commonly accepted; I think it was a poorly founded axiom that leads to more paradoxes than it helps with. I also wish the axiom of the excluded middle were actually tossed out more often, for similar reasons. However, when the systems you're analyzing do behave well under either axiom, the math works out to be so much easier with both of them, so in they stay (until you hit things like Banach-Tarski and you just kinda go "neat, this is completely unphysical abstract delusioneering", but you kinda learn to treat results like that the way you treat renormalizing poles in analytic functions: carefully, and with a healthy dose of "don't accidentally misuse this theorem to make unrealistic predictions when the conditions aren't met").
I can say it cannot be extended or applied, because the operation cannot be "completed". This is not because it takes infinite time; it is because we can't define what completion of the operation would mean, even as an imagined snapshot.
I can't do that for real numbers.
I don't think this definition is that weird. For example, by "larger" I might say I can easily "fit" all the rational numbers in the real numbers, but cannot fit the real numbers in the rational numbers.
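That "fitting" intuition is exactly cardinality via injections, and in fact the rationals even fit inside the naturals. A sketch of one well-known enumeration of the positive rationals (the Calkin-Wilf sequence), using Haskell's built-in Rational:

    -- Every positive rational appears exactly once in this list,
    -- so the positive rationals inject into the naturals (their index in the list).
    rationals :: [Rational]
    rationals = iterate next 1
      where next q = 1 / (2 * fromInteger (floor q) + 1 - q)

    -- take 7 rationals == [1 % 1, 1 % 2, 2 % 1, 1 % 3, 3 % 2, 2 % 3, 3 % 1]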
So, let's inspect pi. It's a fraction, the precision of which depends on how much you zoom in on it. You can take it as a constant just for the sake of having a name for it.
The zooming is finite for every fraction.
Only on hackernews.