Groups underpin modern math (quantamagazine.org)
120 points | 9 days ago | 7 comments
hiAndrewQuinn
8 days ago
[-]
Groups are pretty awesome. Understanding group actions in particular, which among other things is how those guys calculated how many different states a Rubik's cube can be in, knitted things together and was my "grow up" moment in abstract algebra class.

They're also damn hard to explain! You know one when you see one, but it takes a lot of practice with different concrete examples before what they're really about begins to shine through for most of us.

For any undergrads who are tempted to skip AA and go right to category theory - don't do it, at least take the group theory course. It's a great intermediate intuition pump that makes the initial juggling you have to do with the basics of categories feel much easier.

reply
kelseyfrog
8 days ago
[-]
Not just groups, magmas, semigroups, monoids, &c.

I acquired a taste for algebraic structures and category theory by way of Scala+cats. While I no longer work in that space, the concepts are universally applicable.
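
For anyone who hasn't met that ladder before, a quick sketch with throwaway examples of my own (nothing to do with cats):

  # magma: a set with a closed binary operation, no laws required.
  # Subtraction on the integers is a magma but not a semigroup (not associative):
  assert (5 - 3) - 2 != 5 - (3 - 2)

  # semigroup: associative; monoid: associative + an identity element.
  # String concatenation is a monoid with identity "":
  assert ("a" + "b") + "c" == "a" + ("b" + "c") and "a" + "" == "a"

  # group: monoid + inverses. Integer addition has identity 0 and inverse -n:
  assert 7 + (-7) == 0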

Just last week, I shared with teammates in our weekly symposium how merging counts was an associative operation, and how knowing that directly relates to being able to divide and conquer the problem into an implicit O(log n) operation. Being able to identify the operation as forming a semigroup directly contributed toward being effective in that problem space.

The abstractness of the concepts is a double-edged sword. It allows them to be broadly applied, but it also requires more effort on the part of the observer to form the connection.

reply
cced
8 days ago
[-]
Can you expand a bit on what the problem was and what the discussion was like and solution, if possible.

Thanks!

reply
kelseyfrog
8 days ago
[-]
For sure. The problem was writing an np.unique[1] that could handle large datasets. Specifically, the solution involved chunking the dataset, mapping np.unique across chunks, and then combining chunks. Merging the counts result is an associative operation and merging them in a tree-like computational graph implies O(log n) merges. The end result was being able to perform the calculation in seconds whereas the previous duration was days to weeks. Going from O(n) to O(log n) is magic.

Specifically this is work related to implementing large dataset support for the dedupe library[2]. It's valuable to be able to effectively de-duplicate messy datasets. That's about as much as I can share.

1. https://numpy.org/doc/stable/reference/generated/numpy.uniqu...

2. https://github.com/dedupeio/dedupe/blob/main/dedupe/clusteri...
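
To make the shape of that concrete, here is a minimal sketch of the chunk / map / tree-merge idea (my own reconstruction for illustration, not the dedupe code):

  import numpy as np

  def unique_with_counts(chunk):
      return np.unique(chunk, return_counts=True)

  def merge(a, b):
      # Merging two (values, counts) results is associative (and commutative),
      # so partial results can be combined in any grouping - a semigroup.
      values = np.concatenate([a[0], b[0]])
      counts = np.concatenate([a[1], b[1]])
      merged_values, inverse = np.unique(values, return_inverse=True)
      merged_counts = np.zeros(len(merged_values), dtype=counts.dtype)
      np.add.at(merged_counts, inverse, counts)
      return merged_values, merged_counts

  def tree_merge(results):
      # Pairwise rounds: O(log n) rounds deep (and easy to parallelize),
      # even though the total number of merges is still n - 1.
      while len(results) > 1:
          results = [merge(*results[i:i + 2]) if i + 1 < len(results) else results[i]
                     for i in range(0, len(results), 2)]
      return results[0]

  data = np.random.randint(0, 10, size=1_000)
  values, counts = tree_merge([unique_with_counts(c) for c in np.array_split(data, 8)])
  expected = np.unique(data, return_counts=True)
  assert np.array_equal(values, expected[0]) and np.array_equal(counts, expected[1])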

reply
SkiFire13
8 days ago
[-]
> merging them in a tree-like computational graph implies O(log n) merges.

Intuitively this doesn't make sense to me. You have a tree that has N leaves, and you have to perform a merge for each parent. There are however N-1 parents, so you'll still perform O(N) merges.

reply
dskloet
8 days ago
[-]
Why do you need to merge counts if you look for unique elements? Isn't the count always 1 for each element present? Isn't the key to shard your chunks so each element can appear in at most one chunk?
reply
ReleaseCandidat
8 days ago
[-]
Actually, what you are using is an equivalence relation, and you are generating the equivalence classes of your set; "your" associativity is transitivity. Whenever you are comparing things, it's almost always better to think of it as a relation instead of an algebraic structure. That way you already know beforehand that your algorithm should be reflexive, symmetric and transitive (in theory; of course, with computers there are numerics involved ;)

The steps to the solution are something like:

- I need all distinct elements of X

- Oh, that's a quotient set (the partition) of X by a/the equivalence relation (`==`)

- so my algorithm must be reflexive (yeah, trivially), symmetric (not so helpful) and transitive - now this I can use (together with the symmetry)

It's generally easier if you know beforehand that it must be e.g. transitive.
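
A toy sketch of that framing (my own example, with a made-up `same` predicate): computing the quotient set of a list by a pairwise relation, leaning on transitivity so each new element only needs to be compared against one representative per class.

  def quotient(xs, same):
      classes = []                    # each class is a list of equivalent elements
      for x in xs:
          for cls in classes:
              if same(x, cls[0]):     # symmetry + transitivity justify checking
                  cls.append(x)       # a single representative per class
                  break
          else:
              classes.append([x])
      return classes

  # "Equal up to numerical noise": reflexive and symmetric, but only
  # approximately transitive - the numerics caveat above.
  same = lambda a, b: abs(a - b) < 1e-9
  print(quotient([1.0, 2.0, 1.0 + 1e-12, 3.0, 2.0], same))
  # three classes: the two values near 1.0, the two 2.0s, and 3.0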

reply
lanstin
8 days ago
[-]
One of the most impressive early results on the path to the full classification (in terms of being able to understand the statement of the result, if not the 250 pages of the proof) is that all non-Abelian groups of odd order are not simple. https://en.wikipedia.org/wiki/Feit%E2%80%93Thompson_theorem
reply
vlovich123
8 days ago
[-]
> Physicists rely on them to unify the fundamental forces of nature: At high energies, group theory can be used to show that electromagnetism and the forces that hold atomic nuclei together and cause radioactivity are all manifestations of a single underlying force.

Wow. I hadn't read this. Does this force have a name & an accessible explanation of how it degrades to 3 separate forces at low energies?

reply
trashtester
8 days ago
[-]
It's not so much that there aren't 3 different forces, but rather that they are linked to each other in a way that causes them to "mix".

In particular, electromagnetism (EM) and the weak force (WF) are represented mostly by the U(1) and SU(2) groups, respectively.

In pure electromagnetism, U(1)_em is what we're observing. This group is linked to a field caused by electric charge.

But if we drill into it, there is also an underlying U(1)_y, which is linked to a hypercharge that is a combination of electric charge and the WF interaction strength.

The physics of the combined electroweak force is defined by the combined gauge group:

SU(2)_L x U(1)_y

From this fundamental physics we get (as energies get low enough) the Higgs mechanism through "spontaneous symmetry breaking".

This mixing produces new independent fields from linear combinations of the fields associated with the two gauge symmetries above. One of these gives rise to the observable photon, and the others give rise to the (kind of) observable W+, W- and Z bosons.

And also quite significantly, this mechanism also gives rise to the Higgs field itself, which in turn provides mass (inertia). Without the Higgs mechanism, the particles arising from "pure" U(1) and SU(2) fields would be massless.
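
For reference, the textbook form of those linear combinations, written with the weak mixing angle theta_W (standard convention, not something spelled out above):

  A_\mu =  \cos\theta_W \, B_\mu + \sin\theta_W \, W^3_\mu      (the photon)
  Z_\mu = -\sin\theta_W \, B_\mu + \cos\theta_W \, W^3_\mu      (the Z boson)
  W^\pm_\mu = (W^1_\mu \mp i W^2_\mu) / \sqrt{2}                (the W bosons)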

reply
alasr
8 days ago
[-]
Disclaimer: I'm not a physicist; just a hobbyist interested in these topics.

What I understood, while reading this part of the article, was that the author meant something along the lines of supersymmetry[1,2] (as groups are all about symmetry, up to isomorphism).

From CERN Supersymmetry article[1]:

  If supersymmetric particles were included in the Standard Model, the interactions of its three forces – electromagnetism and the strong and weak nuclear forces – could have the exact same strength at very high energies, as in the early universe.

--

[1] - https://home.cern/science/physics/supersymmetry

[2] - https://home.cern/science/physics/unified-forces

reply
pfortuny
8 days ago
[-]
Should be this (electroweak interaction):

https://en.m.wikipedia.org/wiki/Electroweak_interaction

reply
vlovich123
8 days ago
[-]
Hmmm mistake in the article then?

> the forces that hold atomic nuclei together

Isn't that the strong nuclear force, which isn't unified with electroweak according to Wikipedia? Ah, [1] suggests that maybe Wikipedia just hasn't caught up, and that the math suggests the strong nuclear force and even gravity may all be unified into a single such force. That's supremely fascinating.

[1] https://www.symmetrymagazine.org/article/what-is-the-electro...

reply
setopt
8 days ago
[-]
Disclaimer: I’m not a particle physicist, but I am a theoretical physicist. My info on this subject might be outdated.

My understanding is that the “electroweak unification” (electromagnetism + weak force) is considered experimentally confirmed, whereas the “grand unification” (electroweak + strong force) is something most physicists believe to be possible but the theories remain somewhat speculative. For instance, if true, such unification likely means that the proton should be an unstable particle that very slowly decays, and that has never been observed experimentally.

Unifying such a “grand unified theory” with gravity (general relativity) is however a much harder problem: While all the other forces are already understood in terms of quantum mechanics, gravity is currently understood in terms of geometry, and it’s not obvious how to combine those in a sane way. That’s where string theory, quantum loop gravity, etc, come in; but AFAIK, most prospective theories are very far from even experimental predictions, let alone experimental testing, and not everyone even agrees on the premise that gravity “should” be described using quantum field theory.

We just know that our current theories can’t be complete, since thought experiments like “what is gravity during a quantum double slit experiment” or “what really happens near and inside a black hole” can’t be answered in a fully satisfactory way using current theories.

reply
card_zero
9 days ago
[-]
This starts out as one of those "how they came up with a part of mathematics" stories, or explanations. But it rapidly gives up on explaining that. I don't think I've ever read a satisfactory explanation of what any mathematician was thinking when devising any part of mathematics. The subheading (which is probably written by an editor) is especially bad:

> What do the integers have in common with the symmetries of a triangle? In the 19th century, mathematicians invented groups as an answer to this question.

No. There is no way they were just sitting around and somebody said, "you know what, I bet all the ways you can flip a triangle have something in common with adding the first 6 integers modulo 6, let's try inventing some abstract concepts and see if we can find one that makes this true!"

The quote from Sarah Hart says much the same thing: "It's not like a bunch of mathematicians got together one day and said, 'Let's create an abstract structure just for a laugh.'" But what were they doing? One group of size six is adding integers, which seems like an important thing (even when constrained to the first six). The other is ... flipping a triangle around. Who cares about that? The only real-world example of symmetries is when your mattress gets lumpy and you have a choice of ways to turn it. Why speculate, even for a moment, that turning a mattress and adding integers might be the same category of thing? Why reify flippin' shapes like that, why think about symmetries as a thing at all? I don't get it.

reply
isotypic
8 days ago
[-]
You might like the book "A History of Abstract Algebra" by Israel Kleiner - it goes over specifically the developments leading to the invention of the abstract group. The answer to your questions is that nobody really sat down and invented the group from the ether - it's more accurate to say someone sat down and said "Hey, all these things we've been studying for the past 50 years are all the same thing if we think of it this way", and then the mathematical community eventually gets around to realizing it's a useful abstraction (if it is one) as people build on it, or work more without it and eventually realize the abstraction would be helpful. For groups, this played out in how Cayley defined the abstract group in the 1850s, but it only started to gain more widespread usage in the 1870s.

As for what they were doing, the main areas where groups appeared around this time were permutation groups (roots of polynomials), abelian groups (various number-theoretic constructions/statements), and geometry (studying geometry by studying groups of transformations, like isometries for Euclidean geometry).
reply
082349872349872
9 days ago
[-]
https://en.wikipedia.org/wiki/History_of_group_theory

> flipping a triangle around. Who cares about that?

Chemists, for one. (the people who brought us Haber-Bosch, plastics, etc.) See https://en.wikipedia.org/wiki/Spectroscopy#Molecules .

reply
defrost
9 days ago
[-]
Also roller manufacturers that want a smooth ride: https://en.wikipedia.org/wiki/Reuleaux_triangle
reply
card_zero
9 days ago
[-]
Yes, but who cared about it in 1830 (or earlier), and why did they imagine even for a moment that it might have some equivalence to the number line?

Looks like it was all about the quintics, somehow, but I don't know why they made the leap from that problem to geometry. I'm thinking maybe the equivalence to triangle-flipping is just like an amusing conceptual side-effect that happened by accident when working out stuff about permutations?

I don't think a Reuleaux roller functions very well if you flip it around a different axis, anyway, but I'll let you off because they're cute.

reply
trashtester
8 days ago
[-]
Galois was the first to realize the connection between polynomial roots and geometric symmetries through group theory.

Some simple examples can be found in subgroups of U(1). For instance, Z_n is linked to the rotations of a regular polygon with n sides, and also to n-th roots of complex numbers on the unit circle.

Kind of like how Z_12 is linked to an analog clock.
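
Spelling that link out (a standard fact, in my wording): the n-th roots of unity form a cyclic group under multiplication, isomorphic to Z_n under addition mod n, which is exactly the clock picture for n = 12.

  e^{2\pi i k/n} \cdot e^{2\pi i j/n} = e^{2\pi i ((k + j) \bmod n)/n}
  \{ e^{2\pi i k/n} : k = 0, \dots, n-1 \} under multiplication \cong (Z_n, + mod n)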

reply
defrost
9 days ago
[-]
> Yes, but who cared about it in 1830 (or earlier)

Anybody with a Sphinx to move, struggling to make a purrfect circle.

> I don't think a Reuleaux roller functions very well if you flip it around a different axis

As an extruded 2D shape -> 3D solid it lacks a little in the mirror symmetry department, rotation is pretty much its limit .. admittedly a lacklustre submission, but cute indeed.

reply
082349872349872
9 days ago
[-]
> why did they imagine even for a moment that it might have some equivalence to the number line?

A few of the groups which Galois introduced are what we now call Abelian (after Abel), which is to say that we can forget the order of elements within a product: AB == BA (if you get up to leave the beach and put on flip-flops then put on a shirt, you wind up in the same state as if you get up to leave the beach and put on a shirt then put on flip-flops)

Number theory studies products of primes, and here, always, although we generally must write down multiplicands in some order, it doesn't matter which: 2*3 == 3*2.

This connection would be enough for any modern undergraduate to consider applying general machinery built for quintics to the specific case of the number line, but in those days it took Euler (working before Galois) and Gauss[0] (working after? check this) to blaze the trails along this particular connection.

> maybe the equivalence to triangle-flipping is just like an amusing conceptual side-effect

Not just conceptual: spectroscopy is exactly why chemists are taught a little group theory, and triangle flipping is the simplest non-trivial[1] example.

Like programmers, who spend their days building up data structures and picking them apart[2], chemists are concerned with synthesis (building up molecules) and analysis (picking them apart; in principle this includes synthetic steps that make small molecules from bigger ones, but in practice this means checking your product at the end of synthesis to confirm that you made lots of what you were hoping to make[3], and little of what you didn't want to make[4]).

In particular, spectroscopy is a useful tool in chemical analysis, and very often[5] parts of a molecule will have a triangular symmetry, meaning that the peaks in a recorded spectrum[6] can be explained via a representation of the triangle-flipping group. If you set out to make, say, ammonia[7], but don't get any triangle-flipping parts in the spectrum when you run your tests ("characterise your product"), you know you failed[8].

https://www.smbc-comics.com/comics/1725209167-20240901.png

[0] when Laplace was asked who the greatest german mathematician was, he replied "Pfaff". when asked why not Gauss, he explained "you asked for the greatest german mathematician; Gauss is the greatest european mathematician" (compare: the LUB of a set need not be a member of that set — EDIT: I guess the set of working mathematicians is always finite, so this comparison falls)

[1] in 0-D, a point has only the identity, so it's a degenerate group; in 1-D, a line segment does have a symmetry group (isomorphic to the booleans which are so important to CS) but unfortunately children do not learn about digons in elementary school, and must wait until they discover computer graphics to learn that edge AB is distinct from edge BA; indeed, they're explicitly taught to ignore that distinction in high school geometry, leaving the first non-trivial pedagogically-suitable example to be in 2-D: the triangle

[2] we've made some progress on also building up functions with the same aplomb as we handle data, but we're still not very comfortable when it comes to taking functions apart

[3] just as computer scientists often have a better-than-average knowledge of computer cracking, and physicists of bomb geometry, chemists tend to have a better-than-average knowledge of street syntheses. In particular, I have a second hand anecdote of undergraduates, who, having been in the process of characterising a synthetic product one evening, were interrupted by a grad student who, just by looking at the spectral lines, told them he hadn't seen anything that night but if they wished to continue exploring those particular [synthetic] pathways, they had better do so independently of university equipment.

[4] just as software engineers (who know what corners to cut to produce huge numbers of right answers and an acceptable number of wrong ones much more cheaply than only right answers) are generally paid better than researchers, ChemE's (who know what corners to cut to produce huge amounts of wanted product and an acceptable amount of unwanted) are generally paid better than their purer colleagues.

[5] why? (hint: it's the same mechanism —related to Natural primes— that makes binary taxonomies so popular)

[6] indeed, "spectrum" has been reborrowed back into maths to refer to something in algebraic geometry which is currently beyond my ken. If you poke around these areas long enough, you'll also find that von Neumann (who had physical, computational, and mathematical reasons to be interested) has had the "von Neumann regular rings" named after him, and rings are nothing but a pair of groups which interact in a certain manner. (the "regular" here being related to the "regular" in regular expressions, btw) Exercise: do regular expressions contain any rings?

[7] as the last century taught us, being able to make ammonia is very powerful, having applications both desirable and undesirable.

[8] Exercise: if you do see signs of triangle-flipping, is that enough to be sure you just made ammonia?

reply
card_zero
9 days ago
[-]
Thank you for the extensive reply. There's a hint in there about commutativity inspiring the connection, but my mental model is now simply "Euler did it", which somewhat like creationism relieves me from having to ask further questions.

Something I used to imagine: what if we were radically different creatures, like ant colonies, and by habit we communicated non-linearly (with thousands of limbs and organs swarming in parallel all over our mathematical work, perhaps written in 3D)? That could make equations mostly trivial, since our symbols wouldn't be constrained to any particular arrangement in the first place: and that seems kind of advantageous. But then grasping the concept of "non-commutative" would be a real strain for these poor ant-hills. They'd have to deliberately reintroduce linear ordering, maybe with special symbols to mark precedence.

reply
082349872349872
8 days ago
[-]
(a) if you haven't read it, Chiang, Story of Your Life (1998) might have interesting aliens.

(b) the point about special symbols is a good one. Reading older maths papers is cool because you get to see all sorts of things people tried before we settled on what we use now. Two works that come immediately to mind: Principia Mathematica (1910) uses various numbers of dots instead of parentheses to mark precedence, while Peano, Arithmetices principia: nova methodo exposita (1889) uses very modern-looking notation, including parens as we would use them, but its expository text is all in Latin!

(c) I don't think your aliens would have any more trouble with non-commutative than we have with commutative. Have you heard of the Boom Hierarchy? It starts with trees; when we add an associative law, so (AB)C == A(BC), then we only have flat lists; when we add a commutative law, so AB == BA, then we only have unordered bags; and finally when we add an idempotent law, so AA == A, then we have unduplicated sets. It turns out (exercise!) that if we have information encoded in any of these representations, we always have at least one way to represent the same information in all the other representations, such that we can "round trip" between any two levels of this hierarchy without losing any information.

So for programming, where we care about time and space, picking ordered or unordered representations can be very important, but for maths, where all that matters is the existence of invertible functions between all these representations, that decision is unimportant. Does that make sense?
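
A small Python sketch of that collapse (my own stand-ins for the four levels, nothing canonical): nested tuples for trees, lists, collections.Counter for bags, and sets.

  from collections import Counter

  a, b, c = ("x",), ("y",), ("x",)

  # trees: combine by pairing, no laws - grouping matters
  assert ((a, b), c) != (a, (b, c))

  # lists: concatenation is associative - grouping stops mattering, order still does
  assert (list(a) + list(b)) + list(c) == list(a) + (list(b) + list(c))

  # bags: Counter union is also commutative - order stops mattering, multiplicity still does
  assert Counter(a) + Counter(b) == Counter(b) + Counter(a)

  # sets: union is additionally idempotent - only membership matters
  assert set(a) | set(a) == set(a)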

reply
User23
8 days ago
[-]
> That could make equations mostly trivial, since our symbols wouldn't be constrained to any particular arrangement in the first place

You might find this little tidbit[1] from C.S. Peirce by way of John Sowa interesting then. Existential Graphs (EG) are an unordered diagrammatic representation of mathematical logic. And Peirce is the real deal. His more conventional notation was adopted by Peano (who substituted the familiar symbols for capital sigma and pi, which created confusion when used in the context of broader proofs, even though sigma and pi pretty much directly correspond to what existence and universality mean), and he is credited as an independent co-discoverer of both quantifiers with Frege.

  For EGs, only one axiom is necessary: a blank sheet of assertion, from which all the axioms and rules of inference by Frege, Whitehead, and Russell can be proved by Peirce’s rules. As an example, Frege’s first axiom, a⊃(b⊃a), can be proved in five steps by Peirce’s rules

Peirce gives us the entire predicate calculus with three rules and one axiom. And of course it's all built on NAND.

And another teaser:

  In the Principia Mathematica, Whitehead and Russell proved the following theorem, which Leibniz called the Praeclarum Theorema (Splendid Theorem). It is one of the last and most complex theorems in propositional logic in the Principia, and the proof required a total of 43 steps ... With Peirce’s rules, this theorem can be proved in just seven steps starting with a blank sheet of paper.

John Sowa himself is also no slouch, having been one of the leading lights of the earlier AI push. I expect advances in modern AI will come when we stop trying to do everything with ngrams and start building on richer models of knowledge representation.

[1] https://www.jfsowa.com/pubs/egtut.pdf

reply
kjellsbells
8 days ago
[-]
I don't want to get too far into a joint and doritos speculation here, but I wonder if the fact that humans have a very small capacity for holding multiple objects in their minds, a small number of physical digits, and excellent visual acuity is why we do a lot of math the way we do. Group theory comes out of symmetry for example. Algorithms come out of linear stepwise problem solving. It takes us considerable mental effort to think about problems in ways that are not like this.

An example that stayed with me for years is when Adleman of RSA fame considered whether DNA could be used as a computer. (Spoiler: yes.) It basically does all the computations at once, and then discards all the non-optimal solutions.

reply
tzs
8 days ago
[-]
> The only real-world example of symmetries is when your mattress gets lumpy and you have a choice of ways to turn it.

I once failed to notice that, and it led to a lot of teasing at my expense.

It was my second year as an undergraduate at Caltech. Our rooms were rectangles with a door on one of the short sides, a window and radiator opposite that, a bed and a desk along one of the long sides, and a closet and sink along the other long side.

I decided I wanted to switch where my head and feet were when sleeping, so started trying to turn the bed around. The room was not wide enough to simply swing it around, but it was tall enough that if I lifted one end high enough I'd be able to pivot it.

That was difficult as a one man job, not helped by the growing crowd of people standing outside my door watching with obvious amusement as several times I almost dropped the bed on myself.

I finally managed it, and then the spectators pointed out that the bed consisted of a rectangular frame with a rectangular mattress, which is symmetrical under the rotation operation that I had just painfully taken a great deal of time to execute, and that the only thing that determines which is the "head" end and which is the "feet" end is which end you tuck your top sheet in.

I could have simply waited for the weekly linen exchange, which I would be removing all the linens for, and then put the new linens on with the tuck on the other side and accomplished my switch with literally no extra work at all.

It took a long time for people to stop making fun of me.

reply
otoburb
8 days ago
[-]
>>which is symmetrical under the rotation operation that I had just painfully taken a great deal of time to execute

This critical symmetry property only holds if the mattress is assumed to be uniformly lumpy (including no discernable lumps at all), which perhaps in your case it was but you might have had the last laugh if you’d challenged them on that point.

reply
tzs
8 days ago
[-]
They still would have had the last laugh because even if I could justify rotating the mattress I wouldn't have been able to justify rotating the frame, which was what made the operation difficult.
reply
andrewflnr
8 days ago
[-]
I think it starts as a nagging feeling that there's a common thread between diverse stuff like transforming triangles, transforming more complicated/interesting shapes, numbers, and other weirder stuff. It's a natural instinct, at least for math people, to try to nail down and name that common thread. And then it takes, as the article mentioned, a lot of thinking and trying things to figure out the best way to formalize it. At least that's how I feel it in a niche way, trying to figure out my own formalisms for some things. Maybe you're just not that type of person, though.
reply
bbor
8 days ago
[-]
This is close, but I think it's more than an instinct: it was a philosophical challenge. Math is part of a larger project to formalize thought, and "we have a bunch of tools for a bunch of different things that work in different ways" is a lot less metaphysically/intuitively satisfying than "we have a single cohesive system of formalization that can be applied to any system of quantities and qualities found in human experience, all based on the same solid foundation."

This challenge/goal was best expressed by Bertrand Russell and Alfred Whitehead, I think: https://plato.stanford.edu/entries/principia-mathematica/

  The present work has two main objects. 
  
  One of these, the proof that all pure mathematics deals exclusively with concepts definable in terms of a very small number of fundamental concepts, and that all its propositions are deducible from a very small number of fundamental logical principles... will be established by strict symbolic reasoning...
  
  The other object of this work... is the explanation of the fundamental concepts which mathematics accepts as indefinable. This is a purely philosophical task.
reply
andrewflnr
8 days ago
[-]
Well yes, but it takes a certain kind of instinct to take on that philosophical challenge, and again to guide your particular angles of attack on it. I took the question to be, within the framework of formalized math, how do people arrive at particular abstractions like groups? How do you pick the axioms and rules?
reply
j2kun
8 days ago
[-]
If you're asking for a legitimate explanation for why mathematicians came up with groups, it's because they wanted to find roots of polynomials (or rather, prove one cannot find a general formula for solving large-degree polynomials).

The complex roots of polynomials satisfy symmetry properties. The group structure of those symmetries allows one to discriminate when one can and cannot solve the polynomial using elementary operations (+,-,*,/) and radicals (nth roots). They call this "Galois Theory", and group theory grew out of it to streamline the ideas about symmetry so they could be applied elsewhere, particularly in the study of geometry and non-Euclidean geometry.
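
To make "discriminate when one can and cannot solve" concrete (a standard textbook statement, in my summary): a polynomial is solvable by radicals exactly when its Galois group is a solvable group, i.e. it admits a chain of normal subgroups with abelian quotients.

  S_4 \rhd A_4 \rhd V_4 \rhd Z_2 \rhd \{e\}   (every quotient abelian)

So S_4 is solvable and the general quartic has a radical formula; but A_5 is simple and non-abelian, so S_5 admits no such chain and the general quintic has no formula in +, -, *, / and nth roots.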

reply
jacobolus
8 days ago
[-]
If someone's interested in a more detailed discussion of this, here's an out-of-copyright paper that turned up in a 2 minute literature search: https://www.jstor.org/stable/2972411
reply
Someone
8 days ago
[-]
Slight nitpick: I don’t think anybody ever set out to prove the non-existence of a general formula for solving large-degree polynomials.

The goal always was to find one but they had to settle for second best: proving that there is no such formula, and thus that the search was over.

reply
mrkandel
8 days ago
[-]
My professor explained that Galois originally thought about the subgroups of S_n, i.e. the set of all the permutations of n objects.

Thinking about the permutations of n objects is rather natural, especially when thinking about roots of polynomials: when you look at the roots of some polynomial, you will see that some permutations of the roots are legal and some are not. And then when you investigate that, you start to notice there are sets of permutations that can operate independently of the other permutations, and those are the subgroups. The concept of a "group" as an abstract term in itself was not there; it was codified later on.
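
A tiny worked example of "legal" permutations (my illustration): take p(x) = (x^2 - 2)(x^2 - 3), with roots +-sqrt(2) and +-sqrt(3).

  A legal permutation must preserve every algebraic relation among the roots,
  e.g. sqrt(2)^2 = 2 and sqrt(2) + (-sqrt(2)) = 0.
  So sqrt(2) -> -sqrt(2) is allowed, but sqrt(2) -> sqrt(3) is not, since it
  would force 2 = sqrt(2)^2 -> sqrt(3)^2 = 3.
  The legal permutations (flip the sign of sqrt(2) and/or of sqrt(3)) form
  Z_2 x Z_2, a subgroup of S_4 acting on the four roots.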

reply
bbor
8 days ago
[-]

  The only real-world example of symmetries is when your mattress gets lumpy and you have a choice of ways to turn it.

I appreciate where you're coming from and love the passion, but this couldn't be further from the truth. Symmetry is the basis of life, not to mention physics and beauty!

https://people.math.harvard.edu/~knill/teaching/mathe320_201...

> Quantum mechanics represents the state of a physical system by a vector in a space of many, actually of infinitely many, dimensions. Two states that arise from each other, either by a virtual rotation of the system of electrons or by one of their permutations, are connected by a linear transformation associated with that rotation or that permutation. Hence *the profoundest and most systematic part of group theory*, the theory of representations of a group by linear transformations, comes into play here. I must refrain from giving you a more precise account of this difficult subject. But here symmetry once more has proved the clue to a field of great variety and importance.

> From art, from biology, from crystallography and physics I finally turn to mathematics, which I must include all the more because the essential concepts, *especially that of a group*, were first developed from their applications in mathematics.

- Hermann Weyl's Symmetry, p. 135

reply
anon291
8 days ago
[-]
I mean as someone who was not into this but became into this when I started Haskell, it's because as you play with things it becomes obvious that things behave similarly.

For example, you learn addition and multiplication with numbers for dealing with real world quantities.

Then you mess with Booleans for computers. You'll notice that there's a distributive law for those as well as for numbers, and that in this setting "or" behaves like addition and "and" behaves like multiplication.

So then one thinks why that's the case (hint: it has to do with zero and one)

And as you do this you think... Well what else has things that behave this way.

And then you're like... Well what if you don't have multiplication, then what? And it goes on and on.

It's the same thing as when you're writing code and you notice two classes have similar functionality so you create a base class. Same idea, different terminology.
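
A quick check of that parallel in code (my own toy, not tied to any library): with False playing the role of 0 and True the role of 1, "or" and "and" obey the same identity and distributive laws as + and *.

  from itertools import product

  for a, b, c in product([False, True], repeat=3):
      assert (a and (b or c)) == ((a and b) or (a and c))  # distributivity
      assert (a or False) == a                             # False acts like 0 for "or"
      assert (a and True) == a                             # True acts like 1 for "and"
      assert (a and False) == False                        # False annihilates under "and"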

reply
esperent
8 days ago
[-]
In the case of group theory you'd probably just find that these ideas occurred in a flash of mostly subconscious insight while the mathematician was actually thinking about how the French royalists were conspiring to suppress their previous breakthroughs, or how their lover was actually a royalist plant put in place to trick them into a fatal pistol duel. Something of that ilk, anyway.
reply
larodi
8 days ago
[-]
Fascinated how Quanta Magazine can retell university-grade material in a very entertaining way. Wish all professors read it (the magazine) and were perhaps inspired to tell stories this way, and maybe more people would stay in math.
reply
jebarker
8 days ago
[-]
Seeing the beauty of the correspondence between symmetry groups and Platonic Solids for the first time is a standout memory from my math undergrad. Groups are awesome.
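
For anyone who hasn't seen it, the correspondence in question (a standard result, from memory) pairs each solid's rotation group with a small permutation group, with dual solids sharing a group:

  tetrahedron                -> A_4 (order 12)
  cube / octahedron          -> S_4 (order 24)
  dodecahedron / icosahedron -> A_5 (order 60)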
reply