When I look back at the various mathematical courses I took, most had at least one theorem that I really liked. Usually I like it because the proof has a surprising trick; sometimes it's the unexpected conclusion, or the unintuitive feel of the result. In other cases it's simply the elegance of the proof, or of the result itself.
Without further ado, here’s a selection of my favorite theorems, in no particular order:
1. Linear Algebra: the Cayley-Hamilton theorem. When I first grokked the fact that you can substitute matrices for the variables in polynomials, I was awestruck. Then I learned that you can define e^A using a Taylor series, but the fun doesn't stop there. Using the eigenvalues you can greatly simplify the calculation, and it all "works out the same" (i.e., if A = P^-1 D P and D is diagonal, then p(A) = P^-1 p(D) P; this also works for Jordan forms). Also, since you can show that the complex numbers are isomorphic to the 2×2 matrices of the form [[a, b], [-b, a]], and that the calculations are exactly the same – well, everything "fell into place" for me. At the time it seemed to be one of the joys of mathematics. (A small numerical sketch appears after the list.)
2. Calculus: the Bolzano-Weierstrass theorem. One of the first non-trivial results you learn in calculus. I originally learned the version that says "Every bounded infinite set has a limit point", and its proof seemed a bit more elegant to me than the proof of the Wikipedia version. I liked it so much that once, while I was in boot camp during my service, I worked it out again just to keep my mind working. Good times.
3. Probability: the elegant law of total variance, V(X) = E(V(X|Y)) + V(E(X|Y)). Just the sight of it makes one sigh with contentment, and the result itself is very nice. (A quick simulation check appears after the list.)
4. Calculus, again: Stokes' theorem and its relatives. Very useful and rather non-intuitive; in layman's terms it says that you can reason about what happens inside a region just by knowing what happens on its boundary. (The general statement is written out after the list.)
5. Numerical Analysis: Richardson extrapolation. One of the most elegant forms of bootstrapping: you start with a simple approximation method as a building block, and you end up with a very high-quality approximation. (A small sketch appears after the list.)
6. Computability: the parameter theorem (also known as the s-m-n theorem). Especially elegant: it essentially gives a mathematical definition of the "bind" operation for function parameters. In simple terms, it uses the source code of a function f(x, y) to produce the source code of a function g(y) such that g(y) = f(a, y) for a fixed a. The nice thing about it is that it works only on source code, without ever calling the functions themselves. (A rough analogy in code appears after the list.)
This theorem had the added bonus that once I grokked it, the test in computability was very, very easy :)
7. Functional analysis: here it's a relatively minor result that I ended up remembering distinctly. Given z1, …, zn which are linearly independent in E, show that there exists a d > 0 such that any w1, …, wn satisfying ||wi – zi|| < d for each i are also linearly independent. The footnote says that such a finite, linearly independent set is called stable. When visualizing it I think of it this way: given such a set, kick it. As long as you don't kick it too strongly – it will stay linearly independent. Now that's stable.
8. Mathematical Logic: The Compactness theorem: “a set of first-order sentences has a model if and only if every finite subset of it has a model”. One direction is almost trivial, but the other is deep. When studying for the test in this course, I remember being stuck for days on an exercise that required the use of this theorem. Once I fully understood the method of its use, it became a favorite.
(By the way, the exercise was the following: let G be a countable set of first-order statements and p a first-order statement. Show that if p is true in every countable model of G, then G |= p.)
9. Cryptography: I had learned a bit of cryptography on my own before taking the cryptography course. When I did take it, though, two methods were especially memorable. The first was the "meet in the middle" attack. Not to be confused with "man in the middle", this method attacks symmetric ciphers constructed by repeatedly applying a simpler cipher. This known-plaintext attack got its name from its method of operation: the attacker computes all possible decryptions of the ciphertext and stores them in a lookup table. He then computes all encryptions of the plaintext and looks them up in that table. Once a match is found, the combination of the encryption key and the decryption key used is the key of the composed cipher. (A toy version appears after the list.)
10. The second cryptography result that I liked was secret sharing. Trivial secret sharing is so simple, and yet so effective, that when I first learned it I thought: "how come I didn't think of this before?" (A sketch follows right after this list.)
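For item 1, here is a minimal numerical sketch of both claims, using an arbitrarily chosen diagonalizable matrix and its characteristic polynomial (my own illustration, not anything from the original course):

```python
import numpy as np

# An arbitrary diagonalizable matrix (illustrative choice).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Its characteristic polynomial is p(t) = t^2 - 4t + 3.
def p(M):
    """Evaluate p at a square matrix M (constant term times the identity)."""
    I = np.eye(M.shape[0])
    return M @ M - 4 * M + 3 * I

# Cayley-Hamilton: A satisfies its own characteristic polynomial.
print(p(A))                          # ~ the zero matrix

# The same computation through the eigendecomposition.
# (numpy's convention is A = P D P^-1, the mirror image of A = P^-1 D P above.)
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)
print(P @ p(D) @ np.linalg.inv(P))   # ~ the zero matrix again
```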
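For item 3, here is a rough Monte Carlo check of the law of total variance; the particular two-stage experiment is just an arbitrary illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Two-stage experiment: Y uniform on {0, 1, 2}, then X | Y ~ Normal(mean=Y, std=1+Y).
Y = rng.integers(0, 3, size=n)
X = rng.normal(loc=Y, scale=1 + Y)

# Left-hand side: the total variance of X, estimated from the sample.
total_var = X.var()

# Right-hand side: E[V(X|Y)] + V(E[X|Y]), computed from the known conditionals.
e_var = np.mean([(1 + y) ** 2 for y in (0, 1, 2)])   # E[V(X|Y)], since V(X|Y) = (1+Y)^2
var_e = np.var([0, 1, 2])                            # V(E[X|Y]), since E[X|Y] = Y

print(total_var, e_var + var_e)   # the two numbers should agree closely
```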
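Item 4's "boundary tells you about the interior" idea is, in its modern form, the generalized Stokes theorem: for a smooth, compact, oriented n-dimensional manifold Ω with boundary ∂Ω and a smooth (n−1)-form ω,

```latex
\int_{\Omega} d\omega \;=\; \int_{\partial\Omega} \omega
```

Green's theorem, the divergence theorem and the classical Kelvin-Stokes theorem are its low-dimensional special cases.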
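For item 5, here is a small sketch of Richardson extrapolation applied to the central-difference derivative; the function, point and step size are arbitrary illustrative choices:

```python
import math

def central_diff(f, x, h):
    """Basic building block: central difference, error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h):
    """One Richardson step: combine estimates at h and h/2 so the O(h^2)
    error terms cancel, leaving an O(h^4) approximation."""
    return (4 * central_diff(f, x, h / 2) - central_diff(f, x, h)) / 3

x, h = 1.0, 0.1
exact = math.cos(1.0)   # derivative of sin at x = 1

print(abs(central_diff(math.sin, x, h) - exact))   # error of the simple method
print(abs(richardson(math.sin, x, h) - exact))     # much smaller error after one step
```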
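For item 6, the "bind" flavor of the parameter theorem is loosely what partial application does in programming. Here is a toy analogy in Python (functools.partial for the runtime flavor, and a string-manipulation version for the "source code only" spirit); it is only an analogy, since the real theorem is about indices of computable functions:

```python
from functools import partial

def f(x, y):
    # An arbitrary two-argument function; its body is irrelevant to the point.
    return x * 10 + y

# Runtime flavor: "bind" the first argument to a = 3 without calling f.
g = partial(f, 3)
print(g(7), f(3, 7))      # 37 37

# Closer to the theorem's spirit: build *source code* for g from the source
# code of f, without executing either; only the very last line runs anything.
f_source = "def f(x, y):\n    return x * 10 + y\n"
a = 3
g_source = f_source + f"def g(y):\n    return f({a}, y)\n"
namespace = {}
exec(g_source, namespace)
print(namespace["g"](7))  # 37 again
```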
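For item 9, here is a toy meet-in-the-middle attack on "double encryption" with a deliberately tiny XOR-and-rotate cipher (8-bit keys), so the lookup table fits in one comprehension; real attacks work the same way, just against real block ciphers and far larger key spaces:

```python
# Toy cipher: 8-bit key, "encryption" is XOR then rotate left (purely illustrative).
def enc(k, m):
    x = (m ^ k) & 0xFF
    return ((x << 1) | (x >> 7)) & 0xFF

def dec(k, c):
    x = ((c >> 1) | (c << 7)) & 0xFF
    return (x ^ k) & 0xFF

# Double encryption with two independent 8-bit keys (the composed cipher).
k1, k2 = 0x3A, 0xC5
plaintext = 0x42
ciphertext = enc(k2, enc(k1, plaintext))

# Meet in the middle: encrypt the known plaintext under every possible first key,
# decrypt the ciphertext under every possible second key, and look for a match.
table = {enc(k, plaintext): k for k in range(256)}                 # middle value -> k1 guess
candidates = [(table[dec(k, ciphertext)], k)
              for k in range(256) if dec(k, ciphertext) in table]

# With a single known pair there are false positives; a second known
# plaintext/ciphertext pair would narrow the list down to the real key.
print((k1, k2) in candidates)    # True: the real key pair is among the candidates
```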
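And for item 10, a sketch of trivial (all-shares-required) XOR secret sharing: n−1 random shares plus one computed share; any n−1 of them look like random noise, while all n together recover the secret. Threshold schemes such as Shamir's are the fancier relatives.

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int) -> list[bytes]:
    """Split `secret` into n shares; all n are needed to reconstruct it."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = reduce(xor_bytes, shares, secret)   # secret XOR share_1 XOR ... XOR share_{n-1}
    return shares + [last]

def combine(shares: list[bytes]) -> bytes:
    """XOR all shares together to recover the secret."""
    return reduce(xor_bytes, shares)

shares = split(b"attack at dawn", 3)
print(combine(shares))       # b'attack at dawn'
print(combine(shares[:2]))   # random-looking bytes, reveals nothing about the secret
```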
There are obviously many more elegant theorems, some of which I've learned during my studies, and I sure hope to learn a few more. Still, these are special. As a high school math teacher once told us about the Pythagorean theorem: "I want you to remember the proof even if I wake you up in the middle of the night". The theorems in this short list come close to that ideal.
Now I wonder – what are your favorite theorems?
Only my first semester into my Math/CS degree, but currently that would be the Cantor-Schroeder-Bernstein theorem. Its proof is so simple and elegant, and its implications made me go "This is just awesome!". It's really why I decided to study math.
The compactness theorem (logic) is actually a special case (or a direct consequence) of Tychonoff's theorem about the compactness of topological spaces. In itself it's a pretty surprising and strong result, with a very nice proof.
This is one of the nicest things I learned during my math studies – that sometimes you just have to look at things the 'right way' (like the definition of a topology, which at first sounds very counter-intuitive) – and then everything just falls into place.
Nice post.
“Every bounded infinite group has a limit point.” You meant “set”, no?
By the way, check out a proof I made for Bolzano-Weierstrass a few years ago.
My favorite theorem is one I came up with as a hypothesis; a few months later I was astonished that I actually managed to prove it. It says: there exist real numbers that can't be described using mathematical symbols, i.e., there is no way for one mathematician to describe one of these numbers to another mathematician.
(I’m not claiming I invented it, just saying that I came up with it on my own and proved it on my own.)
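One standard counting argument for this kind of statement (the proof above may well be different; this is just the sketch I know): any description is a finite string over a fixed countable alphabet Σ, and there are only countably many such strings, while there are uncountably many reals:

```latex
\Bigl|\, \textstyle\bigcup_{n \ge 1} \Sigma^{n} \Bigr| \;=\; \aleph_0
\;<\; 2^{\aleph_0} \;=\; |\mathbb{R}|
```

So only countably many reals can ever be singled out by a description, and the rest cannot.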
Ram: you are right, I meant set. I’m afraid that’s what you get when you originally study Math in Hebrew. Even when you know the right words you mix them up.
Every root of a general 5th-degree (or higher) algebraic equation is like that –
since Abel's proof in the early 19th century :-)
Raffaele: I think you misinterpreted the theorem. Just because a number is transcendental doesn’t mean it can’t be described. For example, Pi is transcendental, but it can be described to any numerical precision.
Ram, regarding your proof of Bolzano-Weierstrass:
It’s almost identical to the proof version I know, maybe a bit more verbose :) It’s impressive that you came up with it yourself.
The Banach fixed point theorem!
Every contraction mapping f on a complete metric space M has a unique fixed point, and for any x in M the iterated function sequence x, f(x), f(f(x)), f(f(f(x))), … converges to that fixed point.
(I used en.wikipedia because my English is very poor :-)
If you want to find arbitrarily accurate roots of equations like
sin x = x
But the greatest results are on operator spaces
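Here is a minimal sketch of that iteration (my own illustration, not the commenter's); it uses cos x = x rather than sin x = x, since cos is a genuine contraction on [0, 1], so the convergence is easy to see:

```python
import math

def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Iterate x, f(x), f(f(x)), ... until successive values agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# cos is a contraction on [0, 1] (|cos'(x)| = |sin x| <= sin 1 < 1 there),
# so the iteration converges to the unique solution of cos x = x.
print(fixed_point(math.cos, 1.0))    # ~0.7390851332
```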
I believe the number you are looking for, sir, is 0.
One of my favourite proofs is the one showing that there are infinitely many primes, or that any natural number can be written as a product of prime factors.
I don’t do much work in number theory, but those proofs are always somewhat elegant.
P.S.: Fundamental theorem of finite abelian groups.
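For the record, the Euclid-style argument for infinitely many primes fits in a line or two (just the standard sketch, not necessarily the exact proof the commenter has in mind):

```latex
\text{Given primes } p_1, \dots, p_n,\ \text{let } N = p_1 p_2 \cdots p_n + 1.\\
\text{No } p_i \text{ divides } N,\ \text{so any prime factor of } N \text{ is a prime not among } p_1, \dots, p_n.
```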
http://en.wikipedia.org/wiki/Curry_Howard_isomorphism
It basically says that type systems (as in programming languages) and formal logics are equivalent (I think first-order only, but perhaps I'm wrong).
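As a rough reminder of what the correspondence pairs up (the basic version relates the simply typed lambda calculus to intuitionistic propositional logic; richer type systems, e.g. with dependent types, reach predicate logic):

```latex
\begin{array}{rcl}
\text{types} & \longleftrightarrow & \text{propositions} \\
\text{programs (terms)} & \longleftrightarrow & \text{proofs} \\
A \to B & \longleftrightarrow & A \Rightarrow B \\
A \times B & \longleftrightarrow & A \wedge B \\
A + B & \longleftrightarrow & A \vee B \\
\text{evaluation} & \longleftrightarrow & \text{proof normalization}
\end{array}
```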