6 of the Most Ridiculous Tales from Mathematics
Although your high school teachers may have given you the impression that the entirety of mathematics has been inscribed in the world's most boring book since time immemorial, the truth is that it's very much a human endeavour fraught with blood, sweat, and chalk-stained clothing. And like most other human endeavours, it is no stranger to batshit insanity, such as...
#6: The solution of the cubic equation fueled an intergenerational soap opera

If you muster up the courage to think about high school math, you might remember the quadratic formula. Well, it turns out there's also a solution for cubic polynomials, i.e. equations of the form ax^{3}+bx^{2}+cx+d=0, where a, b, c, and d can be any real numbers (with a nonzero). Not only that, but the story of its discovery is one of the most melodramatic tales in history. Strap in, it's math time.

You see, in Renaissance-era Italy, mathematicians proved their prowess at... mathing in the most awesome way possible: math duels. Winning these duels was crucial to the success of a mathematician's career, so mathematical knowledge became a closely held secret. When a dude named Scipione del Ferro discovered a solution for a special type of cubic equation, he kept it to himself until his deathbed. Literally. As he lay dying, del Ferro finally disclosed the secret to his apprentice Fior, who immediately went out and challenged a bunch of people to duels, secure in the knowledge that his newfound formula was foolproof.

Unfortunately for him, his opponent Tartaglia was no fool. Tartaglia figured out the formula on his own and pretty much destroyed Fior in the duel, becoming the new heavyweight champ of the math scene. But wait, there's more! An eccentric gambler/heretic/mathematician named Gerolamo Cardano became obsessed with the cubic formula and basically begged Tartaglia nonstop for the solution until Tartaglia finally relented, on the condition that Cardano never tell it to anyone else. But soon, Cardano and his apprentice Lodovico Ferrari realized how to extend Tartaglia's method to all cubic equations, and Ferrari even figured out how to use this result to solve every quartic equation (the ones involving x^{4} terms). Cardano REALLY wanted to publish these results, and so he was practically looking for any excuse to break his oath to Tartaglia.
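For the curious, the method at the center of all this bloodshed can be sketched in modern terms. Here's a minimal Python version of Cardano's approach (floating-point numerics and notation that none of these gentlemen would recognize; the function name and structure are my own invention):

```python
import cmath

def solve_cubic(a, b, c, d):
    """Return the three (possibly complex) roots of a*x^3 + b*x^2 + c*x + d = 0
    via Cardano's method. Assumes a != 0."""
    # Step 1: "depress" the cubic. Substituting x = t - b/(3a) kills the
    # quadratic term, leaving t^3 + p*t + q = 0.
    p = (3*a*c - b*b) / (3*a*a)
    q = (2*b**3 - 9*a*b*c + 27*a*a*d) / (27*a**3)
    # Step 2: Cardano's trick. Write t = u + v with 3uv = -p; then u^3 and
    # v^3 satisfy a quadratic, so the quadratic formula hands them to us.
    disc = cmath.sqrt(q*q/4 + p**3/27)
    u = (-q/2 + disc) ** (1/3)          # principal complex cube root
    # If u is 0 (happens when p == 0), take the other branch for v instead.
    v = -p / (3*u) if u != 0 else (-q/2 - disc) ** (1/3)
    # Step 3: the three roots come from the three cube roots of unity.
    omega = complex(-0.5, 3**0.5 / 2)
    return [omega**k * u + omega**(-k) * v - b/(3*a) for k in range(3)]
```

The substitution in step 1 is essentially how Cardano and Ferrari stretched Tartaglia's special-case recipe to cover every cubic. For example, solve_cubic(1, -6, 11, -6) returns (up to rounding) the roots 1, 2, and 3.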
Eventually, he learned about del Ferro's original solution and decided this nullified his vow, since Tartaglia hadn't been the first to discover the formula. Cardano then published the results in his seminal book Ars Magna, and though he gave credit to Tartaglia, Tartaglia remained salty about it for the rest of his life. With that drama out of the way, mathematicians were finally free to use the tools of algebra to hunt for general solutions to even higher-order polynomials. ...Aaaaand, it turns out that the fabric of reality hates mathematicians, so after roughly 300 years of banging their heads into walls, they proved that for polynomials of degree five and up, no such formula is even possible.

#5: Pierre de Fermat trolled the mathematical community for centuries

Born in 1607, Pierre de Fermat was just your average Renaissance-era genius: lawyer by day, mathematician by night, and ninja turtle by brunch. After Fermat died, his son went through his father's old papers and found a note in one of the margins. The note alleged that no three positive integers a, b, and c satisfy the equation a^{n} + b^{n} = c^{n} when n is an integer greater than 2, and declared, "I have discovered a truly marvelous proof of this, which this margin is too narrow to contain." Seems innocent enough, right? Cut to centuries of mathematics' best and brightest trying and failing to rediscover this "marvelous proof." The search finally concluded in 1994, when Andrew Wiles proved the last remaining cases in a 109-page paper, using mathematics that would have been entirely unavailable to Fermat in the 17th century. Common consensus today is that Fermat was full of shit.

#4: Grigori Perelman solves a million-dollar problem, declines the prize, and vanishes from math

Poor mathematicians always get the short end of the stick when it comes to recognition.
While scientists can shoot for Nobel Prizes and their $1-million-plus rewards (or at least for a career of poking holes in science fiction films), mathematicians have to cope with the Fields Medal being their greatest possible achievement, and that only comes with a measly 15,000 Canadian dollars and, presumably, a tepid fart. So, when a committee of mathematicians offers a $1,000,000 bounty for each of seven mathematical problems and calls these bounties the Millennium Prize Problems, you know damn well that these are some of the most important and challenging mathematical problems around.

Enter Grigori Perelman, a Russian mathematician who, while still relatively unknown, posted a proof of one of the Millennium Prize Problems online in 2002. Specifically, he proved the Poincaré Conjecture. He then declined every single one of the mountain of accolades and math groupies tossed at him, saying he was no more deserving of recognition than Richard Hamilton, a mathematician who helped develop the technique used in his proof. Yes, he even declined the $1,000,000 prize. Soon after, he simply vanished from the mathematical world like a humble ninja. The Poincaré Conjecture, which essentially states that certain 3-dimensional spaces can be stretched into the surface of a four-dimensional ball (called a 3-sphere) without tearing or gluing, remains the only Millennium Prize Problem that has been solved.

#3: Kurt Gödel crushes the dreams of mathematicians everywhere

The early 20th century was an exciting time to be a mathematician. Before the mid-1800s, math was a fast-and-loose field full of loose cannons like Euler and Newton, who accomplished great things by employing vague methods to come up with extremely useful answers. However, as we've established, the fabric of reality hates mathematicians. So, people started noticing that when you throw weird cases into these methods, you end up with a whole bunch of contradictory nonsense.
Thus dawned the modern age of mathematics, where pedantry, careful definitions, and sweet mustaches ruled the day. Basically, modern mathematics starts from a set of axioms (statements we assume are true) and then derives as much as possible from those axioms.

By 1929, faith in this new system was at an all-time high. Austrian mathematician Kurt Gödel had just shown that every valid statement in a mathematical theory expressed in something called first-order logic has a proof requiring only a finite number of steps. For a brief, glorious moment, it appeared that mathematicians just might be able to come up with a perfect system of axioms capable of describing everything we'd ever want to know. This bliss lasted for precisely two years, until 1931, when Gödel decided mathematicians were too happy and proved that any consistent theory that we can write down, and that is strong enough to do anything useful, always contains statements that can't be proved or disproved. Further, he proved that if such a theory is consistent and powerful enough to define arithmetic, it can't be used to prove its own consistency. So no matter what we do, we can never be absolutely sure there isn't some ridiculous contradiction hiding in the mathematics we take for granted. Damn it, Gödel.

#2: Euclid sparks a 2,000-year quest, turns out to have been right all along

Euclid, active around 300 BCE in Greece, was a bit of a hipster: he was working with an axiomatic foundation of math long before it was cool. Most famously, he wrote the textbook Elements, where he proved lots of stuff about geometry. But although the book was well received, something about Elements bugged his contemporaries.
You see, while four of his axioms were self-evident statements like "all right angles are equal" and "any circle can be described by a center and a radius," his fifth axiom was the butthole-clenching "if a straight line falling on two straight lines makes the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which are the angles less than the two right angles." Due to its clunkiness, mathematicians were certain that this axiom wasn't really needed and could be derived from the other four. But every time someone claimed to have found such a derivation, it turned out they had made an assumption along the way that was equivalent to the very thing they were trying to prove. It wasn't until the 19th century that the axiom was finally proven independent of the other four, and that it's possible to construct wonky geometries that are just as logically valid as the "flat" geometry we're used to. Even more strangely, these curved geometries turned out to have lots of applications in physics, as Einstein's theory of General Relativity predicts that mass and energy literally change the curvature of spacetime. Euclid turned out to be more right than he ever could have imagined.

#1: Emmy Noether overcomes rampant sexism to prove some of the most useful theorems ever

Marie Curie certainly was a badass, but she seems to be the sole female scientist everyone knows. This is a shame, because there have been many important women in STEM fields, and perhaps one of the most important was Emmy Noether. Born in Erlangen, Germany in 1882, she developed a passion for mathematics after completing secondary school and had to fight tooth and nail to pursue it. Her first four years in college pretty much consisted of her sitting in on classes she was barred from taking part in, until the University of Erlangen finally allowed women to enroll in 1904. She earned her PhD in 1907, but life didn't get much easier for her.
Despite her genius, no university wanted her around, presumably concerned about the very real threat of a cootie pandemic. It wasn't until 1919, after over a decade and the intervention of heavyweights Albert Einstein and David Hilbert, that she was finally allowed to teach at Göttingen University. For no pay, of course. It appeared that her struggle was beginning to pay off, but soon she faced a new problem: that of being a Jewish person living in Germany in the 1930s. She was denied permission to teach by the Nazi government in 1933 and moved to the United States that same year. Unfortunately, she died just two years later.

Her career greatly influenced group theory, ring theory, representation theory, topology, and a whole bunch of other stuff that is gibberish to us mere mortals. One of her most famous results, known as Noether's theorem, links symmetries of something called the action of a system to conserved quantities. In nicer words, it basically says (insert hand-waving here) that if you can rotate, shift, or move a system in time in some way that leaves its physics unchanged, there will be some quantity associated with that transformation whose value doesn't change. In this way, the fact that energy cannot be created or destroyed can be seen as a consequence of the observation that the laws of physics don't change over time. Similarly, conservation of momentum is associated with the observation that physics works the same at any location in the universe. Deep.

Her obituary in the New York Times was written by none other than the 'Stein himself, who said of her, "Fräulein Noether was the most significant creative mathematical genius thus far produced since the higher education of women began." Maybe the cooties were worth it after all.