<p>Jeremy Kun (j2kun.svbtle.com)</p>
<p><strong>Trying out Medium</strong> (2015-02-15)</p>
<p>I have no problems with the Svbtle platform. I have just seen the majority of new writers moving to Medium, and I figure I should give it a fair comparison.</p>
<p>So I published an article there. Check out the <a href="https://medium.com/@jeremyjkun/" rel="nofollow">profile</a> and the <a href="https://medium.com/@jeremyjkun/i-want-to-do-everything-and-here-are-the-ways-i-try-d0b7ec5b3f95" rel="nofollow">article</a>.</p>
<p><strong>What's in a blackboard? On mathematical authenticity in movies and TV</strong> (2015-01-03)</p>
<p>Dear producers and directors,</p>
<p>For $100 per scene, I will verify the authenticity of all mathematical lines, props, documents, and boardwork used in that scene. In the event that said math is inauthentic, I will suggest an authentic replacement.</p>
<p>Your friendly neighborhood mathematician,</p>
<p>Jeremy Kun</p>
<hr>
<p>A lack of authenticity can ruin a tense mood and make a group of supposed experts seem like fools. Designers take great pains to make costumes or a set authentically French or authentically 1920’s, to avoid anachronisms, and to use contemporary idiom. Medical television shows are applauded by my medical school and resident friends for their accurate jargon. But when it comes to anything mathematical, despite it sometimes being crucial to the plot, characters, or setting, television shows and movies hardly seem to try to get it right. </p>
<p>It used to be this way with technology, causing the <a href="http://zoomandenhance.tumblr.com/" rel="nofollow">“zoom and enhance”</a> charade that eventually turned into a cheap shot for comic effect. Now that software developers are billionaires, television suddenly seems to paint a shockingly accurate picture of Silicon Valley software.</p>
<p><a href="https://svbtleusercontent.com/bbscnqbq7mqfa.jpg" rel="nofollow"><img src="https://svbtleusercontent.com/bbscnqbq7mqfa_small.jpg" alt="zoom-futurama.jpg"></a></p>
<p>Sometimes “zoom and enhance” is a crucial plot device. Fine, I’m not in a position to deny plot devices to writers who don’t have the time or paygrade to come up with new ones. But as time has gone on, even technological one-liners and references have gotten more accurate. Take this <a href="http://oracle-wtf.blogspot.co.uk/2012/05/girl-with-ansi-tattoo.html" rel="nofollow">analysis of the hacking</a> shown in The Girl with the Dragon Tattoo. The fact that what’s shown on the terminal closely resembles a database query (rather than the standard of five years earlier: speedily scrolling nonsense or flying binary) is remarkable considering that what I’m about to show you is typical of television and movie mathematics.</p>
<p>My colleague recently snapped this photograph from an episode of the show <a href="http://en.wikipedia.org/wiki/Resurrection_(U.S._TV_series)" rel="nofollow">Resurrection</a>, which in the interest of disclosure I’ll admit I haven’t seen.</p>
<p><a href="https://svbtleusercontent.com/auyrjfhw4op7gg.jpg" rel="nofollow"><img src="https://svbtleusercontent.com/auyrjfhw4op7gg_small.jpg" alt="badmath-unit-circle.jpg"></a></p>
<p>The quality is poor, but in the background you can clearly see a large detailed drawing of the unit circle, complete with the usual marked points at the angles of zero, π/6, π/4, etc. This is the same thing you’d expect to see on the whiteboard of a freshman or sophomore high school math class. The scene’s setting, however, is a top secret government lab. The government is ostensibly paying brilliant scientists millions of dollars to solve the world’s hardest problems. And what do they fill their whiteboards with? Math within the reach of a bright fifth grader.</p>
<p>This is hardly an isolated incident. I used to save images I saw of bad TV math but it seemed pointless except to depress me. Countless television shows, movies, and video games flood their blackboards, whiteboards, and walls with silly and obviously irrelevant mathematics. From my perspective they’d have to actually <em>try</em> to fail as badly as they do. If your goal is to pick something mysterious looking (but complete gibberish), then just go to the Wikipedia page for a random <a href="http://en.wikipedia.org/wiki/Areas_of_mathematics" rel="nofollow">area of mathematics</a>, follow a few <a href="http://en.wikipedia.org/wiki/Riemannian_manifold" rel="nofollow">random links</a> until you find some confusing-looking equations, and behold, scribbles and jargon worthy of a blackboard.</p>
<p><a href="https://svbtleusercontent.com/hfozu0nebisdpg.png" rel="nofollow"><img src="https://svbtleusercontent.com/hfozu0nebisdpg_small.png" alt="wikipedia-math.png"></a></p>
<p>Movies whose <em>subject</em> is a mathematician tend to be better (e.g., <a href="http://www.math.harvard.edu/archive/21b_fall_03/goodwill/" rel="nofollow">Good Will Hunting</a>, <a href="https://www.youtube.com/watch?v=pYdjNeFh6zw" rel="nofollow">A Beautiful Mind</a>, or Proof). Even these often don’t display believable mathematics appropriate to the stated experience of the characters, but in the same way that The Girl with the Dragon Tattoo was good enough, so are these movies. Rather, I’m talking about the hundreds of movies and episodes that incorporate mathematics into the plot, use the word “equation,” or employ a mathematician (or general nerd/scientist).</p>
<p>Some shows do it better than others. Here’s a still from <a href="http://en.wikipedia.org/wiki/Elementary_(TV_series)" rel="nofollow">Elementary</a>, during an episode in which a reclusive mathematician is working on <a href="http://en.wikipedia.org/wiki/P_versus_NP_problem" rel="nofollow">a problem that could actually change the world</a> in a big way, and hence he is compelled to conceal his work with invisible ink. </p>
<p><a href="https://svbtleusercontent.com/rrwvhlqu5orssa.jpg" rel="nofollow"><img src="https://svbtleusercontent.com/rrwvhlqu5orssa_small.jpg" alt="elementary-math.jpg"></a></p>
<p>But even Elementary isn’t perfect. The lines and attitudes the characters use to describe the mathematics are often slightly misleading. Take, for example, the title of that episode (<a href="http://www.imdb.com/title/tt3125780/" rel="nofollow">Solve for X</a>), which is essentially <a href="http://j2kun.svbtle.com/the-myth-that-math-is-solving-for-x" rel="nofollow">a caricature</a> indicating something about math happens in the episode. Bill Gasarch <a href="http://blog.computationalcomplexity.org/2013/10/p-vs-np-is-elementary-no-p-vs-np-is-on.html" rel="nofollow">gives more substantial examples at his blog</a> while gracefully ignoring minor issues. But even more troubling than any of these to me is that Sherlock Holmes, a character who prides himself above all else in the <em>purity of deductive reasoning,</em> seems to know absolutely nothing about mathematics. He even says as much in the episode. This is totally incongruous with his character; he has enough time to master <a href="http://en.wikipedia.org/wiki/Singlestick" rel="nofollow">a martial art</a> he never uses, tend exotic species of bees, or study historical cartography, but he doesn’t seem to have any clue about the different areas of mathematics, something one could get a flimsy grasp on by browsing Wikipedia for an hour. Rather than have him claim total ignorance with, “The maths are beyond me” (which is in reality a manifestation of the writer’s ignorance), he could, in the above scene, gesture to different parts of the wall saying something like this:</p>
<blockquote class="large">
<p>I only vaguely understand how it all fits together, but the parts are shockingly different. [Motioning to various parts of the wall] See here he borrows from algebraic geometry, over here is clearly graph theory, and here he’s using what looks like a Galois-type correspondence. A <em>Galois correspondence!</em> If, as it appears, our friend single-handedly found a way to connect these disparate mathematical ideas in analyzing algorithms, then it may very well be the real thing. Mathematical elegance like this does not occur idly. And as such, he may be in more danger than we thought.</p>
</blockquote>
<p>Certainly it’s believable that the smartest person in their fictional world is a little bit mathematical. And then, fine, Hollywood, you can follow it up with the standard reply, “English, please?” But the point is it took me all of five minutes to come up with that line. Being a relatively penniless graduate student among many penniless graduate students, I’d take all the review/scripting work I could manage. I know I’m not alone, and I’m <em>certainly</em> not particularly talented at it. For a fraction of what you might pay a full-time consultant, you could get a small army of graduate students doing quality work. So the only reason I can fathom that studios with multi-million dollar budgets can’t get authentic lines and boardwork is that they don’t try.</p>
<p>That’s why I’m officially opening my shop for business. For my modest fee, I will remove any mathematical “<a href="https://www.youtube.com/watch?v=hkDD03yeLnU" rel="nofollow">GUI interface using visual basic</a>” snafu that sneaks into your script, ensure your top secret scientists aren’t doing high school trigonometry, and even provide you with realistic, substantive mathematical meat for your scenes. Hell, I can even show you the detailed quirks that mathematicians tend to have when, say, giving a talk. Drop me a line at <a href="mailto:jkun2@uic.edu" rel="nofollow">jkun2@uic.edu</a>, and we’ll talk.</p>
<p><strong>Why I hate (and love) visualizations of mathematics</strong> (2014-11-20)</p>
<p>I have a love-hate relationship with visualizations of mathematical ideas.</p>
<p>Let’s say I’m trying to learn about a difficult mathematical concept. For this example I’ll use Markov chains because I recently saw a highly-appreciated <a href="http://setosa.io/ev/markov-chains/" rel="nofollow">visualization of Markov chains</a> due to Victor Powell and Lewis Lehe. For now I’ll pretend I’m the typical person who claims to be a visual thinker and the only reason I don’t get math is because nobody is patient enough to explain things in a way I can understand. (Such people are <a href="https://news.ycombinator.com/item?id=8103240" rel="nofollow">everywhere.</a>)</p>
<p>So I’ve heard the mysterious term Markov chain, and tried to learn about it previously by reading a book. Maybe I even want to write a computer program to “do” a Markov chain, whatever that means. I go check out the Powell-Lehe visualization and at the end I think “Wow! That was so easy to understand! A Markov chain is just a little diagram with a ball bouncing around, where the ball represents the state of a system, and the thickness of the lines is how likely the ball is to use that line to travel.”</p>
<p><a href="https://svbtleusercontent.com/iouz3msdperpg.png" rel="nofollow"><img src="https://svbtleusercontent.com/iouz3msdperpg_small.png" alt="markov.png"></a></p>
<p>Then I go to whatever forum linked me to the visualization and I say something like “Man, I never <em>really</em> understood Markov chains until now. I had tried to learn them, but my impatient mathematics teachers were so terrible at explaining anything in a way I could understand.” Job well done, all in a day’s work, time to go off and write some programs.</p>
<p>Here’s the problem with this scenario. All I really understand from the visualization is the definition of a Markov chain. In fact, I don’t even understand that all that deeply. The authors of that visualization make a wispy connection between a Markov chain and a matrix (just to “tally” the transition probabilities, they say). But why is a <em>matrix</em> appropriate for that? They claim it’s for efficiency, but as a practiced mathematician I know the answer is much deeper, in fact much closer to the heart of why Markov chains are interesting. </p>
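<p>To see what the deeper connection looks like, here is a minimal sketch (the two-state chain is my own hypothetical example, not taken from the visualization). A matrix is the right object because matrix multiplication composes transitions: the k-step transition probabilities are exactly the k-th power of the transition matrix.</p>

```python
import numpy as np

# A hypothetical two-state Markov chain: state 0 = sunny, state 1 = rainy.
# Row i holds the probabilities of transitioning out of state i.
P = np.array([
    [0.9, 0.1],
    [0.5, 0.5],
])

# Matrix multiplication composes transitions, so the two-step
# transition probabilities are the second matrix power of P.
P2 = np.linalg.matrix_power(P, 2)

# Check one entry by hand:
# P(sunny -> sunny in two steps) = 0.9*0.9 + 0.1*0.5 = 0.86
assert abs(P2[0, 0] - 0.86) < 1e-12
```

<p>That one fact, not efficiency of bookkeeping, is what makes the whole machinery of linear algebra available for analyzing the chain.</p>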
<p>The truth is that the definition of a Markov chain by itself is not at all deep or complicated. That’s part of why the visualization is so effective, because anyone who understands Markov chains could explain what one is to a willing fifth grader, in five minutes, with just a pencil and paper. I couldn’t explain why they’re <em>interesting</em> to a fifth grader, but visualizations don’t do that either. The true difficulty comes when you actually want to do something with Markov chains. Whether it’s a mathematical analysis or a useful computer program, you need more than a single definition and a picture. </p>
<p>And here’s one place visualizations reveal their uglier side. You can’t analyze Markov chains with a visualization. You can use visualizations to get ideas, but you can’t check whether those ideas are valid. Markov chains are inherently quantitative but visualizations are qualitative. And visualizations only work with small examples: as soon as you turn to any nontrivial large example, they become a useless mess.</p>
<p>Here is what typical visualizations of networks tend to look like:</p>
<p><a href="https://svbtleusercontent.com/2oiecsxbmrxiuq.jpg" rel="nofollow"><img src="https://svbtleusercontent.com/2oiecsxbmrxiuq_small.jpg" alt="unnamed.jpg"></a></p>
<p>And identical-looking networks can have completely different Markov chain dynamics, so there’s no hope of distinguishing between them just by looking at pictures. </p>
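<p>A toy computation makes this point sharp. Here is a sketch (with transition probabilities I made up for illustration) of two chains on the same fully-connected three-node graph, so their drawings would look identical, yet their long-run behavior is completely different:</p>

```python
import numpy as np

def stationary(P, steps=200):
    """Approximate the stationary distribution by repeatedly
    applying the transition matrix to a uniform starting distribution."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(steps):
        pi = pi @ P
    return pi

# Both chains put positive probability on every edge of the same
# three-node graph, so a drawing of either one looks the same.
A = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
B = np.array([[0.90, 0.05, 0.05],
              [0.80, 0.10, 0.10],
              [0.80, 0.10, 0.10]])

print(stationary(A))  # spends equal time in every state
print(stationary(B))  # spends almost all of its time in state 0
```

<p>The first chain splits its time evenly among the three states; the second spends nearly 90% of its time in one state. No picture of the underlying graph can tell these apart.</p>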
<p>I know what you’re thinking: if I get interested in Markov chains because I saw a neat visualization, then isn’t that all that matters? </p>
<p>Yes and no.</p>
<p>In the hypothetical scenario, I had tried to learn about Markov chains once the “normal” way (by reading a book or taking a class). But the book or teacher didn’t explain it visually for me so I gave up. I just couldn’t wrap my head around it. And now that I am comfortable with the definition of a Markov chain, I need to learn a <em>new</em> concept: how the convergence rate of a Markov chain to a stationary distribution depends on the magnitudes of the eigenvalues of the transition matrix.</p>
<p><strong>WAT</strong></p>
<p>So I look up the animations for what an eigenvalue is, and I look up visualizations of what convergence means, and I look up visualizations of probability theory. Even with all that understanding, it would take a team of visualization experts spoon-feeding me for hours to get me to understand why, intuitively, these particular numbers govern this particular dynamic. (As far as I “know,” matrices are just for the convenience of writing down transition probabilities.) And it’s almost guaranteed that the perspectives you’d gain from visualizations of the disparate concepts are incompatible or at least don’t mesh well. Instead I could do things the old-fashioned way: write down some small examples, practice the algebra skills I hemorrhaged, and ask questions when I get stuck. In order to gain a deep understanding I need to actively engage the material in a way that visualizations don’t allow. And the ridiculous part is not how inefficient it would be to make visualizations for the mysterious-sounding relationship I described, but that we’re still at the <em>beginning</em> of an introduction to Markov chains! </p>
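<p>In the spirit of writing down small examples, the eigenvalue claim itself fits in a few lines. This is a sketch with numbers I made up, not a substitute for the analysis: the size of the second-largest eigenvalue of the transition matrix controls how quickly the chain forgets its starting state.</p>

```python
import numpy as np

def dist_to_uniform(P, steps=50):
    """Run a two-state chain from state 0 for some number of steps
    and report how far it still is from the uniform stationary
    distribution."""
    d = np.array([1.0, 0.0])
    for _ in range(steps):
        d = d @ P
    return abs(d[0] - 0.5)

# Both chains have the uniform distribution as their stationary
# distribution; they differ only in their second eigenvalue.
fast = np.array([[0.5, 0.5],
                 [0.5, 0.5]])    # eigenvalues 1 and 0
slow = np.array([[0.99, 0.01],
                 [0.01, 0.99]])  # eigenvalues 1 and 0.98

print(dist_to_uniform(fast))  # converges immediately
print(dist_to_uniform(slow))  # still far from uniform after 50 steps
```

<p>The first chain is mixed after a single step; the second is still visibly biased toward its starting state after fifty. Working through why that happens is exactly the kind of engagement no animation can do for you.</p>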
<p>What I’m saying is that visualization can help, of course it can. But if I’m not willing to put in real work to understand a topic, then I will never get <em>past</em> the visualization. I will just keep complaining that my math teachers were all terrible and that I can’t wrap my head around an idea, when the truth is that I’m being impatient. That’s the number one reason that an otherwise capable person fails to learn math. Maybe if they understood that <a href="http://j2kun.svbtle.com/mathematicians-are-chronically-lost-and-confused" rel="nofollow">being confused is the natural state of a mathematician</a> then they’d realize what the rollercoaster of gaining a deep mathematical understanding is actually like. I would argue that this applies to understanding anything, but out of all things people tend to be the least patient with mathematics.</p>
<p>I understand why visualizations are so appealing, I really do. I even make them <a href="http://youtu.be/D9ziTuJ3OCw" rel="nofollow">myself</a> to explain and synthesize ideas. They’re appealing because our eyes tend to glaze over when we see too much mathematical notation in one place. Pictures and animations give us a break from the syntax, and help us connect the general definitions to a simple example. </p>
<p>But notice that nowhere do I suggest (and I argue nobody should suggest) that these pictures <em>replace</em> the notation. You need both and they need to interact with each other. Visualizations and pictures allow you to be <strong>specific and vague,</strong> whereas more typical mathematical analysis allows you to be <strong>precise and general.</strong> The two complement each other. So visualizations that try to omit all notation are doing you a disservice, practically ensuring a steeper learning curve when you finally need to translate between syntax and idea. Likewise, mathematics authors that provide no examples and no intuition are also doing you a disservice. Just imagine trying to teach programming where you never show syntax but just draw pictures of the “man inside the machine.” And now imagine you just hand out a list of syntax forms with no connection to an intuitive understanding of their semantics. Both are ridiculous, but the pop-math-visualization crowd are basically demanding the former as a method to become competent in mathematics.</p>
<p>But the real problem, I’d say, is that math literature is too close to the latter kind of author, the kind who is too terse and provides no examples. This is a much subtler issue than whether mathematics is hard, having to do with the culture of mathematics, unwritten expectations of authors, and the limited time and incentive of experts. I don’t have a solution to help someone overcome these barriers. Maybe if we paid math graduates a reasonable fraction of what they could make on Wall Street or at the NSA there would be better resources available. Today any mathematician who blogs or writes a textbook about any topic more advanced than calculus does it as a labor of love; it generally detracts from their career by giving them less time for research, they generally don’t get much (or often, <em>any</em>) income from it, and it takes years to do something substantial. With all of this in mind it’s no wonder they’re so terse. And that’s not even mentioning that any mathematician who wants to devote their lives to teaching guarantees themselves a comparatively puny salary and even less autonomy.</p>
<p><strong>What Microsoft lost when it closed MSR Silicon Valley</strong> (2014-09-22)</p>
<p>Since I started thinking about my own job opportunities, I have always heard Microsoft called the best place for research in industry, and considered it so myself. Other companies are also considered pretty excellent, but Microsoft tends to make the top of the list in terms of who they hire and how they make it easy for great people to do great work. </p>
<p>For example, when Yahoo closed their New York research lab two years ago, Microsoft offered every fired researcher a job, and even opened a new lab in New York so they didn’t have to move! And though I haven’t verified this, from what I’ve heard Microsoft has never (before now) fired a researcher. They vet their candidates and hire people they intend to keep for the long haul. Microsoft puts people in charge of the research labs who understand that the primary goal is to further the state of the art. And they have a strong track record of doing just that.</p>
<p>So shock and awe that Microsoft would close MSR Silicon Valley (and fire dozens of fantastic researchers overnight) is the only reasonable response. They fired almost everyone, offering to retain only a couple of the very highest caliber researchers provided they’re willing to move. But make no mistake, we’re talking about giants among people who are still extremely tall, here, so to kick anyone out is effectively to say, “We don’t want Microsoft to be associated with awesome breakthroughs and innovations in computer science.”</p>
<p>I’m familiar with the theory folks, and if I can convince you that <em>theoretical</em> work is important, then certainly the applied researchers are doing similarly impactful work. So let me give a quick, and by the nature of a short article <em>totally</em> underappreciative, overview of the work done by theory folks at MSR.</p>
<p>Let’s start with Leslie Lamport. Since the early 70’s Lamport has had a steady stream of impressive algorithms and impossibility results in distributed systems. His seminal paper on the <a href="http://research.microsoft.com/en-us/um/people/lamport/pubs/bakery.pdf" rel="nofollow">“bakery algorithm”</a> gave a beautifully simple solution to the “semaphore problem” of multiple processors corrupting shared memory, which improved over prior solutions by adding fault tolerance and priority access. Things only get better from there. Lamport invented <a href="http://en.wikipedia.org/wiki/Paxos_(computer_science)" rel="nofollow">consensus protocols</a> and the Paxos algorithm, invented <a href="http://en.wikipedia.org/wiki/Byzantine_fault_tolerance" rel="nofollow">Byzantine fault tolerance</a>, <a href="http://en.wikipedia.org/wiki/Lamport_timestamps" rel="nofollow">logical clocks</a>, and fantastically impossible-sounding ways to <a href="http://en.wikipedia.org/wiki/Snapshot_algorithm" rel="nofollow">maintain global state</a> in a distributed system that doesn’t have global communication capabilities.</p>
<p>Leslie Lamport’s research has <strong>changed the way we think about distributed computing</strong> many times in his career, and at the ripe age of 73, he’s still going strong. For a better overview of his research than I can give here, see <a href="http://brooker.co.za/blog/2014/03/30/lamport-pub.html" rel="nofollow">this blog post</a>.</p>
<p>Lamport may seem impressive (and he is!) but revolutionizing computer science is par for the course at Microsoft Research SV. <a href="http://en.wikipedia.org/wiki/Cynthia_Dwork" rel="nofollow">Cynthia Dwork</a> has changed the discussion around privacy, inventing a new way to make statistical information public without compromising the identity of any individual in a database. She spearheaded an entire subfield of cryptography known as <a href="http://en.wikipedia.org/wiki/Differential_privacy" rel="nofollow">differential privacy</a>. This is not even to mention all of her other impressive contributions to cryptography, including the first <a href="http://en.wikipedia.org/wiki/Lattice-based_cryptography" rel="nofollow">lattice-based cryptosystem</a> paving the way for <a href="http://en.wikipedia.org/wiki/Homomorphic_encryption" rel="nofollow">fully homomorphic encryption</a>, and the ideas that formed the basis of cryptocurrencies. Dwork’s work on privacy is a reaction to contemporary incidents of de-anonymization of medical and internet data, so to imagine that these researchers are living in some abstract world devoid of application is a fantasy. Dwork’s work influences Microsoft’s products (and everyone else’s) by changing the way we think about privacy.</p>
<p>And then there are people like Omer Reingold, who furthered the study of space-efficient algorithms by solving a long-standing open problem about undirected graph connectivity. He also changed the way we think about randomness through his work on expander graphs, which has permeated science all the way to <a href="http://www.scottaaronson.com/blog/?p=1823" rel="nofollow">discussions</a> about the physics and philosophy of consciousness. As if that weren’t enough, Reingold has made countless other contributions to cryptography and fault tolerance, and he’s just getting started! With so much impressive work under his belt, this is what he had to say about his colleagues at MSR:</p>
<blockquote>
<p>In a place with no borders between research areas, I was free to follow my intellectual curiosity with colleagues I wouldn’t normally have the great fortune of working with. My non-theory colleagues have left me a much more complete computer scientist than I had ever been. My theory colleagues left me in absolute awe! Being surrounded by the creativity and brilliance of a unique collection of young scientists was such a rush. I am confident that they will make many departments and research groups much better in the following months and years. My only regret is every free minute I didn’t spend learning from these wonderful colleagues and friends.</p>
</blockquote>
<p>His <a href="http://windowsontheory.org/2014/09/19/farewell-microsoft-research-silicon-valley-lab/" rel="nofollow">blog post</a> containing that quote is showered with comments by powerhouses of research all lamenting the loss of the hub of innovation and science. </p>
<p>There are just too many accomplished people to do justice to them all, but the point is that Microsoft stands to lose more than just world-class researchers. They stand to lose (and to some degree already lost) their image among academics. Serious academic work needs stability, time, and collaboration, and can’t happen with only two out of the three. Important researchers are already questioning the security of industry lab jobs.</p>
<p><a href="https://svbtleusercontent.com/kwseiqyamhzrfg.png" rel="nofollow"><img src="https://svbtleusercontent.com/kwseiqyamhzrfg_small.png" alt="williams-tweet.png"></a></p>
<p>And this seems to be a trend across industry: big company gets a new CEO and downsizes research. The interesting thing is that Microsoft could have saved face and maintained stability by giving researchers a year’s notice, enough time to secure academic positions (which are notoriously slow to materialize). Was the cost of keeping a relatively small research lab for one year really worth the dent to their image? </p>
<p>If the trend continues and industry research is no longer an obvious goal for young researchers, and when the few good university slots fill up, where will they go? And by they I mean <em>we,</em> because I will soon be forced to decide for myself. Will we work abroad (I’m certainly considering that option), and weaken the country’s relative level of innovation? Politicians don’t seem to like that, but they won’t increase science funding to compensate. Will we switch careers, go into finance and potentially cause another economic meltdown? Or waste our collective skill and talent designing iPhone games or CRUD apps? Or worst of all go work for the NSA and undermine modern security? </p>
<p>I can think of a few worthwhile endeavors specific to my interests (read: not evil or selfish, and also with a reasonable salary), but I know a lot of PhD students don’t have such clear backups they can be enthusiastic about. They would be much more likely to fall for the salary of Goldman Sachs or the stability of the NSA instead of navigating the new frontiers of computer science. Skeptics might say that’s just how the market works, but chances are that skeptic’s job wouldn’t exist if it weren’t for the pioneers of computer science. Innovation breeds better opportunities for everyone. Behind Snapchat and Google are Fourier analysis and distributed networks. Behind Amazon is a century of combinatorial optimization. Behind every banking and finance firm is a mountain of cryptography keeping their secrets and clients safe.</p>
<p>Theory informs practice by mapping out what can and can’t be done, and industry labs have some of the best track records of producing great research. So when Microsoft sends a message implying they don’t care, they risk deterring talent, but we all risk losing the fruits of the work great researchers might have done at a hub like MSR Silicon Valley.</p>
<p><strong>Why don't mathematicians write great code?</strong> (2014-08-25)</p>
<p>In the <a href="https://news.ycombinator.com/item?id=8054983" rel="nofollow">discussion</a> surrounding a series of recent articles on the question of how mathematics relates to programming (one of my favorite navel-gazing topics), the following question was raised multiple times:</p>
<blockquote class="large">
<p>If mathematics is so closely related to programming, why don’t professional (research) mathematicians produce great code? </p>
</blockquote>
<p>The answer is quite a simple one: they have no incentive to.</p>
<p>It’s pretty ridiculous to claim that a mathematician, someone who typically lives and breathes abstractions, could not learn to write well-organized and thoughtful programs. To give a simple example, I once showed my advisor a little bit about the HTML/CSS logical flow/style separation paradigm for webpages, and he found it extremely natural and elegant. And the next thing he said was along the lines of, “Of course, I would have no time to <em>really</em> learn and practice this stuff.” (And he said this as a relatively experienced programmer.)</p>
<p>That’s the attitude of most researchers. Most programming tools are cool and would be good to have expertise in, but it’s not worth the investment. Mostly that comes off as, “this is a waste of time,” but what’s keeping them from writing great code is their career.</p>
<p>Mathematics and theoretical computer science researchers (and many other researchers) are rewarded for one thing: publications. There is no structure in place to reward building great software, and theoretical computer scientists in particular are very aware of this. There have even been some informal proposals to change that, because everyone understands how valuable good software libraries are to progress in our fields.</p>
<p>But as it currently stands, the incentives for mathematicians reward one thing and one thing only: publishing influential papers. Very little emphasis is given to things like teaching, software, or administrative duties, and the problem is that they don’t <em>replace</em> publications. So spending work time on things that are <em>not</em> publications takes away from time that could be spent on papers. Everyone understands this about the job market. Say you have two candidates with equally good work, but the first candidate has one more top-tier paper and the second has contributed an equal amount of work to open source software. Though I have never seen this happen first hand, every career panel I have posed this question to has agreed the first candidate would be chosen with high probability.</p>
<p>So when mathematicians or theoretical computer scientists <em>do</em> write code, they have an incentive to get it working as quickly and cheaply as possible. They need the results for their paper and, as long as it’s correct, all filthy hacks are fair game. This is most clearly illustrated by the relationship between mathematicians and their primary paper-writing tool, the typesetting language TeX. All mathematicians are proficient with it, but almost no mathematicians actually <em>learn</em> TeX. Despite everyone knowing that TeX is a true programming language (it has a compiler and a Turing-complete macro system), everyone prefers to play guess-and-check with the compiler or find a workaround because it’s way faster than determining the root problem. </p>
<p>With this in mind, it’s hard to imagine your average mathematician having a deep enough understanding of a general-purpose language to produce code that software engineers would respect. So something like adequate testing, version control, or documentation is that much more unlikely. Even if they do write programs, most of it is exploratory, discarded once a proof is found achieving the same result. Modern software engineering practices just don’t apply.</p>
<p>For the majority of mathematicians, I claim this is mostly as it should be. Building industry-strength tools is not the core purpose of academic research, and much of mathematical research is not immediately (or ever) applicable to software. Most large companies that want to use bleeding-edge research for practical purposes form research teams. Google does this, for example, and from what I’ve heard many of their researchers spend a lot of time working with engineers to test and deploy new research. At places like Google (and Yahoo, Microsoft, IBM, Toyota), researchers negotiate with their company how their time is split between academic-style paper writing and engineering pursuits, and there are researchers at both extremes. </p>
<p>But even there, where coding is part of the goal, the best industry research teams still hire based on publication history. I can only hypothesize why: a great researcher can be taught programming practices trivially, so a strong research history is more important. </p>
tag:j2kun.svbtle.com,2014:Post/programming-is-not-math-huh2014-07-18T11:41:55-07:002014-07-18T11:41:55-07:00Programming is not math, huh?<p>You’re right, programming isn’t math. But when someone says this, chances are it’s a programmer misunderstanding mathematics.</p>
<p>I often hear the refrain that programmers don’t need to know any math to be proficient and have perfectly respectable careers. And generally I agree. I happen to think that programming only becomes fun when you incorporate mathematical ideas, and I happen to write <a href="http://jeremykun.com" rel="nofollow">a blog about the many ways to do that</a>, but that doesn’t stop me from realizing that the vast majority of programmers completely ignore mathematics because they don’t absolutely need it. </p>
<p>So when Sarah Mei argues in her article <a href="http://www.sarahmei.com/blog/2014/07/15/programming-is-not-math/" rel="nofollow">“Programming is not Math”</a> that math skills should not be considered the only indicator of a would-be programmer’s potential, I wholeheartedly agree. I’ve never heard anyone make that argument, but I’m much younger than she is. Having faith in Mei’s vast life experience, I’ll assume it was this way everywhere when she was writing Fortran in school, and it seems plausible that the attitude lingers at the most prestigious universities today.</p>
<p>But then she goes on to write about mathematics. As much as I respect her experience and viewpoints, her article misses the title’s claim by a long shot. It’s clear to me that it’s because she doesn’t understand the mathematics part of her argument. Here’s the best bit:</p>
<blockquote class="large">
<p>Specifically, learning to program is more like learning a new language than it is like doing math problems. And the experience of programming today, in industry, is more about language than it is about math.</p>
</blockquote>
<p>This is the core of her misunderstanding: being good at math is not about being good at “doing math problems” (from the context of her article it’s clear that she equates this with computation, e.g. computing Riemann sums). And the experience of programming in your particular corner of industry is not representative of what programming is about. The reality of the mathematics/programming relationship is more like this:</p>
<ol>
<li>Mathematics is primarily about conjecture, proof, and building theories, not doing slews of computations.</li>
<li>Learning to do mathematics is much more like learning language than learning to program is like learning language.</li>
<li>Large amounts of effort are spent on tedious tasks in industry for no reason other than that we haven’t figured out how to automate them yet. And novel automations of tedious tasks involve interesting mathematics by rule, not exception. </li>
<li>That doesn’t change how crucially reliant every programmer (and every company) is on the mathematical applications to programming that allow them to do their work.</li>
</ol>
<h1 id="mathematics-is-closer-to-language_1">
<a class="head_anchor" href="#mathematics-is-closer-to-language_1" rel="nofollow"> </a>Mathematics is closer to language</h1>
<p>Item 2 is probably why Mei isn’t able to find any research on the similarities between math and programming. There is a ton of research relating mathematics to language learning. For an extended bibliography with a nice narrative, see Keith Devlin’s book <a href="http://www.amazon.com/The-Math-Gene-Mathematical-Thinking/dp/0465016197" rel="nofollow">The Math Gene</a>.</p>
<p>One big reason that mathematics is much more like language than programming is that doing mathematics involves resolving ambiguities. In programming you have a compiler/interpreter that just <em>dictates</em> how an ambiguity resolves. But in mathematics, as in real language, you have to resolve them yourself based on context. This happens both in the modeling side of mathematics and in the hard-core theory side. Contrary to the most common internet wisdom, almost no working mathematicians do math from a purely axiomatic standpoint. The potential for ambiguities arises in trying to communicate a proof from one person to another in an elegant and easy-to-understand way. Note the focus on communicating. This is essentially the content of a first course in proofs, which, by the way, is usually titled something like “A transition to advanced mathematics.”</p>
<p>The reason that this never shows up when you’re computing Riemann sums is because in that context you’re playing the role of the computer and not the mathematician. It’s like getting the part of a spear carrier in a play and claiming, “acting is just about standing around looking fierce!” It’s a small, albeit important, part of a <em>much</em> larger picture.</p>
<p>Having studied all three subjects, I’d argue that mathematics falls <em>between</em> language and programming on the hierarchy of rigor.</p>
<ul>
<li>Human language</li>
<li>Mathematics</li>
<li>Programming</li>
</ul>
<p>and the hierarchy of abstraction is the exact reverse, with programming the most concrete and language the most abstract. Perhaps this is why people consider mathematics a bridge between human language and programming: it allows you to express ideas more formally than natural language does, without making you worry about hardware details like whether your integers are capped at 32 bits or 64. Indeed, if you think that the core of programming is expressing abstract ideas in a concrete language, then this makes a lot of sense.</p>
<p>This is precisely why learning mathematics is “better” than learning language at developing the kind of abstract thinking you want for programming: mathematics is closer to programming on the hierarchy. It helps even more that mathematics and programming readily share topics. You teach graph coloring for register allocation, linear algebra and vector calculus for graphics, combinatorics for algorithms. It’s not because you need to know graph coloring or how to count subsets of permutations, but because it shows the process of reasoning about an idea so you can understand the best way to organize your code. If you want to connect language to programming you almost always have to do so through mathematics (direct modeling of sentence structure via programming is a well-tried and unwieldy method for most linguistic applications).</p>
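<p>To make the register-allocation example concrete: variables that are live at the same time “interfere,” interference forms a graph, and assigning registers is exactly graph coloring. Here is a minimal greedy sketch; the variable names and interference graph are invented for illustration, and real allocators are far more sophisticated.</p>

```python
def greedy_coloring(interference):
    """Assign each variable a register (color) different from all of its
    neighbors in the interference graph. Greedy coloring is not optimal,
    but it is the standard starting point for register allocation."""
    colors = {}
    for var in interference:
        used = {colors[n] for n in interference[var] if n in colors}
        # Pick the smallest register index not used by an interfering neighbor.
        colors[var] = next(c for c in range(len(interference)) if c not in used)
    return colors

# Hypothetical program: variables that are live at the same time interfere.
interference = {
    "a": {"b", "c"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"c"},
}
registers = greedy_coloring(interference)
# "d" only interferes with "c", so it can reuse "a"'s register:
# three registers suffice for four variables.
```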
<h1 id="bigo-is-quotpretty-much-meaninglessquot_1">
<a class="head_anchor" href="#bigo-is-quotpretty-much-meaninglessquot_1" rel="nofollow"> </a>Big-O is “pretty much meaningless”</h1>
<p>Another issue I have with Mei’s article is on her claim that “big-O” is meaningless in the real world. More specifically, she says it only matters what the runtime of an algorithm is on “your data.” </p>
<p>Let’s get the obvious thing out of the way. I can name many ways in which a result in improving the worst-case asymptotic complexity of an algorithm has literally changed the world. Perhaps the biggest is the <a href="http://jeremykun.com/2012/07/18/the-fast-fourier-transform/" rel="nofollow">fast Fourier transform</a>. So if you’re applying to work at a company like Google, which deservingly gets credit for changing the world, it makes total sense for interviewees to be familiar with the kind of mathematical content that has changed the world in the past. Maybe it’s a mistake for smaller companies to emulate Google, but you can’t blame them for wanting to hire people who would do well at Google.</p>
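<p>To ground the FFT claim: the transform computed directly from its definition takes O(n²) operations, while the Cooley–Tukey recursion does it in O(n log n). A bare-bones sketch, restricted to power-of-two lengths and checked against the naive definition (the sample signal is arbitrary):</p>

```python
import cmath

def dft_naive(xs):
    # Direct O(n^2) evaluation of the discrete Fourier transform definition.
    n = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * j * k / n) for j, x in enumerate(xs))
            for k in range(n)]

def fft(xs):
    # Recursive Cooley-Tukey FFT, O(n log n); len(xs) must be a power of two.
    n = len(xs)
    if n == 1:
        return list(xs)
    evens, odds = fft(xs[0::2]), fft(xs[1::2])
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odds[k]
        out[k] = evens[k] + twiddle
        out[k + n // 2] = evens[k] - twiddle
    return out

signal = [1.0, 2.0, 0.0, -1.0, 1.5, 0.0, -2.0, 0.5]
spectrum = fft(signal)
```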
<p>But at a deeper level I don’t believe Mei’s argument. Her example is this.</p>
<blockquote class="large">
<p>An algorithm that is O(n**2) for arbitrary data may actually be constant time (meaning O(1)) on your particular data, and thus faster than an algorithm that is O(n log n) no matter what data you give it.</p>
</blockquote>
<p>First, the chance is negligible that you will come across a nontrivial problem where a standard algorithm hits its worst case on “your” data while a generally-considered-worse algorithm does much better. Second, there is a very rich mathematical theory of “algorithms that run extremely fast and return correct answers to queries with high probability.” So again, you can turn to mathematics, where the expectations are quantifiable rather than arbitrary and guessed.</p>
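<p>A concrete instance of that theory is Freivalds’ algorithm: it checks whether a claimed matrix product A·B = C is correct in O(n²) time per trial instead of the O(n³) it takes to recompute the product, and each trial catches a wrong C with probability at least 1/2. The matrices below are made up for the demonstration.</p>

```python
import random

def freivalds(A, B, C, trials=20):
    """Probabilistic check that A*B == C. A wrong C survives all trials
    with probability at most 2**-trials; each trial is only three
    matrix-vector products, i.e. O(n^2) work."""
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]  # random 0/1 vector
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # definitely not equal
    return True  # equal with high probability

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
good = [[19, 22], [43, 50]]   # the true product A*B
bad = [[19, 22], [43, 51]]    # off by one in a single entry
```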
<p>But more deeply, <strong>nobody in industry has any clue</strong> what it is that characterizes “real world data” in a way that would let you make guarantees beyond the worst case. They have a fuzzy idea (real social networks are usually sparse, etc.), but little in the way of a comprehensive understanding. This is a huge topic, but it’s a topic of <strong>active research</strong>, which is not coincidentally filled to the brim with mathematics. The takeaway is that even if you have an algorithm that <em>seems</em> to run quickly on “your” data, “seems” is the best you’ll be able to say without answering big open research questions. You won’t be able to guarantee anything, which means you’ll be stuck telling your manager that you’re introducing more points of failure into the system, and you risk being paged in the middle of the night because the company has expanded to China and their data happens to break your algorithm on average.</p>
<p>But you’re a smart engineer, so what do you do? You run your clever algorithm and track how long it takes; if it takes too long, you abort and do the standard O(n log n) solution instead. Problem solved. But wait! You needed to know the difference between your algorithm’s worst case complexity and the baseline complexity, and you had to determine how long to wait before aborting. </p>
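<p>That fallback strategy takes only a few lines to sketch. The “clever” algorithm here is a stand-in (insertion sort, which is fast on nearly-sorted data but quadratic in the worst case), and the time budget is invented; the point is that choosing the budget at all requires knowing both algorithms’ complexities.</p>

```python
import time

def clever_sort(data, deadline):
    # Stand-in "clever" algorithm: insertion sort. Nearly linear on
    # almost-sorted input, O(n^2) in the worst case.
    out = list(data)
    for i in range(1, len(out)):
        if time.monotonic() > deadline:
            raise TimeoutError("clever algorithm exceeded its budget")
        key, j = out[i], i - 1
        while j >= 0 and out[j] > key:
            out[j + 1] = out[j]
            j -= 1
        out[j + 1] = key
    return out

def sort_with_fallback(data, budget_seconds):
    """Try the clever algorithm; abort to the O(n log n) baseline
    (Python's built-in sort) if it blows its time budget."""
    deadline = time.monotonic() + budget_seconds
    try:
        return clever_sort(data, deadline)
    except TimeoutError:
        return sorted(data)

# Reversed input is insertion sort's worst case, so the baseline kicks in.
result = sort_with_fallback(list(range(2000, 0, -1)), budget_seconds=0.001)
```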
<p>The fact is, you can’t function without knowing the baselines, and asymptotic runtime (big-O) is the gold standard for comparing algorithms. Certainly you can mix things up as appropriate, as the fictional engineer in our story did, but if you’re going to do a more detailed analysis you have to have a reference frame. At a company where a one-in-a-million error happens a hundred times a day, mathematical guarantees are (literally) what help you sleep at night. Not every programmer deals with these questions regularly (which is why I don’t think math is necessary to be a programmer), but if you want to be a <em>great</em> programmer you can bet you’ll need it. Companies like Google and Amazon and Microsoft face these problems, aspire to greatness, and want to hire great programmers. And great programmers can discuss the trade-offs of various algorithms.</p>
<p>Sarah Mei is right that there might be some interesting ways to model algorithms running better on “your” data than the worst case (and if I were interviewing someone I would gladly entertain such a discussion), but I can say with relative certainty that even an above-average, math-phobic interviewee is not going to have any new and deep insights there. And even if one does, one needs to be able to explain how it relates to what is already known about the problem. Without that, how can you know your solution is better?</p>
<h1 id="a-quotminor-specializationquot_1">
<a class="head_anchor" href="#a-quotminor-specializationquot_1" rel="nofollow"> </a>A “minor specialization”</h1>
<p>Now my biggest beef is with her conclusive dismissal of mathematics.</p>
<blockquote class="large">
<p>If a small and shrinking set of programming applications require math, so much so that we cordon you off into your own language to do it, then it’s pretty clear that heavy math is, these days, a minor specialization.</p>
</blockquote>
<p>Oh please. You can’t possibly think that every mathematician who programs does so in Fortran or Haskell. I’m a counterexample: I’m proficient in C, C++, Java, Python, JavaScript, HTML, and CSS. I have only really dabbled in Haskell and Racket and other functional languages (I like them a lot, but I just get more done in Python).</p>
<p>But what’s worse is that I have so many programming applications of mathematics that I don’t know what to do with them all. It’s like they’re sprouting from my ears! </p>
<p>Let’s take the examples of what Mei thinks are purely unmathematical uses of programming: “ease of use, connectivity, and interface.” I’m assuming she means the human-computer interaction version of these questions. So this is like, how to organize a website to make it easy for users to find information and streamline their workflow. I’d question whether anyone in the industry can really be said to be “solving” these problems rather than just continually debating which solution they arbitrarily think is best. In fact, I’m more inclined to argue that companies change their interface to entice users to pay for updates more than to make things easier to use (I’m looking at you, Microsoft Word).</p>
<p>In any case, it’s clear that Mei is biased toward one very specific kind of programming, which does have mathematical aspects (see below). But moreover, she blurs the distinction between an application of mathematics to programming and what she finds herself and her colleagues actively doing in her work. Let me counter with my own, more quantifiable examples of the mind-bogglingly widespread applications of mathematics to industry, both passive and active.</p>
<p><strong>Optimization:</strong> the big kahuna of mathematical applications to the real world. Literally every industrial company relies on state-of-the-art optimization techniques to optimize their factories, shipping lines, material usage, and product design. And I’m not even talking about the software industry here. I’m talking about Chevron, Walmart, Goldman Sachs. Every single Fortune 500 company applies more “heavy” math on a daily basis than could be taught in four years of undergraduate education. They don’t care about ease of use; they care about getting that extra 0.05% profit margin. And as every mathematician knows, there is a huge theory of optimization that ranges from linear programming to functional analysis to weird biology-inspired metaheuristics. </p>
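<p>For a toy taste of that theory: a linear program’s optimum always sits at a vertex of the feasible region, so a tiny two-variable instance can be solved by intersecting constraint boundaries and keeping the feasible intersection points. The profit coefficients and constraints below are invented; real problems have thousands of variables and go to an industrial solver.</p>

```python
from itertools import combinations

# Maximize profit 3x + 5y subject to a*x + b*y <= c for each (a, b, c):
constraints = [
    (1, 2, 14),    # x + 2y <= 14   (raw material)
    (-3, 1, 0),    # 3x >= y        (demand mix)
    (1, -1, 2),    # x - y <= 2     (machine capacity)
    (-1, 0, 0),    # x >= 0
    (0, -1, 0),    # y >= 0
]

def feasible_vertices(cons):
    # Intersect each pair of boundary lines; keep points satisfying everything.
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue  # parallel boundaries never intersect
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
            yield x, y

# The optimum of a linear objective over a polygon occurs at a vertex.
best = max(feasible_vertices(constraints), key=lambda v: 3 * v[0] + 5 * v[1])
# Optimum at x = 6, y = 4, for a profit of 38.
```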
<p><strong>Signal processing:</strong> No electric device or wireless communication system would exist without signal processing. The entire computer industry relies on digital signal processing techniques and algorithms proliferated via mathematics. Literally every time you type a key on your keyboard, you’re relying on applications of mathematics to programming. Sure, you don’t need to know how to build a car to drive it, but signal processing techniques extend to other areas of programming, such as graphics, data mining, and optimization, and a large portion of the software industry is disguised as the hardware industry because they use languages like VHDL instead of Ruby. They <em>really</em> need to know this topic, and it’s not fair to forget them. That being said, let’s not forget all the engineers who do signal processing in Matlab. Our list just keeps getting bigger and bigger, huh?</p>
<p><strong>Statistics:</strong> Every company needs to manage their risk and finances via statistics, and every application of mathematics and statistics to risk and finance is done via programming. Whether you use SAS, JMP, R, or just Excel, it’s all programming and all requires mathematical understanding. This is not even to mention all of the statistical modeling (via programming) that goes on in a non-financial setting. For example, in Obama’s presidential campaign and in sports forecasting. Even as I write this, NPR is reporting on the Malaysia flight that was shot down in Ukraine, and how technicians are using “mathematics and algorithms” to pinpoint the location of the crash.</p>
<p><strong>Machine Learning:</strong> A hot topic these days, but for a long time engineers have been trying to answer the question, “what does it mean for a computer to learn?” Surprise, surprise, the generally accepted answer these days came from mathematicians. The theory of <a href="http://jeremykun.com/2014/01/02/probably-approximately-correct-a-formal-theory-of-learning/" rel="nofollow">PAC-learning</a>, and more generally its relationship to the many widely-used machine learning techniques, paved the way for things like boosting and the study of statistical query algorithms. Figuring out smart ad serving? Try <a href="http://jeremykun.com/2013/10/28/optimism-in-the-face-of-uncertainty-the-ucb1-algorithm/" rel="nofollow">bandit-learning techniques</a>. It’s mathematics all the way down.</p>
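<p>The ad-serving point can be made concrete with a few lines of UCB1: show each ad once, then repeatedly pick the ad with the best empirical click rate plus an optimism bonus that shrinks the more that ad is shown. The three ads and their click probabilities below are fabricated for the demo.</p>

```python
import math
import random

def ucb1(arms, rounds):
    """UCB1 bandit: returns how many times each arm was pulled.
    Suboptimal arms receive only O(log rounds) pulls."""
    n = len(arms)
    counts = [1] * n
    means = [arm() for arm in arms]  # one initial pull per arm
    for t in range(n + 1, rounds + 1):
        scores = [means[i] + math.sqrt(2 * math.log(t) / counts[i]) for i in range(n)]
        i = scores.index(max(scores))
        reward = arms[i]()
        counts[i] += 1
        means[i] += (reward - means[i]) / counts[i]  # running average
    return counts

random.seed(0)
# Three hypothetical ads with true click-through rates 0.1, 0.6, 0.2.
ads = [lambda p=p: 1.0 if random.random() < p else 0.0 for p in (0.1, 0.6, 0.2)]
pulls = ucb1(ads, 5000)
# The 0.6 ad ends up being served the vast majority of the time.
```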
<p><strong>Graphics/Layout:</strong> You want ease of use in human computer interaction? You want graphics. You want special effects in movies? You need linear algebra, dynamical systems, lots of calculus, and lots of graphics programming. You want video games? Data structures, computational geometry, and twice as much graphics as you thought you’d ever need. You want a dynamic, adaptive, tile-based layout on your website? Get ready for packing heuristics, because that stuff is NP-hard! Information trees, word clouds, rankings, all of these layout concepts have rich mathematical underpinnings. </p>
<p>You see, Mei’s fundamental misconception is that the kind of applications that we haven’t yet automated and modularized constitutes what programming is all about. We don’t know how to automate the translation of obscure and ambiguous business rules to code. We don’t know how to automate the translation from a picture you drew of what you want your website to look like to industry-strength CSS. We don’t know how to automate the organization of our code so as to allow us to easily add new features while maintaining backwards compatibility. So of course most of what goes on in the programming industry is on that side of the fence. And before we had compilers we spent all our time tracking memory locations and allocating registers by hand, too, but that’s no more the heart and soul of programming than implementing business rules.</p>
<p>By analogy, most writing is not literature but fact reporting and budget paperback romance novels, yet we teach students via Twain and Wilde. And most cooking is reheating frozen food, not farm-to-table fine cuisine; should a culinary student therefore study McDonald’s?</p>
<p>But if you wanted to genuinely <em>improve</em> on any of these things, if you wanted to figure out how to automate the translation of drawn sketches to good HTML and CSS, you can count on there being some real mathematical meat for you to tenderize. I hope you try, because without mathematics we programmers are going to have an extremely hard time making real progress in our field.</p>
tag:j2kun.svbtle.com,2014:Post/will-the-world-ever-need-more-than-5-quantum-computers2014-06-24T13:18:44-07:002014-06-24T13:18:44-07:00The world will never need more than five quantum computers<p>I have been gradually making my way through Scott Aaronson’s wonderful book, “Quantum Computing Since Democritus.” The book is chock-full of deep insights phrased in just-technical-enough language (the kind which I want to relay to the world through an <a href="http://jeremykun.com/2014/06/12/three-years-old-and-an-idea-for-a-podcast/" rel="nofollow">internet megaphone</a>). Scott really has learned how to apply the good and bad attitudes of the past to the problems of today. </p>
<p>For example, did you know that originally computers had so many problems with errors that many people argued fault-tolerant computers would never exist? This was before the transistor, of course, but it was believed that the external world would always have such an adverse interference with the physical machine that one could not reliably use the outputs. John von Neumann proved to the contrary that even with the error-prone hardware of the time it was possible to design perfect fault-tolerance into a machine. But his accomplishment was largely forgotten after the transistor was invented and shown to be so reliable as not to need any extra error-correction scaffolding. </p>
<p>But the idea that computers would never be error-tolerant enough was probably the origin of the famous slew of quotes that <a href="http://en.wikipedia.org/wiki/Thomas_J._Watson#Famous_misquote" rel="nofollow">the world would never need more than five computers</a>. It’s not so ridiculous a proposition in that context, since the world also only has need for around five particle accelerators. Scott notices the parallel for quantum computers, the worry that the outside world would interfere with the computations so as to render them useless, and discusses the existence of quantum fault-tolerance in the same vein as von Neumann’s theorem.</p>
<p>Nevertheless, the question of whether the world will ever need more than five quantum computers (assuming they’re feasible to scale) is still a pointed one. It’s not because of error, but because of what kinds of problems quantum computers are believed to be better at than classical computers. </p>
<p>You see, it’s widely known that quantum computers <em>aren’t</em> more powerful than classical computers in the sense of computing things that classical computers cannot. The real question is one of efficiency, and by efficiency I mean the difference between polynomial time and worse-than-polynomial time as the problem scales. The truth is that there are only a few key problems which quantum computers are known to solve efficiently and which classical computers are not known to solve efficiently.</p>
<p>One example is factoring integers. We know that quantum computers can factor integers quickly, but we don’t know for sure that classical computers cannot. In fact, many researchers believe that, because of recent advances in computer science and cryptography, we <em>will</em> find a polynomial-time algorithm for factoring integers relatively soon.</p>
<p>The question is, who really needs to factor integers on a regular basis? The only answer I can come up with is number theorists (trying to prove theorems) and the government (trying to break encryption). But these days <a href="http://rjlipton.wordpress.com/2013/03/02/cryptography-is-dead/" rel="nofollow">people are moving away from factoring-based encryption</a>. So who’s left to care?</p>
<p>There are, admittedly, other ways that quantum computers can speed up things, but it’s not as drastic as the mainstream media would have one believe. For example, the best known speedup for solving NP-complete problems, which includes most scheduling, packing, and routing problems (an efficient algorithm for this would revolutionize the world), is on the order of a square-root. That is, it reduces the time from an exponential to a square-root-of-an-exponential, which is still egregiously slow. </p>
<p>This is not to downplay the importance of quantum computing. It’s a multifaceted subject providing a vast trove of interesting problems, answers, and discussions. It excites me that one day I might actually contribute some small fact to nudge forward human knowledge about quantum computing. But the set of useful problems we know how to solve efficiently with quantum computers is just so minuscule. In order to convince me that quantum computers may someday become commonplace, one would need to present a problem that quantum computers can solve with applications on the scale of Facebook. It needs to be something that potentially every human could have use for. And while I am not an expert in quantum computing, if such a problem and solution existed I’d probably have heard of it by now (it would be trumpeted along with factoring and the hidden subgroup problem as triumphs of the model).</p>
<p>So unless there are extreme revolutions in theoretical computer science, which is certainly possible, it seems safe to reuse that infamous quote here: the world will never have need for more than five quantum computers.</p>
tag:j2kun.svbtle.com,2014:Post/polynomial-time-reductions-are-hacks2014-05-05T10:44:10-07:002014-05-05T10:44:10-07:00Reductions are the Mathematical Equivalent of Hacks<p>Though I don’t remember who said it, I once heard a prominent CS researcher say the following:</p>
<blockquote class="large">
<p>Reductions are the lifeblood of theoretical computer science.</p>
</blockquote>
<p>He was totally right. For those readers who don’t know, a reduction is a systematic way to transform instances of one problem into instances of another, so that solutions to the latter translate back to solutions to the former.</p>
<p>Here’s a simple example. Say you want to generate a zero or a one at random, such that you’re equally likely to get either outcome. You can reduce this problem to the problem of generating a zero or a one with some biased probability (that’s not <em>completely</em> biased).</p>
<p>In other words, you can simulate a fair coin with a biased coin. How do you do it? You just flip your biased coin twice. If the outcome is “heads then tails,” you call the outcome of the fair coin “heads.” If the outcome is “tails then heads” you call the outcome of the fair coin “tails.” In any other event (TT or HH), you try again. This works because if you know you flipped one heads and one tails, then you’re just as likely to get the heads first as you are to get the tails first. If your coin is biased with probability p, these two events both happen with probability p(1-p).</p>
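<p>The trick above is only a few lines of code. Any biased zero/one source works; the 30%-heads coin here is just for the demonstration.</p>

```python
import random

def fair_flip(biased_coin):
    """Von Neumann's trick: flip the biased coin twice, keep only the
    outcomes 10 and 01 (each occurs with probability p*(1-p)), and
    retry on 00 or 11."""
    while True:
        first, second = biased_coin(), biased_coin()
        if first != second:
            return first  # 1 for "heads then tails", 0 for "tails then heads"

random.seed(1)
biased = lambda: 1 if random.random() < 0.3 else 0  # comes up 1 only 30% of the time
flips = [fair_flip(biased) for _ in range(10000)]
# The simulated coin lands 1 close to half the time.
```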
<p>Even more fascinating is that you can go the other way too! Given a fair coin, <a href="http://jeremykun.com/2014/02/12/simulating-a-biased-coin-with-a-fair-coin/" rel="nofollow">you can simulate coins with any bias you want!</a> This is a quantifiable way to say, “biased coins and unbiased coins are computationally equivalent.” Theoretical computer science is just bursting with these cool proofs, and they are the mathematical equivalent of a really neat “hack.”</p>
<p>Why do I call it a hack? The word is primarily used for bad but effective solutions to programming problems (avoiding bugs without fixing their root cause and such). But another use of the word is to successfully use a thing for a purpose against or beyond its original intention. Like exploiting a buffer overflow to get access to sensitive data or <a href="http://hacks.mit.edu/Hacks/by_year/2012/tetris/" rel="nofollow">using building lights to play Tetris</a>, hacks have a certain unexpectedness about them. And most of all hacks are slick.</p>
<p>Reductions come in many colors, the most common of which in computer science is the <em>NP-hardness</em> reduction. This is a reduction from a specific kind of problem (believed to be hard) to another problem while keeping the size “small,” by some measure. And the reason it’s important is because if you show a problem is NP-hard (has a reduction from a known NP-hard problem), then you are including it in a class of problems that are believed to have no efficient solution. So in this case a reduction is one way to measure the difficulty of a problem you’re studying.</p>
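<p>The textbook warm-up for this kind of reduction is the one between Independent Set and Clique: complement the graph’s edge set, and independent sets become cliques of the same size, so a solver for one problem decides the other. A sketch on a five-cycle:</p>

```python
from itertools import combinations

def complement(vertices, edges):
    # The reduction itself: flip which pairs of vertices are adjacent.
    return {frozenset(p) for p in combinations(vertices, 2)} - edges

def is_clique(group, edges):
    return all(frozenset(p) in edges for p in combinations(group, 2))

def is_independent(group, edges):
    return all(frozenset(p) not in edges for p in combinations(group, 2))

vertices = range(5)
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]}  # a 5-cycle
co_edges = complement(vertices, edges)

# Every independent set in the cycle is a clique in the complement, and vice versa,
# so finding maximum cliques is at least as hard as finding maximum independent sets.
equivalent = all(is_independent(g, edges) == is_clique(g, co_edges)
                 for k in range(1, 6) for g in combinations(vertices, k))
```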
<p>One really fun example is that the rule-sets of <a href="http://arxiv.org/abs/1203.1895" rel="nofollow">most classic Nintendo games are NP-hard</a>. That is, you can design a level of Donkey Kong Country (or Super Mario Brothers, or Pokemon Red) so that getting to the end of the level would require one to solve a certain kind of logic problem. So if you could write a program to beat any Donkey Kong level (or even tell if there is a way to beat it), you could solve these hard logic problems. </p>
<p>The key part of the reduction is that, given <em>any</em> such logic problem, you can design a level that does this. That is, there is an algorithm that transforms descriptions of these logic problems into Donkey Kong levels in an efficient manner. The levels are quite boring, to be sure, but that’s not the point. The point is that Donkey Kong is being used to encode arbitrary logic, and that’s a sweet hack if I’ve ever seen one.</p>
<p>If you enjoy the hacker mindset, and you want to get more into mathematics, you should seriously try reading about this stuff. You have to wade through a little bit of big-O notation and know that a Turing machine is roughly the same thing as a computer, but the ideas you unlock are really fun to think about. Here’s an <a href="http://jeremykun.com/2012/02/23/p-vs-np-a-primer-and-a-proof-written-in-racket/" rel="nofollow">article I wrote about P vs NP</a>, actually implementing one of the famous reduction proofs in code.</p>
<p>Even better, once you understand a few basic NP-hardness reductions, you can already start contributing to open research problems! For example, nobody knows if the problem of factoring integers is NP-hard. So if you could find a way to encode logic in a factoring problem the same way you can for a Donkey Kong level, you’d be pretty famous. On the easier side, it just so happens that potentially NP-hard problems show up a lot in research. Two of my current research projects are about problems which I suspect to be NP-hard, but for which I have no proof. And once you prove they’re NP-hard then you can start asking the obvious follow-ups: can I find good <em>approximate</em> solutions? How much easier do I need to make the problem before it becomes easy? The list goes on, giving more and more open questions and, the best part, more opportunities for great hacks.</p>
tag:j2kun.svbtle.com,2014:Post/my-new-linkedin-summary2014-05-01T15:38:56-07:002014-05-01T15:38:56-07:00My New LinkedIn Summary: Breaking the Fourth Wall of a Resume<p>LinkedIn is a weird niche in the internet: it’s a place for recruiters to reach out to candidates without a completely cold-email approach, along with a smattering of other relatively unimportant things going on (lots of “congrats” notes and the occasional unsubstantiated endorsement).</p>
<p>It’s not clear whether it’s a good niche or a bad one, but what is clear is that the most likely person to get their first introduction to me via my LinkedIn profile is a recruiter. So I can target my resume more effectively. I know exactly where to aim in terms of the reader being familiar with me and my work. With that in mind I recently rewrote my profile summary:</p>
<p>If you’re looking at my LinkedIn profile (as opposed to my academic CV [<a href="http://homepages.math.uic.edu/%7Ejkun2/cv/cv.html" rel="nofollow">1</a>] or my blog [<a href="http://jeremykun.com/" rel="nofollow">2</a>]), then chances are you’re a recruiter at a software company. Chances are also good that you haven’t got the first impression most people have of me: I love math.</p>
<p>Let me say it again: I <em>really</em> love math. I like doing it [<a href="http://scholar.google.com/citations?user=2tN47wQAAAAJ&hl=en" rel="nofollow">3</a>], learning it, talking about it, and writing about it [<a href="http://j2kun.svbtle.com/" rel="nofollow">4</a>]. So it would be foolish to try to get me into a job where I’m not spending at least 20% of my time thinking about math.</p>
<p>That being said, my favorite kinds of math are the kinds that unlock fascinating programs. I was originally trained as a software engineer, and so I love it when mathematical ideas and programs together allow one to, for example, recognize faces [<a href="http://jeremykun.com/2011/07/27/eigenfaces/" rel="nofollow">5</a>], design economic markets [<a href="http://jeremykun.com/2014/04/02/stable-marriages-and-designing-markets/" rel="nofollow">6</a>], or create fun games [<a href="http://jeremykun.com/2014/03/17/want-to-make-a-great-puzzle-game-get-inspired-by-theoretical-computer-science/" rel="nofollow">7</a>]. That’s part of the beauty of math: it can apply in wild and unexpected places.</p>
<p>If I’m going to work for a company that isn’t explicitly mathematical, this would be my dream job: finding ways to apply mathematics to improve existing features or add new ones. It doesn’t have to involve genuinely original mathematics; it doesn’t even have to involve particularly clever mathematics. But I require some minimal amount of mathematical engagement in whatever I do.</p>
<p>Finally, I’m guaranteed to decline all job offers before I finish my PhD. But if we have a chat and your company seems to fit, I’d be glad to contact you once I’m on the market.</p>
tag:j2kun.svbtle.com,2014:Post/what-counts-as-a-mathematician2014-04-18T09:30:17-07:002014-04-18T09:30:17-07:00What "Counts" as a Mathematician?<h1 id="the-quotbest-jobquot-of-2014_1">
<a class="head_anchor" href="#the-quotbest-jobquot-of-2014_1" rel="nofollow"> </a>The “Best Job” of 2014</h1>
<p>A few days ago the website CareerCast (Adicio, Inc) released <a href="http://www.careercast.com/jobs-rated/jobs-rated-2014-ranking-200-jobs-best-worst" rel="nofollow">a list of the top jobs in 2014</a> which put “Mathematician” as number 1. Most news sites have used this as a platform to discuss the centrality of mathematics and technology in the world economy, or the importance of STEM (Science, Technology, Engineering, and Mathematics) in education. I’m not against such discussions — indeed I spend a large fraction of my time writing <a href="http://jeremykun.com/" rel="nofollow">long and detailed posts explaining mathematics</a> to anyone who will listen — but I do suspect the ranking is misleading.</p>
<p>As every mathematician knows, definitions are <em>extremely important</em>, so I wonder how “mathematician” is defined for the purpose of this ranking. After a bit of snooping, it appears at least part of the analysis comes directly from the <a href="http://www.bls.gov/ooh/math/mathematicians.htm#tab-1" rel="nofollow">US Bureau of Labor Statistics website</a>, which in turn uses an aggregation of two occupational classification models.</p>
<p>The first is the <a href="https://www.census.gov/cgi-bin/sssd/naics/naicsrch?chart=2012" rel="nofollow">North American Industry Classification System</a>, which has no record solely for mathematicians. The closest it gets is the following:</p>
<p><a href="https://svbtleusercontent.com/anvpczkzc8esiq.png" rel="nofollow"><img src="https://svbtleusercontent.com/anvpczkzc8esiq_small.png" alt="NAICS-math.png"></a></p>
<p>The second source, the <a href="http://www.bls.gov/soc/2010/soc_alph.htm" rel="nofollow">2010 Standard Occupational Classification</a>, is more useful. It distinguishes between a variety of mathematical fields, but still gives no information about how the data was collected for any occupation. The entry for mathematicians just has a list of examples that are clearly biased:</p>
<p><a href="https://svbtleusercontent.com/ylshhltbqhcwmq.png" rel="nofollow"><img src="https://svbtleusercontent.com/ylshhltbqhcwmq_small.png" alt="SOC-math.png"></a></p>
<p>But on the other hand, the <a href="http://www.bls.gov/soc/2010/soc150000.htm#15-2000" rel="nofollow">major group</a> category makes some more pleasing distinctions, such as operations research analyst and “<a href="http://www.bls.gov/soc/2010/soc152091.htm" rel="nofollow">mathematical technician</a>” (which I think is a wonderfully useful category).</p>
<p>The search hits a dead end here: neither the Census Bureau nor the Bureau of Labor Statistics states how it determined its statistics from the classification (Did they use job titles? Did they ask the people surveyed what they consider themselves?), and CareerCast doesn’t state how it selected its 200 jobs from the 500+ jobs that the BLS aggregated statistics for.</p>
<p>Mathematicians also have an odd place in a survey of occupations because mathematics is so intertwined with other disciplines. Two people with the same job title (say, “Security Expert”) could have different enough jobs that one is a mathematician while the other isn’t. Indeed, even the Bureau of Labor Statistics <a href="http://www.bls.gov/ooh/math/mathematicians.htm#tab-2" rel="nofollow">agrees</a>,</p>
<blockquote class="large">
<p>Most people with a degree in mathematics or who develop mathematical theories and models are not formally known as mathematicians.</p>
</blockquote>
<p>So the question is: what counts as a mathematician? Or better, since nobody seems to have asked it: what <em>should</em> count as a mathematician? I’ll work up to my answer with a few examples. I want to preface them with a large and bold claim: I am <em>not</em> making a value judgment about mathematicians being good or other jobs being bad. I like being a mathematician, but I don’t consider people in mathematical fields (whom I don’t consider mathematicians) to be lesser in any way. I am simply trying to come up with a well-defined (if somewhat informal) classification rule that aligns with my idea of what a mathematician is. So here it goes.</p>
<h1 id="examples-and-a-definition_1">
<a class="head_anchor" href="#examples-and-a-definition_1" rel="nofollow"> </a>Examples and a Definition</h1>
<p>I don’t consider an actuary to be a mathematician (and I’m glad to see that CareerCast appears to agree). Actuaries certainly need to know and use a lot of mathematics in what they do; they manage risk and risk is naturally mathematical. But they <em>use</em> mathematics as opposed to <em>doing</em> mathematics. </p>
<p>In the same vein, many data scientists are not mathematicians. Why? Because despite their analytical skills and statistical know-how, data scientists largely apply known statistical and machine learning models as black boxes to their data sets. This of course depends on the scientist, and there are many critics of <a href="http://mathbabe.org/2012/07/31/statisticians-arent-the-problem-for-data-science-the-real-problem-is-too-many-posers/" rel="nofollow">posers in the land of data science</a>. As Cathy O'Neil puts it: </p>
<blockquote class="large">
<p>My basic mathematical complaint is that it’s not enough to just know how to run a black box algorithm. You actually need to know how and why it works, so that when it doesn’t work, you can adjust.</p>
</blockquote>
<p>I would take this even further: a data scientist should not be considered a mathematician unless their job requires them to significantly modify standard models and algorithms to suit their needs. Even better, they should be creating new models.</p>
<p>More generally, I can now make the following definition of a mathematician.</p>
<p><strong>Definition:</strong> A mathematician is someone who, as part of their occupation, devotes a nontrivial portion of their time to the invention of new mathematics.</p>
<p>“Inventing new mathematics” also requires a definition, and I would consider it to mean one of the following two things:</p>
<ul>
<li>Any original model, algorithm, problem, heuristic, or mathematical definition.</li>
<li>An original theorem, proof, conjecture, or analysis pertaining to one of the above.</li>
</ul>
<p>One might protest: nothing can truly be original anymore! I don’t mean “original” in the sense that something has never been done before, but that it is something <em>you</em> have never seen or done before. You cannot be called a mathematician if your “new model” is <em>knowingly</em> (by you) and trivially derivative of someone else’s work, nor merely because you use or implement someone else’s algorithm. You can be called a mathematician if you prove the correctness or efficiency of an algorithm, regardless of whether the algorithm is widely known, considered interesting or important, or even whether an identical analysis was done in the ’70s. All that matters is whether you’re the one producing original mathematics.</p>
<p>This invalidates the typical proxy measures: you are not a mathematician just by virtue of having a mathematical publication, nor are you <em>not</em> a mathematician if you have no publications. But if you publish regularly in mathematical journals or conferences, then you are a mathematician.</p>
<p>As a thought experiment to illustrate my point, say you were being paid, as an occupation, to reinvent basic geometry while cut off from contact with the outside world. You might spend much of your time puzzling over trivial facts taught in high schools every day, and you might come up with definitions and theorems that are wholly <em>worse</em> than Euclid’s. But you are still a mathematician.</p>
<p>The finer point is that <em>doing</em> mathematics is equivalent to <em>inventing</em> mathematics. That’s why I think the classification “mathematical technician” is such a wonderful category: it gives a name to the people who apply standard techniques to solve problems that don’t require new mathematics. They are the engineers, financial analysts, and operations researchers using satisfiability-solvers to optimize their chip designs and Black-Scholes to price their options.</p>
<p>A perhaps displeasing consequence for mathematicians hoping to keep the #1 spot on that list is that this definition declares graduate students in mathematics to be mathematicians. And I would argue this is rightly so: the purpose of a PhD program is to induct one into the research community as a peer. So the second you start trying to tackle original problems is when you don the title of mathematician. (Full disclosure: I am a PhD student in mathematics.)</p>
<p>The unfortunate part is that the median salary of a graduate student is quite low. Many, including myself, make roughly the federal minimum wage. Considering that mathematics PhDs take 4–6 years, that drop-out rates are nontrivial, and that career switches are common (many PhDs end up teaching without inventing any new mathematics), it seems deceitful not to factor that into the analysis.</p>