## Jan 27, 2010

### Algorithms vs. Machine Learning: Cultural Commentary

In a recent talk, I included a slide with the following:

**Algorithms and Theory: A research HOWTO**

1. Determine desired theorem statement
2. Design an associated algorithm
3. Prove that algorithm satisfies the claims of the theorem

**Machine Learning: A research HOWTO**

1. Determine desired algorithm
2. Design associated theorem statement
3. Prove that theorem holds for given algorithm

This (slightly tongue-in-cheek) characterization of the ML and Algorithms research cultures does have a bit of truth to it. If you ask a typical Algorithms person at a conference for a quick summary of their result, they'll tell you "we have a $2 + \epsilon$ approximation to the foobar problem". Similarly, if you ask someone at NIPS what their paper is about, they'll say "we have a faster way to do belief propagation for certain graphs" or "we have a modified version of boosting for sparse data". In other words, the former emphasizes the new bound or result, whereas the latter emphasizes the technique.
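To make the "we have a $c$-approximation" style of claim concrete, here is a toy sketch (my own illustration, not from the talk) of the classic maximal-matching 2-approximation for minimum vertex cover. The point is that the headline result is the provable factor-2 bound; the procedure itself is almost trivial.

```python
# Classic 2-approximation for minimum vertex cover.
# For each edge with neither endpoint covered, add BOTH endpoints.
# The chosen edges form a matching, and any optimal cover must use
# at least one endpoint of each matched edge, giving the factor-2 bound.
def vertex_cover_2approx(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Example: the path 0-1-2-3. The optimum cover {1, 2} has size 2,
# and the algorithm returns a cover of size at most 4.
edges = [(0, 1), (1, 2), (2, 3)]
cover = vertex_cover_2approx(edges)
assert all(u in cover or v in cover for u, v in edges)
```

An Algorithms-culture summary of this would lead with the bound ("a 2-approximation for vertex cover"), not with the loop above.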

This has implications for both sides. If you want to know about, say, the known results on approximations for the "sparsest cut problem", an informed approx-algs researcher can quickly give you the known lower bounds, upper bounds, and open problems. But if you ask about the method that achieves the upper bound, you will likely be sent to a 50+ page technical paper describing the details. Often, results with only modest improvements on previous work use vastly different methods.

Machine Learning, on the other hand, suffers from quite a different problem. If you ask an ML researcher how to do binary classification, they can give you an array of simple and intuitive algorithmic frameworks. But the theoretical results motivating these algorithms are often scattered and incomparable (think Boosting vs. SVM).

So, who should be more like the other? What do y'all think?