When reading machine learning papers, I couldn’t help but notice that I kept seeing the same structures over and over again. Some structures show up at certain conferences more often than at others. So, I decided to catalog a few of them.
This is the paper with the simple algorithm. You get this when you’ve proved a nicer bound, or when the main contribution of your work is a faster convergence rate.
Note that even if the algorithm is trivial, you will still see an experiments section. As near as I can tell, this is just because reviewers want to know you didn’t do the entire paper on a whiteboard.
This paper is for those projects that are more software engineering than new algorithms. Open-source libraries will have papers with this structure. Expect something in here about how many contributors or lines of code the software has.
The primary thing this paper advertises is how easy it is to use a particular piece of software.
This is really a systems paper when you see it show up at a machine learning conference. PL papers that tend to be more theoretical will just go to POPL or ICFP. This comes up a lot for probabilistic programming.
This happens when you’ve made a really complicated algorithm that solves a particular problem, often one specific to a single domain. You’ll see this in natural language processing and computer vision papers. Theory is given little space in these papers.
Note, this isn’t a systems paper. APIs aren’t discussed, and you won’t see many diagrams of the system’s components. The authors had a concrete problem, and this was how they solved it.
Think I missed a style? Tell me and I’ll add it.