At first glance, machine learning and statistics seem very similar, but the difference between the two disciplines is often understated. Machine learning and statistics share the same goal: both focus on modeling data. Their methods, however, are shaped by their cultural differences. To enable collaboration and knowledge creation, it is important to understand the fundamental differences reflected in the cultural profiles of the two disciplines. To gain a deeper understanding of these differences, we need to step back and look at their historical roots.
A Brief History of Machine Learning and Statistics
In 1946, ENIAC, one of the first electronic general-purpose computers, was unveiled with the vision of transforming numerical computation from a manual, pencil-and-paper exercise into a task for machines. The underlying idea at the time was that human thinking and learning could be replicated in logical form inside a machine.
In 1950, Alan Turing, widely considered the father of artificial intelligence (AI), proposed a test to measure the extent to which a machine can learn and behave like a human. Later that decade, Frank Rosenblatt invented the perceptron at the Cornell Aeronautical Laboratory. The idea behind this revolutionary invention was that a perceptron is essentially a linear classifier. Rosenblatt showed that by combining a large number of perceptrons, we can create a powerful network model: what we now know as a neural network.
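To make Rosenblatt's idea concrete, here is a minimal perceptron sketch in Python (an illustration in modern notation, not Rosenblatt's original formulation; the function name and toy data are ours): a linear classifier that nudges its weights toward every example it misclassifies.

```python
import numpy as np

def train_perceptron(X, y, epochs=10, lr=1.0):
    """Perceptron learning rule. X: (n, d) features; y: labels in {-1, +1}."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) <= 0:  # misclassified (or on the boundary)
                w += lr * yi * xi              # shift the separating line toward xi
                b += lr * yi
    return w, b

# Toy example: the linearly separable AND pattern.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))  # [-1. -1. -1.  1.]
```

A single perceptron can only draw a straight (linear) decision boundary; Rosenblatt's insight was that layering many of them yields far more expressive models.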
The study of machine learning grew out of the efforts of computer engineers exploring whether computers could learn and mimic the human brain. Today, machine learning plays a crucial role in the discovery of knowledge from data, with an enormous number of applications.
The field of statistics began around the second half of the seventeenth century. The idea behind the development of the discipline was to measure uncertainty in experimental and observational science, laying the groundwork for probability theory. From its outset, statistics was meant to provide tools not only to "describe" phenomena but, more importantly, to "explain" them.
Interestingly enough, beer has had a large influence on the development of statistics. One of the foundational concepts of the field, the t-statistic, was introduced by a chemist, William Sealy Gosset (publishing under the pseudonym "Student"), as a way to account for differences between batches at the Guinness brewery in Dublin, Ireland. This and other concepts led to the development of a structured mathematical theory, with well-defined definitions and principles. Statistics gave humans instruments that extend their powers of observation, permutation, prediction, and sampling.
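For context, here is the formula behind that concept (a standard textbook statement, not reproduced from the article). Gosset's one-sample t-statistic measures how far a sample mean falls from a hypothesized population mean, in units of estimated standard error:

```latex
% One-sample t-statistic:
%   \bar{x} : sample mean        \mu_0 : hypothesized population mean
%   s       : sample std. dev.   n     : sample size
t = \frac{\bar{x} - \mu_0}{s / \sqrt{n}}
```

When the data are approximately normal, t follows Student's t-distribution with n - 1 degrees of freedom, which is precisely what lets a brewer judge whether two batches genuinely differ or merely vary by chance.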
The Difference is Cultural
Capturing real-world phenomena is an exercise in dealing with uncertainty. To do so, statisticians must understand the underlying distribution of the population under study and estimate parameters that provide predictive power. The goal for a statistician is to predict an interaction between variables with some degree of certainty (we are never 100% certain about anything). Machine learners, on the other hand, want to build algorithms that predict, classify, and cluster as accurately as possible. They make far fewer explicit assumptions about the underlying distribution, instead letting their models learn from the data, iterating continuously to improve predictive accuracy.
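As a rough illustration of this contrast, here is a minimal Python sketch (a toy example of ours, not from the article) that approaches the same simple regression problem both ways: the statistician quantifies uncertainty about a parameter, while the machine learner scores predictions on held-out data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 3.0 * x + rng.normal(scale=1.0, size=n)  # true slope is 3

# Statistician: estimate the slope and quantify uncertainty about it.
x_bar, y_bar = x.mean(), y.mean()
beta = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
alpha = y_bar - beta * x_bar
residuals = y - (alpha + beta * x)
sigma2 = np.sum(residuals ** 2) / (n - 2)              # unbiased error variance
se_beta = np.sqrt(sigma2 / np.sum((x - x_bar) ** 2))
lo, hi = beta - 1.96 * se_beta, beta + 1.96 * se_beta  # normal approximation
print(f"slope = {beta:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")

# Machine learner: maximize predictive accuracy on data the model never saw.
idx = rng.permutation(n)
train, test = idx[:150], idx[150:]
coeffs = np.polyfit(x[train], y[train], deg=1)         # fit on the training split only
pred = np.polyval(coeffs, x[test])
print(f"held-out MSE = {np.mean((y[test] - pred) ** 2):.3f}")
```

Same data, two different questions: "how confident are we about this parameter?" versus "how well does the model predict on unseen data?"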
Here's a snapshot that captures the cultural differences between machine learners' and statisticians' approaches:
So why should we care?
Better, More Informed Decisions
A thorough understanding of the differences in culture and jargon between the two disciplines results in more productive communication. Better communication leads to better collaboration, which in turn improves decision-making across teams.
Many times, professionals in statistics or machine learning make assumptions about how others might approach a problem. Peter Norvig, director of research at Google, once told a story that’s a great example of how this can backfire.
Norvig teamed up with a Stanford statistician to prove that statisticians, data scientists, and mathematicians think the same way. They hypothesized that if they all received the same dataset, worked on it independently, and then came back together, they would find they had used the same techniques. So they got a very large dataset and shared it among themselves.
Norvig used the whole dataset and built a complex predictive model. The statistician took a 1% sample of the dataset, discarded the rest, and showed that the data met certain assumptions.
The mathematician, believe it or not, didn’t even look at the dataset. Rather, he proved the characteristics of various formulas that could (in theory) be applied to the data.
Instead of showing that people in these fields work the same way, Norvig’s experiment demonstrated that communication is essential if people in these disciplines want to tackle tough problems together.
Narrowing the Gap
Understanding one's interlocutor and their cultural background enables machine learners and statisticians to expand their knowledge and even apply methods outside their domain of expertise. This is the very notion of "data science," which aims to bridge the gap. Collaboration and communication between these two fascinating data-driven disciplines, machine learning and statistics, allow us to make better decisions that will ultimately have a positive effect on the way we live.
About the Authors:
Nir Kaldero is the Director of Data Science and the Head of Galvanize Experts at Galvanize, Inc. Nir also serves on the faculty of the Master of Science in Data Science program, powered by the University of New Haven.
Dr. Donatella Taurasi is a lecturer and scholar at the Haas School of Business and the Fung Institute for Engineering Leadership at Berkeley, and at Hult International Business School in San Francisco.