The following is an excerpt from Andrew Ferguson’s 2017 book, The Rise of Big Data Policing, and has been reprinted with his permission. Ferguson is a law professor at the University of the District of Columbia’s David A. Clarke School of Law.
The rise of big data policing rests in part on the belief that data-based decisions can be more objective, fair, and accurate than those of traditional policing.
Data is data and thus, the thinking goes, not subject to the same subjective errors as human decision-making. But in truth, algorithms encode both error and bias. As David Vladeck, the former director of the Bureau of Consumer Protection at the Federal Trade Commission (and thus the official responsible for much of the law surrounding big data consumer protection), once warned, “Algorithms may also be imperfect decisional tools. Algorithms themselves are designed by humans, leaving open the possibility that unrecognized human bias may taint the process. And algorithms are no better than the data they process, and we know that much of that data may be unreliable, outdated, or reflect bias.”