
Is “Big Data” racist? Why policing by data isn’t necessarily objective

Modeled after London’s “Ring of Steel,” the NYPD opened its coordination center in 2008. As seen in 2010, cops monitor feeds from over 1,159 CCTV cameras, with the number increasing to 3,000 as the program expands. (credit: Timothy Fadek/Corbis via Getty Images)

The following is an excerpt from Andrew Ferguson’s 2017 book, The Rise of Big Data Policing, and is reprinted with his permission. Ferguson is a law professor at the University of the District of Columbia’s David A. Clarke School of Law.

The rise of big data policing rests in part on the belief that data-based decisions can be more objective, fair, and accurate than traditional policing.

Data is data and thus, the thinking goes, not subject to the same subjective errors as human decision-making. But in truth, algorithms encode both error and bias. As David Vladeck, the former director of the Bureau of Consumer Protection at the Federal Trade Commission (who was, thus, in charge of much of the law surrounding big data consumer protection), once warned, “Algorithms may also be imperfect decisional tools. Algorithms themselves are designed by humans, leaving open the possibility that unrecognized human bias may taint the process. And algorithms are no better than the data they process, and we know that much of that data may be unreliable, outdated, or reflect bias.”


