Big data is increasingly viewed as a strategic asset that can transform organizations through powerful predictive technologies.
But when it comes to systems that help make such decisions, the methods applied may not always be fair and just, according to a panel of social researchers who study the impact of big data on the public and on society.
The event, organized recently by New York University's Politics Society and Students for Criminal Justice Reform, centered on issues arising from big data's use in machine learning and data mining to drive public- and private-sector executive decisions.
The panel, which included a mix of policy researchers, technologists, and journalists, discussed ways in which big data, while enhancing our ability to make evidence-based decisions, may inadvertently set rules and processes that are inherently biased and discriminatory.
The rules, in this case, are algorithms, a set of mathematical procedures coded to achieve a particular goal. Critics argue these algorithms may perpetuate biases and reinforce built-in assumptions.
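To make the critics' point concrete, here is a minimal, hypothetical sketch of how an algorithm trained on biased historical decisions can reproduce that bias. All data, names, and thresholds below are invented for illustration; the "learning" is deliberately simplistic (per-group approval rates), but the dynamic it shows, past skew becoming a future rule, is the one the panel described.

```python
# Hypothetical illustration: a toy decision algorithm that learns from
# biased historical outcomes and then encodes that bias as its rule.
# All data here is invented for the example.

from collections import defaultdict

# Historical decisions: (neighborhood, approved). Past approvals were
# skewed against neighborhood "B" for reasons unrelated to merit.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def learn_approval_rates(records):
    """Compute each neighborhood's historical approval rate."""
    counts = defaultdict(lambda: [0, 0])  # neighborhood -> [approved, total]
    for neighborhood, approved in records:
        counts[neighborhood][0] += int(approved)
        counts[neighborhood][1] += 1
    return {n: approved / total for n, (approved, total) in counts.items()}

def predict(neighborhood, rates, threshold=0.5):
    """Approve only if the learned historical rate clears the threshold."""
    return rates.get(neighborhood, 0.0) >= threshold

rates = learn_approval_rates(history)
print(rates)                 # {'A': 0.75, 'B': 0.25}
print(predict("A", rates))   # True
print(predict("B", rates))   # False: same applicant profile, different outcome
```

Nothing in the code mentions race, income, or any protected attribute; the neighborhood acts as a proxy, and the historical skew alone is enough to produce systematically different outcomes for otherwise identical applicants.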