There is currently a debate in social epistemology at large about the best methods to extract knowledge from a group. In particular, on digital platforms, a question arises as to whether it is better to use aggregating tools, such as ranking systems, or discussion boards. One camp, which includes philosophers Jürgen Habermas and Helen Longino, and political theorist James Fishkin, advocates group deliberation. Proponents of deliberation argue that it can expose and weed out deliberators’ subjective biases, allowing a group to rationally converge on the better argument. The other camp advocates aggregation of group members’ discrete contributions by an algorithmic procedure. This camp includes philosopher Miriam Solomon, legal scholar Cass Sunstein, and public intellectual James Surowiecki, author of the bestseller The Wisdom of Crowds (2004). Proponents of aggregation are highly critical of deliberation, which they regard as a poor method for enhancing a group’s epistemic performance. Both sides cite empirical studies from social psychology that allegedly vindicate their position.
I argue that the question we should ask is not which method – deliberation or aggregation – is categorically better, but which method works best for which problems and under which circumstances. Drawing on Daston and Galison’s taxonomy of different forms of objectivity, I identify deliberation with “trained judgment” and aggregation with “mechanical objectivity”. At its best, trained judgment produces outstanding ideal types and reliably separates the wheat from the chaff, but it is prone to the influence of both prevailing and idiosyncratic biases. At its best, mechanical objectivity produces accurate answers to targeted questions, but it is prone to errors resulting from defective or unrepresentative data, or from dogmatic or thoughtless implementation of procedures. Thus, two central questions we should ask when deciding between deliberation and aggregation are which errors are more likely to occur in the case at hand, and which errors we care more about preventing. Bearing these questions in mind, I revisit the empirical studies of group epistemic performance and tentatively lay out principles for best employing both methods.