CDS Faculty Awarded NSF Grant
Using Markets to Address Manipulated Information Online
Societies function badly without free speech, that is, when individuals cannot speak their minds freely, even when their speech is objectionable or false. At the same time, societies also function badly when members cannot agree on basic facts. How can a society separate honest from dishonest claims in public media without squelching free speech?
Funded by a $550k NSF award, Boston University researchers will conduct cross-disciplinary research spanning economics and data science to minimize the adverse impact of misinformation while promoting free speech. The project seeks to develop, implement, and test market mechanisms that dissuade liars from lying, decentralize the detection of false claims (making effective detection more practical and scalable), and change the economic and social incentives that currently make false claims cheaper to produce and disseminate than honest ones. The Boston University team comprises CDS Founding Faculty member Marshall Van Alstyne and Professor Nina Mazar of the Questrom School of Business, along with CDS Founding Faculty member Ran Canetti and CDS Associate Professor Mayank Varia. They are joined by David Rand of MIT and Gordon Pennycook of the University of Regina.
From an economics standpoint, the project team will use established theories of signaling and screening that allow authors to signal information about the veracity of their claims while helping readers discern which claims are honest. This puts the burden of proof on the author, and allows honest authors to use their private knowledge of supporting facts, and their reputation, to signal the merit of their claims. The proposed mechanism also uses a randomly sampled peer jury to evaluate claims, making it difficult for liars to discredit the fact-checkers. “We are designing and testing decentralized mechanisms for truth discovery so that everyone has a say and no one party – not government, not firms, and not powerful individuals – has undue influence over accuracy,” says Prof. Van Alstyne.
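To make the signaling-and-jury idea concrete, the sketch below shows a toy version of a bond-plus-jury market: an author stakes money on a claim, a randomly sampled jury judges it, and the bond is refunded only if the jury sides with the author. The names, amounts, and jury size here are hypothetical, chosen only to illustrate the screening logic; they are not the project's actual design.

```python
import random
from dataclasses import dataclass

@dataclass
class Claim:
    author: str
    text: str
    bond: float  # stake the author posts to signal confidence in the claim

def sample_jury(population: list[str], size: int, rng: random.Random) -> list[str]:
    """Draw jurors at random, so no fixed party controls the verdict."""
    return rng.sample(population, size)

def adjudicate(claim: Claim, votes: list[bool]) -> float:
    """Majority vote decides; an author judged dishonest forfeits the bond.
    Returns the amount refunded to the author."""
    judged_true = 2 * sum(votes) > len(votes)
    return claim.bond if judged_true else 0.0

# Example: an author stakes a bond on a claim; a random 5-person jury votes.
rng = random.Random(42)
readers = [f"reader_{i}" for i in range(1000)]
claim = Claim(author="alice", text="The bridge reopened on May 3.", bond=10.0)
jury = sample_jury(readers, size=5, rng=rng)
votes = [True, True, False, True, True]  # jurors' honesty judgments
print("refund to", claim.author, "->", adjudicate(claim, votes))  # 10.0
```

The screening effect comes from expected payoffs: an author who actually knows the supporting facts expects the refund and can post the bond cheaply, while a liar expects to forfeit it, making false claims more expensive to disseminate than honest ones.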
From a data science standpoint, the project connects to the CDS Hub for Civic Tech Impact and includes research into transparency, auditability, and privacy guarantees that can be enforced by technology. First, the project team will use verifiable computation so that the public can check every aspect of the marketplace mechanism: whether a story comes from a legitimate news source, whether a truthfulness decision about the content derives from the view of a panel of referees, and whether a published statement logically follows from known facts or claimed sources. “This is going to be a great testbed for deploying advanced cryptographic technology that allows quick, public, and privacy-preserving verification of the provenance of information,” says Prof. Canetti. Second, the team will design privacy-respecting systems that run the marketplace mechanisms while respecting the principles of end-to-end encryption. “Cryptographically secure computing allows people to participate in a crowdsourced vote safely, where we can hide their identity and votes in order to protect them against retaliation,” says Prof. Varia.
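As a minimal sketch of one standard building block behind such privacy-respecting voting, the toy code below uses additive secret sharing: each juror splits a yes/no vote across several tally servers so that no single server learns any individual vote, yet the servers can jointly reveal the total. The modulus, server count, and helper names are illustrative assumptions, not the project's actual protocol.

```python
import secrets

MOD = 2**61 - 1  # a large prime modulus for the shares

def share_vote(vote: int, n_servers: int) -> list[int]:
    """Split a 0/1 vote into n additive shares that sum to the vote mod MOD."""
    shares = [secrets.randbelow(MOD) for _ in range(n_servers - 1)]
    last = (vote - sum(shares)) % MOD
    return shares + [last]

def tally(all_shares: list[list[int]]) -> int:
    """Each server sums the shares it holds; combining only the per-server
    totals reveals the aggregate count, never any single vote."""
    server_totals = [sum(column) % MOD for column in zip(*all_shares)]
    return sum(server_totals) % MOD

# Example: 7 jurors vote privately across 3 non-colluding servers.
votes = [1, 0, 1, 1, 0, 1, 1]
all_shares = [share_vote(v, n_servers=3) for v in votes]
print("yes votes:", tally(all_shares))  # -> 5, with no individual vote exposed
```

A deployed system would layer verifiable computation on top, so the public could audit that the published total really is the sum of validly cast votes without learning who voted how.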
In total, this technology-aided social science project will measure (i) the extent to which actors who share higher- versus lower-quality information can be distinguished, (ii) how much false content can be reduced without chilling free speech, (iii) whether transparent, auditable, and private markets for evaluating claims can be built, and (iv) how well these changes improve discourse around elections, public health, and advertising. The results of this project could also provide a principled basis for reforms to Internet and media law concerning platform liability exemptions for user-generated content, such as the much-debated Section 230 of the Communications Decency Act.