Increasing the Transaction Costs of Harassment
Woodrow Hartzog* & Evan Selinger†
Online Symposium: Danielle Keats Citron’s Hate Crimes in Cyberspace
95 B.U. L. Rev. Annex 47 (2015)
Wouldn’t it be nice if the rules, agreements, and guidelines designed to prevent online harassment were sufficient to curb improper behavior? As if. Wrongdoers are not always so easily deterred. Sometimes these approaches are about as effective as attacking tanks with toothpicks.
As Danielle Citron contends in her critically important work, Hate Crimes in Cyberspace, the design of the Internet facilitates vitriol and abuse, even when it is legally, contractually, and normatively prohibited. Communicating almost effortlessly at a distance—sometimes anonymously and typically with minimized body language—can heighten emotional detachment and blunt moral sensitivity. Tragically, when a mediated environment makes it easy to harass others, harassment occurs, all else being equal.
Fortunately, there’s hope. Since mediated environments can fuel harassment, designing online spaces to make harassment difficult—or, in economic terms, costly—should diminish it. But as with anything important, the devil is in the details.
Citron attests to the wisdom of the “Designing for Better Selves” approach by arguing that smart design choices can “nudge users to treat others as deserving respect rather than as objects that can be mistreated.” She also maintains that online intermediaries—such as website and app designers—should adopt conscientious design principles and strategies along with “clear policies prohibiting cyber harassment” and “robust enforcement” of them (239-240).1
To advance the conversation Citron started, it is worth exploring what this strategy entails. Citron’s vision for leveraging design to fight harassment needs a locus, and we propose transaction costs.
Taking Transaction Costs Seriously to Prevent Abuse
In economic theory, transaction costs refer to a range of expenses required to participate in market exchanges.2 But the concept can be expanded to cover the expense required to do anything. For example, time and effort are valuable resources, and we often evaluate how desirable a possibility is by calculating how much of these resources it requires.
Consider how companies limit the number of complaints they need to attend to by hiding contact information or only providing consumers with limited contact information, such as street mailing addresses and P.O. box numbers.3 While an e-mail demanding a refund is quick to compose and easy to send, the effort required to produce a comparable physical letter—write it on (or print it to) paper, put it in an envelope, address the envelope, get a stamp, affix the stamp, and put the envelope in a mailbox—can be an effective deterrent. This is especially the case if you suspect that the note will be ignored the first time around and you will eventually have to go through the laborious process again. By comparison, it is much easier to forward an archived e-mail missive.
Citron validates a comparable strategy for using time and effort as deterrents by citing Professor Nancy Kim’s nudge proposal: “Companies could nudge users to think about others’ humanity by slowing down the posting process in the hopes that posters would use the time to think more carefully about what they say” (240-41).4
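To make the mechanics of such a slow-down nudge concrete, here is a minimal sketch of what a cooling-off step in a posting pipeline might look like. It is illustrative only: the delay length, function names, and confirmation callback are our own assumptions, not any platform’s actual implementation.

```python
import time

COOLING_OFF_SECONDS = 30  # hypothetical delay before a post goes live


def submit_post(text, confirm):
    """Hold a draft briefly, then ask the author to reaffirm before publishing.

    `confirm` is a callback that asks the author whether to proceed;
    returning False withdraws the post at no further cost.
    """
    print("Your post will go live shortly. Take a moment to re-read it.")
    time.sleep(COOLING_OFF_SECONDS)   # the added transaction cost
    if confirm(text):                 # the author reaffirms the decision
        return {"status": "published", "text": text}
    return {"status": "withdrawn", "text": text}


# Example: publish only if the author types "yes" after the delay.
# submit_post("draft message", lambda t: input("Still want to post? (yes/no) ") == "yes")
```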
We believe online abuse can be mitigated by a range of related strategies that manipulate transaction costs. In the remaining sections, we emphasize three key areas companies have targeted: speech, access, and defense.
The Cost of Speech
Online harassment usually requires communication, and so the most direct way to limit it is to make harmful speech costly. For example, social media messaging systems can be restricted to designated users. Twitter users can limit private messages to their “followers” and Facebook users can do the same with their “friends.” Additionally, nearly every social platform allows users to block others. Blocking restrictions affect behavior by forcing those who are blocked either to comply with the constraints or to expend the effort required to work around them (e.g., through deception) or override them (e.g., through hacking).
More modest interventions can effectively nudge civility, too. Content filters on the anonymous social media app Yik Yak aim to prevent users from posting someone’s full name. This intervention makes it harder for abusers to locate and learn about potential targets.
Yik Yak also targets potentially problematic content by prompting users (under triggering circumstances) with the following message: “Pump the brakes, this yak may contain threatening language. Now it’s probably nothing and you’re probably an awesome person but just know that Yik Yak and law enforcement take threats seriously. So you tell us, is this yak cool to post?” This notification requires would-be offenders to spend additional time and effort processing and responding to the warning, and thus increases the cost of speaking. Is it a foolproof plan? Of course not. But it just might let cooler heads prevail over temporarily heated emotional reactions.
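Conceptually, the intervention interposes an extra step between drafting and publishing whenever flagged language appears. The sketch below is a toy illustration under our own assumptions (a crude keyword list and hypothetical function names); Yik Yak’s actual filter is undoubtedly more sophisticated.

```python
# Illustrative only: a toy keyword screen, not Yik Yak's real filter.
THREAT_TERMS = {"kill", "hurt", "shoot"}  # hypothetical trigger list


def screen_post(text, confirm):
    """Interpose a warning (an extra step) when a draft contains flagged terms."""
    words = {w.strip(".,!?'").lower() for w in text.split()}
    if words & THREAT_TERMS:
        warning = ("Pump the brakes, this post may contain threatening language. "
                   "Is it cool to post?")
        if not confirm(warning):   # the extra confirmation is the added cost
            return None            # the author backs out
    return text                    # otherwise the post proceeds unchanged


# screen_post("I will hurt you", lambda msg: input(msg + " (y/n) ") == "y")
```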
The Cost of Access
Blocking features can make it harder for abusers to access their intended victims’ information. Users can be blocked at various levels, ranging from being unable to share, tag, and upvote another user’s posts, to being entirely unable to access any content associated with another user’s profile.
The efficacy of, and desire for, strong blocking restrictions became explicit in 2013, when Twitter briefly altered its policy. For a limited time, blocked users could follow, retweet, and favorite a public user who had blocked them, and blockees were no longer notified when someone decided they merited blocked status.5 Harassment victims responded swiftly and loudly. As a result, Twitter reversed course.
Harassment can even be deterred by making information unsearchable, which raises the transaction costs of finding it. In previous work, we argued that when information is hard to find, it is relatively safe.6 Indeed, this logic underlies Europe’s so-called “Right to Be Forgotten”—which is, in reality, a right to hide from search engines.7 It is also why harassment and abuse victims worry when Facebook makes all profiles searchable regardless of privacy settings.8
The Cost of Defending
Transaction costs are also important for defending against harassment. Unlike the previous two examples, however, the goal here is to facilitate action. For example, it should be easy to report abuse to social media administrators. Recognizing this, most popular social media platforms place a report button in close proximity to users’ posts. Yik Yak even implemented a voting system that immediately removes any post receiving five “downvotes.” Crucially, all users can easily downvote a post without jumping through bureaucratic hoops (like satisfying registration requirements) or extra clicks.
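The logic of such a community-removal rule is simple enough to sketch. The code below is illustrative only; apart from the five-downvote cutoff described above, the names and structure are our own assumptions rather than Yik Yak’s implementation.

```python
DOWNVOTE_THRESHOLD = 5  # the cutoff described above; the rest is illustrative


class Post:
    def __init__(self, text):
        self.text = text
        self.downvotes = 0
        self.visible = True

    def downvote(self):
        """One tap, no forms or registration hoops: a deliberately cheap defense."""
        self.downvotes += 1
        if self.downvotes >= DOWNVOTE_THRESHOLD:
            self.visible = False  # removed automatically once the threshold is hit


# post = Post("abusive message")
# for _ in range(5):
#     post.downvote()
# assert post.visible is False
```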
By contrast, systems designed to make abuse difficult to report show little respect for users. Professor Mary Anne Franks thus criticized Twitter’s previous abuse-reporting system, alleging that it “set up a game that targets of abuse can never win.”9 Under Twitter’s old policy, harassment complaints had to be submitted on forms that took far more time to complete than reporting spam, which required only the click of a readily available button.10
***
In Hate Crimes in Cyberspace Citron makes it clear that harassment cannot be eradicated solely by policies that penalize bad behavior. Systems and technologies must be designed to protect against it as well. To this end, we have argued that transaction costs associated with speech, access, and defense are the right starting points for determining how design can play a critical role in improving online interactions.
Of course, these categories are fluid and imprecise. For example, barriers to access might simultaneously limit speech. And in the extreme, transaction costs might be unrecognizable as such. Consider authentication requirements for websites that limit access to users with particular backgrounds, like medical training. You could classify the years it takes to acquire such training as a transaction-cost investment. But such semantics would stretch the idea beyond useful limits.
And let’s not forget, design strategies aren’t magical techno-fixes. For example, increasing the transaction costs for communicating online can lead to unpopular opinions being censored and strongly worded convictions being watered down.11 Furthermore, designing potent large-scale civility nudges might constitute a form of techno-social engineering that adversely impacts people’s judgment and character.12
It is also important to determine whether companies committed to identifying promising new design strategies will need to run new experiments on their users to determine optimal transaction-cost levels. If so, questions arise as to whether corporate approaches to changing user experiences will be ethically sound.13
Finally, more attention needs to be given to the role of the law in encouraging or prohibiting certain kinds of design. Not all companies will embrace good design as a matter of individual discretion. And this means mandates might be preferable in some instances.
It is hard work to construct and assess a transaction-cost framework for fighting abuse. But the finished product will be a useful tool for helping companies and policy makers focus their design efforts on effective and wise options. A mature version can give us a sense of how committed companies really are to preventing harassment and fostering civility.
* Associate Professor, Samford University’s Cumberland School of Law; Affiliate Scholar, Center for Internet and Society at Stanford Law School.
† Professor of Philosophy, Rochester Institute of Technology.
1 By supporting civil design strategies, Citron joins scholars from across the disciplines in either advocating embedding ethics and etiquette into code (or code-like structures) or discussing the ramifications of doing so. See Lawrence Lessig, Code and Other Laws of Cyberspace (1999); Peter-Paul Verbeek, Moralizing Technology (2011); Jonathan Zittrain, The Future of the Internet and How to Stop It (2008); Joel R. Reidenberg, Lex Informatica: The Formulation of Information Policy Rules through Technology, 76 Tex. L. Rev. 553 (1998); Batya Friedman, Value Sensitive Design, http://www.vsdesign.org/; Langdon Winner, Do Artifacts Have Politics?, 109 Daedalus 121 (1980); Bruno Latour, Where Are the Missing Masses? The Sociology of a Few Mundane Artifacts, in Shaping Technology/Building Society: Studies in Sociotechnical Change 225-48 (W. E. Bijker and J. Law eds., 1994); Helen Nissenbaum, Values in Technical Design, in Encyclopedia of Science, Technology and Society (C. Mitcham ed., 2005); Helen Nissenbaum, How Computer Systems Embody Values, 34 Computer 119-120 (2001); Helen Nissenbaum, Values in the Design of Computer Systems, 28 Computers and Society 38-39 (1998).
2 See, e.g., Eduardo González Fidalgo, Transaction Cost Economics, Introduction to Business, http://intobusiness.weebly.com/transaction-cost-economics.html; Douglas W. Allen, Transaction Costs (1999), http://www.sfu.ca/~allen/allentransactioncost.pdf; Alexandra Benham and Lee Benham, Chapter 11: The Costs of Exchange, in The Elgar Companion to Transaction Cost Economics (Peter G. Klein and Michael E. Sykuta eds., 2010), http://www.elgaronline.com/view/9781845427665.00018.xml.
3 Zoya Sheftalovich, Sick of companies playing hide and seek?, Choice (Sept. 5, 2014), https://www.choice.com.au/shopping/consumer-rights-and-advice/taking-action-and-making-a-complaint/articles/companies-that-are-difficult-to-contact (“With some companies making it so difficult to find their contact details, or leaving you on hold or waiting for a reply email for so long, it’s hard not to conclude they’re intentionally trying to prevent their customers from being able to contact them.”).
4 Nancy S. Kim, Web Site Proprietorship and Online Harassment, 2009 Utah L. Rev. 993 (2009), http://ssrn.com/abstract=1354466.
5 Kashmir Hill, Blocking People On Twitter Now Just Mutes Them (Update: Psych!), Forbes (Dec. 12, 2013), http://www.forbes.com/sites/kashmirhill/2013/12/12/blocking-people-on-twitter-now-just-mutes-them/.
6 Woodrow Hartzog and Evan Selinger, Obscurity: A Better Way to Think About Your Data Than ‘Privacy,’ The Atlantic (Jan. 17, 2013), http://www.theatlantic.com/technology/archive/2013/01/obscurity-a-better-way-to-think-about-your-data-than-privacy/267283/.
7 Evan Selinger and Woodrow Hartzog, Google Can’t Forget You, But It Should Make You Hard To Find, Wired (May 20, 2014), http://www.wired.com/2014/05/google-cant-forget-you-but-it-should-make-you-hard-to-find/.
8 Samantha Allen, How Facebook Exposes Domestic Violence Survivors, The Daily Beast (May 20, 2015), http://www.thedailybeast.com/articles/2015/05/20/how-facebook-exposes-domestic-violence-survivors.html; James Vincent, Facebook Tells Users They Can’t Hide From Searches, Independent (Oct. 11, 2013), http://www.independent.co.uk/life-style/gadgets-and-tech/facebook-tells-users-they-cant-hide-from-searches-8874562.html.
9 Mary Anne Franks, The Many Ways Twitter Is Bad at Responding to Abuse, The Atlantic (Aug. 14, 2014), http://www.theatlantic.com/technology/archive/2014/08/the-many-ways-twitter-is-bad-at-responding-to-abuse/376100/.
10 Id. (“Contrast this to the procedure for reporting spam. To report spam, a user must click a button that says ‘This account is spam.’ That’s it. Twitter is oddly unconcerned about false or unauthorized reports of spam: There are no questions about the user’s involvement with the alleged spam, no requirement to provide links or explain how the content qualifies as spam, no requirement of a signature, no need to fear retaliation from the reported spammer.”).
11 Evan Selinger, When Nudge Comes to Shove, Slate (July 7, 2013), http://www.slate.com/articles/health_and_science/new_scientist/2013/07/nudge_critiques_is_nudging_behavior_unethical_infantilizing_coercive_or.html.
12 See Brett Frischmann, Human-Focused Turing Tests: A Framework for Judging Nudging and Techno-Social Engineering of Human Beings, Cardozo Legal Studies Research Paper No. 441 (Sept. 22, 2014), http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2499760.
13 See Ryan Calo, Consumer Subject Review Boards: A Thought Experiment, 66 Stan. L. Rev. Online 97 (Sept. 3, 2013), http://www.stanfordlawreview.org/online/privacy-and-big-data/consumer-subject-review-boards.