Do Not Feed The Troll

Back in 2015, New Zealand passed the Harmful Digital Communications Act to address the ways people use technology to harm others. It was a clear legislative effort to give the victims of the most egregious online abuse some redress for the trauma caused by online bullying, harassment, revenge porn and other forms of intimidation, and to act as a deterrent against such victimising online activity. A civil amendment to the Act was introduced in November 2016. Netsafe.org (a non-profit online safety organisation in New Zealand) welcomed the new legislation, and provides much-needed support to those who have been harmed by these types of abuse. And, according to a Stuff.co.nz article published in April of this year, police have confirmed that 154 people have been charged under the Act, with 22 prosecutions so far. Seven of those prosecutions resulted in imprisonment. I’m sure there have been more since.

The Act, however, is not without controversy. A “News and Communications” post on the New Zealand Law Society’s webpage, dated 6th April 2017, reveals that the bar for prosecution is set quite high. Harm, as defined in the Act, is “serious emotional distress”. This can be a tricky thing to prove, it seems: people have different thresholds of emotional distress, or express distress in diverse ways, and some of those ways may be quite hidden to the casual observer. As Luke Cunningham Clere partner Sally Carter told lawyers at the “Applying Cyber Law to the Real World” conference held in Wellington earlier this year, “the person who posted the digital communication must have intended that it cause harm to the victim, that posting it would cause harm to an ordinary reasonable person in the position of the victim and that the posting of the digital communication did in fact cause harm to the victim”. Which brings me to the lower end of online abuse: trolling.

While trolling can certainly include behaviours covered by the Act, I have chosen to exclude those behaviours, because the worst cases cross the line from online mischief-making into targeted criminal harassment. For the purposes of this article, I feel this is an important distinction. Which brings me back to the acronym of our times: DNFTT.

In Scandinavian folklore, a troll was a dwarf or giant inhabiting caves or hills. As a transitive verb, “troll” referred to pulling a lure through water to catch fish, or to prowling nightclubs, flea markets, and the like. The online troll combines both these definitions and makes something new of them. According to the online Urban Dictionary, a troll is slang for “someone who posts a deliberately provocative message to a newsgroup or message board with the intention of causing maximum disruption and argument”. In other words, mischief-making. The Urban Dictionary’s definition, to my mind, is what separates trolling from the kinds of criminal harassment that are covered by the Harmful Digital Communications Act, and I’ll even go so far as to say that trolls can be the catalysts for productive communication in the digital public sphere. There is some support for this view, and there are arguments to be made that challenge it, but let’s start with the support.

Dr Anthony McCosker, a senior lecturer in Media and Communication Studies at Swinburne University in Melbourne, Australia, has published research in Convergence: The International Journal of Research into New Media Technologies (2014, Volume 20) titled “Trolling as provocation: YouTube’s agonistic publics”. Dr McCosker explored the concept of “digital citizenship, as it is concerned with ethical behaviour in online environments and takes aim at problematic or aberrant forms of participation”. His research came to some surprising conclusions.

Using YouTube as the platform for his research – largely because it is a “highly unstable and dynamic entity evolving constantly through iterations of interface, structures, rules, norms and cultures of use”, and because it is a global platform serving millions of producers and consumers of both commercial and “vernacular and user-generated” video content – Dr McCosker focuses on two specific events that occurred in New Zealand in 2011. One event is traumatic (the Christchurch earthquake of February 2011), and the other celebratory (a flash mob performance of the haka in the lead-up to the Rugby World Cup in September 2011). Both videos received international attention and, as a result, a great many comments. These ranged from sympathy and grief in the Christchurch video to cultural pride in the haka video. And both, of course, received comments that were vitriolic, bigoted, cruel, and aggressive.

Given that some of the comments are far too coarse for a news publication such as this, it would be reasonable to ask how this spiteful, hateful online activity could in any way be productive. Yet, productive it was. The negative comments facilitated the forming of what the German sociologist and philosopher Jürgen Habermas called “a public sphere”, which he defines as a “virtual or imaginary community which does not necessarily exist in any identifiable space”. Each of the videos effectively became a “new public” within the greater public sphere, and the provocative mischief-makers and the outright “haters” prompted many commenters, over time, to engage with each other more in the formation of community: a positive outcome from negative actions.

In a completely different online public sphere, the American author and former Jezebel columnist, Lindy West, wrote an article in 2013 about Internet trolling, particularly regarding adversarial comments on articles about such topics as feminism and rape jokes. She also described a situation that went far beyond the merely adversarial, and crossed the line into the type of criminal harassment that would be prosecutable under our own Harmful Digital Communications Act, had the activity occurred within New Zealand.

An anonymous Twitter user had gone to the rather extreme trouble of creating a fake account using her deceased father’s name. Disturbingly, the creator of this fake account had done close, personal research on Lindy: he used her father’s photograph and a bio that read, “Embarrassed father of an idiot; the other two kids are fine.”, and he actively trolled her, sending her upsetting and frightening tweets. In her 2013 article for Jezebel, she described the effect this had on her. She wrote candidly and angrily about how much this troll had hurt her. What happened next was unexpected. She received an email from her offender, in which he expressed great remorse for what he had done to her.
“I can’t apologize enough,” the email read. “It finally hit me: there’s a living, breathing human being who’s reading this shit. I’m attacking someone who never attacked me.” The email also included an attachment confirming that the troll had donated $50 in West’s name to the Seattle Cancer Care Alliance, where her father had received care. Several days later, she received a second email that included the man’s real name and contact information. Lindy called him on the phone, and a difficult but fruitful discussion was had, resulting in the two effectively becoming friends and sharing in mutual sympathy.

Admittedly, Lindy’s is an unusual case. In many instances of trolling, the usually anonymous internet participant is exhibiting what John Suler, Professor of Psychology at Rider University in New Jersey, USA, calls “the online disinhibition effect”. Professor Suler explores six factors that interact with each other to explain why some people in an online space behave badly more frequently or intensely than they would in person: “dissociative anonymity, invisibility, asynchronicity, solipsistic introjection, dissociative imagination, and minimization of authority”. Professor Suler also recognises that personality variables will influence the extent of this disinhibition, and that online behaviour is not a representation of any “true self”. The online persona and the offline persona can be quite different arrangements of “self-structure” that bear little resemblance to each other.

Lindy’s troll, and those who posted the most inflammatory comments on the earthquake and haka YouTube videos, were likely affected by online disinhibition. Their behaviour, despite their best efforts at disruption and hurt, nonetheless resulted in community and meaning-making for the participants of those communities. Theirs was a messy, inarticulate, careless, and frustratingly adversarial engagement with digital citizenship and digital democracy. They caused arguments, they disrupted, they “culture jammed”, and they were often extremely uncomfortable to read. But, despite this, they brought people together in a community response, a “push back”, that had a positive dialogical effect overall.

With community and trolling in mind, there are places in cyberspace that exist expressly for trolls. 4chan is the most infamous, having been active since 2003. That’s a long time in internet years, and the site receives approximately 700,000 posts per day from roughly seven million daily visitors. It is split into many discrete boards within the wider 4chan community, and is said to have been the origin of the cyber-vigilante and hacktivist group Anonymous.

In March of 2008, The Guardian described 4chan as “a message-board whose lunatic, juvenile community is at once brilliant, ridiculous and alarming.” Its users have been instrumental in all manner of online pranks (some so outrageous that they succeeded in trolling mainstream media into moral panics), the creation of viral memes, and a mad variety of online activity that spans the teasingly amusing to the darkly subversive. 4chan is certainly no place for the faint of heart. If you consider your sensitivities to be in the slightest bit delicate, my advice would be to NOT go there.

In conclusion, trolling can encompass a wide range of online activity, depending on both the intent and the reception of the delivered messages. It is a fuzzy word with no firm scholarly agreement on any clear definition. One user’s hateful troll may be another user’s barrel of laughs, and they may just prank you. They may wish to elicit a specific response, but the response you give – or don’t give – is your choice. There are few public spaces on the internet that don’t have trolls, and it’s up to you whether to engage with the troll, provide support to those you may see as the troll’s “victims”, or simply scroll on by when encountering their comments. It’s the internet: a vast, globally networked public with all the cross-cultural and subcultural chaos that can entail. If you choose to read the comments sections, you may be well advised to DNFTT: Do Not Feed The Troll.