Alt-C 2016 Keynote: In the Valley of the Trolls

 


 

In the Valley of the Trolls

Tay, for 16 hours only

Tay, Microsoft’s Artificial Intelligence bot, was launched on Twitter on 23 March 2016. Text on Tay’s official website stated:

“Tay is an artificial intelligent chat bot developed…to experiment with and conduct research on conversational understanding. Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you”.

Within 16 hours Tay had become known as a racist, conspiracy-theorising sex bot, and Microsoft took it offline.

So how did this happen? Firstly, the Microsoft account was targeted by Twitter users who fed Tay with hate speech, discrimination, conspiracy theories, and lewd text, which it then mimicked and reproduced. While Microsoft seemed to have anticipated that some specific topics would be controversial, and programmed Tay with responses to these, they didn’t seem to have considered the possibility of Tay being targeted by a wide range of inappropriate interactions – of being trolled. Microsoft had released a (mostly) filter-free curator and amplifier of the language of the users who interacted with the bot, and many users were lightning quick to understand this and use it to turn Tay into a mouthpiece for hate and obscenity.
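To see why that kind of design is so easy to poison, here is a toy sketch – my own illustration, not Microsoft’s actual system – of a bot that ‘learns’ simply by storing whatever users send it and replaying the phrase it has heard most often, with nothing in between to filter what it repeats:

```python
from collections import Counter

class ParrotBot:
    """Toy chat bot that 'learns' by curating and amplifying user input verbatim."""

    def __init__(self):
        self.heard = Counter()      # every phrase users have sent, with counts

    def chat(self, message: str) -> str:
        self.heard[message] += 1    # no filter: everything said to it is kept
        # reply by amplifying the phrase it has been sent most often
        return self.heard.most_common(1)[0][0]

bot = ParrotBot()
# a coordinated group repeating the same phrase quickly dominates the bot's memory
for _ in range(50):
    bot.chat("<some hateful phrase>")
print(bot.chat("hello!"))           # replies with the phrase it was fed, not "hello!"
```

A bot like this has no notion of what it should or shouldn’t say: whoever shouts loudest and most often decides what it says next, which is exactly the opening the trolls exploited.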

The story was quickly picked up by news sites, gleefully reporting on Microsoft’s bot becoming a Holocaust denier within hours of going live. While the account was shut down, screenshots of Tay posting grim messages went up all over the internet.

Tay is currently back up, but now the account is private. You need to be approved by Microsoft to follow the account, or access any of the tweets.

I’m telling the story of Tay here because it’s pretty representative of a range of trolling motifs – it’s practically a troll morality tale.

For the lulz

It’s not possible to say what the full range of motivations of the people involved in the Tay trolling was. We can speculate that some of them were interested in attacking Microsoft, or suspicious of the commercial motivation for personalisation. Some may have seen this as an opportunity to get discriminatory messages up and to spread misinformation.

Lulz are what drive trolls. Lulz are the cultural currency of trolls. Whitney Phillips, in her excellent book on trolling cultures (This Is Why We Can’t Have Nice Things, 2015), defines lulz as LOL transfigured through the “anguish of the laughed at victim”. Lulz are what knit together a disparate and anonymous group of people who may meet only in passing, or not at all.

Using extremism, obscenity and conspiracy theories, a corporate experiment in AI was taken down within hours, and the trolls got their handiwork reproduced and publicised globally.

This ‘gaming’ of reporters and social commentators – the manufacture of news – is a win for media outlets, which need quick-to-read outrage to increase their traffic. Trolls love to troll the media, trolls love to get their stories and memes reproduced by the media, and the media loves to promote sensationalistic and outrageous stories, even if the numbers of actual people involved are tiny, or in some cases, the story is entirely made up.

Also typical was the lack of interest, on all sides, in what was going on here – or, ‘because trolls’. ‘Because trolls’ is always a win for trolls because it means journalists are taking them at face value, are missing the joke, and have become a part of the joke.

Of course not all trolling involves hate speech, discrimination, threats, obscenity or conspiracy theories. The almost universally agreed-upon aim of trolling is to disrupt, confront, and provoke individuals and communities online, for the purpose of amusement – for the lulz.

Trolling runs from innocuous pranking (for example Rickrolling) to behaviours which challenge the general sentiments or beliefs of a group, to online harassment and bullying.

Some trolls only target other trolls.

In the vast majority of cases, trolls will make use of anonymity. They may pretend to be other actual or invented people – they might act out being sympathetic, or take entirely opposing viewpoints to their own. They might ask naive questions or swear to blatantly untrue facts in order to frustrate or make someone seem like an even bigger idiot for taking them seriously. They might provide misleading or bad advice, or purposely just talk off topic.

But understanding this isn’t a reason to be naive, to say or imply that the extremism we see in a lot of trolling is coincidental or arbitrary.

Trolls are a diverse group, whose interests, ethics and actions are not all alike. This means that while some trolls are genuinely racist, homophobic, sexist, or otherwise discriminatory, equally, there will be trolls who use hate speech and extremist views because they know that this is what will get them an outraged, offended or upset reaction. In this view, the statements being fed to the bot were inconsequential in themselves – just the weapons closest to hand. Some might even view the use of abusive language as part of the bigger game – that only idiots would agree with the sentiment being expressed. Some will frame it in terms of a characteristically insincere idea of freedom of speech – and it wasn’t surprising that soon after the takedown, the hashtag #FreeTay was used to protest against the ‘corporate lobotomisation’ and censorship of Tay.

The key problem with this kind of equivalence, which is in essence ‘one form of insincere attack is as good as another’, or ‘all groups are treated equally through hate’, is that there is no room for acknowledgement that specific social groups are already being harmed on a daily basis by discrimination. The reproduction of hate speech – whether sincere or not – adds to what is already there, helping to normalise marginalisation and cause new harm.

Tay is a safe example. Tay isn’t a person. It doesn’t have feelings, a history, personal doubts and anxieties. It isn’t sometimes tired and short tempered. It doesn’t struggle to interpret subtly codified online behaviour, or take sexist, racist, or faith targeted abuse personally.

 

Open practice – an ethical gesture

Many of us here today appreciate and have benefited from working and learning in open contexts online – whether through blogging, online courses, or through networks on social media sites. Talks from the conference are being streamed, so that people who aren’t able to be here in person can watch online. People in the room, people viewing at a distance, and others not viewing are using the conference hashtag on Twitter to participate. The video and the tweets will provide access to people who aren’t able to join in with us right now. We are wringing as much value as we can from the effort and insight of all of the speakers and participants. We are creating new resources to be shared and developed.

This isn’t to say that there is no place for closed conversations, or that everything we do as educators and learners must be done in the open. It is a recognition of the enormous value that sharing our practice, thoughts and resources accessibly, and discussing and developing these collectively, can provide for us as individuals, for our organisations, and for learners and educators online. A commitment to open education is an ethical gesture. It’s a commitment to the importance of access to education, research, debate and ideas for all, not just those within designated educational communities. It’s a commitment to the value of co-production and the development of work across networks that are not already established. It’s an understanding that our work may be of benefit to those who we don’t know, in ways we can’t anticipate, and that we ourselves may benefit from the insight and input of strangers.

It’s also a commitment to putting ourselves into contexts we don’t necessarily control, to having our views challenged and disagreed with, to being interpreted in ways we might not be happy with.

At its most basic, open educational practice is about creating, using and sharing work accessibly, which typically means online, across networked publics. It goes beyond just using and producing openly licensed resources, but OER remains essential to it. Open licences give permission, with some requirements, for others to interact with, take on, make use of, and develop your work.

Open educational practice is about making our work accessible to others, not just to people who agree with us. I’d extend the definition to include practice which is concerned with who gets to publicly engage, who gets to speak and be heard.

Anonymity

Trolls are typically anonymous or pseudonymous. This doesn’t mean that anonymity is a bad thing. People who are not trolling use and need anonymity online. They are anonymous so they can talk openly and frankly about issues they otherwise couldn’t. They use anonymity to keep themselves safe. They are anonymous to guard their privacy, to avoid online surveillance and commodification. They use anonymity to play, or to protest against laws or ideas or governments they don’t agree with.  They are anonymous to make comments and join in conversations that they otherwise wouldn’t.

Many of us here today had the luxury of not growing up online. It’s unsurprising that anonymous (for example, 4chan) and ephemeral (for example, Snapchat) online platforms have grown in popularity at the same time as the importance of, and increasing insistence on, ‘authenticity’ online has flourished. And while there are obvious professional and personal benefits to ‘being yourself’ online, some benefits may depend on whether or not the kind of person you ‘really’ are is ‘the right kind’ of person. Being ‘yourself’ online, linked to a physical identity, may be a risk, or a privilege.

So how do we protect ourselves?

There are some simple, practical things we can all do now to mitigate against trolling and the fear of trolling. Keep your accounts secure. Limit the amount of public information available about you – for example, domain name registration information will include the address and phone number you registered with unless you’ve paid to keep this information private.

Speak Up

 

There are some great resources online to help you – practical, positive advice to help people protect themselves and better respond to attacks is emerging – for example, Feminist Frequency‘s Speak Up and Stay Safe(r) guide, produced by women who have been targeted by troll mobs. If you are being attacked, there are organisations and initiatives that might help you – for example, TrollBusters, which mobilises peer support and advice for women writers who are being attacked. The Crash Override Network is an online abuse crisis helpline, advocacy group and resource centre.

Ignore, block, report.

The best advice in relation to trolling remains to not respond, not to participate – ignore, block, report. Frustratingly, this means that you don’t get to ‘win’ against the trolls. You can lessen your sense of frustration by remembering that no one gets to win against trolls. The more you express your disgust, anger or disagreement, the more the troll will win. In the event of you actually getting the better of a troll – through devastating wit, for example – the troll remains anonymous. And doesn’t care. And if they do care, will never show it.

The other important advice is to report. Reporting isn’t always easy. But if you can get some hate taken down – why not? Reporting will help make abuse statistics more realistic, and will also help check service provider assumptions about what kinds of abuse their communities are being subjected to.

Not being a silent bystander is also an important way of addressing abuse and showing support to people who may be feeling isolated. Don’t respond to the troll directly – just show your support and appreciation for the person having the hard time. And if you witness someone else being attacked, why wouldn’t you report it?

There are two main reporting routes:

A lot of offensive activity and content won’t be illegal. Mainstream websites will have acceptable use policies, and a range of ways to report incidents. If you can clearly demonstrate that their terms have been broken, some action will be taken. How easy things are to report, how long reviews take, and what the consequences might be all vary.

If the activity is illegal, report it to the police. In the UK, hate crimes and illegal content can be reported online or to your local police.

If you are being repeatedly harassed online by someone in relation to your employment, then it’s also worth alerting your employer and your union if you have one. All employers have statutory and common law duties to look after the physical and mental health of their employees.

Digital wellbeing – taking the long view

One of the important ways we can consider navigating these differences is through the idea of digital wellbeing. This image will be familiar to many of you – it’s Helen Beetham’s work on JISC’s digital competencies framework. I’m particularly interested in how Helen positions and prioritises digital identity and wellbeing in relation to the other competencies. I very much like the way she picks out the consideration of wellbeing in lives that are saturated with and lived through digital environments, within and across modes of participation.

The Welsh Government is taking a similar approach to supporting children and young people through its new national Digital Competencies Framework – which is made up of four strands, one of which is Digital Citizenship, which includes identity, digital rights, and online behaviours.

Troll culture?

In these post-truth times, it can seem that everyone and everything is trolling. Certainly, a wide range of groups, including political and corporate groups, have adopted the aesthetic and tactics of trolling to infiltrate or directly attack communities in order to disrupt them, to sway public opinion, and to generate attention and discussion. But we need to stop labelling all behaviours we don’t like as trolling. It’s a way of minimising real harm caused and the unacceptability of some activities, without actually addressing them.

The range of troll behaviours and motivations makes pinning down trolling extremely difficult, and at the same time makes it easy to dismiss as ‘trolling’ any online behaviour we find offensive – bullying, harassment, threats of violence – but also political disagreement, the defence of others’ freedoms, and viewpoints that are not our own.

The ways in which the word troll is currently being used, equating trolling with someone we don’t agree with or take offence at, should immediately alert us to some of the dangers here. Solutions that take away anonymity and erode privacy to ‘stop trolls’ typically boil down to all of us being presented with the blunt threat of “if you’ve done nothing wrong you’ve got nothing to hide.”

When so much trolling exacerbates and adds to existing inequality, how we address that inequality needs to focus on those people who are being silenced, and not just on those people doing the silencing. Closing accounts, using only protected forums, having our identities verified, cannot be the best solutions we have to offer.

 

 

