Josie Fraser

Alt-C 2016 Keynote: In the Valley of the Trolls

 


 

In the Valley of the Trolls

Tay, for 16 hours only

Tay, Microsoft’s Artificial Intelligence bot, was launched on Twitter on 23 March 2016. Text on Tay’s official website stated:

“Tay is an artificial intelligent chat bot developed…to experiment with and conduct research on conversational understanding. Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you”.

Within 16 hours Tay had become known as a racist, conspiracy theorist and sex bot, and Microsoft took it offline.

So how did this happen? Firstly, the Microsoft account was targeted by Twitter users who fed Tay with hate speech, discrimination, conspiracy theories, and lewd text, which it then mimicked and reproduced. While Microsoft seemed to have anticipated that some specific topics would be controversial, and programmed Tay with responses to these, they didn’t seem to have considered the possibility of Tay being targeted by a wide range of inappropriate interactions – of being trolled. Microsoft had released a (mostly) filter-free curator and amplifier of the language of the users who interacted with the bot, and many users were lightning quick to understand and make use of this to turn Tay into a mouthpiece for hate and obscenity.

The story was quickly picked up by news sites, gleefully reporting on Microsoft’s bot becoming a Holocaust denier within hours of going live. While the account was shut down, screenshots of Tay posting grim messages went up all over the internet.

Tay is currently back up, but now the account is private. You need to be approved by Microsoft to follow the account, or access any of the tweets.

I’m telling the story of Tay here because it’s pretty representative of a range of trolling motifs – it’s practically a troll morality tale.

For the lulz

It’s not possible to say what the wider range of motivations of the people involved in the Tay trolling were. We can speculate that some of them were interested in attacking Microsoft, or suspicious of the commercial motivation for personalisation. Some may have seen this as an opportunity to get discriminatory messages up and to spread misinformation.

Lulz are what drive trolls. Lulz are the cultural currency of trolls. Whitney Phillips, in her excellent book on trolling cultures (This Is Why We Can’t Have Nice Things, 2015), defines lulz as LOL transfigured through the “anguish of the laughed-at victim”. Lulz are what knit together a disparate and anonymous group of people who may meet only in passing, or not at all.

Using extremism, obscenity and conspiracy theories, a corporate experiment in AI was taken down within hours, and the trolls got their handiwork reproduced and publicised globally.

This ‘gaming’ of reporters and social commentators – the manufacture of news – is a win for media outlets who need quick-to-read outrage to increase their traffic. Trolls love to troll the media, and trolls love to get their stories and memes reproduced by the media, and the media loves to promote sensationalist and outrageous stories, even if the numbers of actual people involved are tiny, or, in some cases, the story is entirely made up.

Also typical was the lack of interest on all sides in what is going on here – or, ‘because trolls’. ‘Because trolls’ is always a win for trolls because it means journalists are taking them at face value, are missing the joke, and have become a part of the joke.

Of course not all trolling involves hate speech, discrimination, threats, obscenity or conspiracy theories. The almost universally agreed on aim of trolling is to disrupt, confront, and provoke individuals and communities online, for the purpose of amusement – for the lulz.

Trolling runs from innocuous pranking (for example Rickrolling) to behaviours which challenge the general sentiments or beliefs of a group, to online harassment and bullying.

Some trolls only target other trolls.

In the vast majority of cases, trolls will make use of anonymity. They may pretend to be other actual or invented people – they might act out being sympathetic, or take entirely opposing viewpoints to their own. They might ask naive questions or swear to blatantly untrue facts in order to frustrate or make someone seem like an even bigger idiot for taking them seriously. They might provide misleading or bad advice, or purposely just talk off topic.

But understanding this isn’t to be naive – to say or to imply that the extremism we see in a lot of trolling is coincidental or arbitrary.

Trolls are a diverse group, whose interests, ethics and actions are not all alike. This means that while some trolls are genuinely racist, homophobic, sexist, or otherwise discriminatory, equally, there will be trolls who are using hate speech and extremist views because they know that this is what will get them an outraged, offended or upset reaction. In this view, the statements being fed to the bot were inconsequential in themselves – just the weapons closest to hand. Some might even view the use of abusive language as part of the bigger game – that only idiots would agree with the sentiment being expressed. Some will frame it in terms of a characteristically insincere idea of freedom of speech – and it wasn’t surprising that soon after the takedown, the hashtag #FreeTay was used to protest against the ‘corporate lobotomisation’ and censorship of Tay.

The key problem with this kind of equivalence – which is, in essence, ‘one form of insincere attack is as good as another’, or ‘all groups are treated equally through hate’ – is that there is no room for acknowledgement that specific social groups are already being harmed on a daily basis by discrimination. The reproduction of hate speech – whether sincere or not – adds to what is already there, helping to normalise marginalisation and cause new harm.

Tay is a safe example. Tay isn’t a person. It doesn’t have feelings, a history, personal doubts and anxieties. It isn’t sometimes tired and short tempered. It doesn’t struggle to interpret subtly codified online behaviour, or take sexist, racist, or faith targeted abuse personally.

 

Open practice – an ethical gesture

Many of us here today appreciate and have benefited from working and learning in open contexts online – whether through blogging, online courses, or through networks on social media sites.  Talks from the conference are being streamed, so that people who aren’t able to be here in person can watch online. People in the room, people viewing at distance, and others not viewing are using the conference hashtag on Twitter to participate. The video and the tweets will provide access to people who aren’t able to join in with us right now. We are wringing as much value as we can from the effort and insight of all of the speakers and participants. We are creating new resources to be shared and developed.

This isn’t to say that there is no place for closed conversations, or that everything we do as educators and learners must be done in the open. It is a recognition of the enormous value that sharing our practice, thoughts and resources accessibly, discussing and developing these collectively, can provide for us as individuals, for our organisations, and for learners and educators online. A commitment to open education is an ethical gesture. It’s a commitment to the importance of access to education, research, debate and ideas for all, not just those within designated educational communities. It’s a commitment to the value of co-production and the development of work across networks that are not already established. It’s an understanding that our work may be of benefit to those who we don’t know, in ways we can’t anticipate, and that we ourselves may benefit from the insight and input of strangers.

It’s also a commitment to putting ourselves into contexts we don’t necessarily control, to having our views challenged and disagreed with, to being interpreted in ways we might not be happy with.

At its most basic, open educational practice is about creating, using and sharing work accessibly, which typically means online, across networked publics. It goes beyond just using and producing openly licensed resources, but OER remains essential to it. Open licences give permission, with some requirements, for others to interact with, take on, make use of, and develop your work.

Open educational practice is about making our work accessible to others, not just to people who agree with us. I’d extend the definition to include practice which is concerned with who gets to publicly engage, who gets to speak and be heard.

Anonymity

Trolls are typically anonymous or pseudonymous. This doesn’t mean that anonymity is a bad thing. People who are not trolling use and need anonymity online. They are anonymous so they can talk openly and frankly about issues they otherwise couldn’t. They use anonymity to keep themselves safe. They are anonymous to guard their privacy, to avoid online surveillance and commodification. They use anonymity to play, or to protest against laws or ideas or governments they don’t agree with.  They are anonymous to make comments and join in conversations that they otherwise wouldn’t.

Many of us here today had the luxury of not growing up online. It’s unsurprising that anonymous (for example, 4Chan) and ephemeral (for example, SnapChat) online platforms have grown in popularity at the same time that the importance and increasing insistence of ‘authenticity’ online has flourished. And while there are obvious professional and personal benefits to ‘being yourself’ online, some benefits may depend on whether or not the kind of person you ‘really’ are is ‘the right kind’ of person. Being ‘yourself’ online, linked to a physical identity, may be a risk, or a privilege.

So how do we protect ourselves?

There are some simple, practical things we can all do now to mitigate against trolling and the fear of trolling. Keep your accounts secure. Limit the amount of public information available about you – for example, domain name registration information will include the address and phone number you registered with unless you’ve paid to keep this information secure.

Speak Up

 

There are some great resources online to help you – practical, positive advice to help people protect themselves and better respond to attacks is emerging – for example, Feminist Frequency’s Speak Up and Stay Safe(r) guide, produced by women who have been targeted by troll mobs. If you are being attacked, there are some organisations and initiatives that might help you – for example, TrollBusters, which mobilises peer support and advice for women writers who are being attacked. The Crash Override Network is an online abuse crisis helpline, advocacy group and resource centre.

Ignore, block, report.

The best advice in relation to trolling remains to not respond, not to participate – ignore, block, report. Frustratingly, this means that you don’t get to ‘win’ against the trolls. You can lessen your sense of frustration by remembering no one gets to win against trolls. The more you express your disgust, anger or disagreement, the more the troll will win. In the event of you actually getting the better of a troll – through devastating wit for example, the troll remains anonymous. And doesn’t care. And if they do care, will never show it.

The other important advice is to report. Reporting isn’t always easy. But if you can get some hate taken down – why not? Reporting will help make abuse statistics more realistic, and will also help check service provider assumptions of what kinds of abuse their communities are being subjected to.

Not being a silent bystander is also an important way of addressing abuse and showing support to people who may be feeling isolated. Don’t respond to the troll directly – just show your support and appreciation for the person having the hard time. And if you witness someone else being attacked, why wouldn’t you report it?

There are two main reporting routes:

A lot of offensive activity and content won’t be illegal. Mainstream websites will have acceptable use policies, and a range of ways to report incidents. If you can clearly demonstrate that their terms have been broken, some action will be taken. How easy things are to report, how long reports take to be reviewed, and what the consequences might be all vary.

If the activity is illegal, report it to the police. In the UK, hate crimes and illegal content can be reported online or to your local police.

If you are being repeatedly harassed online by someone in relation to your employment, then it’s also worth alerting your employer and your union if you have one. All employers have statutory and common law duties to look after the physical and mental health of their employees.

Digital wellbeing – taking the long view

One of the important ways we can consider navigating these differences is through the idea of digital wellbeing. This image will be familiar to many of you – it’s Helen Beetham’s work on JISC’s digital competencies framework. I’m particularly interested in how Helen positions and prioritises digital identity and wellbeing in relation to the other competencies. I very much like the way she picks out the consideration of wellbeing in lives that are saturated with and lived through digital environments, within and across modes of participation.

The Welsh Government is taking a similar approach to supporting children and young people through its new national Digital Competencies Framework – which is made up of four strands, one of which is Digital Citizenship, which includes identity, digital rights, and online behaviours.

Troll culture?

In these post-truth times, it can seem that everyone and everything is trolling. Certainly, a wide range of groups, including political and corporate groups, have adopted the aesthetic and tactics of trolling to infiltrate or directly attack communities in order to disrupt them, to sway public opinion, and to generate attention and discussion. But we need to stop labelling all behaviours we don’t like as trolling. It’s a way of minimising real harm caused and the unacceptability of some activities, without actually addressing them.

The range of troll behaviours and motivations makes pinning down trolling extremely difficult, and at the same time makes it easy to dismiss as ‘trolling’ all behaviours online we find offensive – bullying, harassment, threats of violence – but also political disagreement, defence of others’ freedoms, and viewpoints that are not our own.

The ways in which the word troll is currently being used – equating trolling with someone we don’t agree with or take offence at – should immediately alert us to some of the dangers here. Solutions that work by taking away anonymity and eroding privacy to ‘stop trolls’ typically boil down to all of us being presented with the blunt threat of “if you’ve done nothing wrong you’ve got nothing to hide.”

When so much trolling exacerbates and adds to existing inequality, how we address that inequality needs to focus on those people who are being silenced, and not just on those people doing the silencing. Closing accounts, using only protected forums, having our identities verified, cannot be the best solutions we have to offer.

 

 

OER15 keynote – OER on Main Street

I had a fantastic time keynoting and attending OER15. You can watch the talk below, along with those from Cable Green, Sheila McNeil, and Martin Weller – who were all excellent.

Josie Fraser - OER15 Keynote as drawn by Mearso

Josie Fraser – OER15 Keynote by mearso is licensed under CC-BY-SA 4.0

OER on Main Street from Josie Fraser

 


OER15 Keynotes

OER15 reports & posts

OER15 and the nature of change in higher education (2015), Martin Weller

OER15 – Window Boxes, Battles, and Bandwagons (2015), Marieke Guy/Open Education Working Group

OER15 – Better Late Than Never! (2015), Lorna M. Campbell

Cracking Open Education (2015), David Walker/University of Sussex

The problem with the mother

Protection

Link love: This post builds on the case study I contributed to the Eduserv workshop on Digital Identities at the British Library today. Everyone's case studies are lodged over at the Pattern Language Network site, along with Yishay's Slidedeck pattern language tutorial on writing a case study. It also moves forward some observations I made in my post Pictures of Children Online a couple of years ago.

From the workshop intro:

"We use the term ‘digital identity’ to refer to the online representation of an individual within a community, as adopted by that individual and projected by others. An individual may have multiple digital identities in multiple communities.

Eduserv have recently funded three projects on digital identity as a result of our 2008 grants call. This workshop will help the projects gather case-studies about the ways in which digital identity is currently manifest in UK higher education.

This event is aimed at people who have an interest in the issues around digital identity in higher education including employers, HR staff, careers guidance staff, standards experts, students and academics.

Prior to the workshop we will be collecting a series of “stories” about digital identity from people attending the event. On the day, we will be working in groups to discuss and add to the series. Following this, we will analyse the stories in order to find reoccurring themes or patterns."

The group I worked with looked at two case studies: my own, and Controlling Flickr Contacts, from Margarita Perez Garcia.

Case Study: other people's identities

Summary:
This study looks at issues of parental responsibility & identity disavowal.
Created 08 Jan 2009 by Josie Fraser
Situation:
What was the setting in which this case study occurred?

Like most people working in the field of social media, I have a purposefully easy to find online presence. I belong to multiple social networks, for work, for research, and for experience. The social networks (& I’m using a broad definition here, as outlined in http://www.digizen.org/socialnetworking/ ) I use most frequently are typically those that I can also most easily repurpose and use to maintain a constantly updated public presence – Twitter, Flickr, my own blogs, Delicious. Probably more importantly though, they are also the ones that allow me to socialise, discuss, hang out and meet new people. I started using the internet about 12 years ago to socialise, prompted by the physical limitations of being a single mother, of being broke all the time and not having a social or family network. For me the experience of being online was an extremely positive and liberating one, & remains so.

Task:
What was the problem to be solved, or the intended effect?

The primary issue was wanting to protect my son from harm, in the broadest sense, and to act respectfully towards him.

I am used to belonging to self-determined communities of people who I like and respect, who I often know exclusively or primarily online. It might seem like an obvious extension of my friendship and relationship building to share stories and pictures of my son, and to model a sense of my everyday experience – which heavily features the joys and logistics of motherhood – online.

However, there are several reasons why I don’t do this. Firstly, there’s the thorny issue of consent, and how my son negotiates and understands this at different points in his life.

There are also ethical, or just straightforwardly thoughtful, considerations. My mum has a particularly embarrassing picture of me that haunted the whole of my childhood. As an adult, I’m ok with it (no, really). Thankfully my mum was mostly sensitive about my particular loathing of this picture and didn’t get it out at every available opportunity – if she’d have put it online I can imagine I would have been mortified. Maybe not at the time she put it up, but certainly a few years down the line, and especially if anyone from my school had come across it.

There's also the issue of digital presence. Is it up to us to contribute to our children’s digital presence? Would you have liked your parents contributing to what searches of you might return? Perhaps by now I would have loved that embarrassing picture of myself – maybe it would have come to mean something entirely different to me. But at different points in my life it certainly wouldn’t have been at all welcome.

The other obvious issues are internet-related child abuse and bullying. I’m very much against a moral-panic approach to using technology, and I also think it’s very important that we evaluate and regard risks appropriately. While the vast majority of child abuse takes place entirely offline, and is typically perpetrated by the victim’s family or immediate circle, that’s also no reason to dismiss the chances of a child or young person we know coming into contact with someone who could harm them. We take steps to educate them about a range of strategies they can use to look out for themselves in their offline and online dealings. In the same way, we need to model good practice ourselves.

Another reason for ‘protecting’ my son and not talking about being a mother was linked to financial insecurity. My career is on the way to being well established, and I’ve proven that I can manage to raise a child ‘alone’ (I moved closer to my mum and sister, so I have the luxury of a support network now) and so it worries me less that people might judge me and choose not to employ me because of my status as a single mother.

Actions:
What was done to fulfil the task?

Initially, I kept all pictures of my son strictly within private, friends or family only permissions on Flickr. This has changed – I have a couple of pictures of my son as a small child in public. I’m similarly careful about the rest of my young family members too – I posted a picture of my then 14-year-old niece last year only to have it immediately favourited by a complete pervert. I removed the picture from public view, and blocked the pervy guy.

Similarly I don’t really talk about being a mother, although I’ve noticed this changing as my son becomes more independent himself.

Basically, I negated any public online identity that explicitly represented me as a mother for a long time.

Results:
What happened? Was it a success? What contributed to the outcomes?
    
Yes, it worked very well, since I have been consistent and systematic, and had clearly defined rules about representing my son which I’ve stuck to. However, my son is getting older, his and my identities are both significantly shifting, and I’m wondering about ‘not having been a mother’. Was it just a handy tactic, or was it a cowardly disavowal of parenthood? Is ‘being a mother’ in this sense important? For me, or for others?

Lessons Learned:
What did you learn from the experience?

Protecting your children online is actually really easy; watch out for the political speculation.

As we worked through stories to patterns, a very strange thing happened – the role of motherhood disappeared. And this was very clearly another compromise on behalf of the child – in order to demonstrate the meta pattern/problem concerning the protection of the child, we had to make the troublesome issue of the mother go away. The problem of the mother turned out to be that she was the mother. The problem wasn't one that could be solved outside the context of widespread social and political change. So our title became Others First: Managing the tensions between identity & personal responsibility, where identity is enmeshed in and shaped by, in this explicit case, the vulnerable other of the child. From this it's possible to extrapolate the pattern to a broader context – for example, anyone who needs to manage their own or another's online identity or personal safety. If we had more time we could have extended the pattern to look at different kinds of identity management – for example, the management of being gay within a homophobic society, the management of responsible friendship, etc.

What really struck me today was how the solution to effective protection – which could be interpreted as concealment, repression, or confinement to specific circles – mirrors and perpetuates existing social inequalities, making already under-represented and less visible groups – namely children and mothers in this case, though I'd argue the same strategy can be applied to a lot of other troublesome identities/bodies – as shadowy in online public spaces as they are offline.