
Science | September 15, 2020

The artificial intelligence trying to level Twitter’s toxic playing field


Tech start-up Areto Labs noticed online abuse was stopping women from going into politics – so it did something about it. CEO Lana Cuthbertson and creative technologist Jacqueline Comer talk to The Spinoff about their abuse-fighting bot.

“Imagine you have a job interview and every day, for a month, you have to walk down a dark alley, knowing the worst people in the world are in that alley and they will yell and scream at you,” proposes Aucklander Jacqueline Comer, a creative technologist.

“If you knew that, you wouldn’t apply for the job. And, unfortunately, that’s what women in politics have to put up with.”

Most people in the public eye cop some online criticism, but women receive some of the most violent abuse. Men get their share too, of course, with one study finding the leading subjects of online abuse directed at men are sexual orientation, religion, and physical or intellectual disability.

However, the gendered nature of much abusive language means the worst of it is generally directed at women.

To shine a light in the alleyway, @ParityBOT_NZ was created. It’s an artificial intelligence designed to combat abuse by recognising abusive tweets and posting positive ones to create a parity of sentiment.

Comer and Areto Labs’ co-founder and CEO, Lana Cuthbertson, were intrigued by the idea of running the Canadian-born bot in New Zealand because of the country’s perceived acceptance of women in politics. We have a woman as our prime minister and also as the leader of the opposition – as a theoretically woman-friendly country, is Twitter safer for women here?

To find out, Areto Labs and Comer set the bot up to monitor mentions of women running in our 2020 election. Tweets are scored according to their toxicity, and those that meet a certain threshold, currently set at 0.9 (90% likely to be toxic), are considered abusive and trigger the bot to tweet something nice.
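For readers curious about the mechanics, that trigger boils down to a simple rule: score the tweet, compare against 0.9, respond. Here is a minimal sketch in Python, with placeholder names (score_toxicity, post_positive_tweet) standing in for whatever classifier and Twitter client the bot actually uses, neither of which the article specifies:

```python
# Hedged sketch of the threshold-trigger behaviour described above.
# The 0.9 threshold comes from the article; everything else is a stub.
TOXICITY_THRESHOLD = 0.9

def score_toxicity(tweet_text: str) -> float:
    """Placeholder: return the probability (0.0 to 1.0) that a tweet is toxic."""
    raise NotImplementedError("plug in a real toxicity classifier here")

def post_positive_tweet() -> None:
    """Placeholder: send one of the pre-written positive tweets."""
    raise NotImplementedError("plug in a real Twitter client here")

def handle_mention(tweet_text: str) -> None:
    # A mention counts as abusive only if it crosses the threshold,
    # and each abusive mention triggers exactly one positive tweet.
    if score_toxicity(tweet_text) >= TOXICITY_THRESHOLD:
        post_positive_tweet()
```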

The bot is trained on a data set largely from the New York Times comment section. “We’re using the one category of toxicity, but there are other labels like racist, identity attack – there are about 15 of them that are available in the system that we could use to analyse this data,” explains Cuthbertson.

Lana Cuthbertson, CEO of Areto Labs

Areto Labs recognises that Māori, Pacific people, women of colour and gender-diverse women are particularly targeted, and this was an uncomfortable part of training the bot. “One of the things we had to work on in setting up the bot for New Zealand was trying to understand the colloquialisms here,” says Comer. “I had to ask some family members, ‘what are some racist and derogatory terms for different New Zealanders?’ and they had to actually tell me.”

The AI uses natural language processing that looks beyond a tweet’s vocabulary and into how the sentence as a whole is constructed. “There’s sentiment analysis involved,” says Cuthbertson. She knows it’s not perfect. “It’s pretty likely to score a tweet as toxic if it’s got a fairly harsh swear word in it, but sometimes people use swear words in enthusiastic or positive ways.”

“The way people speak here is a bit more passive aggressive, so how do you create a machine learning model that can start to understand that?” adds Comer.
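One way sentiment analysis might temper a raw toxicity score is sketched below. It uses the open-source VADER sentiment scorer purely as an illustration; the article doesn't say which tools Areto Labs actually has under the hood, and the thresholds here are invented for the example.

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# Illustrative only: VADER is one off-the-shelf sentiment scorer.
analyzer = SentimentIntensityAnalyzer()

def looks_abusive(toxicity: float, tweet_text: str,
                  toxicity_threshold: float = 0.9,
                  positive_cutoff: float = 0.5) -> bool:
    """Flag a tweet only if it is both high-toxicity and not clearly
    enthusiastic, to cut down false positives from friendly swearing."""
    sentiment = analyzer.polarity_scores(tweet_text)["compound"]  # range -1 to 1
    return toxicity >= toxicity_threshold and sentiment < positive_cutoff

# A tweet that swears enthusiastically ("bloody brilliant work!") can score
# high on toxicity from the swear word alone, but its strongly positive
# sentiment keeps it from triggering the bot under a rule like this.
```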

It’s been running for almost a month and has found that about 2% of tweets sent to women running for parliament are toxic enough to trigger the bot. In three weeks, the team has identified 737 tweets that are more than 90% toxic.

“The distribution of toxicity is not even across the candidates,” says Cuthbertson. “Higher-profile women will receive a lot of toxicity, so Jacinda Ardern and Judith Collins are getting lots more than some of the maybe lesser-known candidates, who aren’t receiving any, and then there’s a distribution in between.”

On the higher end of this distribution is Green MP Golriz Ghahraman, who’s received barrages of messages via social media and directly to her emails since she first ran for parliament in 2017.

Ghahraman recently received her first tweet from Parity Bot, she tells The Spinoff. “It actually made me worry, because I was like, ‘oh God, what’s happened, have I met this threshold of abuse?’”

She says she’s better able to tune out online abuse these days, but it still takes its toll. “It’s latent, but you’re still expending energy tuning it out. It’s not nothing.”

She says Parity Bot is one approach to addressing the issue, but she’s also interested in seeing more humans take to the keyboards and combat the trolls. “I think that is needed, not necessarily for me, but for making the whole environment safer. Because people come to my feed to get more information, they want to see Green Party policies.”

Ghahraman says there are quiet days and there are spikes. After the Christchurch attacks, she noticed online abuse of anyone even adjacent to the Muslim community soared. “It went up, exponentially,” she says. “Forums that formed around that still exist, and I think they flare up again. There was a whole backlash group that started to form after there was a perceived sympathy for these communities.”

Marama Davidson and Golriz Ghahraman in Aotea Square (Photo: Todd Henry)

She says the abuse has made her question whether there’s a better way of achieving change than parliamentary politics. “I was getting quite serious death threats, it was like, ‘oh, I’m putting my family through this’. I was actually facing a physical threat.

“It’s much harder for me sometimes to cut through on some of the issues I actually want to talk about, because I have to talk about these things, or I don’t have the energy that day to deal with the repercussions. It’s exactly the effect that it’s meant to have, which is silencing.”

After the Christchurch attacks, a campaign Ghahraman had started the year before to reduce hate speech became difficult to progress because of the abuse directed at her in particular, so she gave it up. “Andrew Little picked that piece of work up, and we decided I wouldn’t talk about it any more, because it was so threatening when I did.

“I particularly can’t talk about it because I’m from one of the communities that hate speech impacts on.”

Cuthbertson says there’s no acceptable level of toxicity. “One toxic tweet is one too many,” she says. “It can be a really traumatising experience, and if you’re a woman putting your name forward in politics, it’s likely, unfortunately.”

Currently, the positive tweets sent by the bot are written by humans and vetted by the Areto Labs team. “There might be a world relatively soon where we’re able to automate that,” says Cuthbertson. “But we’d rather craft those to the appropriate context.

“It still feels like it needs human strategy and consideration in that way.”
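That workflow, a hand-written and vetted pool of replies rather than generated text, might look something like the sketch below. The messages themselves are invented placeholders, not Areto Labs’ actual copy.

```python
import random

# Hedged sketch of the human-in-the-loop step: positive tweets come from a
# pre-approved pool, and the bot only chooses which one to send.
VETTED_POSITIVE_TWEETS = [
    "Standing for office takes courage. Thank you to every candidate.",
    "More women in politics means better decisions for everyone.",
    "To the women on the campaign trail this week: kia kaha.",
]

_recently_used = []

def pick_positive_tweet() -> str:
    """Choose a vetted message, avoiding immediate repeats so the
    bot's feed doesn't read as copy-paste."""
    options = [t for t in VETTED_POSITIVE_TWEETS if t not in _recently_used]
    if not options:
        _recently_used.clear()
        options = list(VETTED_POSITIVE_TWEETS)
    choice = random.choice(options)
    _recently_used.append(choice)
    return choice
```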

For her part, Ghahraman believes her work is worth these drawbacks. “I think it’s worth it in the end, and that’s why I’m still here.

“I think one of the things that makes it worthwhile for me is all of the other women, and in particular the women of colour, that I’ve ended up connecting with and who create these spaces for each other.”

To any woman hesitant about entering the dark alley that leads to a career in politics, Ghahraman says there is a space on the other side. “We are here for you.”
