An Amnesty International SA report finds women are increasingly receiving threats of violence and abuse on Twitter
Social media network Twitter is an increasingly violent platform for women to use.
This is according to a new study by human rights organisation Amnesty International SA, which found that violence and abuse against women on the site often contain sexual and/or misogynistic remarks and may target different aspects of a woman’s identity, such as race, gender and sexuality.
The study, titled Twitter Scorecard: Tracking Twitter’s progress in addressing violence and abuse against women online in SA, found women were often threatened with violence, rape, or death.
And women who reported abusive tweets were met with inconsistent enforcement of Twitter rules.
According to the report, released on Tuesday, the aim of online violence and abuse against women is to create a hostile online environment with the goal of shaming, intimidating, degrading, belittling or silencing women.
The organisation believes the issue lies in the number of content moderators, which it calls a “critical indicator of Twitter’s overall capacity to respond to reports of abusive and problematic content”.
To come up with a scorecard, Amnesty International reviewed statements made by Twitter in written correspondences and the publicly available information on Twitter’s website, including its policies, transparency reports, blog posts, tweets, and help centre pages.
The organisation has been recommending measures to curb this violence since 2018, and its latest scorecard also assessed the company’s response to those recommendations.
To determine whether Twitter had implemented any of its recommended changes, Amnesty International examined Twitter’s transparency report, which covered July to December 2020.
The report found that 82% more accounts were actioned, 9% more accounts suspended, and 132% more content removed compared with the previous reporting period, which ran from January that year.
“Twitter say they have deployed more precise machine learning and better detected and took action on violative content, leading to an increase of 142% in accounts actioned compared to the previous reporting period. Regarding enforcement of the hateful conduct policy — which includes content that incites fear and/or fearful stereotypes about protected categories such as gender, sexual orientation, race, ethnicity, or national origin — 77% more accounts were actioned,” the report read.
Twitter said: “While we understand the value and rationale behind country-level data, there are nuances that could be open to misinterpretation, not least that bad actors hide their locations and so can give very misleading impressions of how a problem is manifesting, and individuals located in one country reporting an individual in a different country, which is not clear from aggregate data.
“At Amnesty’s request, the transparency report now includes data broken down across a range of key policies detailing the number of reports we receive and the number of accounts we take action on.”

But, Amnesty said, Twitter’s transparency report does not provide any data on reported content that failed to receive a response, nor does it provide data on how many reports were reviewed but found to not be in violation of community guidelines.

“As such, it doesn’t specify how many reports were actually reviewed, as opposed to ignored.”
Twitter argued that measuring a company’s progress or investment in these issues by how many people it employs “is neither an informative nor useful metric”.
They said their operations were severely affected by Covid-19 and that varying country-specific restrictions and adjustments affected the efficiency of their content moderation and the speed at which they enforced policies.
“We increased our use of machine learning and automation to take a wide range of actions on potentially misleading and manipulative content.”
Amnesty countered that the number of content moderators remains a critical measure of Twitter’s capacity to respond to reports of abusive and problematic content.

“Even with investments in machine learning to detect online abuse, it is important to have a measure of the number of human moderators reviewing automated decisions. This is especially important during disruptive times such as the Covid-19 pandemic.”
Sharing her experience, former MP Phumzile van Damme said threats she received were: “‘I will find you, and I will stab you’. ‘I will shoot you’, you know, real life threats. And while blocking them [did] kind of get rid of that, you never know if it could actually result in real life harm. And also consistently receiving that abuse does have psychological harm.”
Legal journalist Karyn Maughan was told that “you must be raped and murdered, you must be necklaced”.
“If you are a female in the public space and you express opinions, you are, I think, disproportionately more likely to be abused,” she said.
Political commentator Nomboniso Gasa said women who express opinions on Twitter receive sexually charged messages from people who disagree with them.
“If I say something about any political figure, people say, ‘Oh, you want to be laid by this person’. So, it has been interesting, this kind of sexualisation of women with whom we disagree. People actually come out and say things like, you know, ‘if you came across me, I would rape you’, or ‘I would not even rape you’.”
News editor-in-chief Mahlatse Mahlase said she has “been insulted as a black woman in particular, and also portrayed as a black woman who has no saying capacity and is just being used by white male counterparts to say what it is. And then there is others about, you know, people threatening your work, rape …”
LGBTQI+ activist Moude Maodi said just sharing content of herself with her partner would result in hate speech, verbal attacks based on sex, gender identity and sexual orientation, and threats on her life.
“… They would be mocking our relationship. They would be saying that we will follow you, we will try to find you lesbians who have to be killed.”
Van Damme added: “I used to report a lot of sexism but there is no point in it because it is not going to do anything. It leaves you feeling very kind of defeated to get a response that says, ‘there was no violation of our rules’.
“The only kind of tweets I will report and there will be action is like if it is blatant, like, racist, racism. But I have reported instances where someone said, ‘I will shoot you’ and there was ‘no violation of our rules’. So, you get to a point when you realise that there is actually no point in reporting.”
Investigative journalist Pauli van Wyk said a language barrier is sometimes an issue for Twitter.
“Sometimes [Twitter] is good, sometimes when the threat is … overtly rape threats, or when it is very clearly in transgression of their regulations and rules they do block. But then it is obviously easy to just create another farce or stupid kind of new [handle]. I do not say they always understand the current situation in our country and what is being said, especially if it is in Xhosa, or Zulu, or Setswana, or Afrikaans. So, if it is in English, the chance of it getting banned is much better.”
Amnesty said while the constitution protects the right to freedom of expression, that protection does not extend to “the incitement of imminent violence” or “advocacy of hatred that is based on race, ethnicity, gender or religion, and that constitutes incitement to cause harm”.
SA is in the process of implementing the Cybercrimes Act, which criminalises the sharing of messages on electronic communication services, such as Twitter, that are intended to incite violence against a person or group of people.
The Cybercrimes Act was signed into law this year but is yet to come into effect.
TimesLIVE