Researchers Model Online Hate Networks in Effort to Battle Them

The Christchurch mosque shootings, which left 51 people dead and 49 injured on March 15, 2019, were a modern-day horror by any measure—from the pain and suffering of the 100 victims to the agony of their hundreds of family members and friends and the howls of grief and outrage from a wounded city and country. Now, as if all that were not nightmare enough, George Washington University researcher Neil Johnson and his colleagues point out in a new paper that, in the sum of its details, the Christchurch massacre epitomizes the border-defying complexity of what they call “online hate ecology,” the global networks that link groups of neo-Nazis, white supremacists and other extremists.

In this case, Johnson and his colleagues explain in the paper, the man charged in the attack is Australian, the shootings took place in New Zealand, and the guns were covered in “messages in several European languages on historical topics that are mentioned in online hate clusters across continents.” Those topics included historical defeats of Islamic forces at the hands of Europeans.

In addition, the attack on the first of the two targeted mosques was livestreamed by the shooter on Facebook. And less than five months later, on August 3, shortly before 22 people were killed and more than two dozen injured in a shooting at an El Paso, Tex., Walmart, the man charged with capital murder in that case is believed to have posted a statement online that began by saying, “In general, I support the Christchurch shooter and his manifesto.”

As that litany of links suggests, Johnson says, the “networks of networks” that spread hate around the world with the click of a mouse and that can nurture mass murderers such as the Christchurch and El Paso gunmen transcend physical, cultural and linguistic barriers. “They jump continents, they jump cultures, they jump languages,” he says. “We see ‘Franco nationalists’ in Spain connecting with ‘lovers of Aryan women’ in Nordic countries, then immediately flipping back to Virginia, where they’re talking about militia groups.”

In the new paper, published this week in Nature, Johnson, a professor of physics who studies complexity in real-world systems, and his team explain why disrupting and degrading such networks is so difficult. They describe the “collective online adaptations” the groups use to thwart efforts to identify and banish them.

Using public data, the researchers write, they were able to observe the networks “rapidly rewiring and self-repairing at the micro level when attacked.” In the wake of the high school shooting in Parkland, Fla., in February 2018, for example, “the ecology of the [Ku Klux Klan] on VKontakte,” a Russian social media platform, underwent significant, spontaneous reorganization (with old bonds among groups broken and new bonds formed) in an apparent protective response to news reports of the accused shooter’s interest in the KKK.
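
A minimal sketch can make that dynamic concrete. The toy model below is an illustrative assumption, not the authors’ model or data: two tightly knit clusters are joined by a single bridge, a moderator cuts the bridge, and a handful of new member-to-member ties quickly “repairs” the split. Cluster sizes and the number of new ties are arbitrary choices.

```python
# Toy illustration (not the authors' model): two dense clusters joined by one
# bridge; cutting the bridge splits them, but a few new ties restore
# connectivity. Requires the networkx library.
import random
import networkx as nx

random.seed(7)

# Two small, fully connected clusters (nodes 0-5 and 6-11) joined by one bridge.
G = nx.disjoint_union(nx.complete_graph(6), nx.complete_graph(6))
G.add_edge(0, 6)
print("connected before attack: ", nx.is_connected(G))   # True

# "Attack": a platform removes the bridge between the clusters.
G.remove_edge(0, 6)
print("connected after attack:  ", nx.is_connected(G))   # False

# "Self-repair": a few members on each side form new ties at random.
for _ in range(3):
    G.add_edge(random.randrange(0, 6), random.randrange(6, 12))
print("connected after rewiring:", nx.is_connected(G))   # True
```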

“It’s an intriguing, contrarian paper,” says Susan Benesch, a faculty associate at the Berkman Klein Center for Internet & Society at Harvard University and founder and director of the Dangerous Speech Project. “The authors have tried to discover the consequences of deleting posts or accounts, which is itself a contribution, since it is too often assumed that deletion is the most effective response to bad content—without evidence.”

In addition to describing the Darwinian resilience of online hate groups when targeted by Facebook and other social media platforms, the researchers suggest specific interventions, based on their mathematical modeling, that, they write, “can help to defeat online hate.” And they argue that policing hate within a single platform, such as Facebook, may actually “make matters worse.”

To illustrate the latter point, Johnson uses a lawn-care analogy: “If there’s an infestation problem on your block, and you just focus on your own yard, you may do a really good job of eradicating stuff from your yard,” he says. “But in some sense, you’ve reinforced what’s just outside your yard, because you’ve pushed it back. And you’ve forced it to concentrate on the border of your yard, the border of your control.”

The paper describes how, after Facebook banned KKK groups and the Ukrainian government subsequently banned VKontakte, clusters of Ukrainian KKK groups migrated to Facebook with “Ku Klux Klan” written in Cyrillic, making them harder to detect with English-language algorithms. Johnson calls this type of adaptation “hate-cluster reincarnation.” The research also suggests that the lack of coordinated policing across platforms can create isolated online “dark pools” on less policed platforms, where hate can flourish.

To counter hate groups’ ability to adapt to attacks, the paper proposes first that, because large groups and clusters form from smaller ones, social media platforms should target the smaller groups (which are more numerous and easier to find) to prevent the formation of the larger ones. A second, similar suggestion in the study recommends randomly banning “a small fraction of individual users across the online hate population” rather than banning well-known hate-group leaders. As Johnson explains, this approach not only avoids creating martyrs but can also seriously weaken the network as a whole. “Our models show that randomly banning 10 percent of individuals can lead to a reduction in size of the global hate network by as much as 50 percent,” he says.
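
A rough illustration of that second proposal follows. The sketch builds an arbitrary toy “network of networks” (small dense clusters joined by sparse bridges), randomly bans 10 percent of accounts and measures how much the largest connected component shrinks; the figure it prints depends entirely on these toy parameters, not on the paper’s results.

```python
# Toy illustration (not the authors' model or data): measure how the largest
# connected component of a loosely bridged "network of networks" shrinks when
# a random 10% of accounts are banned. All sizes are arbitrary assumptions.
# Requires the networkx library.
import random
import networkx as nx

random.seed(42)

# 60 small clusters of 4-12 members each, merged into one graph.
clusters = [nx.complete_graph(random.randint(4, 12)) for _ in range(60)]
G = nx.disjoint_union_all(clusters)
nodes = list(G.nodes)

# Sparse random bridges knit the clusters into a single loose ecology.
for _ in range(80):
    u, v = random.sample(nodes, 2)
    G.add_edge(u, v)

def largest_component(graph):
    """Size of the biggest connected component."""
    return max(len(c) for c in nx.connected_components(graph))

before = largest_component(G)

# Randomly ban ~10% of individual accounts, regardless of prominence.
banned = random.sample(nodes, k=len(nodes) // 10)
G.remove_nodes_from(banned)

after = largest_component(G)
print(f"largest component: {before} -> {after} nodes "
      f"({100 * (1 - after / before):.0f}% smaller)")
```

The point of the random-ban strategy, as the paper frames it, is that it targets no prominent figure in particular, which is why it avoids creating martyrs while still eroding the connective tissue of the wider network.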

The researchers’ third policy proposal calls for platform managers to create artificial antihate accounts that help users opposed to hate find one another, encouraging the formation of antihate clusters that can neutralize those devoted to hate. And their fourth proposal seeks to exploit hate-group infighting by introducing a third population of users to foment dissension within a given group.

For her part, Benesch questions the suggestion that Internet platforms create artificial antihate accounts—bots, she presumes—to give rise to clusters of real accounts that would neutralize hateful clusters. “I wonder how the neutralizing would work,” she says, “since my own research suggests that exposing hateful accounts to antihate material doesn’t automatically neutralize them and can only deepen their commitment to hatred.”

Johnson views his and his colleagues’ suggested tactics as “organic” alternatives to more oppressive, Orwellian options. “The easiest way to remove hate is to just shut down all the networks,” he says. “But such top-down approaches come with a cost in terms of perception. So we want to explore bottom-up approaches.”

Johnson acknowledges that such methods might seem beyond what tech companies would be willing to do. But if the approaches are shown to be effective, he says, they could potentially save the companies some of the millions they spend on policing costs, not to mention the huge fines they are subject to if they are found to be in violation of the hate speech laws that more countries are adopting. “It’s like having to buy more and more things to take care of the weeds,” Johnson says, reverting to his lawn-care analogy. “So if I can do it organically, it may be cheaper.”
