Oct 21, 2020 | Politics

From left to right: Cameron Hickey, Program Director, Algorithmic Transparency at National Conference on Citizenship (NCoC); Jacquelyn Mason, Senior Investigative Researcher at First Draft; Jacobo Licona, Disinformation Research Lead at Equis Labs

Disinformation, misinformation and hate speech, among other types of content, can lead to vote suppression and undermine democratic institutions. Experts offer tips on how to identify them.

By Pilar Marrero, Ethnic Media Services

Problematic content — what some call “fake news” — regarding the elections is spreading rapidly in the media environment, potentially laying the groundwork for chaos in this election season. And the most potentially damaging content, according to some experts on disinformation, doesn’t have to be patently false to have an impact.

“That’s exactly why sometimes it’s so hard to identify,” said Cameron Hickey, a former journalist and program director of Algorithmic Transparency, a project of the National Conference on Citizenship (NCoC). Hickey led off a recent EMS conference on disinformation.

“It doesn’t matter exactly what form this problematic content is, if it’s misleading, it’s a problem,” Hickey said. “But the stuff that’s absolutely false and can easily be fact checked is not what I am talking about. It’s the murkier gray area.”

According to Hickey, problematic content includes disinformation (intentional), misinformation (unintentional), rumors, junk news, and conspiracies. Subject matter varies, he said, noting categories such as:

Fear and manipulation: content that tries to make you feel scared or angry or self-righteous in order to change your behavior.

Conspiracy theories: theories that reference the “deep state” or so-called “boogeymen” such as Bill Gates or George Soros.

Missing Context: information that leaves out a key piece of the context to distort people’s understanding.

Pseudoscience: such as bogus cures for the coronavirus or claims that masks do “more harm than good,” which have been disproven by science.

Hate and dog whistles: divisive language or images designed to elicit a feeling but not to clarify an issue.

Faulty Logic: logical fallacies and false equivalencies.

Old: content that is outdated and no longer relevant to the current topic.

The Q-Anon conspiracy theory illustrates the worst of problematic content.

“It started very narrowly as a theory about the deep state and pedophilia, and then broadened to include a lot of different things,” Hickey said. “Some people refer to it as a cult, others even as a religion.”

The conspiracy includes the idea that Democratic politicians, leaders and celebrities are pedophiles who drink children’s blood to remain young, that Donald Trump is the hero, and that “Q” himself is battling the deep state and the forces of evil, Hickey noted. He added that while pedophilia and child trafficking are real problems as underscored by recent cases involving the Catholic Church and Jeffrey Epstein, Q-Anon goes way beyond any reality.

When asked to denounce Q-Anon in a presidential town hall, President Trump declined to do so. Both Facebook and Twitter recently eliminated Q-Anon-related content and accounts.

Another “problematic” type of content, according to Hickey, is the growing theme of “an impending civil war” that many predict will happen after the election.

“My concern about this is that there is no reason to believe at the moment that there will be such violence. All the conversation is problematic because it amplifies the potential risk for violence,” Hickey said.

Ideological hyperbole also qualifies as misinformation, Hickey said, such as referring to Republicans as “Nazis” and Democrats as “communists” or “socialists.”

“These messages are being deployed constantly, not just in social media memes and social discourse but in the ads,” Hickey added. “We are no longer having conversations about the issues or the identities of the politicians running for office but exaggerating narrow bands of their perspective and amplifying them in ways that distort reality.”

Some disinformation targets specific communities in different ways. Jacquelyn Mason, senior investigative researcher at First Draft, monitors disinformation aimed at the African American community. She singled out a picture of Democratic vice-presidential nominee Kamala Harris that went viral on social media: a collage purporting to show all the Black men she locked up and kept in prison past their release dates. Harris’ record as a prosecutor is often attacked, and that’s fair game, Mason noted, but the picture itself was fake. The photographs in the background were never identified and consisted of the same six images repeated time and again. “This is a clear disinformation tactic,” Mason said.

Problematic content targeting LatinX communities varies according to the cultural nuances and demographics within each, according to disinformation expert Jacobo Licona of Equis Labs. “Many false narratives are co-opted and rapidly amplified in Spanish, which often goes largely unchecked on social media.” The sharing of information through closed (private) platforms like WhatsApp makes such content hard to monitor, he added.

“We have seen bad actors using social media with the goal of suppressing the vote and depressing enthusiasm,” Licona said.

The perceived threat of socialism and attacks on vote-by-mail are two current examples. “We continue to see some bad actors spreading the idea that socialism is coming to America, and they tailor it to their audiences by targeting people who come from Cuba and Venezuela,” Licona said. “This type of speech has been amplified by far-right accounts and pro-Trump LatinX influencers connecting Biden with socialism and Castro’s Cuba or Maduro’s Venezuela.”

“We all have a role to play in reducing the impact of problematic content,” Hickey concluded. “I believe it’s our shared responsibility to monitor for content like that in our own feeds and track and engage others to be vigilant about it.”

Responding negatively is usually not as effective as debunking falsehoods with the actual facts or sharing credible information that “insulates people” from being taken in by problematic content, he said.