{"id":30789,"date":"2021-01-18T10:03:29","date_gmt":"2021-01-18T18:03:29","guid":{"rendered":"https:\/\/lapost.us\/?p=30789"},"modified":"2021-01-18T10:03:29","modified_gmt":"2021-01-18T18:03:29","slug":"why-are-social-media-platforms-still-so-bad-at-combating-misinformation","status":"publish","type":"post","link":"https:\/\/lapost.us\/?p=30789","title":{"rendered":"Why Are Social Media Platforms Still So Bad at Combating Misinformation?"},"content":{"rendered":"<div class=\"pt-4 md:pt-0 head sm:pb-20 print:pb-2 xl:pb-8 \">\n<h2 class=\"font-sans text-sm leading-normal pb-3 tracking-itty\">Facebook, Twitter, and users themselves have few incentives to distinguish fact from fiction.<\/h2>\n<\/div>\n<div class=\"hidden print:block  lg:flex meta pb-10  pr-16 \">\n<div class=\"  pt-6  pr-10 print:pt-0 print:flex print:flex-wrap\">\n<div class=\"whitespace-no-wrap uppercase text-ns font-sans pt-6 print:pt-0 print:pb-1 print:w-full pb-2 print:pr-3 tracking-med\">BASED ON INSIGHTS FROM<\/div>\n<p class=\"print:pr-6 font-sans text-sm\"><a class=\"text-purple font-bold\" href=\"https:\/\/insight.kellogg.northwestern.edu\/author\/hatim-rahman\">Hatim Rahman<\/a><\/p>\n<p>In 2016, the Russian Internet Agency purchased ads and created content on Facebook in an act of information warfare aimed at disrupting the U.S. presidential election. 
Facebook estimates that\u00a0<a href=\"https:\/\/money.cnn.com\/2017\/10\/30\/media\/russia-facebook-126-million-users\/index.html\">126 million users<\/a>\u00a0viewed Russian-created content in what\u00a0<a href=\"https:\/\/www.theatlantic.com\/politics\/archive\/2019\/04\/mueller-report-release-summaries-barr-trump\/587182\/\">The Mueller Report<\/a>\u00a0described as a \u201csocial media campaign designed to provoke and amplify political and social discord in the United States.\u201d<\/p>\n<div class=\"bodytext max-w-lg mx-auto px-5 md:px-10 print:px-4 relative\">\n<p>In the months leading up to this November\u2019s election,\u00a0<a href=\"https:\/\/www.kellogg.northwestern.edu\/faculty\/directory\/rahman_hatim.aspx\">Hatim Rahman<\/a>, an assistant professor of management and organizations at the Kellogg School, is closely watching Facebook and other social-media platforms like Twitter, YouTube, and WhatsApp to see how they handle misinformation this time around. And the trends he sees concern him.<\/p>\n<p>\u201cThanks to increasingly powerful algorithms, the speed and scale at which misinformation can spread is unprecedented,\u201d says Rahman.<\/p>\n<p>Rahman points to three reasons why misinformation on social media is such an intractable challenge\u2014and what this might mean going forward.<\/p>\n<h2>Shrouded Sources<\/h2>\n<p>One shortcoming of the major social-media platforms is that it is often difficult for users to determine the sources of the information that makes it into their feeds.<\/p>\n<p>Most social-media users would agree that it is important to know who is generating misinformation\u2014but so far it has been difficult for users, regulators, or even the platforms themselves to pinpoint where messages are coming from, much less why they are being generated. Lone individuals in their basements? A cadre of foreign trolls, as in 2016? 
Or networks of organizations?<\/p>\n<p>Part of the problem is that most platforms don\u2019t require posters to identify themselves before spreading information. Nor, for that matter, do the platforms verify whether the information being shared is accurate. Knowing this, individuals and organizations often design propaganda intended to spread across platforms before its origins can be determined.<\/p>\n<p>Research on the major sources of misinformation\u00a0<a href=\"https:\/\/www.state.gov\/wp-content\/uploads\/2019\/05\/Weapons-of-Mass-Distraction-Foreign-State-Sponsored-Disinformation-in-the-Digital-Age.pdf\">is ongoing<\/a>, Rahman says, but he observes that it is becoming increasingly clear that political misinformation is often seeded and spread through surprisingly coordinated, well-financed campaigns that might prefer to stay under the radar.<\/p>\n<\/div>\n<div class=\"bodytext max-w-lg mx-auto px-5 md:px-10 print:px-4 relative\">\n<p>\u201cIt\u2019s usually very well shrouded, especially because coordinated campaigns might want to give the impression that their message is grassroots,\u201d Rahman says. \u201cOr if there are prominent funders involved, they don\u2019t want those ties to be revealed.\u201d<\/p>\n<p>Of course, if social-media platforms wanted to, they could require more stringent identity-verification procedures, thus ensuring that sources are in fact who they say they are. Google has done this\u00a0<a href=\"https:\/\/www.cnbc.com\/2020\/04\/23\/google-advertiser-verification-process-now-required.html\">for its advertisers<\/a>, for instance. 
And Twitter already has a mechanism in place to verify users\u2014its blue-checkmark badges\u2014but Twitter reserves this function for what it describes as \u201caccounts of public interest.\u201d That leaves the vast majority of Twitter accounts unverified.<\/p>\n<h2>The Wrong Incentives<\/h2>\n<p>Why are social networks so reluctant to take action on misinformation and propaganda?<\/p>\n<p>\u201cIf you look at Facebook and Twitter \u2014 the platforms themselves \u2014 they knew to a certain extent that these things were occurring,\u201d Rahman explains. \u201cIt\u2019s just that they have different motivations\u201d than many users and regulators.<\/p>\n<\/div>\n<div class=\"print:hidden pullquote w-full overflow-hidden\">\n<blockquote class=\"text-2xl md:text-4xl text-center font-sans p-8 md:p-16 max-w-2xl mx-auto\"><p>\u201cSome people don\u2019t really care about the source or veracity of the information. As long as it aligns with their political view, they will spread it.\u201d<\/p>\n<p class=\"text-center md:text-2xl pt-4 block\">\u2014 Hatim Rahman<\/p>\n<\/blockquote>\n<p>What Rahman means is that ignoring misinformation serves these sites\u2019 interests to an extent. After all, shareholders reap rewards when user numbers grow and viral content spreads, and\u00a0<a href=\"https:\/\/journals.plos.org\/plosone\/article?id=10.1371\/journal.pone.0196087#sec012\">research has shown<\/a>\u00a0that divisive content tends to get higher engagement. This set of conditions provides platforms with a perverse incentive to turn a blind eye to fake, violent, or racist content.<\/p>\n<p>\u201cViral content pays,\u201d Rahman says. \u201cIt attracts more advertisement, more eyeballs, more time and attention to the platforms. 
But we\u2019ve seen serious trade-offs to maximizing those types of metrics for platforms.\u201d For instance,\u00a0<a href=\"https:\/\/www.propublica.org\/article\/outright-lies-voting-misinformation-flourishes-on-facebook\">one study finds<\/a>\u00a0that, of the top 50 most popular Facebook posts that mentioned voting by mail, an essential part of our election infrastructure, 44 percent contained misinformation.<\/p>\n<h2>Willing Spreaders<\/h2>\n<p>With platforms largely focused on growing their user bases, the responsibility for discerning the veracity of posts falls on users. Rahman sees this as unfair, especially given that users are at an information disadvantage. Not only that, the only real leverage users have to demand change is to vote with their feet by deleting their accounts and leaving the platform.<\/p>\n<p>Leaving this responsibility up to users is also unlikely to be effective, in part given users\u2019 predilection toward believing what they want to believe, and in part because of an even more disturbing phenomenon: users simply not caring whether something is true.<\/p>\n<p>In this regard, Rahman sees a troubling trend: although users are becoming more sophisticated about judging whether online content is accurate, some users are also becoming more comfortable willingly spreading misinformation when they agree with the underlying message it is trying to convey. They are also\u00a0<a href=\"https:\/\/www.pnas.org\/content\/113\/3\/554\">less likely to verify sources<\/a>\u00a0or fact-check posts that support their worldview.<\/p>\n<p>\u201cWhat sometimes gets missed is that some people don\u2019t really care about the source or veracity of the information,\u201d Rahman says. \u201cAs long as it aligns with their political view, they will spread it.\u201d<\/p>\n<p>According to Rahman, this behavior represents a shift, since the last presidential election, in how we think about the spread of misinformation. 
In 2016, many believed that most users shared information that they thought was true and were only vaguely aware of online trolls, \u201cfake news,\u201d and hackers. Today, people are more aware of online deception. But many users still perpetuate it.<\/p>\n<h2>Glimmers of Hope<\/h2>\n<p>So if platforms won\u2019t act to mitigate misinformation\u2014and individuals are inclined to spread it\u2014what\u2019s to be done? It\u2019s not entirely clear, Rahman says.<\/p>\n<p>While regulating complicated, fast-developing, global technologies can be difficult for lawmakers, there remains a role for policy that balances accountability and consumer protection with free-speech concerns.<\/p>\n<p>\u201cThe role of regulation is to incentivize and hold organizations accountable for being more proactive, rather than telling the platforms what to do,\u201d Rahman says.<\/p>\n<p>Still, Rahman sees a glimmer of hope that the platforms themselves might be coming around to addressing the problems with misinformation.<\/p>\n<p>Given the storm of COVID-19, the surge of Black Lives Matter protests in the wake of George Floyd\u2019s death, and the lead-up to November\u2019s election, it feels like the platforms\u2019 tolerance for misinformation may be shifting.<\/p>\n<p>For instance, Twitter added \u201cGet the Facts\u201d labels to potentially\u00a0<a href=\"https:\/\/blog.twitter.com\/en_us\/topics\/product\/2020\/updating-our-approach-to-misleading-information.html\">misleading information<\/a>, including\u00a0<a href=\"https:\/\/twitter.com\/TwitterSafety\/status\/1265838823663075341\">President Trump\u2019s tweets<\/a>\u00a0about California\u2019s plans for vote-by-mail. It also placed a warning on one of his tweets following the Minneapolis protests for\u00a0<a href=\"https:\/\/www.washingtonpost.com\/nation\/2020\/05\/29\/trump-minneapolis-twitter-protest\/\">glorifying violence<\/a>. 
Facebook is currently facing an unprecedented push\u2014including a boycott from major advertisers\u2014to take similar steps.<\/p>\n<p>Platforms could also reconfigure their algorithms to prioritize information that is accurate, from sources that can be readily identified.<\/p>\n<p>\u201cAI can be viewed as a tool that\u2019s neither good nor bad. Depending largely on how an organization uses it, it reveals that organization\u2019s values and intent,\u201d Rahman says.<\/p>\n<p>With so much at stake within a short time frame, it\u2019s \u201call hands on deck\u201d in what he calls a \u201cpush and pull\u201d moment where all stakeholders must play a role.<\/p>\n<p>\u201cWe need researchers for their rigorous interdisciplinary\u00a0<a href=\"https:\/\/spia.princeton.edu\/news\/tracking-misinformation-campaigns-real-time-possible-study-shows\">problem-solving approaches<\/a>. We need community organizations for their ability to voice concerns from underrepresented groups. We need users for their lived experiences and governments for their regulatory powers. 
We need all these brought together in ways that are necessary, but that platforms have thus far resisted.\u201d<\/p>\n<\/div>\n<\/div>\n<\/div>\n<p>Chinese Version:\u00a0<a href=\"https:\/\/insight.kellogg.northwestern.edu\/zh\/article\/social-media-platforms-combating-misinformation\">https:\/\/insight.kellogg.northwestern.edu\/zh\/article\/social-media-platforms-combating-misinformation<\/a><\/p>\n<p>Source:\u00a0<a href=\"https:\/\/insight.kellogg.northwestern.edu\/article\/social-media-platforms-combating-misinformation?utm_source=subscriber&amp;utm_medium=email&amp;utm_campaign=pianomailer012021&amp;pnespid=letrsuMHF12N60U_Her0KOpyPKEkBmSfkBDO0KZD\">https:\/\/insight.kellogg.northwestern.edu\/article\/social-media-platforms-combating-misinformation?utm_source=subscriber&amp;utm_medium=email&amp;utm_campaign=pianomailer012021&amp;pnespid=letrsuMHF12N60U_Her0KOpyPKEkBmSfkBDO0KZD<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Facebook, Twitter, and users themselves have&#46;&#46;&#46;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[9],"tags":[],"class_list":["post-30789","post","type-post","status-publish","format-standard","hentry","category-opinion"],"_links":{"self":[{"href":"https:\/\/lapost.us\/index.php?rest_route=\/wp\/v2\/posts\/30789","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lapost.us\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lapost.us\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lapost.us\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/lapost.us\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=30789"}],"version-history":[{"count":1,"href":"https:\/\/lapost.us\/index.php?rest_route=\/wp\/v2\/posts\/30789\/revisions"}],"predecessor-version":[{"id":30790,"href
":"https:\/\/lapost.us\/index.php?rest_route=\/wp\/v2\/posts\/30789\/revisions\/30790"}],"wp:attachment":[{"href":"https:\/\/lapost.us\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=30789"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lapost.us\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=30789"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lapost.us\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=30789"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}