One of the most pressing social issues of our time is fake news. Nurtured by the great diffusive strength of the internet and by the easy access modern technology gives to this stream of power, fake news has gained a concerningly wide foothold in our society. It appears as a rampaging menace, crashing through every obstacle we have so far put in its way, spreading and growing as it feasts on each new technological development. Rather ironically, however, hope in combating this plague lies largely in the very thing that made it such a dangerous infestation in the first place: technology. As evidenced by the work of scholars such as Ray Oshikawa of the University of Tokyo and Jing Qian and William Yang Wang of the University of California, Santa Barbara, machine learning methods can be used to accurately detect fake news. Moreover, studies by Katherine Clayton, Spencer Blair, and others at Dartmouth College have shown that fake-news warnings implemented by social media companies on their sites significantly reduce people’s acceptance of false information. These studies support the belief that applying technological innovations can be effective in stopping the spread of misinformation. Standing against this belief are the arguments of techno-pessimists such as John Danaher, a Senior Lecturer in Law at the National University of Ireland, who contend that technology can only be a destructive force. Despite such arguments that technology can only be a source of negativity, the great potential of new technologies that can accurately detect false information gives us hope in the fight against fake news.
Great advances in machine learning algorithms that can detect fake news strongly indicate that technology can be used to fight misinformation. In their article “A Survey on Natural Language Processing for Fake News Detection,” scholars Ray Oshikawa, Jing Qian, and William Yang Wang summarize the current machine learning methods being used to detect fake news. They explain that the essence of these methods is to use various “fake news detection related datasets,” such as LIAR, FEVER, and CREDBANK, which contain samples of what has already been determined to be fake news, to train machine learning algorithms to accurately predict whether a given piece of information is false (Oshikawa). The information for these datasets is collected from, as the authors categorize them, “claims, entire articles, and Social Networking Services (SNS) data.” Claims come from “manually labeled short claims in news,” entire articles come from “fake news articles based on BuzzFeed and PolitiFact,” and SNS data comes from posts on social media sites such as Facebook and Twitter (Oshikawa). The reach of these datasets is thus considerable: they cover all the major sources of fake news that people come into contact with, from news publishers to social media posts. Not only are these collections thorough, but each category also contains a large quantity of data. For example, the dataset LIAR contains “12,836 real-world short statements,” and the dataset SOME-LIKE-IT-HOAX contains “15,500 posts from 32 Facebook pages” (Oshikawa). This gives us confidence in the effectiveness of the training these machine learning algorithms receive, and thus in their accuracy when judging the validity of information. In fact, the article reports that the accuracy of “2-class prediction” (i.e., labeling a statement as either true or false) is “over 90%” using current machine learning technology (Oshikawa). Given the clear effectiveness of these methods, to say nothing of the fact that machine learning technology will only continue to improve, our hope in the fight against fake news can comfortably start with them.
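To make the training-and-prediction process the survey describes more concrete, here is a minimal sketch of a two-class (fake/real) text classifier. It uses a simple Naive Bayes model over word counts rather than the neural architectures the survey actually covers, and every labeled “claim” below is an invented stand-in for real dataset entries like those in LIAR; it illustrates only the general idea of learning from labeled examples.

```python
# Minimal sketch: train a binary fake/real text classifier on labeled
# example claims, in the spirit of LIAR-style datasets. Pure-stdlib
# Naive Bayes over word counts; all sample headlines are invented.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(samples):
    """samples: list of (text, label) pairs, label in {"fake", "real"}."""
    word_counts = {"fake": Counter(), "real": Counter()}
    doc_counts = Counter()
    for text, label in samples:
        doc_counts[label] += 1
        word_counts[label].update(tokenize(text))
    return word_counts, doc_counts

def predict(model, text):
    word_counts, doc_counts = model
    total_docs = sum(doc_counts.values())
    vocab = set(word_counts["fake"]) | set(word_counts["real"])
    best_label, best_score = None, float("-inf")
    for label in ("fake", "real"):
        # log prior + log likelihood with add-one smoothing
        score = math.log(doc_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for word in tokenize(text):
            count = word_counts[label][word] + 1
            score += math.log(count / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented training claims standing in for real dataset entries
training_data = [
    ("miracle cure doctors hate discovered", "fake"),
    ("shocking secret they refuse to tell you", "fake"),
    ("celebrity secretly replaced by clone", "fake"),
    ("senate passes budget bill after debate", "real"),
    ("study published in peer reviewed journal", "real"),
    ("city council approves new transit plan", "real"),
]
model = train(training_data)
print(predict(model, "shocking miracle secret discovered"))  # → fake
print(predict(model, "council debate on budget bill"))       # → real
```

Real systems differ mainly in scale and model choice (tens of thousands of statements, neural networks instead of word counts), but the workflow is the same: fit on labeled claims, then score unseen ones.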
Warnings that social media companies give to users of their sites would also slow the spread of misinformation. Social media has become an immense distributor of news: a 2016 Pew study found that 62% of American adults get news from social media sites (Gottfried). Social media also maintains a continuous presence in many people’s lives; a 2014 Pew study found that 70% of American adults who have a Facebook account visit it daily (Duggan). This high activity has made it easy for misinformation to spread from one user to another. However, this very fact creates a great opportunity for stopping the spread of misinformation through social media (Bode), because the power of social media to distribute information quickly can be harnessed to spread messages counteracting the effects of fake news. Two main studies support this. The first was conducted by researchers at Dartmouth College. In it, two groups of subjects were given false articles to read; the difference was that the articles given to one group carried a “Rated False” tag. The study found a 13 percentage point drop in belief that the information was true between the group that read untagged articles and the group that read tagged ones (Clayton). This shows that warnings implemented by social media sites can significantly reduce people’s belief in false information. Combining this ability with the power of machine learning algorithms, as previously explained, would be a potent formula for slowing the spread of misinformation through social media. Moreover, professors Leticia Bode and Emily K. Vraga conducted a study in which participants were exposed to false posts on social media that also contained links to information correcting the false claims. Their findings show that the participants’ initial misconceptions decreased significantly (Bode). This indicates that when users of social media sites have easy access to fact-checked information alongside a false article, they are likely to explore that information and revise the impressions the false article created. This offers another clear path social media companies can take to stop the spread of misinformation, again showing the great hope technology offers in the fight against fake news.
Although technology has often been a source of negativity, especially with regard to fake news, this does not dispel the fact that it can also be a source of positivity against it. In his book Automation and Utopia, John Danaher states that we should give “techno-pessimism its due” (Danaher). To clarify, techno-pessimism is the belief that “modern technology has created as many problems for humanity as it has solved” and that continued advances in technology will only bring further problems and dangers to society (Diana). Following these techno-pessimistic lines, Danaher believes that modern technology has produced a harmful “attention capture,” one manifestation of which is the “spread of fake news and conspiracy theories” (Danaher). He further believes that technologies such as “robotics and AI” are likely to “exacerbate and deepen” the problem of attention capture (Danaher). Essentially, he conjectures that since modern technology has so far swept people’s attention in negative directions, such as toward misinformation, continuing to advance technology will only perpetuate the trend. By his argument there is no hope of stopping the spread of fake news, because technology is to blame for it and will only continue to make it worse. Yet although technology has indeed been a large cause of the spread of fake news, as previously discussed, this by no means shows that there is no hope in the good use of technology. Technology is but a tool; it is neither good nor evil in and of itself. We have already explored two examples that demonstrate this. We saw technology used for good in the machine learning methods that detect fake news, and we saw how technology can capture people’s attention in positive ways: in the study where subjects were shown false articles with links to factual information attached, following the links left the subjects accurately informed on the issues.
Letting fake news run unchecked has dire consequences. From life-altering decisions made on the basis of false beliefs, to the loss of trust in the authority of experts, to the encouragement of reckless and rebellious behavior, the dangers of fake news have already shown their sinister heads. In 2020 and early 2021 we witnessed fake news take a toll on our nation in the form of vaccine misinformation and the Capitol riot. These events have heightened our sense of how important the issue of fake news is. With this increased sensitivity also comes great hope, for hope means optimism-filled action toward positive change. Fake news now stands at the edge of its power, facing the dark chasm of destruction below its feet. All that is needed is a push, a single push to break free the sunlight blocked by the shadow of its ominous presence. With hope firmly lodged in our chests and the power of modern technology in our hands, this push can be made. It lies in all our hands: actions as simple as fact-checking the information we receive, especially from social media, and sharing the message of the fight against fake news all contribute. Make the push.
Bode, Leticia, and Emily K. Vraga. “In Related News, That Was Wrong: The Correction of Misinformation through Related Stories Functionality in Social Media.” Journal of Communication, vol. 65, no. 4, 2015, pp. 619–638, https://doi.org/10.1111/jcom.12166.
Clayton, Katherine, et al. “Real Solutions for Fake News? Measuring the Effectiveness of General Warnings and Fact-Check Tags in Reducing Belief in False Stories on Social Media.” Political Behavior, vol. 42, no. 4, 2019, pp. 1073–1095.
Danaher, John. “Giving Techno-Pessimism Its Due.” Automation and Utopia: Human Flourishing in a World without Work, Harvard University Press, Cambridge, MA, 2019.
Diana, Frank. “Techno-Optimist or Techno-Pessimist?” Medium, 10 Oct. 2016, https://frankdiana.medium.com/techno-optimist-or-techno-pessimist-8d4da31047c.
Duggan, Maeve, et al. “Social Media Site Usage 2014.” Pew Research Center: Internet, Science & Tech, Pew Research Center, 31 July 2020, https://www.pewresearch.org/internet/2015/01/09/social-media-update-2014/.
Gottfried, Jeffrey, and Elisa Shearer. “News Use across Social Media Platforms 2016.” Pew Research Center’s Journalism Project, Pew Research Center, 27 Aug. 2020, https://www.pewresearch.org/journalism/2016/05/26/news-use-across-social-media-platforms-2016/.
Oshikawa, Ray, et al. “A Survey on Natural Language Processing for Fake News Detection.” Language Resources and Evaluation, vol. 52, no. 4, 2 Nov. 2018.