2024 Digital Policy Award Winner: Kylla Castillo


The Union of Automated and Manual Methods of Detection

Cognitive AI’s Role in Policy Making for Mitigating Misinformation in Canada

By Kylla Castillo

Introduction

In January 2024, the World Economic Forum named misinformation and disinformation as the top risk the world will face within the next two years. The report attributes this ranking to the rise and accessibility of artificial intelligence (AI) and its ability to create ‘synthetic’, or fake, content that is hard to distinguish from what is real. This proliferation of AI misinformation is a major concern, especially with elections taking place across the globe over the next two years, from Indonesia to the United States to here in British Columbia. Misinformation poses a serious threat to the democratic process: it can change the course of elections and deepen political polarisation. This presents policy makers in Canada with a difficult question: how can we effectively identify and prevent the spread of AI misinformation, especially during elections? Current policies, such as the Digital Charter Implementation Act, 2022, do not adequately address this dilemma. To find a solution, this brief draws on two studies of cognitive AI methods for mitigating misinformation. Together, they show how important it is to implement these methods in the places most vulnerable to AI misinformation, such as social media platforms.

Background 

The Covid-19 pandemic saw a boom in the adoption of AI, particularly amongst businesses through 2020-2021 (McKendrick, 2021). The pandemic created a skills shortage that left businesses scrambling to meet demand with AI’s automation and support capabilities. While this adoption of AI came in response to the pandemic, according to PwC, 86% of companies say AI is here to stay. However, it wasn’t until late 2022 to early 2023 that AI went mainstream amongst the public, driven by programs like ChatGPT (Shrivastava, 2023). ChatGPT launched in November 2022 and gained 100 million monthly users within two months – in comparison, TikTok took nine months to reach the same point (Hu, 2023). This widespread proliferation of AI usage has led to problems with the spread of misinformation. Programs like ChatGPT have been found to give false information and support it with plausible fake sources (Moran, 2023). In addition, according to a 2023 study, AI can generate convincing disinformation in the form of social media posts that the average person struggles to identify as machine-generated (Spitale et al., 2023, 1). This poses a threat to democracy, particularly during elections, as AI misinformation could sway election results or undermine the legitimacy of an elected government. Even Sam Altman, CEO of OpenAI, the company behind ChatGPT, is worried about this dilemma and hopes to put safeguards in place to combat it (Swenson, 2024).

Research Overview 

When evaluating possible policy options on AI and misinformation, two studies were used to inform recommendations. The first, a 2020 study from the Journal of Experimental Political Science, used three experiments to evaluate the extent to which AI can generate credible-sounding content and influence perceptions of foreign policy. Experiment 1 used three versions of GPT-2: the 355 million (355M), 774 million (774M), and 1.5 billion (1.5B) parameter models. Text generated by these models and human-written text from the New York Times on the same story were shown to respondents, who rated the credibility of each story on a four-point scale. Even for the smallest 355M model, a majority of respondents found all of the texts credible, while texts from the 774M and 1.5B models were rated as credible as the human-written texts. In experiment 2, respondents were shown AI-generated and human-written articles on the same topic that were either neutral, liberal, or conservative, and a disclaimer was added to some of the AI-generated articles. Respondents, whose political ideologies had been recorded beforehand, then rated each article’s credibility on the same four-point scale. Respondents were found to judge articles that aligned with their views as more credible. Furthermore, the decline in credibility caused by a disclaimer on an ideologically aligned article was more prominent for Democrats than Republicans; however, the articles were not found to change attitudes towards immigration for either side. Lastly, experiment 3 repeated experiment 1 without any human intervention, using only minimal text cleaning from a custom program. 600 respondents (200 per model) each read one of the AI-generated texts and rated its credibility. Here, the 774M and 1.5B models were found to produce much more convincing text than the 355M model.

The second, a 2022 article from IT Professional, argues for the importance of using both AI itself and cognitive psychology to mitigate the spread of misinformation online. The researchers examine the challenges faced by automated and manual fact-checking methods, such as developer bias in automated methods and the impracticality of scaling manual methods. From there, they argue that combining automated and manual methods is needed, particularly through their cognitive AI framework. This framework outlines how cognitive AI’s deep learning algorithms can detect suspicious information and notify the user before they share it. Suspicious information can also be flagged manually, and from there “nudging” techniques like warning messages, the provision of trusted sources, or crowdsourcing can help the user re-evaluate the credibility of that information, potentially preventing the spread of misinformation. These “nudging” techniques come from cognitive psychology studies, where they have been found effective in encouraging people to re-evaluate their biases and thereby mitigate the spread and consumption of misinformation, hence their application within the cognitive AI framework.
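To make this detect-and-nudge flow concrete, the short Python sketch below illustrates one simplified way such a pipeline could sit in front of a social media platform’s share button. It is a minimal illustration under stated assumptions, not the framework’s actual implementation: the keyword-based scoring function, the threshold, the warning text, and the linked sources are stand-ins, and a real system would use trained deep learning models for the automated step.

from dataclasses import dataclass
from typing import List

@dataclass
class Nudge:
    # What the user sees before the share goes through.
    warning: str
    trusted_sources: List[str]

def misinformation_score(post_text: str) -> float:
    # Stand-in for a trained deep learning classifier. A real system would
    # return a model-estimated probability; a keyword heuristic is used here
    # purely so the sketch runs end to end.
    suspicious_phrases = ["miracle cure", "rigged election", "they don't want you to know"]
    hits = sum(phrase in post_text.lower() for phrase in suspicious_phrases)
    return min(1.0, 0.4 * hits)

def build_nudge() -> Nudge:
    # Assemble the warning message and trusted sources shown to the user.
    # The first link is a real Elections Canada address; the second is a
    # placeholder standing in for a fact-checking resource.
    return Nudge(
        warning="Similar claims have been flagged as potentially misleading. Review before sharing?",
        trusted_sources=[
            "https://www.elections.ca",
            "https://factcheck.example.org",
        ],
    )

def on_share_attempt(post_text: str, manually_flagged: bool = False,
                     threshold: float = 0.5) -> str:
    # Automated score and manual flags feed the same decision: if either
    # trips, the share is paused and the user is nudged to reconsider.
    if manually_flagged or misinformation_score(post_text) >= threshold:
        nudge = build_nudge()
        print(nudge.warning)
        for source in nudge.trusted_sources:
            print("See also:", source)
        return "paused_for_review"
    return "shared"

if __name__ == "__main__":
    print(on_share_attempt("This miracle cure is what they don't want you to know about!"))

The design point carried over from the framework is that the automated detector and manual flags feed the same nudge step, so either route can pause a share and prompt the user to reconsider before the content spreads.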

Current Policies 

Currently in Canada, the major bills to watch in regards to AI and misinformation are Bill C-76 (the Elections Modernization Act) and Bill C-27 (the Digital Charter Implementation Act, 2022). Bill C-27 introduces three proposed acts; the one most relevant here is the Artificial Intelligence and Data Act. This act introduces rules to regulate the creation and use of AI systems by ensuring responsible AI development and usage, establishing an AI and Data Commissioner to help oversee compliance with the act, and outlining clear consequences for unlawful or reckless development or use of AI. Although this act addresses AI, it does not sufficiently address the spread of AI misinformation: it focuses on the commerce and data-privacy side of AI, and while it touches on harmful uses of AI, its treatment is minimal and does not deal specifically with misinformation or its spread.

Bill C-76, meanwhile, focuses on modernizing regulations around elections. While this bill covers some aspects of misinformation, such as prohibiting the publication of false statements, it is not equipped to keep up with the fast-paced nature of AI misinformation because it focuses on traditional methods of misinformation dissemination. It therefore needs further modification to cover this new threat.

Recommendations 

When creating policies on AI and misinformation, it is crucial to use AI-based and human-based methods in tandem, not only to flag misinformation but to prevent its spread. As the 2020 study shows, humans have a difficult time distinguishing between AI-generated and human-written content: all three versions of GPT-2 used in experiments 1 and 3 were capable of producing human-like writing, even without human intervention for quality checking, as in experiment 3. However, experiment 2 showed that adding disclaimers to articles led to a decrease in perceived credibility among more liberal respondents. This suggests there is value in adding cues or “nudging” techniques that prompt readers to question the validity of the content they are reading.

This approach is further supported by the 2022 IT Professional article, which argued for the use of a cognitive AI framework. This framework, which uses manual and automated AI methods to flag potential misinformation, also promotes “nudging” techniques such as warning messages and the display of alternative sources to consider before sharing. As the article notes, cognitive psychology research finds these nudging techniques effective in mitigating the spread of misinformation, but given the volume of content online they must be paired with AI systems to be applied broadly.

Given these findings, it is recommended that new policies be made, or existing policies like Bill C-76 be modified, to require a cognitive AI framework on social media platforms, especially during times of information sensitivity such as elections. Using the cognitive AI framework, AI misinformation can be identified and flagged before it spreads. This approach will not only help mitigate the dissemination of AI misinformation, but will also make people aware of these dangers and what they look like, potentially shifting the conversation around information on the internet in a productive direction, one where it becomes commonplace to question what we read online.

Conclusion 

Artificial intelligence and its misinformation capabilities pose a great threat to the integrity of our political systems as well as the stability of our population. Because of this, and because of its fast proliferation, it is imperative that the Canadian government address this currently unregulated and potentially devastating area of activity in an effective and timely manner. While the full effect of AI’s misinformation capabilities has yet to be seen here in Canada – especially in regards to elections – action must be taken before those consequences arise. This action must come in the form of research-backed, impactful policy making, like the recommendation given in this brief, in order to protect Canadian democracy now and in the future.


Digital Charter Implementation Act, 2022 (2022) 

Elections Modernization Act (2018) 

The Global Risks Report 2024. (2024, January). World Economic Forum. https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf

2024 AI Business Predictions. (n.d.). PwC. Retrieved March 27, 2024, from https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-predictions.html

Hu, K. (2023, February 2). ChatGPT sets record for fastest-growing user base - analyst note. Reuters. Retrieved March 27, 2024, from https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/

Kreps, S., McCain, R. M., & Brundage, M. (2020, November 20). All the News That’s Fit to Fabricate: AI-Generated Text as a Tool of Media Misinformation. Journal of Experimental Political Science, 9(1), 104-115. https://www.cambridge.org/core/journals/journal-of-experimental-political-science/article/all-the-news-thats-fit-to-fabricate-aigenerated-text-as-a-tool-of-media-misinformation/40F27F0661B839FA47375F538C19FA59

McKendrick, J. (2021, September 27). AI Adoption Skyrocketed Over the Last 18 Months. Harvard Business Review. Retrieved March 27, 2024, from https://hbr.org/2021/09/ai-adoption-skyrocketed-over-the-last-18-months

Moran, C. (2023, April 6). ChatGPT is making up fake Guardian articles. Here's how we're responding. The Guardian. Retrieved March 27, 2024, from https://www.theguardian.com/commentisfree/2023/apr/06/ai-chatgpt-guardian-technology-risks-fake-article

Shrivastava, R. (2023, December 27). How ChatGPT And Billions In Investment Helped AI Go Mainstream In 2023. Forbes. Retrieved March 27, 2024, from https://www.forbes.com/sites/rashishrivastava/2023/12/27/how-chatgpt-and-billions-in-investment-helped-ai-go-mainstream-in-2023/?sh=7516e0467176

Spitale, G., Biller-Andorno, N., & Germani, F. (2023, June 28). AI model GPT-3 (dis)informs us better than humans. Science Advances, 9(26), 1-9. https://www.science.org/doi/10.1126/sciadv.adh1850

Swenson, A. (2024, January 16). How ChatGPT maker OpenAI says it plans to prevent 2024 election misinformation. Global News. Retrieved March 27, 2024, from https://globalnews.ca/news/10230891/openai-chatgpt-election-misinformation-prevention/

V, I., & Thampi, S. M. (2022, November 30). Cognitive AI for Mitigation of Misinformation in Online Social Networks. IT Professional, 24(5), 37-45. https://ieeexplore.ieee.org/document/9967399

Kylla Castillo is a fourth-year Filipino-Canadian student majoring in Political Science at the University of British Columbia. Growing up with two older brothers working in technology, Kylla developed an interest in the way emerging technology intersects with our civil liberties early into her university career. After graduation, she hopes to pursue this interest further by studying the way law and technology interact.