Denali YoungWolfe

Barbara Arneil

Bruce Baum

Yongzheng (Parker) Li

Leah Shipton

Addye Susnick

Val Muzik

Research Cluster Group Meeting

2024 PSSA Digital Policy Award Runner-up: Samyukta Srinivasan


Striking a Balance

Navigating Intellectual Property Law and Innovation in Generative AI for the Canadian Legal Landscape

By Samyukta Srinivasan

Introduction 

The dynamic landscape of Artificial Intelligence (AI) technology is becoming increasingly difficult to navigate, particularly with the rise of generative AI: a type of AI capable of creating new content derived from, and mirroring the complexity of, its input data. Unlike traditional AI, which relies solely on pre-existing data, generative AI can create new content that is similar but not identical to the original data, based on deep-learning models trained on text, audio, visual, and other complex data types (Government of Canada, 2024).

While undoubtedly a revolutionary development, this technology has significant implications for innovation, copyright, and the ethical use of AI. It stands at the forefront of AI research and application, pushing the boundaries of creativity and machine learning capabilities. Intellectual Property (IP) has been defined as the “socially, culturally, and economically useful products of the human intellect,” and IP rights grant the holder the power to legally stop others from using their IP without consent (Leung, n.d.). Generative AI has resulted in the infringement of IP rights, created uncertainty about the ownership of AI-generated works, and raised questions about unlicensed content in training data and about whether users should be able to prompt these tools with reference to other creators’ original works without their knowledge (Vargas, 2022). Even more perplexing is the question: how should we proceed when IP is generated directly by AI (Olijnyk, 2022)?

The Problem

AI has had notable impacts on the commercial interests of creators, artists, and authors, as it trains on vast troves of data, including copyrighted materials, thereby infringing upon the rights of original content creators (Italie, 2023). Even though Canadian copyright law protects original works of authorship, the emergence of AI as a ‘creator’ blurs the lines of legal definitions and ownership (CCH Canadian Ltd. v. Law Society of Upper Canada, 2004). For example, AI-generated art such as that produced by OpenAI’s DALL-E 2 blurs the lines of ownership and originality, potentially undercutting traditional creators’ business and royalties. Moreover, the lack of transparency from AI creators regarding data sourcing and usage further complicates matters. On the commercial front, without any restrictions on the commercial use of AI-generated images, original creators stand to lose both sales volume and royalties to cheap AI-generated alternatives to their art.

Thus, Canada’s existing legal system must adapt as machine learning and new AI technologies continue to challenge the very definitions of ownership and human authorship that once anchored the country’s legal framework for determining IP rights.

Evaluating Existing Regulations & Policy Gaps 

The introduction (and ongoing consideration) of Bill C-27, also known as the Digital Charter Implementation Act, 2022 (DCIA), shows that a cohesive (albeit still vague) nation-wide regulatory framework is in the works. If brought into force, Bill C-27 would solidify the proposed Artificial Intelligence and Data Act (AIDA), a legislative endeavor which outlines the responsibilities of individuals and legal entities “responsible” for AI systems; this includes those who design, develop, or provide AI systems or manage their operations in the course of international or interprovincial trade and commerce (Medeiros & Beatson, 2022). Essentially, AIDA introduces regulatory measures for “high-impact” AI systems, focusing on risk mitigation, transparency, and prohibition of practices that could cause “material harm” (Medeiros & Beatson, 2022).

After its publication, experts highlighted several inconsistencies and discrepancies embedded within the text of the AIDA. One concern highlighted was the Act’s ambiguity; for example, the proposed legislation fails to clearly define which systems fall under the category of “high-impact” and what constitutes “material harm” (Medeiros & Beatson, 2022). This type of ambiguity is bound to complicate the interpretation and enforcement of the law, and could potentially subject researchers, businesses, and private individuals to substantial penalties. Furthermore, a lack of specific definitions within AIDA could potentially enable Innovation, Science and Economic Development Canada to implement expansive AI regulations without open and transparent public deliberation.

Recommendations 

Based on the gaps already identified in Canada’s AI laws, the development of clear legal definitions and guidelines for AI-related matters would be an important starting point. For example, the Canadian Copyright Act could incorporate specific definitions and provisions for AI-generated works, distinguishing between AI-assisted and AI-generated content, to clarify the status of works produced with the significant intervention of AI systems (Innovation, Science and Economic Development Canada, 2023). Furthermore, certain legal measures must also be aimed at AI developers; for instance, AI developers could be mandated by law to implement a system for tracking the use of copyrighted material in training datasets and ensuring fair compensation for original creators. Another measure could require AI developers to disclose the datasets they use to train their AI systems, especially when those systems are used for commercial purposes. This disclosure should be made without compromising data privacy or proprietary information, possibly through a regulatory body overseeing AI development and use. These, of course, are long-term solutions that would require approximately 2-3 years to come into force, as changes to patent or copyright law carry certain time lags (Huber, 2023). However, the incorporation of AI into the patent system must be considered, particularly given the unprecedented rate at which AI continues to evolve and the increasingly blurred line between content generated by machine learning and by real human beings.

Another key recommendation is to create a centralized digital licensing platform overseen by a dedicated state or central body, where creators can list their works as available for AI training under predetermined terms and conditions. From this platform, AI developers would be able to obtain the necessary licences, streamlining the process and ensuring legal clarity. Additionally, the establishment of a compliance monitoring body with the authority to audit and enforce the use of copyrighted materials in AI is an important step towards ensuring adherence to licensing agreements. These measures would enable copyright holders, AI developers, and legal experts to work towards fair remuneration models that reflect the value contributed by copyrighted works to AI training datasets, through fixed fees, revenue-sharing models, or usage-based payments.
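To make the proposal concrete, the core record-keeping of such a licensing platform can be sketched in a few lines of code. This is purely illustrative: the class names, fee structure, and identifiers below are assumptions for the sketch, not part of any existing or proposed Canadian system.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Illustrative sketch only: all names and terms are hypothetical.

@dataclass
class Listing:
    work_id: str
    creator: str
    fee_per_use: float  # predetermined terms set by the creator

@dataclass
class LicensingPlatform:
    listings: Dict[str, Listing] = field(default_factory=dict)
    licences: List[Tuple[str, str, float]] = field(default_factory=list)

    def list_work(self, work_id: str, creator: str, fee_per_use: float) -> None:
        """A creator lists a work as available for AI training."""
        self.listings[work_id] = Listing(work_id, creator, fee_per_use)

    def obtain_licence(self, developer: str, work_id: str) -> float:
        """Grant a training licence and return the fee owed to the creator."""
        listing = self.listings[work_id]  # raises KeyError if the work is unlisted
        self.licences.append((developer, work_id, listing.fee_per_use))
        return listing.fee_per_use

platform = LicensingPlatform()
platform.list_work("novel-001", "A. Author", fee_per_use=25.0)
fee = platform.obtain_licence("example-ai-co", "novel-001")
print(fee)  # 25.0
```

The point of the sketch is the audit trail: every licence granted is recorded alongside its terms, which is exactly what a compliance monitoring body would need to verify adherence.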

It is also important to establish and promote clear ethical standards for AI use and development practices, and to ensure accountability for businesses that do not comply with these standards. For instance, regulators could collaborate with bodies such as the Standards Council of Canada to develop ethical standards for AI use in content creation. Businesses that are certified (i.e. that meet these standards) would signal compliance with the ethical use of IP and respect for copyright laws. Another way to promote ethical standards could be public awareness campaigns delivered through workshops or online resources, which could educate both AI developers and the general public on the importance of ethical AI development and its impact on IP rights. Additionally, encouraging ongoing dialogue between AI developers, IP owners, and policymakers through forums and roundtables could help continuously refine ethical guidelines to keep pace with the rapidly-evolving nature of AI technologies.

Finally, fostering collaboration between AI developers and creators is crucial for bridging the gap between IP law and innovation. This could be facilitated by establishing innovation incubators focused on collaborations between AI developers and IP creators. To incentivize such collaboration, the Government could offer tax incentives and grants to projects that demonstrate effective partnership, focusing on AI technologies that respect and enhance the value of IP. Furthermore, such collaborative relationships can only thrive if transparency is ensured. This could be brought about by investing in research and development of blockchain technology for IP management, such as creating blockchain-based systems for tracking and managing IP rights and transactions in a transparent and secure manner (FasterCapital, 2017).
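The transparency property that makes blockchain attractive here is tamper-evidence: each record is cryptographically linked to the one before it, so altering history is detectable. The toy hash chain below illustrates the idea under strong simplifying assumptions; a real system would involve a full distributed blockchain stack, and the record contents are invented for the example.

```python
import hashlib
import json

# Illustrative sketch only: a toy hash-chained ledger for IP transactions.

def make_block(prev_hash: str, record: dict) -> dict:
    """Create a block whose hash covers both its record and the previous hash."""
    payload = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return {"prev": prev_hash, "record": record,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify(chain: list) -> bool:
    """Recompute every hash and check that the links are intact."""
    for i, block in enumerate(chain):
        payload = json.dumps({"prev": block["prev"], "record": block["record"]},
                             sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != block["hash"]:
            return False  # a record was altered after the fact
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False  # the chain itself was broken or reordered
    return True

chain = [make_block("0" * 64, {"work": "song-042", "event": "registered"})]
chain.append(make_block(chain[-1]["hash"],
                        {"work": "song-042", "event": "licensed for AI training"}))
print(verify(chain))  # True
```

If any party later edits an earlier licensing record, `verify` fails, which is the transparency guarantee the recommendation relies on.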

Conclusion 

The rapid advancement of generative AI technologies presents both challenges and opportunities for IP laws in Canada. As we stand at the precipice of a new era in content creation, innovation, and digital rights management, it is imperative that Canada’s legislative framework evolves in tandem with these technological strides. The interplay between AI-generated content and IP rights necessitates a nuanced approach to regulation – one that fosters innovation while protecting creators’ rights. Policymakers, industry leaders, and the academic community must collaborate to refine Canada’s AI governance model, ensuring it remains adaptable, equitable, and effective in the face of AI’s evolving landscape.


Attard-Frost, B. (2022). Once a leader, Canada’s AI strategy is now a fragmented laggard, writes doctoral student. Faculty of Information (iSchool) | University of Toronto. https://ischool.utoronto.ca/news/once-a-leader-canadas-artificial-intelligence-strategy-is-now-a-fragmented-laggard/

CCH Canadian Ltd. v. Law Society of Upper Canada. (2004). SCC Cases. Retrieved April 2, 2024, from https://scc-csc.lexum.com/scc-csc/scc-csc/en/item/2125/index.do

Government of Canada. (2024). Guide on the use of generative AI. Canada.ca. Retrieved April 2, 2024, from https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/guide-use-generative-ai.html#toc-2

FasterCapital. (2017, November 9). Blockchain: How to Utilize Blockchain Technology for Intellectual Property Management. Retrieved April 2, 2024, from https://fastercapital.com/content/Blockchain--How-to-Utilize-Blockchain-Technology-for-Intellectual-Property-Management.html

Huber, N. (2023, June 13). Rapid advances in AI set to upend intellectual property. Financial Times. Retrieved April 2, 2024, from https://www.ft.com/content/b7b3a881-92bb-4c10-85d8-007607477ccf

Innovation, Science and Economic Development Canada. (2023, December 1). Consultation paper: Consultation on Copyright in the Age of Generative Artificial Intelligence. Innovation, Science and Economic Development Canada. Retrieved April 2, 2024, from https://ised-isde.canada.ca/site/strategic-policy-sector/en/marketplace-framework-policy/consultation-paper-consultation-copyright-age-generative-artificial-intelligence

Italie, H. (2023, September 20). John Grisham, George R.R. Martin and other authors sue OpenAI for copyright infringement. Los Angeles Times. Retrieved April 2, 2024, from https://www.latimes.com/world-nation/story/2023-09-20/john-grisham-george-r-r-martin-and-other-authors-sue-openai-for-copyright-infringement

Leung, T. (n.d.). General introduction. Canadian Intellectual Property Law. https://digitaleditions.library.dal.ca/cdn-ip-law/chapter/introduction/

Medeiros, M., & Beatson, J. (2022). Bill C-27: Canada’s first artificial intelligence legislation has arrived. Canada | Global law firm | Norton Rose Fulbright. https://www.nortonrosefulbright.com/en-ca/knowledge/publications/55b9a0bd/bill-c-27-canadas-first-artificial-intelligence-legislation-has-arrived

Olijnyk, Z. (2022, January 31). Artificial intelligence set to challenge Canada’s patent and copyright laws: IP lawyers. Canadian Lawyer. https://www.canadianlawyermag.com/practice-areas/intellectual-property/artificial-intelligence-set-to-challenge-canadas-patent-and-copyright-laws-ip-lawyers/363559

Vargas, S. (2022, September 21). How AI-generated art is changing the concept of art itself. Los Angeles Times. Retrieved April 2, 2024, from https://www.latimes.com/projects/artificial-intelligence-generated-art-ownership-bias-dall-e-midjourney/

Samyukta Srinivasan is a third-year student undertaking a double major in Political Science and International Relations. She chose this topic as it encapsulates her interest in the fields of law and public policy, as well as their intersection with constantly-evolving technologies. The dynamic evolution of AI presents intriguing challenges to legal frameworks globally, with its impact on IP law serving as a prime example. As she continues her academic journey and pursues studies in Law in the future, she hopes to engage further with complex issues such as this in order to contribute meaningfully to the development of inclusive, equitable legal frameworks that address the challenges posed by technologies not only to IP issues, but also to human rights in general.

2024 Digital Policy Award Winner: Kylla Castillo


The Union of Automated and Manual Methods of Detection

Cognitive AI’s Role in Policy Making for Mitigating Misinformation in Canada

By Kylla Castillo

Introduction

In January 2024, the World Economic Forum named misinformation and disinformation as the top risk the world will face within the next two years. In this report, they credit this ranking to the rise and accessibility of artificial intelligence (AI) and its ability to create ‘synthetic’, or fake, content that is hard to distinguish from what is real. This proliferation of AI misinformation is a major concern, especially with elections taking place across the globe in the next two years, from Indonesia to the US to here in British Columbia. Misinformation poses a serious threat to the democratic process: it can change the course of elections and lead to increasing political polarisation. This presents policymakers here in Canada with a difficult issue: how can we effectively identify and prevent the spread of AI misinformation, especially during elections? Current policies, such as the Digital Charter Implementation Act, 2022, do not adequately address this dilemma. To find a solution, two studies on cognitive AI methods for mitigating misinformation were consulted; their findings underscore how important it is to implement these methods in the places most vulnerable to AI misinformation, such as social media sites.

Background 

The Covid-19 pandemic saw a boom in the adoption of AI, particularly amongst businesses through 2020-2021 (McKendrick, 2021). This was due to the pandemic causing a skills shortage, which saw businesses scrambling to meet demand using AI’s automation and support capabilities. While this adoption of AI came in response to the pandemic, according to PwC, 86% of companies say AI is here to stay. However, it wasn’t until late 2022 to early 2023 that AI went mainstream amongst the public, due to programs like ChatGPT (Shrivastava, 2023). ChatGPT launched in November 2022 and gained 100 million monthly users within two months – in comparison, TikTok took nine months to reach the same point (Hu, 2023). This widespread proliferation of AI usage has led to problems with the spread of misinformation, because programs like ChatGPT have been found to give false information and support it with plausible fake sources (Moran, 2023). In addition, according to a 2023 study, AI can generate convincing disinformation in the form of social media posts that the average person finds hard to distinguish from human-written content (Spitale et al., 2023, 1). This poses a threat to democracy, particularly during elections, as AI-driven misinformation could sway election results or undermine the legitimacy of an elected government. It is an issue that even Sam Altman, the head of OpenAI, ChatGPT’s parent company, is worried about, and he hopes to put safeguards in place to combat the dilemma (Swenson, 2024).

Research Overview 

When evaluating possible policy options on AI and misinformation, two studies were used to inform recommendations. The first, a 2020 study from the Journal of Experimental Political Science, used three experiments to evaluate the extent to which AI is capable of generating credible-sounding content and influencing perceptions of foreign policy. Experiment 1 used three versions of GPT-2: the 355 million, 774 million, and 1.5 billion parameter models. Generated text from these versions of GPT-2, alongside human-written text from the New York Times on the same story, was shown to respondents, who rated the credibility of each story out of four. For the 355M model, a majority of respondents found all of the texts credible, while respondents found the 774M and 1.5B models as credible as the human-written texts. In experiment 2, respondents were shown AI-generated and human-written articles on the same topic that were either neutral, liberal, or conservative, with a disclaimer added to some of the AI-generated articles. Respondents, whose political ideologies had been recorded beforehand, then rated each article’s credibility out of four. Respondents were found to judge articles that aligned with their views as more credible. Furthermore, the decline in credibility when a disclaimer was added to a view-aligned article was more prominent for Democrats than for Republicans; however, the articles were not found to change attitudes towards immigration for either side. Lastly, experiment 3 followed experiment 1 without any human intervention, with only minimal text cleaning by a custom program. 600 respondents (200 per model) each read one of the AI-generated texts and rated its credibility. Here, the 774M and 1.5B models were found to produce much more convincing text than the 355M model.

The second, a 2022 article from IT Professional, argues for the importance of using both AI itself and cognitive psychology to mitigate the spread of misinformation online. The researchers examined challenges faced by automated and manual fact-checking methods, such as developer bias in automated methods and the impracticality of manual methods at scale. From there, they argue that combining automated and manual methods is needed, particularly through their cognitive AI framework. This framework outlines how cognitive AI’s deep-learning algorithms can detect suspicious information and notify the user before they share it. Suspicious information can also be flagged manually, and from there “nudging” techniques – warning messages, the provision of trusted sources, or crowdsourcing – can help the user re-evaluate the credibility of that information, potentially preventing the spread of misinformation. These “nudging” techniques come from cognitive psychology studies, where they have been found effective in encouraging a person to re-evaluate their biases and thereby mitigate the spread and consumption of misinformation, hence their application within the cognitive AI framework.
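The detect-then-nudge flow described above can be sketched in a few lines. This is a toy illustration, not the authors’ framework: the keyword-based scoring function stands in for the trained deep-learning classifier a real system would use, and the threshold and warning messages are assumptions made for the example.

```python
# Illustrative sketch of a detect-then-nudge pipeline.
# The marker list and scoring rule are stand-ins for a trained classifier.

SUSPICIOUS_MARKERS = {"miracle cure", "they don't want you to know", "100% proven"}

def suspicion_score(post: str) -> float:
    """Toy automated detector: fraction of known misinformation markers present."""
    lowered = post.lower()
    hits = sum(marker in lowered for marker in SUSPICIOUS_MARKERS)
    return hits / len(SUSPICIOUS_MARKERS)

def nudge_before_sharing(post: str, manually_flagged: bool = False) -> str:
    """Combine automated and manual flags; nudge the user before they share."""
    if manually_flagged or suspicion_score(post) > 0:
        return ("Warning: this post may contain misinformation. "
                "Consider checking trusted sources before sharing.")
    return "OK to share."

print(nudge_before_sharing("This miracle cure is 100% proven!"))   # warning nudge
print(nudge_before_sharing("Polls open at 8 a.m. tomorrow."))      # OK to share.
```

Note how the same nudge path serves both the automated detector and manual flagging, which is the key design feature of combining the two methods.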

Current Policies 

Currently in Canada, the major bills to watch in regards to AI and misinformation are Bill C-76 (the Elections Modernization Act) and Bill C-27 (the Digital Charter Implementation Act, 2022). Bill C-27 introduces three proposed acts, the one of focus here being the Artificial Intelligence and Data Act. This act introduces rules that regulate the creation and use of AI systems by ensuring responsible AI development and usage, establishing an AI and Data Commissioner to help oversee compliance with the act, and outlining clear consequences for unlawful or reckless use or development of AI. This act, although addressing AI, does not sufficiently address the spread of AI misinformation, because it focuses more on the commerce and data privacy side of AI; while it does touch on harmful uses of AI, its treatment is minimal and does not delve specifically into misinformation and its spread.

Bill C-76, meanwhile, focuses on modernizing regulations around elections. While this bill covers some aspects of misinformation, such as prohibiting the publication of false statements, it is not adequately equipped to keep up with the fast-paced nature of AI misinformation, as it focuses on traditional methods of misinformation dissemination. It therefore needs further modification to cover this new threat.

Recommendations 

When creating policies on AI and misinformation, it is crucial to use AI-based and human-based methods in tandem, not only to flag misinformation but to prevent its spread. As seen in the 2020 study, humans have a difficult time distinguishing between AI-generated and human-written content: all three versions of GPT-2 used in experiments 1 and 3 were shown to be capable of producing human-like writing, even without human intervention for quality checking, as in experiment 3. However, experiment 2 showed that adding disclaimers to articles led to a decrease in perceived credibility among more liberal respondents. This suggests value in adding cues or “nudging” techniques that prompt readers to question the validity of the content they are reading.

This approach is further supported by the 2022 IT Professional article, which argued for the use of a cognitive AI framework. This framework, which uses manual and automated AI methods to flag potential misinformation, also promotes the use of “nudging” techniques such as warning messages and showing alternative sources to consider before sharing. As noted in the article, “nudging” has been found in cognitive psychology research to be effective in mitigating the spread of misinformation, but due to the volume of content online it needs to be paired with AI systems to be applied broadly.

Given these findings, it is recommended that new policies, or modifications to existing policy like C-76, require a cognitive AI framework to be present on social media platforms, especially during information-sensitive periods like elections. Using the cognitive AI framework, AI misinformation can be both identified and flagged to prevent its spread. This approach will not only help mitigate the dissemination of AI misinformation, but will also make people aware of these dangers and what they look like, potentially shifting the conversation around information on the internet in a productive way, where it becomes commonplace for people to question what they read online.

Conclusion 

Artificial intelligence and its misinformation capabilities pose a great threat to the integrity of our political systems as well as the stability of our population. Because of this, and given its fast proliferation, it is imperative that the Canadian government address this currently unregulated and potentially devastating area of activity in an effective and timely manner. While the full effect of artificial intelligence’s misinformation capabilities has yet to be seen here in Canada – especially in regards to elections – action must be taken before those consequences arise. This action must come in the form of research-backed, impactful policymaking, like the suggestions given in this brief, in order to ensure the protection of Canadian democracy now and in the future.


Digital Charter Implementation Act, 2022 (2022) 

Elections Modernization Act (2018) 

The Global Risks Report 2024. (2024, January). World Economic Forum. https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf

2024 AI Business Predictions. (n.d.). PwC. Retrieved March 27, 2024, from https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-predictions.html

Hu, K. (2023, February 2). ChatGPT sets record for fastest-growing user base - analyst note. Reuters. Retrieved March 27, 2024, from https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/

Kreps, S., McCain, R. M., & Brundage, M. (2020, November 20). All the News That’s Fit to Fabricate: AI-Generated Text as a Tool of Media Misinformation. Journal of Experimental Political Science, 9(1), 104-115. https://www.cambridge.org/core/journals/journal-of-experimental-political-science/article/all-the-news-thats-fit-to-fabricate-aigenerated-text-as-a-tool-of-media-misinformation/40F27F0661B839FA47375F538C19FA59

McKendrick, J. (2021, September 27). AI Adoption Skyrocketed Over the Last 18 Months. Harvard Business Review. Retrieved March 27, 2024, from https://hbr.org/2021/09/ai-adoption-skyrocketed-over-the-last-18-months

Moran, C. (2023, April 6). ChatGPT is making up fake Guardian articles. Here's how we're responding | Chris Moran. The Guardian. Retrieved March 27, 2024, from https://www.theguardian.com/commentisfree/2023/apr/06/ai-chatgpt-guardian-technology-risks-fake-article

Shrivastava, R. (2023, December 27). How ChatGPT And Billions In Investment Helped AI Go Mainstream In 2023. Forbes. Retrieved March 27, 2024, from https://www.forbes.com/sites/rashishrivastava/2023/12/27/how-chatgpt-and-billions-in-investment-helped-ai-go-mainstream-in-2023/?sh=7516e0467176

Spitale, G., Biller-Andorno, N., & Germani, F. (2023, June 28). AI model GPT-3 (dis)informs us better than humans. Science Advances, 9(26), 1-9. https://www.science.org/doi/10.1126/sciadv.adh1850

Swenson, A. (2024, January 16). How ChatGPT maker OpenAI says it plans to prevent 2024 election misinformation - National | Globalnews.ca. Global News. Retrieved March 27, 2024, from https://globalnews.ca/news/10230891/openai-chatgpt-election-misinformation-prevention/

V, I., & Thampi, S. M. (2022, November 30). Cognitive AI for Mitigation of Misinformation in Online Social Networks. IT Professional, 24(5), 37-45. https://ieeexplore.ieee.org/document/9967399

Kylla Castillo is a fourth-year Filipino-Canadian student majoring in Political Science at the University of British Columbia. Growing up with two older brothers working in technology, Kylla developed an interest in the way emerging technology intersects with our civil liberties early into her university career. After graduation, she hopes to pursue this interest further by studying the way law and technology interact.