<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "http://jats.nlm.nih.gov/publishing/1.0/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" article-type="research-article" xml:lang="en">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">IR</journal-id>
<journal-title-group>
<journal-title>Information Research</journal-title>
</journal-title-group>
<issn pub-type="epub">1368-1613</issn>
<publisher>
<publisher-name>University of Bor&#x00E5;s</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">ir30iConf47278</article-id>
<article-id pub-id-type="doi">10.47989/ir30iConf47278</article-id>
<article-categories>
<subj-group xml:lang="en">
<subject>Research article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Steering the AI world: an exploratory comparison of AI Acts in the EU and Canada</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author"><name><surname>Zhu</surname><given-names>Ruiyi</given-names></name>
<xref ref-type="aff" rid="aff0001"/></contrib>
<contrib contrib-type="author"><name><surname>Tsai</surname><given-names>Tien-I</given-names></name>
<xref ref-type="aff" rid="aff0002"/></contrib>
<aff id="aff0001"><bold>Ruiyi Zhu</bold> is an undergraduate student at the University of British Columbia, BC, Canada. She can be contacted at <email xlink:href="zry0826@student.ubc.ca">zry0826@student.ubc.ca</email></aff>
<aff id="aff0002"><bold>Tien-I Tsai</bold> is an Associate Professor in the Department of Library and Information Science, National Taiwan University, Taipei, Taiwan. She can be contacted at <email xlink:href="titsai@ntu.edu.tw">titsai@ntu.edu.tw</email></aff>
</contrib-group>
<pub-date pub-type="epub"><day>06</day><month>05</month><year>2025</year></pub-date>
<pub-date pub-type="collection"><year>2025</year></pub-date>
<volume>30</volume>
<issue>i</issue>
<fpage>555</fpage>
<lpage>564</lpage>
<permissions>
<copyright-year>2025</copyright-year>
<copyright-holder>&#x00A9; 2025 The Author(s).</copyright-holder>
<license license-type="open-access" xlink:href="https://creativecommons.org/licenses/by-nc/4.0/">
<license-p>This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by-nc/4.0/">http://creativecommons.org/licenses/by-nc/4.0/</ext-link>), permitting all non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<abstract xml:lang="en">
<title>Abstract</title>
<p><bold>Introduction.</bold> As artificial intelligence (AI) continues to grow rapidly, governments are implementing legislative frameworks to address its risks and opportunities. This paper provides a comparative analysis of AI Acts in the European Union (EU) and Canada, focusing on two legislative efforts: the EU artificial intelligence act (EU AI Act) and Canada&#x2019;s artificial intelligence and data act (AIDA).</p>
<p><bold>Method.</bold> A summative approach of qualitative content analysis was used to examine the scope, risk classification, and regulatory strategies employed in the EU AI Act and Canada&#x2019;s AIDA. This study highlights similarities and differences in their approaches to managing AI&#x2019;s societal impacts.</p>
<p><bold>Results.</bold> Both Acts provide positive directions and encourage responsible AI by addressing AI-related risks and opportunities. The analysis further explores the challenges, such as the definition of AI, enforcement mechanisms, and the inclusion of ethical considerations.</p>
<p><bold>Conclusion.</bold> By drawing on these cases, the paper illustrates how regulatory steering can ensure responsible AI development and deployment in different geopolitical contexts. This paper offers insights into the evolving nature of AI governance and contributes to the broader discourse on balancing innovation with societal safeguards. </p>
</abstract>
</article-meta>
</front>
<body>
<sec id="sec1">
<title>Introduction</title>
<p>Artificial intelligence (AI)-based technology has revolutionized industries and society, affecting the general public and the government by enabling advancements in fields such as medical diagnosis, resume screening, and the evaluation of learning outcomes. Meanwhile, serving roles that range from filtering existing data to generating new content, AI also poses a new challenge in the realm of information science. Because AI can efficiently collect, analyse, and visualise substantial amounts of personal information and use it to create new products through specialized algorithms, there is an ever-expanding demand for solid information management and protection to promote the ethical development of explainable AI and to ensure a higher level of data and privacy security. As a result, AI ethics, a field that <italic>&#x2018;studies the ethical principles, rules, guidelines, policies, and Acts that are related to AI&#x2019;</italic> (<xref rid="R17" ref-type="bibr">Siau &#x0026; Wang, 2020</xref>, p. 74), has gained the attention of many scholars and stakeholders. In recent years, a number of released ethics guidelines <italic>&#x2018;comprise normative principles and recommendations aimed to harness the &#x2018;disruptive&#x2019; potentials of new AI technologies&#x2019;</italic> (<xref rid="R7" ref-type="bibr">Hagendorff, 2020</xref>, p. 99), prompting extensive research on topics such as the moral issues perceived in AI, the features of an ethical AI, and the extent to which the respective ethical values are implemented in the practical application of AI systems.</p>
<p>However, some researchers also argue that
<disp-quote>
<p><italic>using ethics as a substitute for law risks its abuse and misuse. This significantly limits what ethics can achieve and is a great loss to the AI field and its impacts on individuals and society</italic> (<xref rid="R16" ref-type="bibr">Ress&#x00E9;guier &#x0026; Rodrigues, 2020</xref>, p. 1).</p>
</disp-quote></p>
<p>suggesting the urgency of forming formal legislation in order to create unified and solid standards for AI use. As shown in prior research on policy, making comparisons is an essential tool for navigating the world, which should take into account the distinct development of different countries (<xref rid="R4" ref-type="bibr">Cagdas Artantas, 2023</xref>, p. 190). Therefore, studying AI-related laws of multiple jurisdictions is significant for providing us with a comprehensive overview of AI development within different social contexts. The research aims to answer the following questions: What are the general features of recently compiled AI Acts in the early-mid 2020s? Specifically, what are the countries&#x2019; attitudes towards AI technology? What aspects of AI do the regulations focus on?</p>
</sec>
<sec id="sec2">
<title>Method</title>
<p>The current study adopts a summative approach to qualitative content analysis, comparing the texts of the original laws rather than using a formative approach to identify themes during the legislative process. The qualitative content analysis was conducted through the researchers&#x2019; interpretation of the texts to identify themes and patterns, as suggested by <xref rid="R10" ref-type="bibr">Hsieh and Shannon (2005)</xref>.</p>
<p>In order to set the criteria for AI regulation case selection, a wide variety of sources were consulted during mid-2024. Since AI is a rapidly evolving topic, the current study collected data after 2020. The primary sources, including both the original texts of the Acts and the companion document or Act summary available on the official website of the corresponding country, were found through Google Search by searching the title of each specific law. The secondary source and supplementary articles, including scholars&#x2019; perceptions on the general concept of AI law and methodology of AI categorization, were found through keyword searches on several general databases such as the Web of Science and Scopus.</p>
<p>When identifying AI regulation cases for the current analysis, it is noticeable that most countries express their concerns by addressing AI in regulations on specific aspects, such as digital platform transparency, environmental impact, risk management, and financial instruments and exchange (<xref rid="R1" ref-type="bibr">Anand, 2024</xref>; <xref rid="R13" ref-type="bibr">Kamiya &#x0026; Keate, 2024</xref>). As of mid-2024, many countries are still at the conception stage of their legislation, while some are relatively conservative and are waiting to draw on others&#x2019; experiences. For example, the US has so far compiled only a <italic>Blueprint for an AI Bill of Rights</italic>. Although it has state-level legislation, its feasibility is limited because compliance issues can arise from overlapping or conflicting regulations (<xref rid="R15" ref-type="bibr">Plotinsky &#x0026; Cinelli, 2024</xref>). Nations including the UK, Mexico, and Chile have introduced <italic>artificial intelligence (regulation) bills</italic>, which still await approval from the legislatures and executives before coming into effect as law. These bills are not included in this research because such documents remain subject to further modification; they can neither represent a country&#x2019;s final decision on whether to create federal AI legislation nor provide a reliable date of entry into force. Similarly, amid the ongoing debate over the definition and classification of AI, Japan, like the US and the UK, chose to <italic>&#x2018;respect companies&#x2019; voluntary governance and provide non-binding guidelines to support it, while imposing transparency obligations on some large digital platforms&#x2019;</italic> (<xref rid="R6" ref-type="bibr">Habuka, 2023</xref>). Additionally, China has published an <italic>interim regulation on the management of generative artificial intelligence services</italic> that entered into force in August 2023. Nevertheless, an interim regulation that takes effect immediately is not of the same nature as an act. The paper opens the discussion by examining AI Acts that have followed the regular development procedure; thus, only AI Acts were included as cases of investigation.</p>
<p>Following the above navigation and considerations, the selection scope of the current study covers AI Acts published in the 2020s at the federal level: legislation that has already been published or is scheduled to enter into force by the end of 2025, and whose original texts can be accessed on official government websites, meaning the Acts have a certain level of completeness and have been reviewed by the executive multiple times. The cases selected for the current study are the EU Artificial Intelligence Act from the European Union and the Artificial Intelligence and Data Act (AIDA) from Canada.</p>
<p>Based on <xref rid="R6" ref-type="bibr">Habuka&#x2019;s (2023)</xref> classification of AI regulations, there are two general approaches: (1) a <italic>&#x2018;sector-specific and soft-law-based&#x2019;</italic> approach taken by countries like the US, the UK, and Japan, and (2) a <italic>&#x2018;holistic and hard-law-based&#x2019;</italic> approach taken by countries and political unions like Canada and the EU. The cases selected in this study fall into the latter category, characterized by applicability to all AI applications across all sectors, with regulative authority lying with the central government (<xref rid="R9" ref-type="bibr">Hilliard, 2022</xref>).</p>
</sec>
<sec id="sec3">
<title>Findings and discussion</title>
<sec id="sec3_1">
<title>Overview of AI Acts</title>
<p><xref ref-type="table" rid="T1">Table 1</xref> summarizes the basic features of the AI Acts, comparing general information such as the types of AI being addressed, the objectives of the laws, the affected parties, and the implementation methods. The data are summarized from government websites and the texts of the original Acts, and thus represent only how the laws are expected to function in ideal situations. Since there are limited records or studies on the laws&#x2019; effectiveness and defects in practice, their real-life performance is not considered here.</p>
<table-wrap id="T1">
<label>Table 1.</label>
<caption><p>An overview of AI Acts in the EU and Canada (Sources: EU AI Act (2024) and AIDA (2023))</p></caption>
<table>
<thead>
<tr>
<th align="left" valign="top"></th>
<th align="center" valign="top">European Union</th>
<th align="center" valign="top">Canada</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Name of Legislation</td>
<td align="center" valign="top">EU Artificial Intelligence Act</td>
<td align="center" valign="top">The Artificial Intelligence and Data Act (AIDA)</td>
</tr>
<tr>
<td align="left" valign="top">Governmental Agencies Responsible for Implementing the Acts</td>
<td align="center" valign="top"><list list-type="bullet">
<list-item><p>The AI Office will be established, sitting within the Commission.</p></list-item></list></td>
<td align="center" valign="top"><list list-type="bullet">
<list-item><p>Minister of Innovation, Science, and Industry</p></list-item>
<list-item><p>AI and Data Commissioner</p></list-item></list></td>
</tr>
<tr>
<td align="left" valign="top"># of Clauses/Chapters</td>
<td align="center" valign="top">13 Chapters</td>
<td align="center" valign="top">41 Clauses</td>
</tr>
<tr>
<td align="left" valign="top">Types of AI Being Addressed</td>
<td align="center" valign="top">High-Risk AI Systems; General Purpose AI (GPAI)</td>
<td align="center" valign="top">High-Impact AI Systems</td>
</tr>
<tr>
<td align="left" valign="top">Major Objectives</td>
<td align="center" valign="top"><list list-type="bullet">
<list-item><p>Address risks to health, safety, and fundamental rights.</p></list-item>
<list-item><p>Protect democracy, the rule of law, and the environment.</p></list-item>
<list-item><p>Create a balanced regulatory environment that fosters innovation.</p></list-item></list></td>
<td align="center" valign="top"><list list-type="bullet">
<list-item><p>Reassure Canadians that the government has a thoughtful plan to manage the emerging technology and maintain trust in a growing area of the economy.</p></list-item>
<list-item><p>Reassure actors in the AI ecosystem in Canada that the aim of the Act is not to entrap good faith actors or to chill innovation, but to regulate the most powerful uses of this technology that pose the risk of harm.</p></list-item></list></td>
</tr>
<tr>
<td align="left" valign="top">Parties Responsible for Complying with the Acts</td>
<td align="center" valign="top"><list list-type="bullet">
<list-item><p>Providers of high-risk AI systems</p></list-item>
<list-item><p>Users (deployers) of high-risk AI systems have some obligations, though less than providers (developers).</p></list-item>
<list-item><p>Providers of GPAI (General Purpose AI) models</p></list-item></list></td>
<td align="center" valign="top">Businesses that:
<list list-type="bullet">
<list-item><p>Design or develop a high-impact AI system</p></list-item>
<list-item><p>Make a high-impact AI system available for use</p></list-item>
<list-item><p>Manage the operation of an AI system</p></list-item></list></td>
</tr>
<tr>
<td align="left" valign="top">Examples of Measures being Implemented</td>
<td align="center" valign="top"><list list-type="bullet">
<list-item><p>Establish a risk management system.</p></list-item>
<list-item><p>Conduct data governance, ensuring that training, validation, and testing datasets are relevant and representative.</p></list-item>
<list-item><p>Provide instructions for use to downstream deployers.</p></list-item>
<list-item><p>Design their high-risk AI system to allow deployers to implement human oversight.</p></list-item></list></td>
<td align="center" valign="top"><list list-type="bullet">
<list-item><p>Identify and address risks with regard to harm and bias.</p></list-item>
<list-item><p>Document appropriate use and limitations.</p></list-item>
<list-item><p>Adjust the measures as needed.</p></list-item>
<list-item><p>Ensure ongoing monitoring of the system.</p></list-item></list></td>
</tr>
<tr>
<td align="left" valign="top">Penalties</td>
<td align="center" valign="top"><list list-type="bullet">
<list-item><p>Warnings and Non-Monetary Measures</p></list-item>
<list-item><p>Monetary Measures</p></list-item></list></td>
<td align="center" valign="top"><list list-type="bullet">
<list-item><p>Administrative monetary penalties (AMPs)</p></list-item>
<list-item><p>Regulatory Offences</p></list-item>
<list-item><p>True Criminal Offences</p></list-item></list></td>
</tr>
<tr>
<td align="left" valign="top">Date of Entry into Force</td>
<td align="center" valign="top"><list list-type="bullet">
<list-item><p>August 1st, 2024.</p></list-item>
<list-item><p>After entry into force, the AI Act will apply by the following deadlines:</p></list-item>
<list-item><p>Six months for prohibited AI systems.</p></list-item>
<list-item><p>12 months for GPAI.</p></list-item>
<list-item><p>24 months for high-risk AI systems under Annex III.</p></list-item>
<list-item><p>36 months for high-risk AI systems under Annex I.</p></list-item></list></td>
<td align="center" valign="top"><list list-type="bullet">
<list-item><p>No sooner than 2025.</p></list-item></list></td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The latest versions of both AI Acts were published between 2023 and mid-2024, which reflects the great importance attached to this modern technology. Comparing the general features of both Acts reveals a high acceptance of AI development and a positive attitude toward AI. Geared towards increasing international competitiveness and productivity, governments are guiding AI innovation in a positive direction and encouraging the responsible adoption of AI technologies. In this context, the Acts share some similar features, which are introduced and discussed in the following sections.</p>
</sec>
<sec id="sec3_2">
<title>Holding a positive attitude toward AI</title>
<p>The AI Acts generally show a positive attitude toward AI&#x2019;s potential to facilitate the evolution of new-age technology. Recognizing the unique nature of the AI ecosystem, the laws indicate a motivation to adapt to the shifting landscape: they encourage exploring the optimization of usage scenarios and imply a tendency to cooperate with different parties&#x2014;such as industry associations, enterprises, education and research institutions, public cultural bodies, and relevant professional bodies&#x2014;in areas such as the establishment of data resources, applications, and risk prevention. On the other hand, the regulations are relatively conservative about enforcing punishment. For example, when implementing monetary penalties, many clauses use general phrases such as <italic>&#x2018;not more than...&#x2019;</italic> (AIDA, 2023) or <italic>&#x2018;up to...&#x2019;</italic> (EU AI Act, 2024) a certain amount rather than specifying punishment standards for different situations, while others refer to existing laws and administrative regulations for detailed solutions (Management of Generative AI, 2023).</p>
</sec>
<sec id="sec3_3">
<title>Focusing on AI developers rather than end-users</title>
<p>Considering the levels of influence different parties have when interacting with AI, the regulations display another noticeable trend&#x2014;both Acts focus on the behavioral accountability of AI developers and deployers rather than end-users. This decision must work together with AI ethics: without relevant rules, users are expected to self-consciously use AI in proper contexts by following moral principles and guidelines once AI developers state the restrictions on how the systems are meant to be used and their limitations. While it is reasonable to focus on AI developers at this stage, the rights and responsibilities of end-users may also be important issues to consider in the future.</p>
</sec>
<sec id="sec3_4">
<title>Ensuring fairness and inclusivity</title>
<p>A common concern expressed in the AI Acts is prejudice related to information screening, as both include clauses touching on a similar matter&#x2014;during processes such as algorithm design, the selection of training data, model generation and optimization, and the provision of services, effective measures should be employed to prevent discrimination by race, ethnicity, faith, nationality, region, gender, age, profession, or health. Biased output occurs when there is an unjustified and adverse differential impact based on unfair data selection, which is also considered one of the main sources of physical harm, psychological harm, damage to property, and economic loss to individuals or broadly across groups. For example, a past analysis of well-known facial recognition systems showed evidence of bias against women and people of color (<xref rid="R3" ref-type="bibr">Buolamwini &#x0026; Gebru, 2018</xref>), revealing how the protection of human rights becomes an urgent issue as technology may learn from mistakes in past data and repeat them unconsciously.</p>
</sec>
<sec id="sec3_5">
<title>Classifying AI Systems based on its risks</title>
<p>Considering the lack of a universal classification of AI impact and risk, the EU and Canada propose their own assessment standards in the regulations. When determining which AI systems are high-impact, AIDA particularly considers systems that affect access to employment, influence human behavior at scale, are critical to health and safety, or are used for identification and inference (AIDA Companion document, 2023). Meanwhile, the EU AI Act defines high-risk AI systems as those used as a safety component of, or as a product covered by, specific EU laws, and those applied to fields including non-banned biometrics, critical infrastructure, education and vocational training, and employment and workers management (Summary of the EU AI Act, 2024). The addressed AI technologies thus reach far beyond narrow AI systems, which rely on specific algorithms to perform simple tasks within a limited domain, such as recognition or translation. High-impact and high-risk AI systems thereby possess the capability to affect further human decisions and administrative processes in different fields. Overall, generative AI shows characteristics similar to the high-impact and high-risk AI addressed in the Canadian and EU AI Acts, having significant influence on public information and privacy security.</p>
</sec>
<sec id="sec3_6">
<title>Unclear and unaddressed elements</title>
<p>Although the regulations showcase the effort to limit harmful behaviors and promote systematic AI development, several elements have been neglected and require further clarification.</p>
<sec id="sec3_6_1">
<title>The scopes of the AI Acts</title>
<p>While the laws clearly apply to AI developers and service providers in relevant industries and fields, it is ambiguous to what extent they can restrict AI use in government institutions, a problem that contradicts the objective of the regulations. Stating the specific types of systems to be used by federal departments can help improve the transparency of implementation, which is crucial for showing the public a trustworthy regulatory environment and for avoiding variation in AI Act standards or deviation from international accountability measures.</p>
</sec>
<sec id="sec3_6_2">
<title>The definition of AI</title>
<p>When analyzing Japan&#x2019;s AI regulation, <xref rid="R13" ref-type="bibr">Kamiya and Keate (2024)</xref> point out that there is no legally recognized definition of <italic>&#x2018;AI&#x2019;</italic> in Japan, and that it is difficult to define its scope strictly because <italic>&#x2018;AI is an abstract concept that includes AI systems themselves as well as machine-learning software and programs.&#x2019;</italic> The same predicament applies to other nations, since the definition of this terminology is inconsistent across fields. The issue may lead to situations in which <italic>&#x2018;the same AI systems may fall under one law but not the other, potentially limiting individuals&#x2019; recourse for rights violations&#x2019;</italic> (<xref rid="R14" ref-type="bibr">Muhammad &#x0026; Yow, 2023</xref>, p. 512). As a result, establishing a universal definition that addresses the fundamental concerns and specific techniques associated with AI is essential to ensure coherence and hold relevant organizations accountable.</p>
</sec>
</sec>
</sec>
<sec id="sec4">
<title>Conclusion</title>
<p>While the development path of AI is still unpredictable, the emergence of federal-level regulations in recent years reveals some of the methods different nations take to steer an innovative yet unstable technology, ensuring human rights are thoroughly considered and protected at each stage of the lifecycle of an AI system. The <italic>&#x2018;holistic and hard-law-based&#x2019;</italic> approach taken by the EU and Canada through <italic>&#x2018;[setting] forth obligations&#x2014;such as for governance, transparency, and security for at least high-risk AI&#x2014;with significant sanctions in case of violation&#x2019;</italic> (<xref rid="R6" ref-type="bibr">Habuka, 2023</xref>) is able to manage all types of business simultaneously with the same set of rules. Meanwhile, while continuing to improve the soundness of the law, legislators should also establish measures to prepare societies for upcoming challenges, when <italic>&#x2018;issues such as whether algorithms could be protected by free speech rights and artifacts produced by AI that could be protected by copyright laws&#x2019;</italic> (<xref rid="R12" ref-type="bibr">Jia &#x0026; Zhang, 2022</xref>, p. 60) may become typical examples. Close coordination between AI legislation and AI ethics theory is indispensable for preparing the public for dynamic change in this age of technological evolution.</p>
<p>Overall, the current study aims to initiate the conversation based on a few selected cases and to provide a preliminary framework for exploring and examining AI Acts. Given the rapid advancement of AI technology and the growing recognition of the need for effective AI regulation, it is anticipated that more AI Acts will soon be developed and enacted. For instance, after we finished the current data analysis, Brazil officially approved its bill in December 2024, establishing a defined legal framework for the nation&#x2019;s AI use. Future research should expand the scope to encompass a wider range of AI regulations for a more comprehensive analysis. While AI Acts may develop further to clarify their scope and definitions and to address the rights and responsibilities of different stakeholders (including end-users), they may need to retain flexibility so they can adapt to technological advancement. This adaptability may help foster an evolving AI governance framework that ensures the responsible and sustainable development of AI.</p>
</sec>
</body>
<back>
<ack>
<title>Acknowledgements</title>
<p>The authors are thankful for the support from the National Science and Technology Council, Taiwan (NSTC 112-2410-H-002-015-SS2).</p>
</ack>
<ref-list>
<title>References</title>
<ref id="R1"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Anand</surname><given-names>N.</given-names></name></person-group><year>2024</year><comment>March 8</comment><article-title>US Federal AI Legislation in 2024: The Current Landscape</article-title><source>Holistic AI</source><ext-link ext-link-type="uri" xlink:href="https://www.holisticai.com/blog/us-federal-ai-legislations">https://www.holisticai.com/blog/us-federal-ai-legislations</ext-link></element-citation></ref>
<ref id="R2"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Banafa</surname><given-names>A.</given-names></name></person-group><year>2024</year><chapter-title>Generative AI and other types of AI</chapter-title><source>Transformative AI (1<sup>st</sup> ed.)</source><fpage>33</fpage><lpage>39</lpage><publisher-name>River Publishers</publisher-name><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1201/9781032669182-7">https://doi.org/10.1201/9781032669182-7</ext-link></element-citation></ref>
<ref id="R3"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Buolamwini</surname><given-names>J.</given-names></name><name><surname>Gebru</surname><given-names>T.</given-names></name></person-group><year>2018</year><article-title>Gender shades: Intersectional accuracy disparities in commercial gender classification</article-title><source>Proceedings of Machine Learning Research</source><volume>81</volume><fpage>1</fpage><lpage>15</lpage></element-citation></ref>
<ref id="R4"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Cagdas Artantas</surname><given-names>O.</given-names></name></person-group><year>2023</year><source>Promotion of green electricity in Germany and Turkey: A comparison with reference to the WTO and EU law (1st ed.)</source><publisher-name>Springer Nature Switzerland</publisher-name><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/978-3-031-44760-0">https://doi.org/10.1007/978-3-031-44760-0</ext-link></element-citation></ref>
<ref id="R5"><element-citation publication-type="other"><person-group person-group-type="author"><collab>Government of Canada</collab></person-group><article-title>The Artificial Intelligence and Data Act (AIDA) &#x2013; Companion document</article-title><year>2023</year><comment>March 13</comment><ext-link ext-link-type="uri" xlink:href="https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document">https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document</ext-link></element-citation></ref>
<ref id="R6"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Habuka</surname><given-names>H.</given-names></name></person-group><year>2023</year><comment>February 14</comment><article-title>Japan&#x2019;s Approach to AI Regulation and Its Impact on the 2023 G7 Presidency</article-title><source>Center for strategic &#x0026; international studies</source><ext-link ext-link-type="uri" xlink:href="https://www.csis.org/analysis/japans-approach-ai-regulation-and-its-impact-2023-g7-presidency">https://www.csis.org/analysis/japans-approach-ai-regulation-and-its-impact-2023-g7-presidency</ext-link></element-citation></ref>
<ref id="R7"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Hagendorff</surname><given-names>T.</given-names></name></person-group><year>2020</year><article-title>The ethics of AI ethics: An evaluation of guidelines</article-title><source>Minds and Machines</source><volume>30</volume><issue>1</issue><fpage>99</fpage><lpage>120</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s11023-020-09517-8">https://doi.org/10.1007/s11023-020-09517-8</ext-link></element-citation></ref>
<ref id="R8"><element-citation publication-type="other"><person-group person-group-type="author"><collab>EU Artificial Intelligence Act</collab></person-group><year>2024</year><comment>February 27</comment><article-title>High-level summary of the AI Act</article-title><ext-link ext-link-type="uri" xlink:href="https://artificialintelligenceact.eu/high-level-summary/">https://artificialintelligenceact.eu/high-level-summary/</ext-link></element-citation></ref>
<ref id="R9"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Hilliard</surname><given-names>A.</given-names></name></person-group><year>2022</year><comment>August 6</comment><article-title>Regulating AI: The Horizontal vs Vertical Approach</article-title><source>Holistic AI</source><ext-link ext-link-type="uri" xlink:href="https://www.holisticai.com/blog/regulating-ai-the-horizontal-vs-vertical-approach">https://www.holisticai.com/blog/regulating-ai-the-horizontal-vs-vertical-approach</ext-link></element-citation></ref>
<ref id="R10"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Hsieh</surname><given-names>H.-F.</given-names></name><name><surname>Shannon</surname><given-names>S.E.</given-names></name></person-group><year>2005</year><article-title>Three approaches to qualitative content analysis</article-title><source>Qualitative Health Research</source><volume>15</volume><issue>9</issue><fpage>1277</fpage><lpage>1288</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1177/1049732305276687">https://doi.org/10.1177/1049732305276687</ext-link></element-citation></ref>
<ref id="R11"><element-citation publication-type="other"><person-group person-group-type="author"><collab>The State Council of the People&#x2019;s Republic of China</collab></person-group><year>2023</year><comment>July 10</comment><article-title>Interim Measures for the Management of Generative Artificial Intelligence Services</article-title><ext-link ext-link-type="uri" xlink:href="https://www.gov.cn/zhengce/zhengceku/202307/content_6891752.htm">https://www.gov.cn/zhengce/zhengceku/202307/content_6891752.htm</ext-link></element-citation></ref>
<ref id="R12"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Jia</surname><given-names>K.</given-names></name><name><surname>Zhang</surname><given-names>N.</given-names></name></person-group><year>2022</year><article-title>Categorization and eccentricity of AI risks: A comparative study of the global AI guidelines</article-title><source>Electronic Markets</source><volume>32</volume><issue>1</issue><fpage>59</fpage><lpage>71</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s12525-021-00480-5">https://doi.org/10.1007/s12525-021-00480-5</ext-link></element-citation></ref>
<ref id="R13"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Kamiya</surname><given-names>M.</given-names></name><name><surname>Keate</surname><given-names>J.</given-names></name></person-group><year>2024</year><comment>July 1</comment><article-title>AI Watch: Global regulatory tracker - Japan</article-title><source>White &#x0026; Case</source><ext-link ext-link-type="uri" xlink:href="https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-japan">https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-japan</ext-link></element-citation></ref>
<ref id="R14"><element-citation publication-type="confproc"><person-group person-group-type="author"><name><surname>Muhammad</surname><given-names>A. E.</given-names></name><name><surname>Yow</surname><given-names>K.</given-names></name></person-group><year>2023</year><article-title>Demystifying Canada&#x2019;s Artificial Intelligence and Data Act (AIDA): The good, the bad and the unclear elements</article-title><source>2023 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE)</source><fpage>510</fpage><lpage>515</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1109/CCECE58730.2023.10288878">https://doi.org/10.1109/CCECE58730.2023.10288878</ext-link></element-citation></ref>
<ref id="R15"><element-citation publication-type="other"><person-group person-group-type="author"><name><surname>Plotinsky</surname><given-names>D.</given-names></name><name><surname>Cinelli</surname><given-names>G. M.</given-names></name></person-group><year>2024</year><comment>April 9</comment><article-title>Existing and proposed federal AI regulation in the United States</article-title><source>Morgan Lewis</source><ext-link ext-link-type="uri" xlink:href="https://www.morganlewis.com/pubs/2024/04/existing-and-proposed-federal-ai-regulation-in-the-united-states">https://www.morganlewis.com/pubs/2024/04/existing-and-proposed-federal-ai-regulation-in-the-united-states</ext-link></element-citation></ref>
<ref id="R16"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ress&#x00E9;guier</surname><given-names>A.</given-names></name><name><surname>Rodrigues</surname><given-names>R.</given-names></name></person-group><year>2020</year><article-title>AI ethics should not remain toothless! A call to bring back the teeth of ethics</article-title><source>Big Data &#x0026; Society</source><volume>7</volume><issue>2</issue><elocation-id>2053951720942541</elocation-id><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1177/2053951720942541">https://doi.org/10.1177/2053951720942541</ext-link></element-citation></ref>
<ref id="R17"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Siau</surname><given-names>K.</given-names></name><name><surname>Wang</surname><given-names>W.</given-names></name></person-group><year>2020</year><article-title>Artificial intelligence (AI) ethics: Ethics of AI and ethical AI</article-title><source>Journal of Database Management</source><volume>31</volume><issue>2</issue><fpage>74</fpage><lpage>87</lpage><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.4018/JDM.2020040105">https://doi.org/10.4018/JDM.2020040105</ext-link></element-citation></ref>
</ref-list>
</back>
</article>