Beyond the Binary: Evaluating the Complexities of AI Regulation in the U.S.

Executive Summary

Introduction

This thesis commences by examining the interaction between AI and democracy, focusing on the need for well-informed public engagement, ethical use of AI, and strong regulatory measures to mitigate potential harms to society and democratic processes.[1] The thesis then examines the U.S. government’s role in regulating AI, performing a comparative analysis that uses the EU AI Act as the benchmark against the U.S. government’s existing regulatory framework.[2] Additionally, it investigates how cultural factors shape regulatory strategies.[3]

Purpose

An examination of existing U.S. and EU AI regulation is crucial for understanding and influencing the U.S. government’s involvement in AI legislation.[4] This study offers insight into how each jurisdiction balances fostering innovation against imposing regulation, and into the societal and ethical consequences of AI.[5] The EU AI Act’s emphasis on fundamental rights and transparency offers a model the U.S. may draw on to strengthen its own regulatory framework, helping ensure that the advancement of AI aligns with democratic principles and the public’s welfare.[6] The research also identifies cultural and economic factors that shape regulation,[7] underscoring the need for rules that are both culturally sensitive and economically viable.

Methods

This thesis employs a comparative analytical framework, reviewing legal texts, policy documents, and cultural studies literature.[8] It includes a detailed examination of the EU AI Act’s provisions, its approach to risk categorization, and its transparency requirements, set against the U.S.’s fragmented regulatory environment.[9]

Results

The EU AI Act represents a holistic regulatory approach, characterized by its focus on transparency, protection of fundamental rights, and risk-based framework for AI classification.[10] The Act emphasizes the need for AI systems to be trustworthy and operate in accordance with EU values, setting out stringent requirements for high-risk applications.[11] The U.S., conversely, lacks a comprehensive federal AI law, relying instead on sector-specific guidelines and principles that reflect its market-driven regulatory philosophy.[12]

Conclusions

The EU’s regulatory model is grounded in a precautionary principle, advocating a balance between innovation and ethical considerations.[13] It is underpinned by a commitment to digital sovereignty and a desire to set global standards.[14] The U.S. approach is marked by its emphasis on fostering innovation and competitiveness, which could benefit from a more harmonized and ethically conscious regulatory framework inspired by the EU AI Act.[15]

Recommendations

The U.S. government should develop a federal-level regulatory framework for AI that incorporates ethical protections, enhances transparency, and supports innovation.[16] An ideal framework could adopt the EU’s rigorous risk-assessment approach for AI systems, ensuring their responsible deployment while weighing potential social consequences.[17] Clear, enforceable ethical rules and standards to which AI developers and users must adhere are essential, so that technological breakthroughs remain consistent with moral obligations.[18] The framework should also actively promote public confidence in AI through greater openness and strong accountability measures.[19] In doing so, the U.S. government would reinforce public trust in AI systems, supporting their broader adoption and use.[20]

Moreover, the U.S. government should actively participate in international cooperation to help shape global principles and regulations for the governance of artificial intelligence.[21] Through such cooperation, the U.S. can both contribute to and learn from a diverse range of international experience.[22] This will support a cohesive strategy for regulating AI that reflects shared global values and advancements in technology.[23] Global participation is crucial both for standardization and for fostering an environment that supports innovation within a widely accepted ethical framework.[24]

Detailed Analysis

The EU AI Act is structured to ensure the ethical deployment of AI, with the ambition of setting global norms.[25] The Act’s stringent transparency and accountability measures are designed to build public trust and ensure AI’s alignment with societal values.[26] In contrast, the U.S. regulatory landscape is characterized by its focus on innovation, with sector-specific guidelines providing a piecemeal approach to AI governance.[27]

The EU AI Act introduces a risk-based regulatory framework, classifying AI systems into four tiers (unacceptable, high, limited, and minimal risk) and applying corresponding controls.[28] High-risk applications face strict oversight, while minimal-risk AI is allowed more freedom, fostering innovation.[29] Lacking a comparable federal statute, the U.S. could consider adopting a similar risk-based approach to ensure balanced oversight of AI technologies.
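The Act’s tiered logic can be sketched as a simple classification table. The four tier names below follow the Act itself, but the example use cases and the `classify` helper are illustrative assumptions for this sketch, not text drawn from the regulation or its annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers defined by the EU AI Act, from most to least restricted."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted subject to conformity assessment and strict oversight"
    LIMITED = "permitted subject to transparency obligations"
    MINIMAL = "largely unregulated, leaving room for innovation"

# Illustrative mapping from example use cases to tiers -- these labels
# are assumptions for the sketch, not the Act's official annex lists.
EXAMPLE_USE_CASES = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the tier for a known example use case, defaulting to
    MINIMAL when the use case is not in the illustrative table."""
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)

print(classify("cv_screening_for_hiring").name)
```

The design point the sketch illustrates is that obligations attach to the use case, not to the underlying technology: the same model could fall into different tiers depending on how it is deployed.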

In terms of fostering innovation and economic growth, the EU AI Act recognizes the dual need to protect citizens and promote a thriving AI sector.[30] It includes provisions to support startups and small enterprises, recognizing the role of innovation in economic development.[31] The U.S., with its entrepreneurial spirit and robust tech industry, stands to benefit from a regulatory environment that safeguards innovation while ensuring ethical compliance.[32]

On the matter of digital sovereignty, the EU AI Act asserts the EU’s autonomy in the digital space, aiming to govern AI according to its values and standards.[33] This concept underscores the importance for the U.S. of developing a domestic policy that reflects its own values and aspirations to global leadership.[34] Implementing such a strategy would not only affirm American control over its digital resources but also enhance the country’s ability to shape global standards and practices in the field of artificial intelligence.[35]

The EU AI Act’s provisions for high-risk AI applications reflect a citizen-centric view, where stringent checks and potential sanctions for non-compliance underscore a serious commitment to ethical AI.[36] The U.S. could take a cue from this approach, considering the public’s wariness of AI and the need for greater regulatory clarity to foster responsible innovation.[37] Implementing such strategies could bolster confidence and accountability in the AI industry, aligning technological progress with the social and ethical expectations of the American public.[38]

For startups, the EU AI Act can be seen as a framework that supports ethical AI development from inception, though it poses challenges for smaller entities that may struggle with compliance costs.[39] The U.S. startup ecosystem, similarly vibrant, would benefit from clear and supportive AI regulations that balance innovation with ethical standards.[40] Adopting a tiered regulatory approach would reduce compliance costs while preserving ethical standards,[41] providing a supportive climate for emerging AI ventures.

The EU AI Act’s influence is poised to extend beyond its borders, potentially shaping global AI policy akin to the GDPR’s impact on privacy laws.[42] As such, the U.S. must engage with these emerging standards to maintain its competitiveness and influence in the global AI landscape.[43] Being proactive in engaging with and adapting to these standards will not only protect American interests but also guarantee that U.S. firms are in a strong position to take the lead in the ethical development and use of AI technology.[44]

Lastly, the thesis explores national characteristics influencing AI policy in both regions. The EU’s approach is informed by its economic structure, human capital, and legal norms, favoring comprehensive, protective regulation.[45] The U.S. prioritizes economic and technological leadership, suggesting that its AI policy could be refined to align with international standards while bolstering ethical practices.[46]

In sum, the thesis proposes that the U.S. government should forge a regulatory path that ensures AI’s ethical use and public trust while encouraging innovation within a secure and transparent framework.[47] This path should be informed by international developments and cognizant of the cultural and societal values that underpin American governance.[48] The U.S. can lead in supporting a balanced approach to AI by creating a legislative framework flexible enough to adapt to global trends while remaining aligned with national values.[49] Such a strategy would prioritize both technical progress and ethical responsibility.


[1] Martin Hasal et al., “Chatbots: Security, Privacy, Data Protection, and Social Aspects,” Concurrency and Computation: Practice and Experience 33, no. 19 (2021): 1–13, https://doi.org/10.1002/cpe.6426.

[2] “Artificial Intelligence 2023 Legislation,” National Council of State Legislatures, last modified January 12, 2024, https://www.ncsl.org/technology-and-communication/artificial-intelligence-2023-legislation.

[3] Lee Rainie, Janna Anderson, and Jonathan Albright, “The Future of Free Speech, Trolls, Anonymity and Fake News Online,” Pew Research Center, March 29, 2017, https://www.pewresearch.org/internet/2017/03/29/the-future-of-free-speech-trolls-anonymity-and-fake-news-online/.

[4] Victor Li, “What Could AI Regulation in the U.S. Look Like?,” American Bar Association, June 14, 2023, https://www.americanbar.org/groups/journal/podcast/what-could-ai-regulation-in-the-U.S.-look-like/.

[5] Stefan Calimanu, “Why the U.S. Leads the World in Entrepreneurship and Innovation,” ResearchFDI, May 17, 2023, https://researchfdi.com/resources/articles/why-the-U.S.-leads-the-world-in-entrepreneurship-and-innovation/.

[6] “The Act Texts,” EU Artificial Intelligence Act, https://artificialintelligenceact.eu/the-act/.

[7] Alex Engler, “The EU and U.S. Diverge on AI Regulation: A Transatlantic Comparison and Steps to Alignment,” Brookings, April 25, 2023, https://www.brookings.edu/articles/the-eu-and-U.S.-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/.

[8] “The Act Texts.”

[9] Engler, “The EU and U.S. Diverge on AI Regulation.”

[10] “The Act Texts.”

[11] “The Act Texts.”

[12] “U.S. State-by-State AI Legislation Snapshot,” BCLP [Bryan Cave Leighton Paisner LLP], accessed July 29, 2023, https://www.bclplaw.com/en-US/events-insights-news/2023-state-by-state-artificial-intelligence-legislation-snapshot.html.

[13] Kristel De Smedt and Ellen Vos, “The Application of the Precautionary Principle in the EU,” in The Responsibility of Science, ed. Harald A. Mieg (Cham, Switzerland: Springer International Publishing, 2022), 163–86, https://doi.org/10.1007/978-3-030-91597-1_8.

[14] Tambiama André Madiega, “Digital Sovereignty for Europe,” European Parliament, February 7, 2020, https://www.europarl.europa.eu/thinktank/en/document/EPRS_BRI(2020)651992.

[15] Gary Shapiro, “Why Smart AI Regulation Is Vital for Innovation and U.S. Leadership,” TechCrunch, June 23, 2023, https://techcrunch.com/2023/06/23/why-smart-ai-regulation-is-vital-for-innovation-and-u-s-leadership/.

[16] Li, “What Could AI Regulation in the U.S. Look Like?”

[17] “The Act Texts.”

[18] James Manyika, Jake Silberg, and Brittany Presten, “What Do We Do about the Biases in AI?” Harvard Business Review, October 25, 2019, https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai.

[19] Johann Laux, Sandra Wachter, and Brent Mittelstadt, “Trustworthy Artificial Intelligence and the European Union AI Act: On the Conflation of Trustworthiness and Acceptability of Risk,” Regulation & Governance, 2023, 1–30, https://doi.org/10.1111/rego.12512.

[20] Bilal Alhayani et al., “WITHDRAWN: Effectiveness of Artificial Intelligence Techniques against Cyber Security Risks Apply of IT Industry,” Materials Today: Proceedings, March 2021, S2214785321016722, https://doi.org/10.1016/j.matpr.2021.02.531.

[21] World Development Report 2016: Digital Dividends (Washington, DC: World Bank, 2016), https://www.worldbank.org/en/publication/wdr2016.

[22] Chandler C. Morse, “EU AI Regulation: A Call for Global Action,” Workday (blog), June 20, 2023, https://blog.workday.com/en-U.S./2023/eu-ai-regulation-a-call-for-global-action.html.

[23] Rodrigo Nieto-Gómez and Daniel Araya, “Renewing Multilateral Governance in the Age of AI,” Centre for International Governance Innovation, November 9, 2023, https://www.cigionline.org/articles/renewing-multilateral-governance-age-ai/.

[24] Nieto-Gómez and Araya, “Renewing Multilateral Governance in the Age of AI.”

[25] “Europe Fit for the Digital Age: Artificial Intelligence,” European Commission, April 21, 2021, https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1682.

[26] “The Act Texts.”

[27] BCLP, “U.S. State-by-State AI Legislation Snapshot.”

[28] Andreas Liebl and Till Klein, AI Act: Risk Classification of AI Systems from a Practical Perspective (Munich: Initiative for Applied Artificial Intelligence, 2023), https://aai.frb.io/assets/files/AI-Act-Risk-Classification-Study-appliedAI-March-2023.pdf.

[29] David Luyt, “EU AI Act High-Risk AI System Classification,” Michalsons (blog), June 23, 2023, https://www.michalsons.com/blog/eu-ai-act-high-risk-ai-system-classification/66553.

[30] “The Act Texts.”

[31] European Commission, “Europe Fit for the Digital Age.”

[32] Shapiro, “Why Smart AI Regulation Is Vital for Innovation and U.S. Leadership.”

[33] Madiega, “Digital Sovereignty for Europe.”

[34] Ronald W. Del Sesto and Trina Kwon, “The United States’ Approach to AI Regulation: Key Considerations for Companies,” Morgan Lewis, May 22, 2023, https://www.morganlewis.com/pubs/2023/05/the-united-states-approach-to-ai-regulation-key-considerations-for-companies.

[35] Shapiro, “Why Smart AI Regulation Is Vital for Innovation and U.S. Leadership.”

[36] “The Act Texts.”

[37] Shiraz Jagati, “AI’s Black Box Problem: Challenges and Solutions for a Transparent Future,” Cointelegraph, May 5, 2023, https://cointelegraph.com/news/ai-s-black-box-problem-challenges-and-solutions-for-a-transparent-future.

[38] Lee Rainie et al., AI and Human Enhancement: Americans’ Openness Is Tempered by a Range of Concerns (Washington, DC: Pew Research Center, 2022), https://www.pewresearch.org/internet/2022/03/17/how-americans-think-about-artificial-intelligence/.

[39] Douglas R. Nemec and Laura M. Rann, “AI and Patent Law: Balancing Innovation and Inventorship,” Skadden, April 2023, https://www.skadden.com/insights/publications/2023/04/quarterly-insights/ai-and-patent-law.

[40] Eva de Valk, Transparency and Responsibility in Artificial Intelligence (Deloitte Netherlands, 2019), https://www2.deloitte.com/content/dam/Deloitte/nl/Documents/innovatie/deloitte-nl-innovation-bringing-transparency-and-ethics-into-ai.pdf.

[41] Pieter Verdegem, ed., AI for Everyone?: Critical Perspectives (London: University of Westminster Press, 2021), https://www.jstor.org/stable/j.ctv26qjjhj.

[42] Justin Burack, “AI and GDPR: Data Protection and Transparency in Focus,” A Path for Europe (blog), June 16, 2020, https://pathforeurope.eu/ai-and-gdpr-data-protection-and-transparency-in-focus/.

[43] “Does America Have to Follow International Laws?” HG.org, accessed July 31, 2023, https://www.hg.org/legal-articles/does-america-have-to-follow-international-laws-35594.

[44] Yana Lapach, “Disparate Impacts in AI Implementations,” 2021.AI, October 16, 2020, https://2021.ai/disparate-impacts-in-ai-implementations/.

[45] Mauritz Kop, “EU Artificial Intelligence Act: The European Approach to AI,” Transatlantic Antitrust and IPR Developments, no. 2 (2021): 1–11, https://law.stanford.edu/publications/eu-artificial-intelligence-act-the-european-approach-to-ai/.

[46] Frank De Jonghe, “Why Successful Adoption of AI Will Build on a Foundation of Trust,” EY, June 22, 2021, https://www.ey.com/en_U.S./consulting/why-successful-adoption-of-ai-will-build-on-a-foundation-of-trust.

[47] Ajay Bhalla, “Consumer Trust Is the Key to Realising AI’s Full Potential,” World Economic Forum, August 20, 2020, https://www.weforum.org/agenda/2020/08/consumer-trust-ai-potential/.

[48] Rémi Bourgeot, “How AI Giants Try to Influence the Design of European Regulation,” The Conference Board, June 14, 2023, https://www.conference-board.org/publications/ai-giants-european-regulation.

[49] Morse, “EU AI Regulation.” 
