AI Regulation and Governance made simple (yes, that's possible!) with Emma Haywood

Read on for the key principles of AI Regulation and Governance and an understanding of how different jurisdictions around the world are balancing the risks and opportunities of AI.


I had the most fascinating discussion with Emma Haywood, Principal Consultant at Bloomworks Legal, during our Live session last week on AI Regulation and Governance. Emma took us on a world tour (and a journey back in time) to understand how we got to where we are today when it comes to regulating AI, together with the key concepts to look out for.

This is an edited and abridged version of our conversation, adapted from the transcript. You can catch the full recording here.

Lorna Khemraz (LK): Hi Emma, thank you so much for joining us today. I've been particularly excited about preparing for this session given the dynamic nature of AI, which is central to our work at Flank. We specialize in bringing AI solutions to B2B organizations, helping automate functions to enhance self-service capabilities. Given this focus, questions about AI regulation and governance are increasingly relevant. It's an honor to have you, Emma, with your extensive background in software and a deep passion for AI and governance. Could you introduce yourself to our audience and share a bit about your journey?

Emma Haywood (EH): Thanks for the warm welcome, Lorna. Hello, everyone. My name is Emma Haywood, and I am a technology lawyer with a rich background in private practice and as a senior in-house lawyer at an AI-driven healthcare company. I now run my consultancy, Bloomworks Legal, which specializes in legal support for innovation-driven projects. I’ve been deeply engaged with AI, recently as part of the research group at the Center for AI and Digital Policy in Washington, D.C., a nonprofit focusing on technology policy that underscores fundamental rights and democratic values. That’s a really interesting counterpart to my more technical work as a technology focused lawyer. I’m excited for today’s discussion, which promises to be both interesting and enlightening.

LK: We're thrilled to have you here, Emma. For anyone tuning in now, could you outline what they should expect to learn from today’s session?

EH: Certainly. Today’s discussion isn't intended as a primer on the EU AI Act or on starting points for addressing bias in AI, as there are many resources already covering those topics. Instead, what we really want to do today is answer the question: how did we get to where we are now on AI regulation and AI governance? This session is perfect for those who have some familiarity with developments like the EU AI Act but lack a comprehensive understanding of how these elements fit into a broader context. We'll delve into the bigger picture, helping attendees from various roles understand and contextualize the regulatory landscape, identify key trends, and anticipate how these might influence AI deployment in their organizations.

LK: That’s an excellent framing, Emma. It’s crucial to understand the broader implications. Earlier this week, I discussed how AI impacts various fields, illustrating the importance of keeping up with rapid changes. Perhaps a good starting point would be to clarify what we mean by AI regulation and AI governance, as these terms are often used interchangeably but represent different concepts.

EH: Absolutely, Lorna. While often mentioned together, AI regulation and AI governance address different aspects of AI management. AI regulation refers to the creation and enforcement of legal rules that dictate how AI is developed and used, with legal consequences for non-compliance. In contrast, AI governance encompasses a broader set of policies, values, frameworks, and best practices aimed at minimizing AI risks and maximizing its benefits, without necessarily involving mandatory legal standards. Both areas are grounded in AI ethics, which addresses the broader moral questions related to AI use—what is right and wrong, the ethical boundaries, and the priorities in AI development and application. These ethical considerations are foundational and influence both regulatory and governance frameworks.

LK: So, AI regulation involves the legal framework, the actual rules that need to be adhered to, while AI governance is about how these rules translate into practical applications within organizations?

EH: Precisely. In AI governance, non-compliance might not necessarily lead to legal consequences but could result in other repercussions, such as reputational damage or ethical issues. Unlike laws that impose fines or criminal penalties for breaches, governance is more about the ethical and moral standards we adopt in handling AI.

LK: Right. Could you then walk us through the evolution of AI regulation? AI really entered people's consciousness in a very distinct way when ChatGPT came along. But AI itself isn't new, right? So, what's been the journey to where we are now?

EH: Absolutely, AI isn’t new, nor is the concept of regulating it. The noticeable surge in public awareness, particularly with the advent of technologies like GPT, doesn't mark the beginning of AI. In reality, AI has been integrated into various sectors since the 1950s. Over the decades, we've seen foundational efforts to implement basic guidelines ensuring responsible AI usage. For instance, the OECD AI Principles adopted in 2019, which were endorsed by almost 50 countries, are pivotal. These principles advocate for trustworthy AI by emphasizing human rights and democratic values, promoting AI's use in sustainable development and wellbeing, while ensuring fairness and respect for civil and human rights. One critical aspect repeatedly highlighted is the transparency and explainability of AI systems—helping people understand how AI decisions are made and how they can be challenged. Another significant principle focuses on the security and safety of AI systems, addressing the necessity to manage associated risks effectively.

In 2021, the UNESCO recommendations on AI ethics followed, providing a broader international agreement on baseline principles for AI governance. Adopted by nearly 200 countries, these recommendations aim to guide nations in shaping their own AI policies and legislation, reinforcing themes like transparency, human dignity, and environmental protection. These principles and recommendations are not only foundational for current legislative efforts such as the EU AI Act but also influence AI regulations globally, including in the US and China.

LK: What do you see as the catalyst that has driven governments worldwide to consider serious legislation for AI? 

EH: Social media has been a significant catalyst. Governments are keen to learn from past regulatory failures in social media to prevent similar harms and ethical failings with AI. High-profile incidents like the Cambridge Analytica scandal, where Facebook data was used for political manipulation, brought those failures into sharp relief. I'm also thinking of things like the erosion of children's rights, the mental health challenges that come out of social media, and privacy harms. These events have underscored the urgency of preemptive legislation and inspired a trend toward more proactive rule-making, aiming to regulate AI technologies properly from the outset and promote responsible development and use, rather than scrambling to catch up after the fact.

Social media has been a significant catalyst. Governments are keen to learn from past regulatory failures in social media to prevent similar harms and ethical failings with AI.

LK: Maybe I'll get controversial here, but what is ‘proper’ regulation in this context? This is a never-before-seen type of technology, in the same way that social media was a completely new experience for the whole world, so it's very difficult to predict the actual impact. How are legislators approaching this? Is it regulating the actual foundational models, or a combination of, as you were talking about earlier, the regulation and the governance piece to try and contain potential impacts on the world at large?

EH: You're absolutely right. It's a complex field with multiple approaches. We’ll take a look at approaches around the world during this session. One effective strategy is the rights-based approach, where regulation is centered on protecting fundamental human rights. This includes ensuring fairness, privacy, and non-discrimination and often builds on existing laws. However, as we delve into the nuances of AI, some jurisdictions are beginning to amend these laws to specifically address AI-related risks. Globally, we see a mix of rights-based and product safety approaches, like in the EU AI Act, which combines elements of both to minimize various risks associated with AI.

LK: I have a question from the audience related to this: Given the calls from experts like Sam Altman for more regulation, do you think governments are doing enough to regulate AI?

EH: That’s a very juicy question. Defining ‘enough’ is difficult because the impacts of AI are hard to predict fully. While many principles for responsible AI development are internationally recognized, translating these into comprehensive laws like the EU AI Act is still in progress. Different jurisdictions are experimenting with various approaches to find the best balance. For example, the EU AI Act includes provisions that balance protecting fundamental rights with mitigating other risks by requiring measures like AI testing, impact assessments, and human oversight in decision-making processes.

Defining ‘enough’ is difficult because the impacts of AI are hard to predict fully. While many principles for responsible AI development are internationally recognized, translating these into comprehensive laws like the EU AI Act is still in progress. Different jurisdictions are experimenting with various approaches to find the best balance.

LK: Could you provide specific examples of what you're describing? Are we talking, for example, about training AI on people's medical information, that kind of thing? Some examples would help us contextualise this.

EH: Yeah, exactly. So it's how we train the AI, and that could involve things like infringing IP, or building in biases that then lead to discriminatory outcomes. So if you're using AI to make decisions, for example, about people's access to healthcare or education, that is obviously potentially a significant risk to fundamental rights. One of the things that we see, just as a specific example in the EU's AI Act, is that certain uses of AI are banned altogether. If you look closely at those, they are the uses that are most damaging to fundamental rights: things like AI being used to exploit people's vulnerabilities, manipulate their behaviour, or carry out social scoring. So you do see specific examples of fundamental rights being protected, but alongside that there's this layer of: what are you actually using AI for? What level of risk is involved? And then, rather than banning that use, the law suggests or mandates measures you can take to mitigate the risk.

So there you'll see things like requiring AI testing, or impact assessment, or human oversight for AI-based decisions. So there's this interesting kind of layer of different things. You've got prohibitions, and then you've got mitigating measures that can be taken. So that's where the combination of rights-based and product-based approaches is blending those two things together to try to protect against the most serious harms, whilst also providing mitigations for product-based risks that come out of how we use AI.

LK: That really helps contextualize the discussion. It makes me think of the analogy of a knife, which can be both useful and harmful, and of how we regulate potential threats while acknowledging their utility. It seems like we take a similar approach with AI, right?

EH: Absolutely, the knife analogy is apt because it captures the essence of a technology-focused regulatory approach. In some places like China, the focus is on regulating specific types of AI—much like regulating types of knives rather than their use. This includes laws targeting algorithms that recommend content or technologies like deepfakes. It's an approach that categorizes AI types and sets specific rules for each, distinct from how these technologies might be used.

LK: In that context, is there an overlap between AI and privacy regulations in China, or are they treated as distinct areas?

EH: They're quite distinct, because China has taken a really targeted approach, focusing on specific AI functionalities rather than a broad framework. This is emblematic of a larger trend where traditional privacy laws are paralleled by specific AI regulations, each addressing different aspects of AI's impact. This method ensures that general laws like privacy still apply, but with additional layers that address the unique challenges posed by AI technologies.

LK: Interesting. So, what's the situation like in other parts of the world?

EH: Moving to the UK, the approach there can be described as sector-based. Rather than a comprehensive AI law, regulation is tailored by sector regulators who apply core principles like transparency and accountability within their specific domains. This fosters innovation but can lead to regulatory gaps which might necessitate targeted legislation in the future.

In the US, we're observing a patchwork of regulatory initiatives. Notable among them is President Biden's executive order on AI, which directs government agencies but doesn't bind the private sector directly. This includes a mix of rights-based measures, product safety guidelines, and sector-specific best practices, embodying a holistic yet fragmented approach to AI regulation.

These examples illustrate that there isn't a one-size-fits-all approach to AI regulation. Instead, various jurisdictions adopt a blend of strategies to navigate the complexities and rapidly evolving nature of AI technology. The global landscape is dynamic, with different ideas and legal frameworks continuously developing to address both the potential and the challenges of AI.

LK: It's a tough question, but in your view, is there a jurisdiction that's handling AI regulation particularly well? Who’s the ‘A student’? 

EH: The European Union certainly stands out due to its proactive efforts in formulating the EU AI Act. They aimed to be pioneers in this space and succeeded in crafting comprehensive legislation quickly, which is commendable given the complexity and rapid evolution of AI technologies. So, they deserve high marks for effort and the scope of their achievement.

However, while the EU AI Act is a significant milestone, it's recognized within the policy community that it isn't perfect. Crafting a flawless regulation in such a short time, under immense political pressure, and achieving consensus among numerous member states on controversial issues is nearly impossible. Criticisms exist, particularly around the Act’s potential to stifle innovation. Some argue that it could drive major AI players away from the EU to more lenient regulatory environments like the US.

LK: That's insightful. Regarding the EU AI Act, a viewer asked about the significance of the carve-outs for military and law enforcement. How substantial are these exceptions?

EH: Carve-outs in legislation like the EU AI Act are indeed substantial and often controversial. They represent a complex balancing act—on one side, there's the need to push forward with a unified regulatory framework, and on the other, the necessity to accommodate diverse national security interests that require exceptions. This balancing often results in tensions between upholding AI ethics and governance and crafting enforceable laws that are universally acceptable. The controversy mainly stems from differing views on human rights and democratic values versus practical, security-driven needs of states. It’s a delicate issue and remains a point of contention as to how these exceptions will be implemented and their long-term impact on AI regulation.

These topics illustrate the ongoing struggle to align ethical AI use with pragmatic regulatory frameworks, a challenge that is not unique to the EU but is echoed in various forms across global jurisdictions.

LK: We've received an intriguing question that nicely segues into your next topic. With the context you've provided on how various jurisdictions handle AI regulation, do you think this reflects different risk tolerances among nations? Are there any notable insights or trends in how countries are more or less risk-averse?

EH: Yes, definitely. There's a huge geopolitical aspect to all of this, which I find really fascinating. There's a significant drive among nations to establish regulatory environments that not only safeguard against risks but also promote innovation. For instance, the European Union is aggressively promoting its regulatory standards globally, aiming to influence how AI is managed worldwide. Conversely, China is positioning itself to be an AI superpower by 2030, implementing stringent regulations that serve both its political and technological ambitions. This race to craft the most favorable innovation climate is palpable and reflects varying levels of risk tolerance across different regions.

A compelling example is the situation with TikTok in the US, highlighting the tension between fostering technological advancements and protecting national interests and data sovereignty. These considerations are crucial as they directly impact how AI technologies are regulated and managed globally.

LK: It’s indeed a delicate balance. On one hand, there's a strong push for innovation, with countries eager to lead the next technological breakthrough. On the other, they have to manage the inherent risks that come with new technologies. The geopolitical aspect adds a layer of complexity to this balancing act.

EH: It is, absolutely. There's a lot to watch on that. I think that would be one of the really hot topics to keep an eye on, as to how regulation stems from and then leads into that geopolitical battle for AI supremacy. 

...the situation with TikTok in the US highlights the tension between fostering technological advancements and protecting national interests and data sovereignty. These considerations are crucial as they directly impact how AI technologies are regulated and managed globally.

LK: Moving on to enforcement, we have a relevant question: How can developed and developing regions collaborate to create effective AI regulation, particularly in places with limited enforcement capabilities?

EH: Enforcement is indeed a significant challenge, especially as the strongest voices in AI regulation have traditionally been from developed countries. However, developing nations are increasingly engaging in this space. For instance, the African Union is actively developing a continent-wide AI strategy, demonstrating a growing commitment to participate in global AI governance discussions. This collaboration is crucial because it incorporates diverse perspectives in shaping foundational AI concepts, as highlighted by international principles endorsed by entities like UNESCO and the OECD.

Enforcement complexity arises from the need for specialized expertise to both understand AI technology and enforce regulations effectively. In the EU, we see efforts to designate specific authorities to oversee AI regulation, but there's a scramble to acquire the necessary skilled personnel. The challenge is finding experts who not only grasp the technical aspects of AI but can also interpret and apply the law. This issue isn't confined to developing countries; it's a global challenge.

As the EU AI Act approaches implementation, similar to the GDPR, there's anticipation about how enforcement will unfold—who will face penalties first and how severe those penalties will be. The effectiveness of any regulation heavily relies on the ability to enforce it. Addressing the gap in enforcement expertise is essential for maintaining the pace with advancing AI technologies and ensuring that laws have the intended impact. This is an ongoing effort, and it's crucial that as our capabilities develop, so does our capacity to enforce these regulations effectively.

LK: It feels like there is so much going on, all the time, that we can never get on top of it all. We've covered legislation, we've covered governance, we've covered geopolitical aspects of this whole piece. What would you say are some of the hot topics worth covering in the remaining time? 

EH: Indeed, the pace at which AI is evolving can be daunting. It’s essential to narrow down to areas that are most pertinent to your interests or professional role rather than attempting to stay abreast of everything. For instance, understanding the geopolitical impact of AI regulation is crucial. The book Digital Empires by Anu Bradford discusses this in depth, especially how different nations are positioning themselves within the global AI landscape.

Another significant trend is the 'Brussels effect,' where regulations crafted in the EU, like the GDPR, set a precedent that other regions tend to follow. This phenomenon is now visible with AI regulations, where countries are closely observing the EU AI Act to shape their own laws. This influence extends beyond Europe, with countries like Canada and Japan adapting their strategies based on these standards. There's a practical aspect to this as many companies operate globally and prefer a uniform approach to compliance, especially to maintain access to the vast EU market.

LK: That's an interesting concept, almost like a 'copycat effect' where EU standards become a global benchmark.

EH: Exactly. Watching how the 'Brussels effect' unfolds will be key, particularly as standards set by the EU are increasingly adopted worldwide. This not only simplifies global operations for multinational companies but also raises the overall standards of AI governance internationally.

To wrap up, while it's not necessary for everyone to delve deep into specifics like the EU AI Act, understanding key concepts such as transparency and accountability in AI usage is crucial. These principles are fundamental to ensuring that AI is used ethically and responsibly. For instance, ensuring that AI operations are transparent and that there are mechanisms for accountability when things go wrong is essential for building trust and managing risk. These concepts should guide how organizations implement AI technologies and reflect in how risks are managed through contracts and compliance strategies.

Overall, the focus should be on integrating these overarching principles into daily operations and decision-making processes to ensure that AI technologies are leveraged safely and responsibly across sectors.

LK: That’s an excellent point. A focused approach can indeed relieve a lot of the pressure. We don’t need exhaustive knowledge about global AI legislation to be effective. Also, I understand you have a resource list that could be very helpful to our audience.

EH: Yes, I've compiled a list that includes books, podcasts, newsletters, and trackers that I typically share with my clients. I'd be glad to share it with attendees of today’s LinkedIn Live. It will also be featured in the Flank newsletter (AI Unplugged), or you can direct message me on LinkedIn, and I’ll send it over. It's a great way to grasp the broader context of AI regulation, tailored to your preferences, whether you're into books or articles.

LK: That sounds fantastic. I’m particularly keen on book recommendations. Emma, as we wrap up, do you have any final thoughts or advice for our audience?

EH: Absolutely, my key advice would be not to feel overwhelmed by the vastness of AI regulations. Focus instead on understanding the general principles, like those from UNESCO and the OECD, which are quite straightforward and underpin much of the global approach to AI. These principles are accessible and foundational for anyone looking to understand the ethical considerations in AI. If anyone needs direct links to these resources, they’re included in the resource list I mentioned.

LK: Thank you, Emma. Your insights today have been incredibly enlightening. I’m sure our audience found this discussion as informative as I did.

EH: Likewise, thank you so much for having me, and thanks to Flank.

Hungry for more? We’ve got some very easily digestible articles explaining buzz concepts like GenAI, LLMs, and RAG.

If you’re curious about how we’re disrupting the world of work, find out more about Flank at getflank.ai, or book a demo.

For Emma's wonderful list of resources, you can connect with her on LinkedIn or sign up to our monthly Newsletter, AI Unplugged.