Navigating AI Policy: Key Insights and Practical Tips for Enterprises
AI policy in the enterprise: the key considerations to bear in mind and some guidance on where to start.
This article is adapted from the transcript of our LinkedIn Live, where Anatol Poyer-Sleeman (of Founders' Law) and I discussed balancing risk and opportunity with AI and, more specifically, how to approach building an AI policy.
Lorna Khemraz (LK): I'm thrilled to have Tol from Founders' Law here for this session on AI policy. Let me start by introducing who we are. At Flank, formerly Legal OS, we bring AI agents that offer 24/7 on-demand expert support to organizations. These AI colleagues integrate into your existing enterprise tech stack, resolving requests from commercial teams instantly and independently. With the rapid advancements in AI, especially Generative AI, AI policy has become a top priority for many enterprises. Today, we'll discuss why AI policy is essential and how to approach it effectively. Tol, can you tell us a bit about yourself and the work Founders' Law does?
Anatol Poyer-Sleeman (Tol): Sure, Lorna. I'm a commercial lawyer specializing in tech law, IP, and data privacy. My work intersects with areas like social media, digital platforms, and AI. Founders' Law focuses on startups, scale-ups, and high-growth companies. We act as externalized general counsel, and many of our clients are using or building in the AI space.
LK: Thanks, Tol. It's fascinating to see your diverse background. Can you share a bit about your journey to becoming an AI law specialist?
Tol: Absolutely. I started my career in law but wanted to stay local, so I joined the police force, where I worked on high-profile cases and also trained in specialist areas like hostage negotiation. After a decade, I transitioned to the automotive repair business, which was a significant change but also very rewarding. Eventually, I returned to law, focusing on tech and AI, which combined my interests in law and emerging technologies.
LK: That's an incredible journey. Now, let's dive into AI. Why do organizations need to think about AI policy, and what risks are we looking to mitigate?
Tol: Every business today has someone experimenting with AI tools, whether they realize it or not. These tools can significantly enhance productivity, but they also pose risks, especially when their use isn't governed by the business. For instance, if employees use free AI tools without understanding the implications, sensitive information could be exposed. It's similar to using unauthorized email accounts, which can lead to security breaches. Having a policy in place ensures these tools are used safely, protecting both the business and its data.
LK: So, what are the main risks? Are we talking mainly about loss of control over confidential information and personal data?
Tol: Yes, that's a big part of it. When using free AI tools, the information fed into them can be ingested and used to train the model, potentially exposing proprietary and confidential data. For example, if someone in HR drafts letters using a free AI tool, sensitive employee information could be compromised. Businesses need to check the terms of service and opt out of data training where possible, or use paid versions that offer better data protection.
LK: What should organizations consider when deciding whether to use AI tools?
Tol: The starting point should be to protect information. Any business has confidential and personal data that needs safeguarding. It’s essential to ensure that any AI tool used complies with data protection regulations like GDPR. Additionally, businesses should consider the legal implications, such as whether the AI tool’s use aligns with contractual obligations.
LK: Can you provide examples of how AI is being used in ways that people might not be aware of?
Tol: Certainly. AI isn't just about chatbots. For instance, social media platforms use AI to build profiles on users and target advertisements. Some AI services can even identify website visitors and suggest personalized marketing strategies. That level of data processing can be quite intrusive, and much of it happens without people realizing it.
LK: What about the ethical concerns, especially when AI is used for decision-making without human oversight?
Tol: This is a critical issue. AI tools can make decisions with significant impacts, such as in credit scoring or job applications. Under GDPR, individuals have the right to human review of solely automated decisions that produce legal or similarly significant effects, so organizations must ensure they have the right checks and balances in place to comply with these rules.
LK: What practical steps should organizations take to implement AI governance?
Tol: First, identify a stakeholder responsible for AI governance. This could be someone from data privacy, information security, or legal functions. Review and update existing policies to include AI tools. Provide secure, corporate-approved AI tools to prevent employees from using unregulated free versions. And ensure robust data mapping to understand what data is being processed and how.
LK: You mentioned principles-based governance earlier. Can you elaborate on that?
Tol: A principles-based approach focuses on broad guidelines rather than detailed rules, which can quickly become outdated. For instance, stating a commitment to human creativity and dignity in AI use can provide a flexible framework that evolves with technology and regulations. This approach can be more effective than trying to list every permissible and prohibited action.
LK: Finally, what are your top three takeaways for organizations looking to implement AI policy?
Tol: First, get some form of governance policy in place as soon as possible. Second, adopt a principles-based approach rather than detailed prohibitions; these tools are too accessible and too useful to ban outright. Third, provide secure, approved AI tools so employees can use AI safely and the business can harness its benefits while managing the risks.
LK: Thanks, Tol. This has been an insightful discussion. If anyone has further questions, feel free to reach out to us. Thank you for joining us today.
Tol: Thank you, Lorna. It’s been a pleasure.
Are you interested in finding out more? We take a look at AI regulation and governance here.
Want to find out about our AI agents? See here.