The state of AI policy and regulation
This founders’ guide to AI policy and regulation helps startup founders and operators understand the AI policy and legal landscape in key target markets, and adopt responsible AI practices early so they can reap the benefits of the technology and its applications.
The responsible development and use of AI has emerged as a headline-grabbing topic since generative AI applications were released to a mass public audience in 2022.
Policymakers across the globe have been grappling with the technology’s economic and societal implications, assessing whether the necessary governance is in place for the technology to be developed and deployed responsibly now and into the future.
But what does it mean to be ‘developing and deploying AI responsibly’, and how can founders do the right thing?
It can be easy to participate in the AI policy debate with broad-brush arguments focused on AI transforming the world as we know it, without addressing the real and current risks and opportunities of the technology.
Founders play a crucial role in realising AI’s benefits, and need to understand the risks and regulatory landscape to do so.
Whether it is by understanding the rules, needs and attitudes around AI in your target market, or by ensuring your team understands the capabilities and implications of AI, thinking about AI policy and regulation early makes both ethical and business sense. It will give you confidence that your product won’t backfire with current or future end-users.
We fundamentally have a choice over the kinds of AI we can build, and the effects it has. The role of regulation is to help steer the industry away from bad outcomes on the small scale (e.g. bias, deepfakes) and the large (e.g. the concentration of power in a small number of hands, large-scale disruptions). Regulation should help create incentives where they wouldn’t otherwise exist — companies can’t reliably act as self-regulators but founders can help shape the policy work that improves the safety and societal benefit of AI systems.
Matt Boulos
Head of Policy & Safety, Imbue
We’ve developed this guide with the support of Milltown Partners, as well as a number of expert advisers, founders and advocacy groups across the countries and regions covered. With clear analysis, expert insights, tips and best practices for startup founders, we hope this guide gives you the confidence and tools you need to start your journey towards responsible AI practice.
This site and its contents are provided for general information only and their provision does not constitute legal advice or create any form of lawyer-client relationship. You should not rely on this site and its contents as an alternative to legal advice.
About this guide
This Founders’ guide to AI policy and regulation is written for founders and leaders of startup and scaleup companies.
If you are interested in developing, procuring or deploying AI as part of your product or market strategy - or are already doing so - then this guide is for you.
The guide covers some of the world’s most investment-friendly markets for AI products to help you navigate the complex and ever-evolving AI regulatory landscape: examining policymakers’ priorities in those key markets and explaining the wider AI ecosystem in each jurisdiction. We have split the AI policy arena into three broad categories:
- Policymakers: the government departments in charge of driving policy initiatives and designing regulatory frameworks, consulting with regulators and stakeholders like you.
- Enforcers: the regulators presently tasked with enforcing existing regulatory frameworks in key use cases and sectors.
- Innovation Champions: the allies for founders, supporting and advocating for regulatory frameworks and wider policy initiatives that work for you.
The world of AI policy and regulation is a patchwork of laws, guidance and principles that differ between markets, reflective of the speed at which the technology is developing.
While the EU has chosen to develop AI-specific legislation, many other markets are incorporating rules on AI into existing laws on privacy, copyright and regulated sectors such as finance, law, healthcare and insurance. Because of this, you need to be alive to the relevance of these various frameworks and how they might apply to your product. Around the world, there are also different regulatory approaches to data access and copyright, which you might want to track if you are planning to develop your own AI models. For building large language models, you might be best setting up shop in a market with a more permissive copyright regime (e.g. Singapore, Japan), whereas markets with clear data access requirements (e.g. the EU) might provide more clarity for the development of narrower datasets and AI models.
For each market covered, this guide signposts further information on how to get involved, or where to turn to obtain further advice.
Founders frequently face the challenge of driving innovation while navigating a landscape of complex and evolving regulations. This balance is particularly delicate in the AI sector, where rules are often ambiguous and subject to change. To succeed, startups must remain agile and proactive, adapting to new regulations without stifling their creative and technical progress.
Megan Capiak
Managing Partner, Foundry Ten
Top tips for founders
If you only had a few minutes to understand the current state-of-play in AI regulation around the world, the key message is: it’s complicated, and it’s moving fast.
However, there are some common themes and priorities across the markets covered in this guide. We have distilled those themes into some top tips for founders, which will be valuable regardless of the market you are operating in. Note that this is not legal advice; rather, these are examples of good practice.
The regulation of artificial intelligence is often thought of as an unprecedented challenge, but it’s no different to any transformative science that has come before. Fairness, ethics, transparency and safety have always been considerations for regulators when deploying science and technology to improve lives and societies. What’s different now is the sheer ubiquity of the application of AI and how many sectors it touches. That means more opportunities for companies across sectors to support governments in building intelligently designed sector-specific AI regulation.
Sarah Gates
Director of Public Policy, Wayve
- Understand the privacy implications of your AI products before bringing them to market, and invest in good data practices.
It is no surprise that some of the earliest AI enforcement action was on privacy grounds, given the often unintentional inclusion of personal data among the vast amounts of data used to train AI tools, as well as the privacy risks of deploying certain AI applications, such as facial recognition technology. Build your product or service with privacy in mind at both the training and deployment stages:
- Adhere to relevant data protection legislation;
- Minimise the amount of personal data you’re using;
- Anonymise or pseudonymise data where possible;
- Be clear about your data practices to establish trust with end-users, clients and regulators, including through a public-facing privacy policy on your website.
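As a concrete illustration of the pseudonymisation point above, the sketch below replaces direct identifiers with keyed hashes before data is used for training. The field names and key handling are illustrative assumptions, not a compliance recipe; whether pseudonymised data still counts as personal data depends on the applicable law.

```python
import hmac
import hashlib

def pseudonymise(record, key, id_fields=("email", "user_id")):
    """Replace direct identifiers with keyed hashes (HMAC-SHA256).

    Unlike plain hashing, the secret key prevents re-identification by
    anyone who can guess the input (e.g. by hashing known email addresses).
    """
    out = dict(record)
    for name in id_fields:
        if name in out:
            digest = hmac.new(key, out[name].encode(), hashlib.sha256)
            out[name] = digest.hexdigest()[:16]  # truncated opaque token
    return out

# Hypothetical record; non-identifying fields pass through unchanged.
record = {"email": "jane@example.com", "user_id": "u-123", "plan": "pro"}
safe = pseudonymise(record, key=b"keep-this-key-out-of-source-control")
```

The same input always maps to the same token, so you can still join records across datasets without storing the raw identifier; rotate or destroy the key to break that link entirely.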
Ensure your products are responsible from the get-go. It is much easier to begin with a product that is compliant by design rather than have to reverse-engineer it down the line. Think about how you demonstrate the trustworthiness of the AI you’re building; regulations are likely to tighten; expectations are likely to get more specific.
Ben Lyons, Head of Policy and Public Affairs, Darktrace
- Protect your intellectual property, and make sure you’re not stealing others’.
As a founder, you want to ensure you’re protecting the key intellectual property (IP) your company is built on through relevant agreements, and conversely, avoid using third-party IP without clarifying that you have the right to do so. When developing and using AI, the value or provenance of content or information used as training data isn’t always clear, so make sure you have strategies to appraise the value of your ideas, as well as to trace content back to its origin.
- Consider the context.
AI technologies are only as good as the datasets they are trained and tested on. To avoid discriminatory or otherwise detrimental outcomes for your company or users, you should prioritise time to detect potential biases and monitor areas for improvement. Be clear with customers what your AI tools can and can’t do.
Document your processes. Context-specific scorecards and transparency around model training, data usage, your AI’s outputs and related issues are becoming more common in legislative proposals. If you’re training models, be ready to discuss your dataset and methodology, and how your system reaches specific outcomes — this will likely become a regulatory requirement.
Matt Boulos, Head of Policy & Safety, Imbue
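One lightweight way to put that documentation advice into practice is a machine-readable record kept alongside each model release, in the spirit of a model card. The fields and values below are illustrative assumptions, not a regulatory template; adapt them to whatever your sector’s regulators actually ask for.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Minimal, machine-readable record of how a model was built and evaluated."""
    name: str
    version: str
    intended_use: str
    training_data: str                      # provenance of the dataset
    evaluation: dict                        # metrics, per-group where relevant
    known_limitations: list = field(default_factory=list)

# Hypothetical example for an internal ticket-routing model.
card = ModelCard(
    name="support-triage",
    version="1.2.0",
    intended_use="Routing customer tickets; not for automated refusals.",
    training_data="Internal tickets 2022-2024, personal data pseudonymised.",
    evaluation={"accuracy": 0.91, "accuracy_non_english": 0.84},
    known_limitations=["Lower accuracy on non-English tickets."],
)

# Serialise and version this file with each release, so you can answer
# questions about dataset and methodology for any model still in production.
card_json = json.dumps(asdict(card), indent=2)
```

Because the card is plain data, you can diff it between releases and surface parts of it (intended use, limitations) directly to customers.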
- Self-assess and establish which regulatory developments are most relevant to you.
Conduct some high-level risk mapping to determine which AI risks might be most relevant to your business. Take time to understand the current AI regulatory landscape and how it may apply to your products and operations - both for internal and external use. Collaborate with legal experts, join industry groups to represent your interests (many are listed in this guide), and actively participate in those policy discussions that are most crucial to your startup’s success. Look into AI regulatory sandboxes if you’re operating in a sector or use case where there is some uncertainty. Early and consistent engagement with regulators also presents opportunities to shape future regulations, positioning yourself and your company as a proactive leader in the field.
Startups should consider and monitor regulatory risk, and, as they grow, deploy manpower or capital to address it. They should also actively engage with regulators early on. Regulators are typically just trying to protect consumers from being harmed - an aim that should not conflict with your business. The more the regulator understands about the benefits of your technology, the better.
James Clough, CTO and co-founder, RobinAI
Find the right regulatory partner who understands your business positioning and can guide you through the evolving complexity of AI regulation. They should be able to help you navigate and implement solutions in an international context. Industry associations can be helpful if you don't have the time or resources to represent your interests yourself.
Lyline Lim, Head of Impact, Photoroom
- Establish a responsible AI framework.
Work with your team and external counsel where necessary to develop principles and practices that reflect your company values, ensure compliance with the law, and minimise potential liability exposure to customers and third parties. Early investment in compliance infrastructure such as data governance, documentation and ethical review processes, personnel policies and sensible terms of use for customers can save startups significant time and resources in the long run. Continually assess whether your AI tool is functioning in the way you intended it to, and consult with stakeholders impacted by your AI use. Don’t forget to establish clear lines of accountability to identify the people who are responsible (and liable) for ensuring these values are implemented and acted upon.
Startups should proactively incorporate ethical considerations into their AI development process, including fairness, transparency, and accountability. This not only helps with regulatory compliance but also builds trust with users and investors.
Megan Capiak, Managing Partner, Foundry Ten
- Train your team in the responsible development and use of AI.
Having frameworks and principles in place is a great starting point, but you still need to make them work in practice. Provide your teams with AI ethics training to educate them about the implications of AI systems. This will instil a culture of confidence in the technology and its capabilities, and will help you anticipate issues early. Identify the gaps in your and your team’s understanding, and in your understanding of your customer base’s expectations, and plan how you will address them.
The era of ‘move fast and break things’ is over. Don’t be afraid to invest in AI but challenge yourself on what you plan to do about compliance and educate yourself from the outset. Do your homework now - it will pay off in the long run.
Agata Hidalgo, European Affairs Lead, France Digitale