The United States - Federal
Tier 1: The Overview
State of Play
The headline: The United States is the global leader in AI innovation, with a generally permissive regulatory framework and high capital risk tolerance making it an attractive investment location for tech startups. The US’s complex and highly politicised policymaking landscape means that comprehensive AI legislation out of Washington DC is not forthcoming, leaving a fragmented regulatory environment split between Federal-level non-binding directives and State-level legislation.
The context: The AI and wider technology policy landscape at the Federal level is divided between a number of power bases, including the President, Congress, the judiciary and regulatory bodies such as the Federal Trade Commission (FTC). Regulators and the courts have the power to set policy direction and precedent through litigation, which can give them a more active role in the policymaking process than their equivalent structures in the UK or EU. Live examples include ongoing litigation relating to AI training data, such as the New York Times’ case against OpenAI’s ChatGPT. Although not regulation per se, the outcome of these lawsuits will have an effect on the US AI landscape at least as significant as that of the laws discussed in this section, as it will determine whether training generative AI models on scraped data is "fair use" under US copyright law.
The rules: There is currently no Federal-level AI legislation, nor is there likely to be the bipartisan political will to progress an AI Bill through Congress any time soon. Instead, a patchwork of policy guidance documents has emerged over the course of the Biden-Harris Administration covering topics such as the use of AI in Government and AI risk management. President Biden’s 2023 Executive Order (EO) on AI is the most significant Federal policy intervention to date. Not only is it the first Federal intervention to include specific provisions on governing generative and frontier AI, but it also has binding elements, making use of a Presidential Directive on defence preparedness to require developers of foundation models over a certain level of capability to disclose safety and technical information to the Government. Even so, without legislative backing the EO could be directly repealed. President-elect Trump has stated he would repeal the EO.
What this means for founders
State-level AI legislation has forged ahead where Federal regulation has stalled, with various States considering or enacting laws to regulate the use of AI by the private sector. Colorado and Utah led this process, as the first two States to pass AI legislation designed to broadly regulate commercial use of AI and protect consumers. Colorado’s law introduces an EU AI Act-style "high-risk AI" designation, while Utah’s legislation sets out the liability burden for consumer harms caused by generative AI systems in commercial use.
When deciding to launch or scale in the US, founders will need to ensure they are compliant with any relevant state AI laws, including in areas interacting with AI like privacy legislation. In the absence of regulatory coordination mechanisms, this can be a tricky exercise for founders. The national conversation on AI has so far focused on consumer protection policies and addressing the monopolistic power of big tech. Under the Biden presidency, FTC and DOJ action against big tech and AI companies has focused on competition issues and potential monopolistic practices, and the FTC has expressed concern with data privacy and confidentiality practices in the context of AI model training. FTC Chair Lina Khan will likely be removed by the incoming Trump administration, although it is not yet clear how that will impact the agency’s direction. Founders should stay aware of developments in this area, as it is likely to be influential in setting precedents and shaping the AI ecosystem at the Federal level in the absence of specific AI legislation.
Forward Look
President-elect Trump is likely to repeal the Executive Order.
The 2024 Republican manifesto pledged to scrap Biden’s EO on AI. This could have important ramifications for startups, as the guidelines represent a direct Government engagement opportunity for companies looking to build trust by cooperating with State AI governance initiatives. If the EO is rescinded, industry associations and internal structures will become increasingly important mechanisms for Government engagement on AI.
It is not yet clear what the Trump administration’s AI policy plan is.
Trump said relatively little about AI policy during his campaign. Close ally Elon Musk will likely be influential. Musk’s position on AI regulation has shifted back and forth: in March 2023 he signed an open letter urging AI labs to pause training for six months to focus on developing and implementing shared AI safety protocols, and in August 2024 he endorsed a controversial Californian state bill (SB 1047) that would have required AI developers to conduct safety testing on some of their own models. At the same time, Grok, the model built by Musk’s AI company xAI, is designed to have fewer guardrails than major competitors and has come under scrutiny for its data protection practices.
Globally, the United States generally offers a favourable environment for AI startups. Its regulatory approach tends to be more flexible and innovation-driven, with significant support for technological development. Despite years of efforts, Congress has failed to pass any meaningful legislation that would regulate the technology industry. Agencies like the Federal Trade Commission and the National Institute of Standards and Technology provide guidance but do not impose overly restrictive rules, making it easier for founders to innovate without excessive regulatory burden.
Megan Capiak
Managing Partner, Foundry Ten
Policymakers
Enforcers
Innovation Champions
The United States - States
California
Tier 1: The Overview
State of Play
The headline: California is the undisputed centre of AI and tech innovation, home to 32 of the top 50 most promising privately-held AI companies. California is also known for setting high regulatory barriers and has recently turned its attention to AI, with the Governor signing eighteen AI-related laws in the 2024 session and the State legislature likely to introduce further AI-related Bills in the future.
The context: California is the key jurisdiction of operation for innovative companies of all sizes, boasting an economy that would rank as the fifth-largest in the world (ahead of the UK) if the State were its own country. Silicon Valley remains the centre of gravity for US AI funding. Even setting aside the major success story of OpenAI, the Bay Area still attracted seven times more investment into generative AI between 2019-2024 than New York, the next biggest hub in the US.
The rules: California introduced thirty-eight AI-related Bills during the 2024 legislative session. The most controversial - SB 1047, which would have placed binding rules on the developers of the most powerful frontier AI models - was vetoed by the Governor on the grounds that targeting the technology itself rather than its potential use cases did not adequately mitigate the risks of AI going wrong or being misused. However, the Governor did sign eighteen AI-focused Bills from the 2024 session, including AB 2013 and SB 942, which will set new transparency standards for sharing model training data and disclosing synthetic content respectively.
What this means for founders
Founders will need to be aware of the impact of these new laws, as well as familiarise themselves with the State’s laws in other AI-relevant areas of regulatory intervention, such as California’s stringent data privacy legislation. Founders should also be aware of future AI-related Bills coming out of California, which may focus on current risks from specialised use cases for the technology rather than on the theoretical catastrophic risks that were the focus of SB 1047.
Forward Look
We could see a "California Effect" develop in US AI regulation.
The size and depth of California’s tech market mean that its newly-enacted AI rules will likely have repercussions for AI developers and deployers across the US, despite the veto of SB 1047. Firms will need to comply with the new regulations in order to build and use in-scope AI systems in the State, regardless of where they are headquartered. Equally, the State legislature could look to reintroduce all or parts of SB 1047 in future sessions. As a result, we expect the State to become something of a standard-setter on AI guardrails in the coming years, in lieu of Federal-level legislation.
New York
Tier 1: The Overview
State of Play
The headline: New York State’s AI credentials extend beyond the Big Apple, driven by a pro-tech, pro-innovation Governor, Kathy Hochul, who recently approved a $275m flagship statewide investment in collaborative research, development and job creation: the Empire AI Consortium.
The context: The Consortium forms one part of a broader innovation strategy for the State that includes significant investments in chips and green technology, life sciences and advanced manufacturing. The strategy also emphasises technology commercialization and innovation across industries including biotechnology, new materials and nanotechnology. Meanwhile, the New York City AI Action Plan, launched in October 2023, aims to establish a comprehensive framework for AI governance that it claims is the first of its kind in any US city.
The rules: In the absence of federal legislation, New York has been one of the leading US states to propose and implement regulations on AI. Enacted rules include a policy requiring State agencies to ensure that all AI-made decisions affecting the public are subject to human oversight, and New York City has introduced a law requiring employers to audit automated employment decision tools for bias.
What this means for founders
New York is undeniably one of the world’s most attractive jurisdictions in which to set up a business working with AI. It boasts a range of grants and programmes to support tech startups with early-stage funding and is nurturing a growing talent pipeline. Its regulatory regime is evolving fast, and founders should keep a close eye on it; its enforcement approach has focused on providing safeguards for users and the public. As part of this, you may find that adopting transparency as a key principle works to your benefit when you operate in NYC.
Despite New York’s expanding tech scene, startups often find the state’s regulatory environment challenging to navigate. New York is notorious for its rigorous privacy and consumer protection laws, coupled with stringent enforcement practices. Recent high-profile cases have highlighted how regulators pursue companies for data violations, imposing steep penalties for noncompliance and reinforcing the State's strong commitment to consumer protection. For AI companies, this can mean grappling with extensive legal requirements that may hinder innovation and growth if not carefully managed.
Megan Capiak
Managing Partner, Foundry Ten
Forward Look
We can expect further industry guidelines and regulation to be issued.
These are most likely to target specific sectors (e.g. insurance) to prevent discriminatory outcomes from the use of AI. Other proposals include a "Robot Tax" on businesses displacing workers with AI and licensing requirements for high-risk AI systems. Senate Internet & Tech Chair Kristen Gonzalez is developing her AI agenda, which is expected to include regulations on AI chatbots. Legislation on AI usage liability and watermarking is also anticipated in 2025.
For New York City, the AI Action Plan’s implementation will kick into gear over the course of 2024-2025.
There are myriad initiatives underway to raise public awareness of AI and to establish clearer guidelines for AI developers and deployers in public services.
Policymakers
Enforcers
Innovation Champions
Texas
Tier 1: The Overview
State of Play
The headline: Texas ranks fourth among US states for VC investment and is a vibrant hub for startup success. Texas’ tech sector contributes more than $469 billion to the State’s economy, representing nearly 20% of the State’s total GDP. Its three major cities - Austin, Houston, and Dallas - have formed what is known as the Texas Tech Triangle, with each city serving as a major hub for technological advancement.
The context: The State has a strong innovative and entrepreneurial spirit, with no income taxes for individuals or corporations. Although its AI regulatory environment is generally considered more permissive than California’s, Texas is ultimately unlikely to displace California’s market size and innovation ecosystem for founders.
The rules: Texas has a reasonably developed approach to AI policy. Its AI-relevant rules are strongest on data privacy and deepfake regulation, requiring consent before companies process sensitive data and prohibiting the production and dissemination of political and sexual deepfakes.
What this means for founders
Texas’ favourable investment and policy context makes it an attractive place for founders. The Tech Triangle is underpinned by AI-specific Innovation Champions such as Dallas AI and the Austin AI Alliance.
Forward Look
Enforcing the new data privacy laws is at the top of the agenda.
The Data Privacy and Security Act entered into force in July 2024, granting individuals sweeping rights over the use of their data. The State Attorney General Ken Paxton has indicated his intention to thoroughly enforce the new law in collaboration with the Consumer Protection Division of the Office of the Attorney General.
AI-related legislation is expected to be introduced in the 2025 legislative session.
A draft comprehensive AI Bill is expected to be released in November 2024 ahead of the 2025 legislative session, which may pave the way for the introduction of further AI regulations.
Timelines
Q4 2024: The Texas AI Advisory Council, established in February 2024 by Governor Greg Abbott, is expected to publish a report on the use of AI in the State government. Potential policy proposals include designating a State agency to hold the pen on AI policy, creating a separate AI policy-making office, and introducing further measures on AI, data privacy and cybersecurity.