The United States - Federal

Tier 1: The Overview

State of Play

The headline: The United States is the global leader in AI innovation, with a generally permissive regulatory framework and high capital risk tolerance making it an attractive investment location for tech startups. The new administration under President Trump is still in the process of establishing a comprehensive policy agenda. As a result, the AI policy landscape continues to be split between Federal-level Executive directives and State-level legislation.

The context: Under Trump’s administration, policymaking is being concentrated more heavily in the Executive branch of government. However, the courts still have the power to set policy direction and precedent through litigation, which can give them a more active role in the policymaking process than their equivalent structures in the UK or EU. The outcome of the first AI & copyright case, which went in favour of news publisher Thomson Reuters, is a case in point: in the absence of Federal legislation, the precedents set by cases of this type could determine whether, for instance, training generative AI models on scraped data is considered legal under the "fair use" principle of US copyright law.

The rules: There is currently no Federal-level AI legislation, nor is there likely to be the bipartisan political will to progress an AI Bill through Congress any time soon. The Trump administration has repealed Biden’s Executive Order on AI, which had set some reporting requirements on the developers of powerful generative AI models, and has instituted a new Executive Order that seeks to break down the barriers to AI innovation and development. So far, this signals the administration’s intention to take a generally deregulatory approach to AI policy and regulation, but it is too soon to tell whether this logic will hold.

What this means for founders

State-level AI legislation has forged ahead where Federal regulation has stalled. Seven hundred AI-related bills were introduced at State level in 2024, with some pieces of AI regulation on private companies enacted in key jurisdictions such as California (see the California section below). More regulation is already being introduced in the 2025 sessions. Founders will need to track state-level developments carefully (dedicated trackers are available from e.g. the International Association for Privacy Professionals). When considering whether to launch or scale in the US, founders will also need to ensure that their products and services are compliant with any relevant state AI-related laws.

Forward Look

The new administration is pursuing a markets-driven approach to securing critical technology supply chains, which could cause some economic uncertainty.

Trump has taken aim at the effectiveness of state-driven innovation funding initiatives like the CHIPS Act, introduced by the Biden administration, instead instituting tariffs on imports in key sectors to encourage markets to move their operations to US soil. The early implementation of this trade policy has created some uncertainty in the markets which, if it persists, could impact the US’s status as a stable hub for public and private investment. At this stage, it is too soon to tell what the wider implications for the US’s AI and innovation sector could be.

Globally, the United States generally offers a favourable environment for AI startups. Its regulatory approach tends to be more flexible and innovation-driven, with significant support for technological development. Despite years of effort, Congress has failed to pass any meaningful legislation that would regulate the technology industry. Agencies like the Federal Trade Commission and the National Institute of Standards and Technology provide guidance but do not impose overly restrictive rules, making it easier for founders to innovate without excessive regulatory burden.
Megan Capiak
Managing Partner, Foundry Ten

Tier 2: The Details

Policy and Legislative Landscape

INITIATIVE

DESCRIPTION

In January 2025, President Trump signed an executive order establishing a policy to "sustain and enhance America's global AI dominance" and revoking existing AI policies that "act as barriers to American AI innovation".

The new order requires key administration officials to develop and submit action plans within 180 days to achieve these policy goals.

The order also revokes Biden’s Executive Order 14110 and sets out to review "all policies, directives, regulations, orders, and other actions taken" as a result of that Order, suspending any that do not "promote human flourishing, economic competitiveness, and national security".

In June 2024, the Supreme Court issued a landmark ruling that overturned the Chevron doctrine, which had until then required courts to defer to Federal agencies' interpretations of ambiguous laws. Courts will now instead rely on their own interpretations. This decision might mean there are more opportunities to challenge unfavourable agency interpretations in court, particularly in areas like data privacy, algorithmic bias, or AI safety regulations. While there may be short-term uncertainty, the ruling could lead to more consistent and predictable AI regulatory environments in the long run as court interpretations become established.

Policymakers

NAME

DESCRIPTION

President Trump signed an executive order in January 2025 establishing the Department of Government Efficiency (DOGE). The order renames the United States Digital Service (USDS) as the United States DOGE Service and places it within the Executive Office of the President. 

DOGE will operate until July 4, 2026 within USDS. Each federal agency was required to form a four-person DOGE Team within 30 days to coordinate with USDS. The USDS Administrator will lead a Software Modernization Initiative to improve government technology systems, with agency heads required to provide full access to unclassified records and systems. 

Elon Musk is the de facto head of DOGE.

NIST is a non-regulatory agency within the US Department of Commerce and has housed the US AI Safety Institute since its creation in 2023. Over the years NIST has undertaken a number of AI initiatives, including developing a risk management framework to better manage the risks posed to individuals, organisations, and society by AI. NIST has also released Special Publication 1270, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, as well as a Privacy Framework which outlines an approach for organisations to manage privacy risks. NIST has ongoing research into the explainability of AI systems and a team aiming to develop measurement methods and best practices to support the implementation of explainable AI.

The Trump administration’s DOGE has targeted NIST, and specifically the US AISI, in its mass firings of probationary employees.

The Office of Science and Technology Policy (OSTP) was given a broad mandate by Congress to advise the President on the effects of science and technology on domestic and international affairs. It is the direction-setting body at the Federal level.

Michael Kratsios’s nomination to become the next director of the OSTP was recently approved. President Trump has also nominated Ethan Klein to become associate director.

The United States Department of Energy (DOE) oversees US national energy policy and energy production, as well as energy-related research and energy conservation, including nuclear. The DOE has an Office of Critical and Emerging Technology, which runs the Frontiers in AI for Science, Security, and Technology (FASST) initiative alongside a number of other ongoing AI-related projects.

The DOE also runs the AI Advancement Council, founded in 2022, to oversee implementation of the DOE AI Strategy and address issues relating to the ethical use and development of AI systems.

The US Equal Employment Opportunity Commission is a Federal agency established in 1964 to administer and enforce civil rights laws against workplace discrimination. In collaboration with the Department of Justice, the EEOC has clearly laid out how employers’ use of AI and other automated systems can result in discrimination against job applicants and employees with disabilities, as part of its wider Artificial Intelligence and Algorithmic Fairness Initiative.

Enforcers

NAME

DESCRIPTION

The FTC is an independent agency of the United States Government tasked with protecting consumers and promoting competition by enforcing antitrust laws and preventing unfair, deceptive, or anticompetitive business practices. The FTC mainly devotes resources to segments of the economy where consumer spending is high, such as health care, pharmaceuticals, professional services, food, energy, and certain high-tech industries like computer technology and Internet services.

In January 2024 the FTC launched an Inquiry into Generative AI Investments and Partnerships. It has also taken an interest in other areas where consumers might be affected by AI, such as its recent rule combating fake and AI-generated reviews and testimonials, which prohibits their sale or purchase and allows the agency to seek civil penalties against known violators, as well as data privacy and confidentiality in the context of AI model training. The FTC recently issued a joint statement on AI competition with the Department of Justice, European Commission and UK Competition and Markets Authority.

The DOJ Antitrust Division enforces US antitrust law and works closely with the FTC. The FTC and DOJ recently agreed that the FTC will oversee competitive concerns relating to major software companies, while the DOJ will investigate the conduct of chip manufacturers.

The FTC may refer evidence of criminal antitrust violations to the DOJ as the DOJ has exclusive jurisdiction over US Federal criminal antitrust investigations and prosecutions. The DOJ also has sole antitrust jurisdiction in certain industries, such as telecommunications, banks, railroads, and airlines.

In 2024, the Division issued a joint statement on AI competition with the Federal Trade Commission, European Commission and UK Competition and Markets Authority, and has also recently undertaken a number of antitrust cases against big tech companies such as Google.

The FCC works alongside the FTC to protect consumers in areas particularly related to communications. 

The FCC has taken a particular interest in AI as it relates to communications issues, as shown through its decision to implement rules around AI-Generated Robocalls and disclosure rules for the use of AI in Political Ads.

Innovation Champions

NAME

DESCRIPTION

FOUNDER FOCUS

An independent Federal agency supporting science and engineering in the US through research funding and grants to America's universities. The NSF funds AI development research and houses the Technology, Innovation and Partnerships Directorate, which invests in a number of key sectors through its investment roadmap.

Find funding opportunities for AI development  

DARPA’s mission is to invest in breakthrough technologies for national security purposes. It has run a number of programmes related to AI such as the Explainable Artificial Intelligence and AI Forward programmes.

Keep an eye out for potential future funding opportunities 

An NSF initiative, the Pilot aims to increase access to AI-related infrastructure resources including compute, data and pre-trained models. 

Apply for access to advanced computing, or explore their readily available models or data sets

TechNet is a national bipartisan network of technology CEOs and senior executives promoting growth in the innovation economy. 

Become a member to join their network and lobbying initiatives  

The United States - States

California

Tier 1: The Overview

State of Play

The headline: California is the undisputed centre of AI and tech innovation, home to 32 of the top 50 most promising privately-held AI companies. Despite being a heavily Democratic state, California’s reputation as a standard-setter for regulation in the US (the so-called “California Effect”) is unlikely to diminish as a result of the new political forces in Washington, due to the size and importance of its tech sector.

The context: California is the key jurisdiction of operation for innovative companies of all sizes, boasting an economy that would rank as the fifth-largest in the world (ahead of the UK) if the State were its own country. Silicon Valley remains the centre of gravity for US AI funding. Even setting aside the major success story of OpenAI, the Bay Area still attracted seven times more investment into generative AI between 2019-2024 than New York, the next biggest hub in the US.

The rules: California introduced thirty-eight AI-related Bills during the 2024 legislative session. The most controversial - SB 1047, which would have placed binding rules on the developers of the most powerful frontier AI models - was vetoed by the Governor on the grounds that targeting the technology itself rather than its potential use cases did not adequately mitigate the risks of AI going wrong or being misused. However, the Governor did sign eighteen AI-focused Bills from the 2024 session, of which the most relevant to founders are AB 2013 and SB 942, which set new transparency standards for sharing model training data and disclosing synthetic content respectively.

What this means for founders

Founders will need to be aware of the impact of these new laws, as well as familiarising themselves with the State’s laws in other AI-relevant areas of regulatory intervention, such as California’s stringent data privacy legislation.

Founders should also monitor the passage of relevant AI-related Bills through the Californian legislature in 2025, such as AB 412 (regarding the use and disclosure of copyrighted datasets in AI training data); AB 1018 (regarding the use of AI to automate decision-making in certain contexts); SB 11 (regarding the use and disclosure of synthetic content); and SB 53 (regarding the protection of whistleblowers working on frontier AI).

Forward Look

A "California Effect" is developing in US AI regulation.

The size and depth of California’s tech market means that AI developers and deployers operating in California will need to comply with its newly-enacted AI rules, regardless of where they are headquartered. As a result, AI-related rules enacted in California are likely to become the de facto standard across the US, in lieu of Federal regulation. Even vetoed Bills originating in California can create waves beyond the State’s borders: SB 1047 (which would have regulated the development and deployment of frontier AI models) was introduced but ultimately vetoed last autumn, and has now been re-introduced in a heavily watered-down form both in California itself (as SB 53) and in New York (as A06453 - see the New York section).

Tier 2: The Details

Policy and Legislative Landscape

INITIATIVE

DESCRIPTION

Introduces a requirement to publish a summary of training data (including information such as the data’s source and whether it was pre-processed before being used for training) alongside the release of new generative AI models, or after the "substantial modification" of an existing system (including fine-tuning or any other update that materially changes the functionality or performance of the system), with exceptions in certain cases such as security models.
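A training-data summary of the kind this law describes could be published as a simple structured document. The sketch below is purely illustrative: the field names, model name, and dataset entry are hypothetical, not the statutory format, which founders should confirm against the Bill text and any implementing guidance.

```python
import json

# Hypothetical shape of a published training-data summary.
# Every field name and value here is illustrative only.
training_data_summary = {
    "model": "example-model-v2",          # hypothetical model name
    "release_date": "2026-01-01",
    "datasets": [
        {
            "name": "example-web-corpus",  # hypothetical dataset
            "source": "publicly available web pages",
            "contains_personal_information": False,
            "preprocessing": "deduplicated and filtered before training",
            "licensed": False,
        }
    ],
    # Fine-tuning or other material changes would also trigger disclosure.
    "substantial_modification": "fine-tuned on domain-specific text",
}

print(json.dumps(training_data_summary, indent=2))
```

Publishing the summary as machine-readable JSON alongside the model release is one way to make the disclosure easy to audit, though the law itself does not mandate a format.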

Introduces requirements for developers of generative AI models with over 1 million monthly users to offer a detection tool for AI-generated content, automatically tag AI-generated content as synthetic with a "latent disclosure" (e.g. a watermark or metadata), and offer users the option to also include a "clear and conspicuous" disclosure in the content. Developers are liable for a $5,000 fine if they fail to do so.
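To illustrate how a "latent disclosure" differs from a "clear and conspicuous" one, the sketch below attaches a machine-readable provenance record plus an optional visible notice to generated content. The record fields and tagging scheme are hypothetical; a real implementation would typically embed the latent disclosure in image metadata or a watermark (for example via a provenance standard such as C2PA) rather than appending bytes as done here.

```python
import json

def attach_disclosures(content: bytes, provider: str, visible: bool = False):
    """Sketch of SB 942-style disclosures: a machine-readable 'latent'
    provenance record plus an optional 'clear and conspicuous' notice.
    Field names and the tagging scheme are illustrative only."""
    latent = {"generated_by_ai": True, "provider": provider}
    manifest = json.dumps(latent).encode()
    # A real latent disclosure would live in image metadata or a watermark;
    # here we simply append a tagged manifest to the payload.
    tagged = content + b"\n<ai-disclosure>" + manifest + b"</ai-disclosure>"
    notice = "This content was generated by AI." if visible else None
    return tagged, notice

tagged, notice = attach_disclosures(b"<image bytes>", "ExampleAI", visible=True)
```

The split mirrors the law's structure: the latent record is always attached, while the visible notice is offered as a user option.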

Requires developers and deployers of automated decision systems (ADS) that make "consequential decisions" to conduct performance evaluations documenting accuracy and potential discriminatory impacts. Starting in 2027, deployers must provide individuals with disclosure when ADS is used, opt-out options, and appeal processes. Systems affecting 6,000 or more people must undergo third-party audits every three years beginning in 2030. Violations face civil penalties of up to $25,000 per violation. Applies to decisions in employment, education, housing, healthcare, financial services, and more.

Requires GenAI developers serving Californians to document all copyrighted training materials and their owners, respond to copyright owners' inquiries within 7 days (or 30 days if confirming non-use), and maintain records for the system's commercial life plus 10 years. Developers can avoid request-handling requirements by publishing training data documentation freely online. Copyright owners can seek $1,000 per violation (or actual damages), injunctive relief, and legal fees if developers fail to comply, with each day beyond the response deadline counting as a separate violation.

Defines "artificial intelligence" and "synthetic content". It establishes that using synthetic content to impersonate someone is "false personation" under criminal law, with intent inferred if a reasonable person would believe the content is genuine. For civil liability, synthetic voices or likenesses reasonably believed to be genuine are legally treated as the actual person's. By December 2026, entities providing AI that creates synthetic content must warn consumers about liability for misuse, facing penalties up to $25,000 daily for violations. The Judicial Council must develop rules by January 2027 for courts to assess AI-generated evidence.

A reintroduced, heavily watered-down successor to SB 1047. Establishes "CalCompute," a state-operated public cloud computing cluster to advance safe, ethical AI development through a 14-member consortium tasked with building infrastructure within the University of California by 2027. It also creates whistleblower protections for employees of foundation model developers who report critical risks, defined as foreseeable dangers that could result in death or injury to 100+ people or $1B+ in damages. The law applies to developers who have trained models costing at least $100M in compute and includes provisions for anonymous reporting, protection from retaliation, and legal remedies including attorney's fees for successful plaintiffs.

Policymakers

NAME

DESCRIPTION

The Governor’s policy positions are an important indicator of which legislation is likely to be passed or vetoed. The current Governor, Gavin Newsom, has spoken out in favour of AI developers during his tenure and takes a pro-innovation stance on tech regulation, seeking to leverage California’s attractiveness for AI investors and developers. 

Enforcers

NAME

DESCRIPTION

Has a wide range of enforcement powers over State laws such as AI regulation, including the ability to bring civil action against non-compliant companies and to collect fines for regulatory violations.

Oversees the implementation of the strengthened California Consumer Privacy Act, which grants California citizens the right to know what personal data is being collected by businesses and to opt out of having their information shared or sold. 

Innovation Champions

NAME

DESCRIPTION

FOUNDER FOCUS

A shared carry private/public venture capital fund, providing 50% of its General Partner carry to the University of California. 

Join their community for funding, mentorship and guidance, networking with University of California alumni 

Y Combinator (YC) is the most famous American technology startup accelerator and venture capital firm, which runs a California-based startup accelerator programme.

Keep an eye out to apply for the 2025 batch which provides funding, development, and networking

TCA Venture Group is a collection of angel investors based in Southern California, with 400 members and five angel funds to support early-stage startups.

Apply for funding, or check out their available education/advisory resources

New York

Tier 1: The Overview

State of Play

The headline: New York State’s AI credentials extend beyond the Big Apple, driven by a pro-tech and innovation Governor, Kathy Hochul, who recently approved a further $90 million public-private investment into the State’s flagship Empire AI Consortium computing facility, set up last year with $275 million of funding for collaborative research, development and job creation.

The context: Home to 35 AI unicorns that have collectively raised $17 billion, New York hosts a thriving AI startup and scale-up community, with a particular emphasis on AI application and commercialisation across industries including biotechnology, new materials and nanotechnology. Meanwhile, the New York City AI Action Plan launched in October 2023 aims to establish a comprehensive framework for AI governance that it claims is the first of its kind in any US city.

The rules: In the absence of federal legislation, New York has been one of the leading US states to propose and implement regulations on AI. Enacted rules include a policy requiring State agencies to ensure that all decisions made by AI that impact on the public have human oversight, and New York City has introduced a law requiring employers to conduct audits of automated employment decision tools to check for bias.

What this means for founders

New York is undeniably one of the world’s most attractive jurisdictions in which to set up a business working with AI. It boasts a range of grants and programmes to support tech startups with early-stage funding and is nurturing a growing talent pipeline. Its regulatory regime is evolving fast, so founders should keep a close eye on it, and its enforcement approach has focused on providing safeguards for users and the public. As part of this, you may find that adopting transparency as a key principle will work to your benefit when you operate in NYC.

Despite New York’s expanding tech scene, startups often find the state’s regulatory environment challenging to navigate. New York is notorious for its rigorous privacy and consumer protection laws, coupled with stringent enforcement practices. Recent high-profile cases have highlighted how regulators pursue companies for data violations, imposing steep penalties for noncompliance and reinforcing the State's strong commitment to consumer protection. For AI companies, this can mean grappling with extensive legal requirements that may hinder innovation and growth if not carefully managed.
Megan Capiak
Managing Partner, Foundry Ten

Forward Look

Further AI regulation could be on the horizon.

A Bill to institute a so-called “Robot Tax” has been proposed for the second time in the current legislative session, which would add costs to businesses that displace workers through AI and automation. Additionally, Assemblymember Alex Bores has introduced an almost like-for-like version of California’s “SB 1047” AI Safety Bill, which was vetoed by Governor Gavin Newsom in September 2024 (see California section). A06453 (or “the RAISE Act”) sets rules on the safe development and deployment of “frontier” AI models: although founders are unlikely to be directly in scope unless they are developing powerful foundation models, industry groups have warned that instituting AI model regulation could have a chilling effect on downstream innovation.

Tier 2: The Details

Policy and Legislative Landscape

INITIATIVE

DESCRIPTION

Launched in October 2023 by New York City Mayor Eric Adams, this set of roughly 40 policy initiatives is designed to protect residents against harm like discrimination resulting from AI. It includes development of standards for AI purchased by city agencies and new mechanisms to gauge the risk of AI used by city departments.

Whilst it is primarily geared towards public agencies, the plan sets a clear direction and expectation of what New York City expects from the development and deployment of AI tools.

In December 2021, New York City passed the first law in the US requiring employers to conduct bias audits of AI-enabled tools used for employment decisions. 

Also branded the ‘AI Bias Law’, this law prohibits employers and employment agencies from using an automated employment decision tool unless:

  • the tool has been subject to a bias audit within one year of the use of the tool,
  • information about the bias audit is publicly available, and
  • certain notices have been provided to employees or job candidates.

In April 2023 the Department of Consumer and Worker Protection (DCWP) published the law’s Final Rule. The law is now in effect, and enforcement began on July 5, 2023. A helpful explainer can be found here.
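The DCWP Final Rule specifies the arithmetic at the heart of a bias audit: for each demographic category, compute the selection rate, then divide it by the selection rate of the most-selected category to obtain an "impact ratio". A minimal sketch with made-up numbers (the category names and counts are illustrative, and a real audit covers further statistics and intersectional categories):

```python
def impact_ratios(selected: dict, applicants: dict) -> dict:
    """Impact ratios in the sense of the DCWP Final Rule for Local Law 144:
    each category's selection rate divided by the selection rate of the
    most-selected category."""
    # Selection rate = candidates selected / candidates assessed, per category
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical audit data: candidates selected vs. assessed, per category
ratios = impact_ratios(
    selected={"group_a": 40, "group_b": 25},
    applicants={"group_a": 100, "group_b": 100},
)
# group_a: 0.40/0.40 = 1.0; group_b: 0.25/0.40 = 0.625
```

An impact ratio well below 1.0 for a category is the kind of disparity the published audit is meant to surface; the law itself does not set a pass/fail threshold.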

Signed into law in 2019, the SHIELD Act prioritises proactive data security measures, requiring companies to safeguard information before breaches occur, with stringent compliance enforcement.

The SHIELD Act applies to AI products when a security breach occurs, meaning an unauthorised acquisition of computerised data that compromises the security, confidentiality, or integrity of private information.

The "Responsible AI Safety and Education Act" (RAISE Act) establishes regulatory requirements for advanced AI systems. The bill targets "frontier models" and requires "large developers" to implement safety protocols preventing "critical harm”. This legislation primarily affects major AI labs developing cutting-edge systems rather than startups. It is inspired by California’s SB 1047, which did not pass due to concerns that it could have a downstream impact on AI innovation, including on founders using or modifying big AI models.

Enacts a robot tax: imposes a tax on certain businesses when people are displaced from their employment by certain technologies, computed from the sum of any taxes or fees imposed by the state or any political subdivision thereof based on a displaced employee's wage. Likely to impact startups using AI to automate workflows in ways that displace workers.

Policymakers

NAME

DESCRIPTION

Proposed legislative measures intended to protect against untrustworthy and fraudulent uses of AI. Presented as part of her FY 2025 Executive Budget, the bill, if passed, would amend existing penal, civil rights and election laws—establishing a private right of action for voters and candidates impacted by deceptive AI-generated election materials and criminalising certain AI uses. 

Mayor Adams has been the driving force behind The New York City Artificial Intelligence Action Plan, which introduces a set of phased actions the city will undertake to help agencies evaluate risks and determine whether or not a tool is the right technology to deliver better positive outcomes for New Yorkers. However, in September 2024, Adams was indicted on federal charges of bribery, fraud, and soliciting illegal foreign campaign donations, and his future is uncertain.

Enforcers

NAME

DESCRIPTION

The Attorney General enforces laws to protect consumers, tenants, patients, workers, investors, and charitable donors in the State of New York. The office also coordinates statewide civil and criminal investigations.

As part of its role in monitoring the effectiveness of different regulatory frameworks, the Office of the New York State Attorney General (OAG) hosted a symposium on the effectiveness of the current legislative framework for AI.  The OAG published a symposium report recommending that further evidence is needed around effective algorithmic auditing frameworks, including watermarking.

Tasked with enforcing Local Law 144 of 2021, which requires employers and employment agencies that use "automated employment decision tools" to provide notification and conduct and publish bias audits of the tools in advance of their use in employment decisions in New York City.

Innovation Champions

NAME

DESCRIPTION

FOUNDER FOCUS

In recognition of the important role technology plays across all government services, Mayor Adams created the Office of Technology and Innovation (OTI) by signing Executive Order 3 in January 2022. This order combined the City's existing technology offices under the leadership of Chief Technology Officer Matthew Fraser.

This order marked an important shift in how the City of New York approaches technology and innovation. Instead of disparate offices operating in silos, OTI now represents a coordinated citywide approach to delivering public services to city residents.

Check out their business support offer

Funded by over $400 million in public and private investment, Empire AI is a partnership of public and private universities in New York that will establish an artificial intelligence computing centre at State University of New York’s University at Buffalo to facilitate statewide innovation, research, and development of AI technologies.

Follow the consortium’s latest updates here

Founded in 2011 and run by FirstMark Capital, Data Driven NYC is a Big Data & AI community with more than 20,000 members.

Join the Founder Guild

Texas

Tier 1: The Overview

State of Play

The headline: Texas’s star is rising in the area of AI innovation, with the State well-placed both geographically and politically to become a leading AI hub. Geographically, Texas’s size enables the State to house large-scale AI infrastructure to power the US’s AI sector; politically, its low-tax environment has made it a magnet for corporate relocation (including in the tech sector), while its Republican majority could strengthen its ability to guide AI policy thinking at the Federal level.

The context: The fourth-largest US state for VC investment, Texas is a vibrant hub for startup success. Texas’ tech sector contributes more than $469 billion to the State's economy, representing nearly 20% of the total GDP. Its three major cities - Austin, Houston, and Dallas - have formed what is known as the Texas Tech Triangle, with each city serving as a major hub for technological advancement. The State has a high degree of innovative and entrepreneurial spirit, with no income taxes for individuals or corporations. The $500bn “Stargate” AI infrastructure project led by OpenAI, SoftBank and Oracle will be based primarily in Texas, with at least 10 of the proposed data centres located in the State.

The rules: Texas has a reasonably developed approach to AI policy. Its AI-relevant rules are strongest on data privacy and deepfake regulation, requiring consent before companies process sensitive data and prohibiting the production and dissemination of political and sexual deepfakes.

What this means for founders

Texas’ favourable investment and policy context makes it an attractive place for founders. The Tech Triangle is underpinned by AI-specific Innovation Champions such as Dallas AI and the Austin AI Alliance. 

Forward Look

Texas’s comprehensive AI Bill could have ramifications beyond the State’s borders.

The introduction of a comprehensive piece of AI legislation by Rep. Giovanni Capriglione, a conservative Republican lawmaker and a successful small businessman, is an unusual shift in direction for a State that typically favours a light-touch regulatory approach. If the Bill survives its legislative journey, it will come into force on 1 September 2025: given the size of Texas’ AI sector and the Republican origin of the regulation, it could give rise to a “Texas effect”, where the legislation is picked up by other States or even at the Federal level.

Timelines

Q4 2024: The Texas AI Advisory Council, established in February 2024 by Governor Greg Abbott, is expected to publish a report on the use of AI in the State government. Potential policy proposals could include designating a State agency to hold the pen on AI policy, creating a separate AI policy-making office, and introducing further measures on AI, data privacy, and cybersecurity.

Tier 2: The Details

Policy and Legislative Landscape

INITIATIVE

DESCRIPTION

The Data Privacy and Security Act, which came into force on 1 July 2024, applies to businesses that operate in Texas or produce products and services consumed by Texas residents, and that process or sell personal data.

The Act outlines a number of consumer data privacy rights, such as the right to access personal data, the right to correct inaccuracies, the right to delete personal data, the right to obtain a portable copy of personal data, and the right to opt out of personal data processing for targeted advertising, sale, or profiling. The Act also outlines obligations for data controllers and processors to: 

  • Limit data collection to what's necessary
  • Implement reasonable security practices
  • Provide clear privacy notices
  • Conduct data protection assessments for certain processing activities
  • Obtain consent for processing sensitive data
  • Respond to consumer requests within 45 days
  • Assist controllers in fulfilling consumer requests and other obligations
  • Ensure confidentiality of personal data

Focuses on mitigating risks from high-risk AI systems. Requires developers and deployers of high-risk AI systems to implement risk management policies, conduct impact assessments, provide transparency to consumers, and prevent algorithmic discrimination. Creates an AI Council and regulatory sandbox program to balance innovation and regulation. Prohibits specific uses of AI, including manipulation, social scoring, and non-consensual biometric identification. Expands consumer rights related to personal data used in AI systems. Takes effect on 1 September 2025. Likely to particularly impact businesses in education, employment, healthcare, housing, and government services.

Requires companies with at least $100 billion in revenue that use AI to provide services to individuals in Texas to publicly disclose the names of all AI models used, descriptions of their functions, the third parties providing input on implemented models, the specific input provided by these parties, and any changes made based on third-party input. Applies to AI used for answering questions, gathering or summarising information, generating content, or making lending/hiring decisions. Prohibits retaliation against individuals reporting violations, requires cooperation with the Attorney General for compliance verification, and classifies violations as deceptive trade practices. Takes effect on 1 September 2025. Primarily impacts large tech companies operating AI systems in Texas.

Policymakers

NAME

DESCRIPTION

The AI Select Committee was set up in April 2024 to conduct a comprehensive review of advancements in AI and other emerging technologies, as well as their economic, ethical, and societal implications. It published an interim report in May 2024.

The Texas DIR provides technology solutions to state and local government entities, with the mission of making public sector technology in Texas more secure, cost-effective, and forward-looking.

In addition to providing administrative support to the AI Advisory Council, the DIR also operates an Artificial Intelligence User Group, which seeks to improve the delivery of public services, and hosts AI Day Events which bring together public servants and industry members.

Enforcers

NAME

DESCRIPTION

The Office is the regulator for the Texas Data Privacy and Security Act.

Innovation Champions

INITIATIVE

DESCRIPTION

FOUNDER FOCUS

Dallas AI is a trade body with over 6,000 members, whose mission is to help Dallas become a vibrant hub for AI. 

Join for networking events, or follow their LinkedIn page for their founders’ discussion group

The Austin AI Alliance aims to accelerate the growth of AI-focused companies and support AI adoption and use in various industries. 

Become a member for networking through monthly social and educational meetings

The Texas Association of Business seeks to protect Texas' pro-innovation tech environment. 

Lobby for AI policies through the Tech Means Business Coalition