The United States - Federal

Tier 1: The Overview

State of Play

The headline: The United States is the global leader in AI innovation; a generally permissive regulatory framework and high capital risk tolerance make it an attractive investment location for tech startups. The US’s complex and highly politicised policymaking landscape means that comprehensive AI legislation from Washington DC is not forthcoming, leaving a fragmented regulatory environment split between non-binding Federal directives and State-level legislation.

The context: The AI and wider technology policy landscape at the Federal level is divided between a number of power bases, including the President, Congress, the judiciary and regulatory bodies such as the Federal Trade Commission (FTC). Regulators and the courts have the power to set policy direction and precedent through litigation, which can give them a more active role in the policymaking process than their equivalent structures in the UK or EU. Live examples include ongoing litigation relating to AI training data, such as the New York Times’ case against OpenAI’s ChatGPT. Although not regulation per se, the outcome of these lawsuits will have at least as significant an effect on the US AI landscape as the laws discussed in this section, as it will determine whether training generative AI models on scraped data is "fair use" under US copyright law.

The rules: There is currently no Federal-level AI legislation, nor is there likely to be the bipartisan political will to progress an AI Bill through Congress any time soon. Instead, a patchwork of policy guidance documents has emerged over the course of the Biden-Harris Administration, covering topics such as the use of AI in Government and AI risk management. President Biden’s 2023 Executive Order (EO) on AI is the most significant Federal policy intervention to date. Not only is it the first Federal intervention to include specific provisions on governing generative and frontier AI, but it also has binding elements, using a Presidential Directive on defence preparedness to require developers of foundation models above a certain capability level to disclose safety and technical information to the Government. Even so, without legislative backing the EO can be directly repealed, and President-elect Trump has stated that he would do so.

What this means for founders

State-level AI legislation has forged ahead where Federal regulation has stalled, with various States considering or enacting laws to regulate the use of AI by the private sector. Colorado and Utah led this process, as the first two States to pass AI legislation designed to broadly regulate commercial use of AI and protect consumers. Colorado’s law introduces an EU AI Act-style "high-risk AI" designation, while Utah’s legislation sets out the liability burden for consumer harms caused by generative AI systems in commercial use.


When deciding to launch or scale in the US, founders will need to ensure they are compliant with any relevant State AI laws, including legislation in adjacent areas such as privacy. In the absence of regulatory coordination mechanisms, this can be a tricky exercise for founders. The national conversation on AI has so far focused on consumer protection and addressing the monopolistic power of big tech. Under the Biden presidency, FTC and DOJ action against big tech and AI companies has focused on competition issues and potential monopolistic practices, and the FTC has expressed concern about data privacy and confidentiality practices in the context of AI model training. FTC Chair Lina Khan will likely be removed by the incoming Trump administration, although it is not yet clear how that will affect the agency’s direction. Founders should stay aware of developments in this area, as it is likely to set precedents and shape the AI ecosystem at the Federal level in the absence of specific AI legislation.

Forward Look

President-elect Trump is likely to repeal the Executive Order.

The 2024 Republican platform pledged to scrap Biden’s EO on AI. This could have important ramifications for startups, as the EO’s guidelines represent a direct Government engagement opportunity for companies looking to build trust by cooperating with Federal AI governance initiatives. If the EO is rescinded, industry associations and internal governance structures will become increasingly important mechanisms for Government engagement on AI.

It is not yet clear what the Trump administration’s AI policy plan is.

During his campaign, Trump said relatively little about AI policy. Close ally Elon Musk will likely be influential, although Musk’s position on AI regulation has shifted back and forth. In March 2023 he signed an open letter urging AI labs to pause training for six months to focus on developing and implementing shared AI safety protocols, and in August 2024 he endorsed a controversial California state bill (SB 1047) that would have required AI developers to conduct safety testing on some of their own models. At the same time, Grok, the model built by Musk’s AI company xAI, is designed to have fewer guardrails than major competitors and has come under scrutiny for its data protection practices.

Globally, the United States generally offers a favourable environment for AI startups. Its regulatory approach tends to be more flexible and innovation-driven, with significant support for technological development. Despite years of efforts, Congress has failed to pass any meaningful legislation that would regulate the technology industry. Agencies like the Federal Trade Commission and the National Institute of Standards and Technology provide guidance but do not impose overly restrictive rules, making it easier for founders to innovate without excessive regulatory burden.
Megan Capiak
Managing Partner, Foundry Ten

Tier 2: The Details

Policy and Legislative Landscape

INITIATIVE

DESCRIPTION

The Executive Order, released by the Biden Administration in October 2023, is the most recent and significant policy intervention at the Federal level. The Order comprises 11 sections, and is largely a call to Government agencies to begin thinking about how AI should be governed in their sectors, with a focus on defence and biosecurity. 

There are a few particularly relevant sections for founders:

  • Section 4: Ensuring the Safety and Security of AI Technology, which requires companies creating genAI systems trained using more than 10^26 FLOPS to share their model data with the Government pre-release
  • Section 5: Promoting Innovation and Competition, which outlines the US’s interest in attracting AI talent, including streamlining processing times for visa petitions and applications and creating conditions favourable to innovation and competition
  • Section 6: Supporting Workers, which covers labour-market effects of AI, including job displacement and AI preparedness.
  • Section 7: Advancing Equity and Civil Rights, which covers AI applications in the criminal justice system, Government benefits/programmes, and hiring practices in the broader economy.
  • Section 8: Protecting Consumers, Patients, Passengers, and Students.
  • Section 9: Protecting Privacy, which seeks to mitigate privacy risks potentially exacerbated by AI.
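To give a sense of the scale of Section 4's 10^26 FLOPS reporting threshold, the sketch below estimates training compute using the common ~6 × parameters × training-tokens rule of thumb for transformer training. The threshold figure comes from the EO; the heuristic is a widely used approximation, and the example model sizes are hypothetical.

```python
# Rough check against the EO Section 4 reporting threshold (10^26 FLOPS).
# Training compute is approximated with the common 6 * N * D rule of thumb
# (N = parameters, D = training tokens); real accounting will differ.

REPORTING_THRESHOLD_FLOPS = 1e26  # threshold named in Section 4 of the EO


def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute with the 6*N*D heuristic."""
    return 6 * n_parameters * n_training_tokens


def exceeds_reporting_threshold(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimated training run would cross the EO's threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= REPORTING_THRESHOLD_FLOPS


# A hypothetical 70B-parameter model trained on 15T tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs -- well under the threshold.
print(exceeds_reporting_threshold(7e10, 1.5e13))  # False
# A hypothetical 1T-parameter model on 20T tokens: 1.2e26 FLOPs.
print(exceeds_reporting_threshold(1e12, 2e13))    # True
```

In practice, most models on the market today sit well below the threshold, which was designed to capture only the largest frontier training runs.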

Particular areas of note under these sections include Presidential Directives to: 

  • Set standards for red-teaming (i.e. resilience testing) before the public release of AI systems
  • Establish standards and best practices for detecting AI-generated content, such as through watermarking technologies
  • Shape AI’s potential to transform education by creating resources to support educators deploying AI-enabled educational tools, such as personalised tutoring in schools
  • Use existing authorities to expand the ability of highly skilled immigrants and nonimmigrants with expertise in critical areas to study, stay, and work in the United States

Some relevant EO milestones for founders are listed below; a full timeline can be found here. As noted above, the future of the EO is in doubt following the outcome of the November 2024 Presidential Election.

January 2024: Foundation model developers and deployers began providing the Government with model safety information (including the results of red-teaming).

January 2024: The National AI Research Resource (NAIRR) Pilot was established.

July 2024: The National Institute of Standards and Technology (NIST) developed AI risk management guidelines.

July 2024: The National Telecommunications and Information Administration (NTIA) provided advice on the appropriate policy direction for governing open-source AI models.

October 2024: Guidance on the use of AI in the criminal justice system, healthcare, drug development and education must be published.

April 2025: The National Science Foundation (NSF) must establish four new National AI Research Institutes.

The Blueprint for an AI Bill of Rights is a nonbinding outline of five principles to guide the development and use of automated systems and protect Americans' rights in the age of artificial intelligence. The principles cover:

  • Safe and Effective Systems
  • Algorithmic Discrimination Protections
  • Data Privacy
  • Notice and Explanation
  • Human Alternatives, Consideration, and Fallback

The Blueprint emphasises the need to balance technological progress with civil rights and democratic values. It calls for automated systems to be developed with input from diverse stakeholders, undergo testing and monitoring, and protect against discrimination and privacy violations.

In June 2024, the Supreme Court issued a landmark ruling that overturned the Chevron doctrine, which had until then required courts to defer to Federal agencies' interpretations of ambiguous laws. Courts will now instead rely on their own interpretations. This decision might mean there are more opportunities to challenge unfavourable agency interpretations in court, particularly in areas like data privacy, algorithmic bias, or AI safety regulations. While there may be short-term uncertainty, the ruling could lead to more consistent and predictable AI regulatory environments in the long run as court interpretations become established.

The roadmap was produced by the Bipartisan Senate AI Working Group in May 2024. While not binding, it provides a helpful gauge of shared political priorities. The roadmap recognises AI's potential to revolutionise various sectors and boost the economy. While committed to supporting AI innovation through significant funding and research initiatives, it also has a strong focus on the responsible development and deployment of AI systems, including transparency, explainability, and rigorous testing, especially for high-impact uses. Privacy and data protection are also cited as major concerns, with calls for a comprehensive Federal data privacy law.

Policymakers

NAME

DESCRIPTION

NIST is a non-regulatory agency within the US Department of Commerce and has been home to the US AI Safety Institute since the Institute was created in 2023. Over the years NIST has undertaken a number of AI initiatives, including developing a risk management framework to better manage the risks AI poses to individuals, organisations, and society. NIST has also released Special Publication 1270, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, as well as a Privacy Framework outlining an approach for organisations to manage privacy risks. NIST has ongoing research into the explainability of AI systems and a team aiming to develop measurement methods and best practices to support the implementation of explainable AI.

The Office of Science and Technology Policy (OSTP) was given a broad mandate by Congress to advise the President on the effects of science and technology on domestic and international affairs. It is the direction-setting body at the Federal level, and was the driving force behind the Executive Order on AI, as well as the Blueprint for an AI Bill of Rights.

The United States Department of Energy (DOE) oversees US national energy policy and energy production, as well as energy-related research and energy conservation, including nuclear. The DOE has an Office of Critical and Emerging Technology, which is running the Frontiers in AI for Science, Security, and Technology (FASST) initiative in addition to a number of other ongoing AI-related projects.

The DOE also runs the AI Advancement Council, founded in 2022, which oversees implementation of the DOE AI Strategy and addresses issues relating to the ethical use and development of AI systems.

The US Equal Employment Opportunity Commission (EEOC) is a Federal agency established in 1964 to administer and enforce civil rights laws against workplace discrimination. In collaboration with the Department of Justice, the EEOC has clearly laid out how employers’ use of AI and other automated systems can result in discrimination against job applicants and employees with disabilities, as part of its wider Artificial Intelligence and Algorithmic Fairness Initiative.

Enforcers

NAME

DESCRIPTION

The FTC is an independent agency of the United States Government tasked with protecting consumers and promoting competition by enforcing antitrust laws and preventing unfair, deceptive, or anticompetitive business practices. The FTC mainly devotes resources to segments of the economy where consumer spending is high such as health care, pharmaceuticals, professional services, food, energy, and certain high-tech industries like computer technology and Internet services.

In January 2024 the FTC launched an Inquiry into Generative AI Investments and Partnerships. It has also taken an interest in other areas where consumers might be affected by AI, such as a recent rule combatting fake and AI-generated reviews and testimonials, which prohibits their sale or purchase and allows the agency to seek civil penalties against known violators, as well as data privacy and confidentiality in the context of AI model training. The FTC recently issued a joint statement on AI competition with the Department of Justice, European Commission and UK Competition and Markets Authority.

The DOJ Antitrust Division enforces US antitrust law and works closely with the FTC. The FTC and DOJ recently agreed that the FTC will oversee competitive concerns relating to major software companies, while the DOJ will investigate the conduct of chip manufacturers.

The FTC may refer evidence of criminal antitrust violations to the DOJ as the DOJ has exclusive jurisdiction over US Federal criminal antitrust investigations and prosecutions. The DOJ also has sole antitrust jurisdiction in certain industries, such as telecommunications, banks, railroads, and airlines.

The Division recently issued a joint statement on AI competition with the Federal Trade Commission, European Commission and UK Competition and Markets Authority, and has also recently undertaken a number of antitrust cases against big tech companies such as Google.

The FCC works alongside the FTC to protect consumers in areas particularly related to communications. 

The FCC has taken a particular interest in AI as it relates to communications issues, as shown by its decision to implement rules around AI-generated robocalls and disclosure rules for the use of AI in political ads.

Innovation Champions

NAME

DESCRIPTION

FOUNDER FOCUS

An independent Federal agency supporting science and engineering in the US through research funding and grants to America's universities. The NSF funds AI development research and houses the Technology, Innovation and Partnerships Directorate, which invests in a number of key sectors through its investment roadmap.

Find funding opportunities for AI development  

DARPA’s mission is to invest in breakthrough technologies for national security purposes. It has run a number of programmes related to AI such as the Explainable Artificial Intelligence and AI Forward programmes.

Keep an eye out for potential future funding opportunities 

An NSF initiative, the Pilot aims to increase access to AI-related infrastructure resources including compute, data and pre-trained models. 

Apply for access to advanced computing, or explore their readily available models and data sets

TechNet is a national bipartisan network of technology CEOs and senior executives promoting growth in the innovation economy. 

Become a member to join their network and lobbying initiatives  

The United States - States

California

Tier 1: The Overview

State of Play

The headline: California is the undisputed centre of AI and tech innovation, home to 32 of the top 50 most promising privately-held AI companies. California is also known for setting high regulatory barriers and has recently turned its attention to AI, with the Governor signing eighteen AI-related laws in the 2024 session and the State legislature likely to introduce further AI-related Bills in the future.

The context: California is the key jurisdiction of operation for innovative companies of all sizes, boasting an economy that would rank as the fifth-largest in the world (ahead of the UK) if the State were its own country. Silicon Valley remains the centre of gravity for US AI funding. Even setting aside the major success story of OpenAI, the Bay Area still attracted seven times more investment into generative AI between 2019-2024 than New York, the next biggest hub in the US.

The rules: California introduced thirty-eight AI-related Bills during the 2024 legislative session. The most controversial - SB 1047, which would have placed binding rules on the developers of the most powerful frontier AI models - was vetoed by the Governor on the grounds that targeting the technology itself rather than its potential use cases did not adequately mitigate the risks of AI going wrong or being misused. However, the Governor did sign eighteen AI-focused Bills from the 2024 session, including AB 2013 and SB 942, which will set new transparency standards for sharing model training data and disclosing synthetic content respectively.

What this means for founders

Founders will need to be aware of the impact of these new laws, as well as familiarising themselves with the State’s laws in other AI-relevant areas of regulatory intervention such as California’s stringent data privacy legislation. Founders should also be aware of future AI-related Bills coming out of California, which may focus on current risks from specialised use cases for the technology rather than on the theoretical catastrophic risks that were the focus of SB 1047.

Forward Look

We could see a "California Effect" develop in US AI regulation.

The size and depth of California’s tech market mean that its newly-enacted AI rules will likely have repercussions for AI developers and deployers across the US, despite the veto of SB 1047. Firms will need to comply with the new regulations in order to build and use in-scope AI systems in the State, regardless of where they are headquartered. Equally, the State legislature could look to reintroduce all or parts of SB 1047 in future sessions. As a result, we expect the State to become something of a standard-setter on AI guardrails in the coming years, in lieu of Federal-level legislation.

Tier 2: The Details

Policy and Legislative Landscape

INITIATIVE

DESCRIPTION

Introduces the requirement to publish a summary of training data (including information such as the data’s source and whether it was pre-processed before it was used for training) alongside the release of new generative AI models or after the "substantial modification" of an existing system (including fine-tuning or any other update that materially changes the functionality or performance of the system), with exceptions in certain cases such as security models.  

Introduces requirements for developers of generative AI models with over 1 million monthly users to offer a detection tool for AI-generated content, automatically tag AI-generated content as synthetic with a "latent disclosure" (e.g. a watermark or metadata), and offer users the option to also include a "clear and conspicuous" disclosure in the content. Developers are liable for a $5,000 fine if they fail to do so.
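As a loose illustration of how SB 942's pairing of a "latent disclosure" with a detection tool could fit together, the sketch below attaches a machine-readable provenance record to generated content and reads it back. Everything here is hypothetical: the statute does not prescribe a schema or field names, and real deployments typically build on provenance standards such as C2PA Content Credentials.

```python
# Hypothetical sketch of a "latent disclosure" plus a matching detection
# tool, in the spirit of SB 942. Field names and structure are illustrative
# only; the law does not prescribe a format.

def attach_latent_disclosure(content_bytes: bytes, provider: str, system_name: str) -> dict:
    """Bundle AI-generated content with a machine-readable provenance record."""
    disclosure = {
        "generated_by_ai": True,   # the core "latent" fact being disclosed
        "provider": provider,      # who operates the generative system
        "system": system_name,     # which model produced the content
    }
    return {"content": content_bytes.hex(), "disclosure": disclosure}


def detect_ai_generated(record: dict) -> bool:
    """A trivial 'detection tool' that reads the latent disclosure back."""
    return bool(record.get("disclosure", {}).get("generated_by_ai"))


record = attach_latent_disclosure(b"...", "ExampleCo", "example-model-1")
print(detect_ai_generated(record))  # True
```

The "clear and conspicuous" disclosure the law also mentions would sit on top of this: a visible label shown to users, rather than metadata read by tooling.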

Policymakers

NAME

DESCRIPTION

The Governor’s policy positions are an important indicator of which legislation is likely to be passed or vetoed. The current Governor, Gavin Newsom, has spoken out in favour of AI developers during his tenure and takes a pro-innovation stance on tech regulation, seeking to leverage California’s attractiveness for AI investors and developers. 

Enforcers

NAME

DESCRIPTION

Has a wide range of enforcement powers over State laws, including AI regulations, with the ability to bring civil action against non-compliant companies and to collect fines for regulatory violations.

Oversees the implementation of the strengthened California Consumer Privacy Act, which grants California citizens the right to know what personal data is being collected by businesses and to opt out of having their information shared or sold. 

Innovation Champions

NAME

DESCRIPTION

FOUNDER FOCUS

A shared carry private/public venture capital fund, providing 50% of its General Partner carry to the University of California. 

Join their community for funding, mentorship and guidance, networking with University of California alumni 

Y Combinator (YC) is the most famous American technology startup accelerator and venture capital firm, and runs a California-based startup accelerator programme.

Keep an eye out to apply for the 2025 batch which provides funding, development, and networking

TCA Venture Group is a collection of angel investors based in Southern California, with 400 members and five angel funds supporting early-stage startups.

Apply for funding, or check out their available education/advisory resources

New York

Tier 1: The Overview

State of Play

The headline: New York State’s AI credentials extend beyond the Big Apple, driven by a pro-tech and innovation Governor, Kathy Hochul, who recently approved Empire AI, a $275m flagship statewide investment in collaborative research, development and job creation.

The context: The Empire AI Consortium forms one part of a broader innovation strategy for the State that includes significant investments in chips and green technology, life sciences and advanced manufacturing. The strategy also has an emphasis on technology commercialisation and innovation across industries including biotechnology, new materials and nanotechnology. Meanwhile, the New York City AI Action Plan launched in October 2023 aims to establish a comprehensive framework for AI governance that it claims is the first of its kind in any US city.

The rules: In the absence of Federal legislation, New York has been one of the leading US States to propose and implement regulations on AI. Enacted rules include a policy requiring State agencies to ensure that all AI-made decisions that impact the public have human oversight, and New York City has introduced a law requiring employers to audit automated employment decision tools for bias.

What this means for founders

New York is undeniably one of the world’s most attractive jurisdictions to set up a business working with AI. It boasts a range of grants and programmes to support tech startups with early stage funding and is nurturing a growing talent pipeline. Its regulatory regime is evolving fast, which founders should keep a close eye on, and its enforcement approach has been focused on providing safeguards for users and the public. As part of this, you may find that adopting transparency as a key principle will work to your benefit when you operate in NYC.

Despite New York’s expanding tech scene, startups often find the state’s regulatory environment challenging to navigate. New York is notorious for its rigorous privacy and consumer protection laws, coupled with stringent enforcement practices. Recent high-profile cases have highlighted how regulators pursue companies for data violations, imposing steep penalties for noncompliance and reinforcing the State's strong commitment to consumer protection. For AI companies, this can mean grappling with extensive legal requirements that may hinder innovation and growth if not carefully managed.
Megan Capiak
Managing Partner, Foundry Ten

Forward Look

We can expect further industry guidelines and regulation to be issued.

These are most likely to cover specific sectors (e.g. insurance) to prevent discriminatory outcomes from the use of AI, alongside a proposed "Robot Tax" for businesses displacing workers with AI and licensing of high-risk AI systems. Senate Internet & Tech Chair Kristen Gonzalez is developing her AI agenda, which is expected to include regulations on AI chatbots. Legislation related to AI usage liability and watermarks is also anticipated in 2025.

For New York City, the AI Action Plan’s implementation will kick into gear over the course of 2024-2025.

There are myriad initiatives underway to raise public awareness of AI and to establish clearer guidelines for AI developers and deployers in public services. 

Tier 2: The Details

Policy and Legislative Landscape

INITIATIVE

DESCRIPTION

Launched in October 2023 by New York City Mayor Eric Adams, this set of roughly 40 policy initiatives is designed to protect residents against harms such as discrimination resulting from AI. It includes development of standards for AI purchased by city agencies and new mechanisms to gauge the risk of AI used by city departments.

Whilst it is primarily geared towards public agencies, the plan sets a clear direction and expectation of what New York City expects from the development and deployment of AI tools.

In December 2021, New York City passed the first law in the US requiring employers to conduct bias audits of AI-enabled tools used for employment decisions. 

Also branded the ‘AI Bias Law’, this law prohibits employers and employment agencies from using an automated employment decision tool unless:

  • the tool has been subject to a bias audit within one year of the use of the tool,
  • information about the bias audit is publicly available, and
  • certain notices have been provided to employees or job candidates.

In April 2023 the Department of Consumer and Worker Protection (DCWP) published the law’s Final Rule. The law is now in effect, and enforcement began on July 5, 2023. A helpful explainer can be found here.

Signed into law in 2019, the SHIELD Act prioritises proactive data security measures, requiring companies to safeguard information before breaches occur, with stringent compliance enforcement.

The SHIELD Act applies to AI products when a security breach occurs, meaning an unauthorised acquisition of computerised data that compromises the security, confidentiality, or integrity of private information.

Policymakers

NAME

DESCRIPTION

Governor Hochul has proposed legislative measures intended to protect against untrustworthy and fraudulent uses of AI. Presented as part of her FY 2025 Executive Budget, the bill, if passed, would amend existing penal, civil rights and election laws, establishing a private right of action for voters and candidates impacted by deceptive AI-generated election materials and criminalising certain AI uses.

Mayor Adams has been the driving force behind The New York City Artificial Intelligence Action Plan, which introduces a set of phased actions the city will undertake to help agencies evaluate risks and determine whether or not a tool is the right technology to deliver better positive outcomes for New Yorkers. However, in September 2024, Adams was indicted on federal charges of bribery, fraud, and soliciting illegal foreign campaign donations, and his future is uncertain.

Enforcers

NAME

DESCRIPTION

The Attorney General enforces laws to protect consumers, tenants, patients, workers, investors, and charitable donors in the State of New York. The office also coordinates statewide civil and criminal investigations.

As part of its role in monitoring the effectiveness of different regulatory frameworks, the Office of the New York State Attorney General (OAG) hosted a symposium on the effectiveness of the current legislative framework for AI. The OAG published a symposium report concluding that further evidence is needed around effective algorithmic auditing frameworks, including watermarking.

Tasked with enforcing Local Law 144 of 2021, which requires employers and employment agencies that use "automated employment decision tools" to provide notification and conduct and publish bias audits of the tools in advance of their use in employment decisions in New York City.

Innovation Champions

NAME

DESCRIPTION

FOUNDER FOCUS

In recognition of the important role technology plays across all government services, Mayor Adams created the Office of Technology and Innovation (OTI) by signing Executive Order 3 in January 2022. This order combined the City's existing technology offices under the leadership of Chief Technology Officer Matthew Fraser.

This order marked an important shift in how the City of New York approaches technology and innovation. Instead of disparate offices operating in silos, OTI now represents a coordinated citywide approach to delivering public services to city residents.

Check out their business support offer

Funded by over $400 million in public and private investment, Empire AI is a partnership of public and private universities in New York that will establish an artificial intelligence computing centre at State University of New York’s University at Buffalo to facilitate statewide innovation, research, and development of AI technologies.

Follow the consortium’s latest updates here

Founded in 2011 and run by FirstMark Capital, Data Driven NYC is a Big Data & AI community with more than 20,000 members.

Join the Founder Guild

Texas

Tier 1: The Overview

State of Play

The headline: Texas is the fourth-largest State in the US for VC investment and a vibrant hub for startup success. Texas’ tech sector contributes more than $469 billion to the State's economy, representing nearly 20% of the total GDP. Its three major cities - Austin, Houston, and Dallas - have formed what is known as the Texas Tech Triangle, with each city serving as a major hub for technological advancement.

The context: The State has a high degree of innovative and entrepreneurial spirit, with no income taxes for individuals or corporations. Although its AI regulatory environment is generally considered more permissive than California’s, Texas is ultimately unlikely to displace California’s market size and innovation ecosystem for founders.

The rules: Texas has a reasonably developed approach to AI policy. Its AI-relevant rules are strongest on data privacy and deepfake regulation, requiring consent before companies process sensitive data and prohibiting the production and dissemination of political and sexual deepfakes.

What this means for founders

Texas’ favourable investment and policy context makes it an attractive place for founders. The Tech Triangle is underpinned by AI-specific Innovation Champions such as Dallas AI and the Austin AI Alliance. 

Forward Look

Enforcing the new data privacy laws is at the top of the agenda.

The Data Privacy and Security Act entered into force in July 2024, granting individuals sweeping rights over the use of their data. The State Attorney General Ken Paxton has indicated his intention to thoroughly enforce the new law in collaboration with the Consumer Protection Division of the Office of the Attorney General. 

A draft comprehensive AI Bill is expected to be released in November 2024 ahead of the 2025 legislative session, which may pave the way for the introduction of further AI regulations.

Timelines

Q4 2024: The Texas AI Advisory Council, established in February 2024 by Governor Greg Abbott, is expected to publish a report on the use of AI in the State government. Potential policy proposals could include designating a State agency to hold the pen on AI policy, creating a separate AI policy-making office and further measures on AI and data privacy & cybersecurity.

Tier 2: The Details

Policy and Legislative Landscape

INITIATIVE

DESCRIPTION

The Data Privacy and Security Act, which came into force on 1 July 2024, applies to businesses operating in Texas or producing products/services consumed by Texas residents, as well as to businesses which process or sell personal data. 

The Act outlines a number of consumer data privacy rights, such as the right to access personal data, the right to correct inaccuracies, the right to delete personal data, the right to obtain a portable copy of personal data, and the right to opt out of personal data processing for targeted advertising, sale, or profiling. The Act also outlines obligations for data controllers and processors to: 

  • Limit data collection to what's necessary
  • Implement reasonable security practices
  • Provide clear privacy notices
  • Conduct data protection assessments for certain processing activities
  • Obtain consent for processing sensitive data
  • Respond to consumer requests within 45 days
  • Assist controllers in fulfilling consumer requests and other obligations
  • Ensure confidentiality of personal data
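For the 45-day response window in the list above, a minimal deadline calculation might look like the following sketch. It assumes calendar days and does not model the Act's extension provisions; the function name is illustrative.

```python
from datetime import date, timedelta

# Texas Data Privacy and Security Act: consumer requests must be answered
# within 45 days of receipt (extensions are possible in some circumstances,
# but are not modelled here).
RESPONSE_WINDOW_DAYS = 45


def response_deadline(request_received: date) -> date:
    """Return the calendar-day deadline for answering a consumer request."""
    return request_received + timedelta(days=RESPONSE_WINDOW_DAYS)


print(response_deadline(date(2024, 7, 1)))  # 2024-08-15
```

A compliance workflow would typically track this date per request and flag anything approaching the window, rather than computing it ad hoc.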

Policymakers

NAME

DESCRIPTION

The AI Advisory Council, established following the enactment of Bill HB 2060 in 2023, has been tasked with publishing a report by the end of 2024 examining the use of AI in government as well as a review of automated decision systems.

The council comprises 7 members, including the State of Texas Chief Information Officer and Texas Department of Information Resources (DIR) Executive Director Amanda Crawford, as well as a number of respected academics and lawyers.

The AI Select Committee was set up in April 2024 to conduct a comprehensive review of the advancements in AI and other emerging technologies, as well as their economic, ethical, and societal implications. It published an interim report in May 2024.

The Texas DIR oversees technology solutions to state and local government entities with the mission of making public sector technology in Texas more secure, cost-effective, and forward-looking. 

In addition to providing administrative support to the AI Advisory Council, the DIR also operates an Artificial Intelligence User Group, which seeks to improve the delivery of public services, and hosts AI Day Events which bring together public servants and industry members.

Enforcers

NAME

DESCRIPTION

The Office is the regulator for the Texas Data Privacy and Security Act.

Innovation Champions

NAME

DESCRIPTION

FOUNDER FOCUS

Dallas AI is a trade body with over 6,000 members, whose mission is to help Dallas become a vibrant hub for AI. 

Join for networking events, or follow their LinkedIn for their founder’s discussion group

The Austin AI Alliance aims to accelerate the growth of AI-focused companies and support AI adoption and use in various industries. 

Become a member for networking through monthly social and educational meetings

The Texas Association of Business seeks to protect Texas' pro-innovation tech environment. 

Lobby for AI policies through the Tech Means Business Coalition