A Plan for Artificial Intelligence
The next Congress will write the United States’ foundational rules for artificial intelligence. Decisions made over the next two years will shape every major industry for decades, from safety standards and competition to workforce transition and government use. These choices must be made thoughtfully, by leaders who understand where the technology is going. Eric Jones intends to be in the room when those decisions are made, and he will bring in the voices of those actually building and working with AI.
America’s AI strategy should be pro-innovation, but real innovation requires clear rules, public trust, and a level playing field. Eric’s view is simple: the greatest long-term threat to American AI leadership is not regulation but the absence of it. The real risk isn’t oversight. It’s the backlash that follows when oversight fails.
Eric believes in American AI dominance. He wants the foundational models, the advanced chips, the data centers, and the top talent to be here in the United States, not in Beijing or anywhere else overseas. That means major federal investment in AI research, expanding the CHIPS Act to cover AI infrastructure, and STEM pipeline development at every level.
He also believes the industry needs clear, predictable, and consistent federal rules. The current vacuum of federal action is creating exactly the kind of uncertainty that makes long-term investment harder, not easier. A patchwork of 50 state regulations is not a workable operating environment for anyone building at scale.
America’s approach to AI should rest on a few fundamentals:
Mandatory safety plans and independent audits for frontier models, scoped to high-capability systems, not a blanket compliance burden on every business
A strong federal framework that preempts conflicting state laws, paired with real national standards, not preemption without protections
Robust antitrust enforcement in AI markets, including scrutiny of mergers that concentrate control over foundational models and support for interoperability standards
Worker transition investments, funded in part by companies that benefit most from automation because public support for AI depends on shared economic gains
Support for open-source AI development as a counterweight to closed ecosystems, without regulations that entrench incumbents
We’ve already seen the cost of fragmented regulation. In the US, a growing patchwork of state privacy laws has created inconsistent standards that increase costs and complexity for businesses while delivering uneven protections for consumers. Internationally, regimes like GDPR and the DSA add another layer of divergence.
The same dynamic is emerging even faster in AI. California, Texas, Colorado, Illinois, and others are moving ahead with their own rules. Without federal action, companies will spend the next decade navigating a fragmented compliance maze that slows innovation without meaningfully improving safety.
Eric’s position is clear: the United States needs a strong national AI framework that sets real standards and creates a single, predictable baseline. That kind of clarity is not a constraint on innovation; it’s a competitive advantage for companies building for the long term. He intends to help build that framework, with input from the people who will actually have to operate under it.