The way to think about AI isn't to ask how to control it. It's to ask how it gets discovered.
When I read that the US and China are considering official discussions about artificial intelligence, with Treasury Secretary Scott Bessent leading the American side, my first thought wasn't about diplomacy or safety. It was about how institutions think about technology. They treat it as something to be managed, bounded, put in guardrails. But technology doesn't work that way. It gets discovered, not designed.
The more interesting question isn't whether these discussions will happen, or what they'll produce. It's what they reveal about how large organizations understand innovation. When you hear about "AI guardrails," you're hearing the language of control applied to something that resists control. It's like trying to put guardrails on a path that hasn't been built yet.
Look at the timing. Just weeks before these potential talks, the White House accused China of "industrial-scale" AI technology theft. And Treasury Secretary Bessent recently criticized Bernie Sanders for including Chinese researchers in an AI safety discussion, saying "the real threat to AI safety is letting any nation other than the United States set the global standard." This isn't safety cooperation. It's competition dressed up as cooperation.
The problem with thinking in terms of "guardrails" is that it assumes we know where the technology is going. We don't. The best things are often discovered while making them, not designed in advance. That's true for startups, for software, for essays, and for AI. You build something, see what it can do, then build the next thing based on what you learned. Trying to put guardrails around that process is like trying to write rules for how discoveries should happen.
I suspect what's really happening here is simpler. Both sides recognize AI is important. Both want to maintain their advantages. And both know that talking about safety sounds better than talking about competition. So they talk about guardrails while continuing to compete. The discussions themselves become part of the competition: a way to influence how the technology develops without actually cooperating.

This matters because it affects how AI gets built. If the people making decisions about "guardrails" are diplomats and treasury officials rather than builders, they'll create rules that make sense to institutions but not to technology. They'll think in terms of boundaries and controls rather than possibilities and discoveries.
There's a deeper question here about how we think about powerful technologies. When something seems dangerous, our instinct is to control it. But with technology still being discovered, control often means limiting discovery. The right approach might be the opposite: to encourage more building, more experimentation, more discovery, just with different incentives.
I don't know what practical "guardrails" would look like for someone building AI. The reports don't say. They talk about "a recurring set of conversations that could address the risks posed by AI models behaving unexpectedly" and "autonomous military systems". But conversations between diplomats about unexpected behavior won't help a startup founder trying to build something useful.
The test for whether these discussions matter won't be whether they produce diplomatic agreements. It will be whether they change how AI actually gets built. If the "guardrails" are just rules written by people who don't build things, they'll be ignored or worked around. If they reflect how technology actually develops, they might help.
For now, the most revealing thing is who's leading the discussions and what they've said recently. When the Treasury Secretary says the real threat is letting other countries set standards, and when accusations of theft come just before talks about cooperation, you're seeing the real priority. It's not safety. It's advantage.
The way to think about this isn't as a diplomatic breakthrough or a safety measure. It's as evidence of how institutions misunderstand technology. They see it as something to be controlled. Builders see it as something to be discovered. Until that gap closes, the discussions will be more about politics than about building better things.

